Untitled Linux Show 246 Transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
Jonathan Bennett [00:00:00]:
Hey, this week we have a bunch of updates to cover with things like Calibre, GIMP, Handbrake, and KeePass. But there's also Fedora's RISC-V complaint, news in the kernel about an API specification that's perhaps going to get published. systemd is adding some AI-specific documentation. And then SUSE may be for sale. We have thoughts about who we would like or not like to see purchase it. And then TrueNAS has made some serious waves by closing the source on their build scripts, and a lot more. You don't want to miss it, so stay tuned. Podcasts you love from people you trust.
Ken McDonald [00:00:42]:
This is TWiT.
Jonathan Bennett [00:00:47]:
This is the Untitled Linux Show, episode 246, recorded Saturday, March 14th, chasing the sun. Hey folks, it is Saturday and it is time. It's time for the Untitled Linux Show. We're going to talk hardware, software, gaming. We're basically just going to nerd out over Linux and open source stuff. It's going to be a lot of fun. I've got Ken, I've got Jeff. I have half of Ken at least.
Jonathan Bennett [00:01:12]:
There we go. Rob is flaked out. He said something about having family stuff. Bah, this is the place to be. We've got some really cool stuff to talk about. If I'm a little weird today, if my voice is extra bassy, I just got back from Germany where I was at Embedded World, which was a lot of fun, but the jet lag is real. I got in last night at about 3:30 in the morning local time, which was also interesting. A very interesting sleepy drive because, you know, you take off from Germany and it's morning time and you fly, but when you fly west, you're chasing the sun.
Jonathan Bennett [00:01:54]:
So my day was long. Let's put it that way. It was a very long day and then a drive at the end of it. I can't decide which one is better, driving at the beginning of a long trip or driving at the end of a long trip. They're both sort of terrible. I think next time I'm going to try to either fly out of a closer spot or maybe do a motel room in the morning and the evening, you know, the day before I fly out and the day after I fly out, just because, ugh, do not like.
Ken McDonald [00:02:24]:
I found it better to have somebody else drive.
Jonathan Bennett [00:02:27]:
I suppose I could do that. My normal assistant also has 4 children to take care of, so that's kind of a no-go. But yeah, I suppose—
Ken McDonald [00:02:36]:
who's she got to blame for that?
Jonathan Bennett [00:02:39]:
I have no idea what you're talking about. All right, let's move into the show. And Ken is actually going to kick us off. And he's got an update for Caliber, or Calibre if you prefer, because it's not spelled like the word caliber is normally spelled. It's intentionally spelled with the word libre at the end, which is why we sometimes call it Calibre. There is a reason; it's not just crazy.
Ken McDonald [00:03:02]:
Well, even though, even though it says it should be Caliber.
Jonathan Bennett [00:03:07]:
Yes. Well, it's the internet. We can pronounce things however we want to. Ken, take it away. Yeah. Well, I suppose take it away. Tell us what's new in 9.5.
Ken McDonald [00:03:17]:
Well, I want to start off by reminding everybody it's been over a month since we last covered Calibre 9.0's release. According to Bobby Borisov and Marius Nestor, Kovid Goyal, who we just mentioned earlier, just released Calibre 9.5. According to Marius, the latest release introduces a new tool in the Edit Book component to remove unused images, an option to display the pages from the paper book page list while also showing the last page number, and a reset button for the reading stats panel in the ebook viewer, with the reading stats panel actually being added in version 9.4. Now, according to Bobby, you can now create a custom column that displays reading progress, and the annotations browser now includes filtering by highlight style, making it easier to locate specific highlights, especially in large collections with extensive notes. We also see some new features added since 9.0, including an improvement to the Preferences, Tweaks menu option that gets rid of the need to apply it twice. To read about the numerous bug fixes since Calibre 9.0, I do recommend reading Bobby and Marius's articles, as well as the release notes for all the versions since 9.0.
Jonathan Bennett [00:04:57]:
Yeah, interesting. There's some fun stuff in there. I still have not actually used Calibre. I continue to want to, and I've just never sat down and done an install and imported any ebooks into it. Part of it is because I don't really have a great Linux machine for doing reading. I don't really want to sit at one of my desks to read.
Ken McDonald [00:05:19]:
How many DRM-free ebooks do you have?
Jonathan Bennett [00:05:22]:
I mean, there are definitely places to get those. I have some where I've purchased.
Ken McDonald [00:05:28]:
Humble Bundle is a great place for getting them.
Jonathan Bennett [00:05:30]:
I've got some from Humble Bundle. There's also, is it Project Gutenberg, I think, that has a bunch of DRM-free classics?
Ken McDonald [00:05:37]:
Project Gutenberg, Internet Archive, and there's one. Uh, I want to say eread.com. I'd have to look at my history. Let's see here.
Jeff Massie [00:05:59]:
Um, so I don't have any ebooks.
Jonathan Bennett [00:06:01]:
No, I— if I was to do—
Jeff Massie [00:06:06]:
I do have like Dungeons and Dragons and Pathfinder, but they're all in like a hierarchy org structure, you know.
Jonathan Bennett [00:06:13]:
Yeah. What's that, Ken?
Ken McDonald [00:06:15]:
It's standardebooks.org. I'll go ahead and post the link in the Discord.
Jonathan Bennett [00:06:23]:
Yeah. You know, my ebook reading at this point is going to be on one of the tablets, either the little Android tab or an iPad. And unfortunately, Calibre doesn't actually run on those.
Ken McDonald [00:06:35]:
Oh, you just use, you can either hook up a USB cable between them.
Jonathan Bennett [00:06:39]:
And you can use, you can use Calibre to, to manage your library. Yeah, that would be, that would be probably the thing that I need to do with it.
Ken McDonald [00:06:46]:
Or you can set up Calibre to act as a server, where you connect to it through a browser and download from Calibre that way.
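For listeners who want to try the server route Ken mentions, here's a minimal sketch. The library path is a placeholder (adjust it to wherever your Calibre library lives), and the command is guarded so it's a no-op on machines without Calibre installed:

```shell
# Serve a Calibre library over HTTP so a tablet's browser can browse
# and download books. "$HOME/Calibre Library" is a common default
# location, but yours may differ.
if command -v calibre-server >/dev/null 2>&1; then
    calibre-server --port 8080 "$HOME/Calibre Library"
fi
```

Then point the tablet's browser at the desktop's IP address on port 8080. calibre-server also has an `--enable-auth` option if you'd rather not leave the library open to everyone on the network.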
Jonathan Bennett [00:06:55]:
Yeah, absolutely. I like what, I like what Harold Finch points out here. My hard copies are all DRM-free. And yes, as you can see, you can see it in the background of, of Jeff and Ken. You can't see in my background, but I promise it's there. My whole wall over here is bookshelves with books.
Ken McDonald [00:07:14]:
Um, maybe one day take a picture for us.
Jonathan Bennett [00:07:16]:
I could do that. Hold on, we'll, we'll do it live. We're doing it live, uh, just for fun.
Jeff Massie [00:07:23]:
We'll do it live.
Ken McDonald [00:07:24]:
Good on power, shockproof, recyclable, cheap, but not as easy to carry around.
Jeff Massie [00:07:34]:
I mean, I think they are.
Jonathan Bennett [00:07:36]:
It depends upon how many you're talking about.
Jeff Massie [00:07:38]:
One at a time.
Jonathan Bennett [00:07:39]:
One at a time is pretty easy.
Jeff Massie [00:07:41]:
I don't read more than one book at a time. I put up one page and that's all I can handle.
Ken McDonald [00:07:48]:
Believe it or not, I'll find I jump between two or three different books.
Jonathan Bennett [00:07:54]:
Jeff just saw the picture. Yes, it is very impressive.
Jeff Massie [00:08:00]:
I, I thought I had a lot of books, and I'm like, uh, you got me beat.
Ken McDonald [00:08:04]:
I— okay, what library did you just go to to get that?
Jonathan Bennett [00:08:07]:
No, that's literally my office. Um, we, we have so many books that we actually need to go through them and, and purge. Is it— what, what we did is we've— my wife and I, my wife and I both had a book collection, and then as our parents and grandparents were aging, they started downsizing theirs. And we basically just, every time someone said, hey, I have some books I'm getting rid of, we said, oh, we'll take them. Yes, yes please, we'll take your books.
Ken McDonald [00:08:36]:
What you see behind me is after doing some purging. I've still got a trunk of books that I haven't unpacked since I moved back to the States from England.
Jonathan Bennett [00:08:50]:
A truck or a trunk?
Ken McDonald [00:08:53]:
Trunk.
Jonathan Bennett [00:08:53]:
Oh, okay. Those are two very different things.
Jeff Massie [00:08:56]:
Yeah. For a second I thought you said truck and I'm like, wait, what?
Jonathan Bennett [00:08:58]:
He's got a box truck in his backyard. Oh my goodness. All right.
Jeff Massie [00:09:02]:
He's in the military. He's got a 6x6.
Ken McDonald [00:09:04]:
Boy, would I be in trouble with my wife if that were the absolute—
Jonathan Bennett [00:09:07]:
Yeah, absolutely.
Jeff Massie [00:09:07]:
I probably have, my books are probably about 4 of those units, maybe 4.5. So you got me beat soundly.
Jonathan Bennett [00:09:18]:
On books.
Ken McDonald [00:09:21]:
Yeah.
Jonathan Bennett [00:09:21]:
All right. So I've got a story here that we've talked about before, and that is GIMP 3.2 is out. And we've covered this some as it's gone through the alpha and the beta process, but 3.2 is out and it's got some, it's got some pretty cool stuff. One of the big ones in 3.2 is non-destructive layers. It's moving GIMP into that nonlinear workflow, the non-destructive workflow where you can, you can move things around and then un-move them. You can undo and redo, move things on top of each other without permanently overwriting. It's got some over— some other interesting things with the paintbrush tooling. SVG export support, that's super interesting.
Jonathan Bennett [00:10:08]:
I need to go play with that. Of course, the normal UX and UI improvements and polishing. You know, we had to wait for a long time for GIMP 3.0 to come out. But now that they've finally gotten that done, things are coming faster, quite a bit speedier. So GIMP 3.2, and I've not downloaded it and played with it yet, but it definitely has some cool stuff. I want to go try the SVG support. That sounds pretty cool.
Ken McDonald [00:10:39]:
I'm hoping that when Ubuntu 26.04 comes out, that's part of it.
Jonathan Bennett [00:10:46]:
Yeah, well, you would hope.
Jeff Massie [00:10:49]:
Yeah, you can look at DistroWatch too, and it'll probably tell you.
Ken McDonald [00:10:54]:
I'll have to do that during your story.
Jeff Massie [00:10:58]:
No, you should be riveted to the screen during my story.
Jonathan Bennett [00:11:02]:
Indeed, of course.
Ken McDonald [00:11:03]:
That's while I forget to look at it.
Jonathan Bennett [00:11:08]:
All right, well, speaking of Jeff's story, he's got some Linux 7.0 file system information, and we're going to get to that right after this.
Jeff Massie [00:11:18]:
I have two articles in the show notes, and both are about file system benchmarks. The first one's comparing several file systems using the latest 7.0 kernel code, and the file systems are Btrfs (ButterFS, BetterFS, however you want to say it), ext4, F2FS, and XFS. Now all are in their default settings except for Btrfs. It had a default-settings run, and then Michael Larabel also turned on a mode where it runs with copy on write turned off, just to see what kind of performance difference that would make. Now these tests were all done with an AMD EPYC 9745, which is a 128-core CPU. So it's a beefy system with a lot of PCIe lanes available. Now the overall results showed that XFS came out on top, with a close second for ext4. With a moderate step down,
Jeff Massie [00:12:20]:
F2FS had a mid-pack ranking, then a decent step down to Btrfs with no copy on write, and then another step down to the stock Btrfs. Now it was noted by the author that of course there's a lot more to consider when selecting a file system beyond just raw performance; most importantly, the features and reliability. Now most of us are not going to have a workload where it's really going to make much of a difference, at least not on our home PCs. Now if you look at the results, bcachefs and OpenZFS were not included in the benchmarks. That's because they're not currently running with the 7.0 kernel, at least not at the time of the article. Now it did say that as soon as they're working, those two file systems will be benchmarked and their results added in to fill out the table even more.
Jeff Massie [00:13:28]:
Now the second article takes a look at the two front-runners of the first article and then looks at how their performance has changed over the different kernel versions. Now the benchmarking didn't go back really far, but it did go back to the 6.12 LTS kernel and hit all the major releases including 7.0. Now when I say major releases, you know, 6.12, 6.13, 6.14, and so on. Now while the last article had a lot of file systems, this one is only XFS and ext4, which were the top two of the first article I talked about; they were the only ones benchmarked. Now, a lot of the benchmarks between the two look pretty much overlaid on top of each other. And some look like they're a little twisted, where one's faster for a release or two and then the other one's on top for a release or two. You know, kind of like you had two wires and you twisted them.
Jeff Massie [00:14:26]:
They just kind of keep changing places. But, you know, overall they were really pretty close. The big difference between the two comes with ext4 at the 6.16 kernel, where in the Flexible I/O Tester benchmarks, ext4 had a rather large speed increase. It was lagging quite a bit behind XFS, and even after the bump in speed it's still a little less performant, but it really closed the gap. I mean, the 6.16 looks almost like a step function for speed, so 6.15 is at one level, and then it just jumps up to almost XFS for 6.16. Now, when you look at the overall results, XFS is slightly on top, but you know, it's narrow enough that it could be statistically insignificant. Now, the article doesn't go on to say if it is or isn't, and I didn't go through all the data to verify the margin of error and the statistical reliability of the benchmarking results, but it's pretty close.
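For anyone who wants to poke at their own disk the way these articles do, the Flexible I/O Tester Jeff mentions is the `fio` tool, which is driven by simple job files. Here's a sketch of one with illustrative parameters (these are not the exact settings from the articles):

```shell
# Write a small fio job file describing a 4K random-read test.
# Parameters are illustrative; tune size/runtime for your hardware.
cat > /tmp/randread.fio <<'EOF'
[randread]
rw=randread
bs=4k
size=256m
ioengine=psync
direct=1
runtime=30
time_based
EOF
```

With fio installed, `fio /tmp/randread.fio` runs the test and reports IOPS and bandwidth, which is roughly the kind of number behind the charts being discussed.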
Jeff Massie [00:15:33]:
Now, on both these benchmarks, I should say that people need to take them with a grain of salt. These were all done on an SSD and, like I said, a processor with 128 cores. So results for people running lesser hardware, like say you've got a spinning rust disk or you've only got 16 cores or less, that's going to have an effect too. You know, what interface your drive uses to connect to the rest of the system matters as well. Really, for a home user, I would personally say the difference isn't going to be something you'll ever really see. You know, and as stated in the last article, things like file system stability, error recovery, and other features should factor more into your decision on which one to use. And if you're going down the rabbit hole of which you should use and you're not sure what to do, you know, you're thinking, man, I better research all these, I honestly would say stick with the default that your distribution recommends.
Jeff Massie [00:16:34]:
Personally, on my machines, I have a mix of ext4 (that's on my Kubuntu machine, so a Debian/Ubuntu machine) and Btrfs, which is on my CachyOS system, where it's kind of the default file system. So, you know, I just stick with what I have, because to me the difference isn't worth converting. So even though ext4 or Btrfs is slower, you know, between the machines I can't tell. And when I went from Kubuntu on my main machine to CachyOS and changed file systems, and the drives are all SSDs, all connected to the PCIe interface, which would give me the greatest, uh, chance to see a difference, I couldn't see it. So I don't think the difference is worth converting unless, you know, you're adventurous and want to try something different. But any, uh, thoughts from my illustrious co-hosts?
Ken McDonald [00:17:39]:
Well, what I see is, uh, helix in a few of those graphs.
Jeff Massie [00:17:45]:
Yeah, that's what I was talking about, the, the twisted, you know, where one wins and the other wins.
Jonathan Bennett [00:17:51]:
Helix. Yeah, I thought he was talking about a Helix file system, like I've never heard of that, I don't know what that is.
Jeff Massie [00:17:57]:
No, I can speak, I can speak Kenanese.
Ken McDonald [00:18:04]:
Oh yeah, that, I, that's why you don't ask me to speak Cantonese.
Jeff Massie [00:18:08]:
Yeah, no, that's why I said it's like twisted, where one wins for one release, next release they, you know.
Jonathan Bennett [00:18:15]:
Did you talk about the huge bump between 6.12 and 6.13? Do we know what that was?
Jeff Massie [00:18:23]:
No, I did not mention that one.
Ken McDonald [00:18:25]:
There's a— so 6.12 to 6.13 or 6.15 to 6.16?
Jonathan Bennett [00:18:29]:
And at least one of the— I guess there's been several of these. So Ext4 from 6.15 to 6.16 did much better. In like Flexible I/O Tester. But if you go and you look at the, um, MariaDB, yes, there's quite a large increase. Um, I don't know, maybe a 50% increase in performance between 6.12 and 6.13 in the MariaDB performance for both of them.
Jeff Massie [00:18:57]:
Uh, so yeah, I didn't, I didn't go into the real older ones.
Jonathan Bennett [00:18:59]:
I just picked off like the 6.15 to 6.16. Yeah, so if you look at the geometric mean, you can see that there's a couple of noticeable bumps. 6.12 to 6.13 must have fixed something, and 6.15 to 6.16 also fixed something.
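When a benchmark run like this gets summarized into one number, it's typically the geometric mean of all the individual tests rather than a plain average. As a quick sketch of how that's computed, using three made-up timings:

```shell
# Geometric mean = exp(arithmetic mean of the logs). Unlike a plain
# average, a single outlier test can't dominate the summary number.
# The three values here are illustrative timings, not real results.
printf '%s\n' 143 36 29 |
    awk '{ s += log($1) } END { printf "%.1f\n", exp(s / NR) }'
# prints 53.0
```

A plain average of those three would be about 69, so you can see how the geometric mean pulls toward the typical case instead of the worst one.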
Ken McDonald [00:19:14]:
Uh, it's probably, uh, Michael says that it was probably because of changing to using the AMD P-State driver in Linux 6.13.
Jonathan Bennett [00:19:22]:
Oh yes, performance stuff then. Makes sense, it really does. All right, um, so there is, there is something brewing that I find very interesting in the kernel. Moving on from file systems, although this could be useful for file systems, uh, there is a specification in the kernel that is now, um, beyond the request-for-comments stage, and patches are sent out for actual consideration for inclusion. And from what I can tell, this isn't any actual kernel code; no, we're talking about a specification framework for the kernel API, which is basically documentation for how user space programs call into the kernel and ask it to do things. Um, so it's going to be a machine-readable API specification, and it's going to include things like parameter types, the valid ranges for those variables, the constraints, alignment requirements, things like bit alignment. Return value details like the success conditions and error codes and their meanings. And I will say that all of these things you can find already, but it would actually be really useful to have a single place where you can look up all of these bits of information about things in the kernel.
Jonathan Bennett [00:20:53]:
You know, the kernel has for the longest time said we don't break user space. I think this actually will help with that because it's going to distill all of that data down to a single place to check for changes. But it's also, it's going to be super useful for generating documentation and doing all sorts of things like that. And interesting to see. I don't know when this is going to land. It probably will land. I don't know if it's going to be in 7.1 or 7.2, but it's pretty interesting to see that this is something that is being worked on. The series also includes a KUnit test suite.
Jonathan Bennett [00:21:31]:
So they've got like 38 tests and runtime verification of it, which is pretty interesting; they've tested that thoroughly for what's just documentation changes. But still, it's pretty cool to see that this is something being worked on. It's the kernel, in some ways, growing up and becoming more of a standardized thing, making it easier for people to work with.
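To make that concrete, here's a purely hypothetical sketch of the kind of information such a machine-readable description could carry for one syscall. This is illustrative only; it is not the actual format from the patch series, just the categories Jonathan lists (parameter types, constraints, return values, error codes) applied to the real `dup3` syscall:

```yaml
# Hypothetical, illustrative only -- not the real in-kernel format.
syscall: dup3
params:
  - name: oldfd
    type: int
    constraints: must be a valid open file descriptor
  - name: newfd
    type: int
    constraints: must differ from oldfd
  - name: flags
    type: unsigned int
    valid-values: [0, O_CLOEXEC]
returns:
  success: the new file descriptor (equal to newfd)
  errors:
    EINVAL: flags invalid, or oldfd equals newfd
    EBADF: oldfd is not an open file descriptor
```

Having that in one structured place is what would let tools generate documentation, validate calls, or diff the interface between kernel releases.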
Jeff Massie [00:21:55]:
No, I like it.
Jonathan Bennett [00:21:56]:
I do too. I mean, I almost picked up that people—
Ken McDonald [00:22:00]:
go ahead, Jeff.
Jeff Massie [00:22:01]:
Oh, I was gonna say, I almost, uh, picked up this story because I thought it was pretty cool, where it was just, yeah, like you said, just standardizing it, just helping people that want to interact with the kernel. Just, oh, here's my one resource, rather than having to dig through all these email lists and various documentation spread from, you know, hell to breakfast.
Jonathan Bennett [00:22:22]:
Now I have a thought with this. With it being machine readable, does that mean it's going to be easier to do AI coding of kernel stuff, kernel calls and all of that?
Ken McDonald [00:22:33]:
We'll discuss that later, uh, at the end of the show.
Jeff Massie [00:22:36]:
I see. I'm not surprised.
Jonathan Bennett [00:22:37]:
No, not at the end, just a later story. Towards the end. Closer to the end. Closer to the end than we are now. All right, well, moving away from kernel stuff, what about media management and conversion and, well, transcoding. Transcoding, yes. The sorts of things that HandBrake would let you do. I think we have an update for HandBrake, and Ken has the scoop.
Ken McDonald [00:23:07]:
Yes, Jonathan, I do. Now, I just want to remind everybody, the last version was released over 5 months ago. That was version 1.10.2. Now, according to Bobby Borisov, HandBrake version 1.11 has rolled out, adding support for encoding video to the MOV container format. It also introduces new Digital Nonlinear Extensible High Resolution encoders (now, I'm going to abbreviate that as I say it later as DNxHR), as well as ProRes encoders, both widely used in professional video production. HandBrake 1.11 also improves AV1 support by adding a new AMD VCN AV1 2160p 4K preset for AMD GPUs, starting with the Radeon RX 9000 series.
Ken McDonald [00:24:15]:
And introduces an AMD VCN AV1 10-bit encoder. The audio subsystem receives updates as well. Now, HandBrake 1.11 adds PCM encoding and passthrough support and introduces the ability to define custom channel ordering. I guess that'll be handy if the 7.1 system that you've got in your home doesn't match the regular ordering. Linux users receive several interface and usability improvements, and HandBrake 1.11 now uses the GTK file launcher when opening files in sandboxed environments, improving compatibility with Flatpak and other sandboxed distributions. It also adds buttons to cycle through previews on the summary page, an option to change the user interface display language, and updated existing as well as newly maintained locales. As always, I've just touched on the highlights, so I do recommend you get more details from Bobby's article.
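For listeners who script their transcodes, the same features land in HandBrake's command-line front end. A minimal sketch with hypothetical filenames (`-e` picks the video encoder, `-E` the audio encoder, `-q` the constant-quality level), guarded so it's a no-op where HandBrakeCLI isn't installed:

```shell
# Hypothetical input/output names; svt_av1 and ac3 are standard
# HandBrakeCLI encoder names. Guarded so this does nothing when the
# tool or the input file is absent.
if [ -f input.mov ] && command -v HandBrakeCLI >/dev/null 2>&1; then
    HandBrakeCLI -i input.mov -o output.mkv -e svt_av1 -q 24 -E ac3
fi
```

Running `HandBrakeCLI --help` lists the encoder names available in your particular build, which is the easiest way to check whether the new DNxHR, ProRes, and VCN AV1 encoders made it into your package.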
Jonathan Bennett [00:25:32]:
Yeah, absolutely. I was super curious about that custom channel ordering, and now I'm diving into it to discover if that is talking about surround sound stuff or if that's like, um, when you've got more than one audio track. I think it's a combination. Uh, I'm looking now at the bug that it says it fixed: when I export a video with multi-channel 5.1 audio from FCPX, the resultant file contains a 5.1-channel 16-bit LPCM audio track which plays back just fine. But if I import this into HandBrake 1.8 for compression, HB reports some wildly inaccurate number of audio channels, usually 13 to 30, and only allows for a stereo mixdown. And in his screenshot, he does indeed have multiple— oh, uh, so in this case, they're both English. But it's two different encodings.
Jonathan Bennett [00:26:34]:
So it's like the Dolby Pro Logic and the AC3 regular 5.1. So it's like two different encoding standards in the same. It sounds like that would also, you would also get into that with multiple audio tracks for different, like different languages.
Ken McDonald [00:26:52]:
Languages, or where you have one audio track with the director's commentary overdubbed.
Jonathan Bennett [00:26:58]:
Oh yeah, yeah, for sure. Same sort of thing. So cool, neat to see better support for all of that.
Ken McDonald [00:27:05]:
Though my wife can't understand why I'll sit down and watch the same movie again right after I've watched it, just to hear the director's, uh, notes.
Jonathan Bennett [00:27:14]:
I usually don't have the patience to do a second watch for the— with the director's commentary.
Ken McDonald [00:27:19]:
Um, once or twice I have. But if I find the time, I will try to do that. Yeah. Usually I end up doing that; I'll watch the special behind-the-scenes stuff before I actually watch the movie.
Jonathan Bennett [00:27:38]:
Spoilers. All right. So I've got a news story about RISC-V. And I actually had some really interesting stuff happen while I was at Embedded World that we're going to talk about while we talk about this story. First, we're going to take a, a super quick break and we'll be right back after this. So Fedora has a complaint about RISC-V. Actually, uh, RISC-V is causing headaches for Fedora because the builds are slow. And this is something that we've talked about, that I've talked about with RISC-V, and that is that these are not super performant chips, at least so far.
Jonathan Bennett [00:28:17]:
The various options that are out there for RISC-V are kind of slow. Now, this is, uh, this is specifically a Red Hat engineer. I believe it's, uh, Marcin, uh, Juszkiewicz. Oh my goodness, I'm sure I just slaughtered that name. But anyway, uh, he has a blog post on the subject that RISC-V is slow. And basically the fastest machine that they've got will build binutils without link-time optimization in about 143 minutes, whereas, um, they can do it on an ARM64, an AArch64 board, in 36 minutes. Uh, and you can do it on their x86-64 builds in like 29 minutes. So significantly slower.
Jonathan Bennett [00:29:08]:
So RISC-V, and this is really not terribly surprising, RISC-V is not at this point designed to be a huge powerhouse. It's not for desktops, at least not yet. Um, there's been a lot of dev boards that get embedded in a lot of things. Uh, in fact, people run a lot of RISC-V soft cores on FPGAs, but not super, super performant. Now, there is at least one board that's coming. I was looking to see if he mentioned in this article what the name of the board is. The Milk-V Titan. That's the one, because it's got an UltraRISC UR-DP1000, which is a very impressive name.
Jonathan Bennett [00:29:55]:
It can have up to 64 gigs of RAM and quite a few cores. So they're looking forward to that because they're going to get some more performance out of it. But for building packages, and this is something that I've mentioned when I've done reviews on RISC-V, it is performant enough to build packages, but not necessarily performant enough that you'd want to use it. And they're seeing that performance difference even in doing these package builds. Now, I think I teased before I started this story that I met someone. Well, I did indeed. I went to Embedded World like we talked about at the top of the show. And one of the days— so they have, they have things broken out into different halls.
Jonathan Bennett [00:30:31]:
And I— we were in Hall— I think it was Hall 3, which is like where the edge devices are. And so that's where Meshtastic was at, because we're, we're at the edge. Uh, but, uh, no, machine to machine. That's what they call us, M2M, machine to machine. Um, but they had Hall 4 at Embedded World that was all software stuff. And I started reading through the list of software things and started recognizing names. Like Ubuntu was there; uh, Igalia was there; uh, Canonical, Ubuntu's Canonical, was there. Uh, there was also, I think in Hall 4, RISC-V. The RISC-V guys were there. They had a booth.
Jonathan Bennett [00:31:06]:
And so I took one day and just sort of all of the names that I recognized, I went through and talked to them, handed them business cards. I'm like, hey, my business card says that I'm here from Meshtastic Solutions, but I wear multiple hats and I'm also the guy for podcasting. And turns out that when I walked up to the RISC-V booth, the guy that I talked to was Andrea Gallo, who is their CEO. And so I've got his business card and a soft commitment to have him on as a show host on FLOSS Weekly. So super duper looking forward to that. Hopefully sometime soon. We'll let you guys know about it when that happens. But anyway, over in RISC-V world, things are— well, let's just say that they are— they're looking very hard.
Jonathan Bennett [00:31:48]:
But the vendors building RISC-V chips so far have been concentrating on more performance per watt numbers than outright performance. And Fedora is sick of it.
Jeff Massie [00:32:02]:
They're tired of it.
Jonathan Bennett [00:32:02]:
Well, I was wondering when you said it was sleep.
Jeff Massie [00:32:06]:
What's that, Jeff? Oh, I was gonna say, I was wondering, when you were doing the article, partway through I'm like, is this an optimization thing or is this actually just a hardware thing? So I think it's a hardware thing. Yeah, well, and there's room for both; we need efficient processors, you know, it's got to run on a battery or something, and we can wait a little bit longer as long as we keep that battery up.
Jonathan Bennett [00:32:28]:
Part, part of the problem with RISC-V, and we talked about this in the past a little bit, is that when AMD or Intel, or AMD and Intel actually, they'll get together and they'll say, okay, we're going to implement this set of features and it's going to be called this in the x86-64 spec. And that will allow for faster processing, like AVX-512, let's just say. So here's, here's the AVX-512 extensions that we're going to add to x86-64. Both companies support it. And then you can go and you can use that in your, you know, your builds. You have those, you have those new instructions, and suddenly things are faster because you have dedicated instructions. In RISC-V, there is a need for some of those instructions that will help, but it is difficult because there's so many players and it's an open spec. It is difficult to get those things standardized.
Jonathan Bennett [00:33:21]:
And this is a, this is a complaint that distros like Fedora and the like have had before is because it's so all over the place, they can't turn on the equivalent of AVX-512, you know, the SIMD instructions. Like over in ARM land, you've got NEON and all those. There's just, there's not been, at least so far, a kind of a step in instruction stepping like that where, okay, this is the name of it, here's the set of instructions, everybody's going to implement it. Now it's being worked on, and I think there is actually one of those sort of named instruction pools that is getting rolled out. But it takes time. And again, it's just not quite as efficient yet as the x86-64 or ARM guys are about it. But I know they're making progress.
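You can actually see the fragmentation Jonathan is describing on any Linux box: the kernel exposes exactly which instruction-set extensions a CPU implements, and on RISC-V that shows up as a single ISA string whose suffixes vary from board to board. A quick sketch (assumes Linux, since it reads `/proc/cpuinfo`):

```shell
# On x86 there's a "flags" line per CPU (avx512f, etc.); on ARM it's
# "Features"; on RISC-V a single "isa" string like rv64imafdc_zba_zbb,
# whose suffixes are the extensions that core implements. That string
# is what a distro has to reason about when picking a build baseline.
awk -F':[ \t]*' '/^(flags|Features|isa)/ { print $2; exit }' /proc/cpuinfo
```

Two RISC-V boards printing different `isa` strings is exactly why a distro ends up compiling for the slowest common denominator of the hardware it wants to support.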
Ken McDonald [00:34:09]:
Ken, you were going to throw something in there. I actually got distracted by what you were saying.
Jonathan Bennett [00:34:17]:
You had forgotten what you wanted to say.
Jeff Massie [00:34:21]:
Well, that is all right. I was just going to say, it's, you know, it's, it's really that double-edged sword of, hey, we've got all the flexibility we want.
Jonathan Bennett [00:34:29]:
Well, but that flexibility also kind of slows down some adoptions of things, and it leads to— in this case, it's, it's actually a form of fragmentation. That flexibility leads to a form of fragmentation in the actual instructions that each CPU will support. And so you're— you, you get to this point where distros, to combat that fragmentation, sort of have to support the lowest common denominator of instruction sets, which makes things difficult.
Jeff Massie [00:34:59]:
You could, you could say a software analog would be the Linux distributions themselves, you know, where you can change the, the versioning of stuff, how things work, where, you know, sort of like when you still use 32-bit on some systems.
Jonathan Bennett [00:35:20]:
In some ways, yeah. Yeah, because everybody got a performance bump for going to 64-bit. All right, Jeff, let's talk about systemd. What is new? Now, we're covering an RC3 here. We're covering a release candidate. I feel like maybe there's a specific story in here that you found interesting.
Jeff Massie [00:35:40]:
What's going on in this particular release? There is. So I'm going to first talk about kind of what's coming new in 260, and then we're going to hit something in Release Candidate 3 that came out, even though— and I'll cover it later too— they just released RC4 like a couple days ago. So it's churning fast. But to step back, you know, we talk about systemd from time to time, and, you know, a lot of people, you know, well, some people don't like it while others are happy with it. Uh, you know, for the initialization of a distribution, it's kind of the equivalent of the display debate, you know, X11 versus Wayland. You know, there's people that argue back and forth, though I would say systemd is probably more broadly adopted than either side of the X11 versus Wayland split, but there's still people that don't like systemd. That being said, I wanted to cover systemd version 260, Release Candidate 3, that they just got released. Just like the kernel, you know, there's release candidates.
Jeff Massie [00:36:47]:
You know, systemd has the same method so they can keep putting these out, make sure everything's polished before it actually gets fully released. Now in this coming release in 260, they've removed System V service script support. Now this isn't as bad as it sounds, because the support's been deprecated for a while and it's, it's been known for a long time that this was going away. So this should not take anybody by surprise. If, if it does, they have not been paying attention at all. A big feature that's being added on this is mstack, and I'm not going to go into deep detail because systemd itself is a whole lot of deep detail, but overall it's a, a new feature for defining and managing structured overlay file systems and bind mounts using a self-descriptive directory structure. Basically, it simplifies complex container and service root file system setups by organizing multiple mount layers. So it, it'll just help when, uh, with containers in a few, a few other places too, but it just keeps your file system a little more organized.
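For context, overlay and bind mounts can already be expressed as ordinary systemd mount units today; here is a minimal hand-written sketch (all paths hypothetical, not taken from the new feature) of the kind of layered setup a structured overlay facility would organize more declaratively:

```ini
# /etc/systemd/system/var-lib-example.mount  (hypothetical unit; the file name
# must match the Where= path with "/" replaced by "-")
[Unit]
Description=Overlay a read-only base directory with a writable upper layer

[Mount]
What=overlay
Where=/var/lib/example
Type=overlay
# Standard kernel overlayfs options: reads fall through lowerdir,
# writes land in upperdir, workdir is overlayfs scratch space.
Options=lowerdir=/usr/share/example-base,upperdir=/var/lib/example-upper,workdir=/var/lib/example-work
```

After `systemctl daemon-reload`, the unit can be started like any other mount; the point of a dedicated feature is to replace hand-maintained units like this with a self-descriptive directory layout.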
Jeff Massie [00:37:57]:
Now there have been several dependencies which have been raised with this one, such as OpenSSL went from version 1.1.0 to 3.0.0. So you need to have a newer version of OpenSSL to make 260 work. Same thing with Python. For example, Python went from 3.7 to 3.9. There's other ones in there, but just using those as examples. So when you— if you make a switch to this, there's other libraries you're going to have to make sure that you have newer versions of. The kernel version was also raised from 5.4 to 5.10 for the baseline. That's just the baseline.
Jeff Massie [00:38:36]:
The recommended baseline went from 5.7 to 5.14. But if you really want full functionality of systemd version 260, you're going to have to have kernel 6.6 or later. Now there's a lot of other additions and changes to the release. But this is the thing that really caught my eye. And like I said, I went to GitHub to look at this, and that's when I found they did release RC4. So these, these RCs are coming out pretty quickly. They're, they're generating a lot of, uh, activity over there on systemd. So it's, it's definitely being worked on by a lot of people.
Jeff Massie [00:39:18]:
But what caught my eye in Release Candidate 3 is, and I'm sorry, some people are going to hate this, but they added AI agents documentation. Now I'm saying documentation, so they're not adding AI to systemd; it's documentation. So there's now an AGENTS.md file in the Git archive with the idea of helping and guiding AI coding agents. So the file will help the AI coding agents, guiding them on the systemd architecture, the development workflow, systemd's coding style, and systemd's contribution guidelines. Plus help in running various systemd commands and integration testing. Plus noting that systemd contributions do require AI disclosures, akin to the Co-developed-by tag on patches. So it'll help the AI know that it needs to make sure it says, oh, co-developed by whatever AI it is.
Jeff Massie [00:40:25]:
Uh, the AGENTS.md is also cited in a new CLAUDE.md file as a helper for Claude Code. And also new for helping AI agents in systemd is the claude-review.yml file, a YAML file for reviewing systemd pull requests with Claude Code as the AI assistant. So now, in the past— and I thought this was interesting, because in the past we've covered how AI agent, you know, pull requests can overwhelm a project because, you know, so many times there's just too much garbage in those requests. Now, we've talked about in the past about not letting AI requests in, or only taking code from existing developers, and, you know, then other people saying, but the tools are out there, people are going to use them, and what do you count as AI? And, you know, I saw this and I thought this might be a good middle ground. So, you know, knowing that AI is here to stay, I don't see it going anywhere, at least not for a long time. Why not lean into it and help the AI get better code output, and guide it on what it should be doing and what it shouldn't be doing? So you're just giving it the parameter boundaries and a lot of help to make sure anything it does write is going to be more aligned with your project and actually be of more value to your project. Take a look at the article linked in the show notes for more details, and there's also links in the article to the GitHub page where you can get into all the technical details and you can get into the code and everything, you know. But, uh, Jonathan and Ken, do you, do you think we might be on the way to other projects handling it the same way?
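The show doesn't quote the actual file, so as a purely hypothetical sketch, an AGENTS.md for a project like this typically reads something along these lines (section names and commands invented for illustration, though systemd really does build with meson/ninja):

```markdown
# AGENTS.md — guidance for AI coding agents (hypothetical sketch)

## Architecture
- Read the architecture docs before touching PID 1 code paths.

## Coding style
- Match the existing style of the file you are editing; do not
  reformat unrelated code.

## Building and testing
- Build with `meson setup build && ninja -C build`.
- Run the integration tests before opening a pull request.

## Contribution rules
- AI-assisted changes MUST be disclosed in the commit message,
  e.g. with a trailer such as:
  Co-developed-by: <name of the AI tool used>
```

The value of a file like this is exactly what Jeff describes: the agent reads it before generating code, so its output lands closer to the project's conventions.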
Jonathan Bennett [00:42:16]:
Yes, I do, actually. Um, and I'm curious, do you think this is part of— I, I, again, at Embedded World and with various partners we have there, I had kind of conversations about this. And, you know, I call myself an AI skeptic and was challenged on that and had some great conversations as a result. And one of the conclusions that the smart money had was that the current spending craze around AI is going to crash inevitably. But the places where AI makes sense, it's going to continue to be there. And I was thinking about this afterwards and I think essentially what's going to happen is AI is going to disappear. And by disappear, I don't mean go away, but like the good parts of it are just going to get sort of absorbed into the fabric to where it's not in front of your face anymore. And that's kind of happened, for example, with Google Search.
Jonathan Bennett [00:43:09]:
I've gotten to the point now to where I search for something and you get the, it's generated by AI, the little blurb at the top. That's sort of trustworthy these days. Now, obviously, it's good to double-check, but it's much better than it was. And it's gotten to the point to where I don't see that and go, oh my goodness, it's AI, get this out of here. It's now, Oh, okay. Well, that's pretty— okay. Like about half of the time, it just flat out answers my question and I can stop with that.
Ken McDonald [00:43:35]:
So it's just kind of—
Jonathan Bennett [00:43:36]:
I still like to follow the links that it uses. Well, yes. And it depends upon what you're looking for, right? So if you're just looking for when daylight savings time starts in Europe, it's going to give you an answer and it's probably going to be accurate. But if you say, you know, tell me about the history of the Rolling Stones, well, sure, it'll give you some history blurbs, but you might also want to go to the Wikipedia page or the Rolling Stones website, you know, what have you. Um, but I think, I think we're, we're sort of approaching this point where AI's— I don't know if this is quite the right way to put it yet, but it'll disappear just because the good parts get absorbed. All that to say, things like this inside of projects, where you have some AI documentation intended for the AI, I think may be part of that. It may help the AI sort of disappear in that it will just automatically do the right thing more often. Does that track, Jeff? Jeff works with this stuff, so I'm curious what he—
Jeff Massie [00:44:37]:
and then I'll let Ken jump in. Yeah, I'm immersed in this stuff. And I agree. And you always do, to your point, Ken, you do have to kind of check. And if I'm searching, how high is Mount Everest? And it comes back and tells me it's 5,000 feet. That doesn't seem like a reasonable answer to me, you know. And so there's some of that. And, and like John said, the complexity of the question— oh, okay, I'm gonna need a lot more detail than this.
Jeff Massie [00:45:04]:
But, you know, but yeah, there's, there's a lot of this that I think we're gonna be moving past the, oh wow, look at the sparkly stuff, to, oh, this is, this is a darn good tool, you know. And I basically agree with what you're saying. It's going to fade back just like the paradigm shift when computers first came into general use and people had them at their desk at businesses, or the internet became a big thing. And it's like, yeah, it kind of disappears. It becomes almost a utility at that point.
Ken McDonald [00:45:46]:
A servant. Yeah. Yeah. Just a standard tool we use.
Jonathan Bennett [00:45:48]:
Blends into the wall when you're not using it.
Ken McDonald [00:45:54]:
Just does its job. Absolutely. Anything to add, Ken? I think that's going to be a— I'm going to go with a forecast of maybe a decade before we see that.
Jeff Massie [00:46:07]:
Oh, wow. You think that long?
Jonathan Bennett [00:46:09]:
I bet a couple of years. Yeah, jinx. You owe me a Coke. All right, so there are some business stories in the Linux world this week too. This one caught me off guard. I was not expecting this. I was reminded that SUSE is actually owned by a private equity firm, EQT AB, which is, I believe it's a German firm. And, uh, they— excuse me, a Swedish firm.
Jonathan Bennett [00:46:41]:
EQT AB is based in Sweden. So EQT is the name of it, and AB is probably the Swedish equivalent of LLC or something like that. It's limited liability of some sort. Anyway, uh, EQT purchased, uh, it took SUSE private actually, uh, back in 2023. It was already a majority owner, but it took the company private in 2023. The valuation— the valuation then, uh, €2.72 billion or $2.96 billion US dollars. And, uh, so just about 3 years ago, and they are now looking at trying to sell SUSE, but not for less than they valued it at. No, EQT is trying to sell SUSE for around $6 billion.
Jonathan Bennett [00:47:29]:
Um, I don't know, I don't know offhand what that is in euros. I'm sure the, I'm sure the story here has it somewhere. 5.1 billion, there it is. Uh, so 5.1 billion euros, or almost $6 billion US, is what they consider it valued at now. Uh, that's a really good return on investment in 3 years. And I, I dare say that unlike sometimes when we cover stories like this, they have not run SUSE into the ground. openSUSE is still strong. Uh, if, if I ever have a company get bought out, I would like it to go that well.
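To put numbers on "a really good return": a quick back-of-the-envelope calculation using the dollar figures from the story (the function names here are just for illustration):

```python
def implied_multiple(purchase: float, asking: float) -> float:
    """Ratio of the asking price to the purchase valuation."""
    return asking / purchase

def annualized_return(multiple: float, years: float) -> float:
    """Convert an overall multiple into a compound annual growth rate."""
    return multiple ** (1 / years) - 1

# $2.96B take-private valuation in 2023, ~$6B asking price ~3 years later
m = implied_multiple(2.96, 6.0)
print(round(m, 2))                        # → 2.03  (roughly a double)
print(round(annualized_return(m, 3), 3))  # → 0.266 (about 26.6% per year)
```

So "almost a double in 3 years" works out to roughly 26-27% compounded annually, which is an excellent private equity outcome.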
Jonathan Bennett [00:48:02]:
Oh my goodness, it's a, a double in valuation in, in 3 years. It's definitely successful. Um, so it, uh, yeah, very interesting to see this. Now, this is not set in stone. This is, in fact, um, you might consider this more like a rumor than even a news release. Um, but it's, it sounds like this is a thing that is being explored. Um, what's really interesting to think about though is who would come along and buy SUSE? Who would the, who would the new owner of SUSE be? Uh, I could think of a couple of interesting ideas there. I'm curious, what do you guys think? Who, in, in a— first off, in an ideal world, who would you like to own SUSE? And secondly, uh, who do you— what's some— what's some company names that come to mind that might be interested in it?
Ken McDonald [00:48:55]:
Ah, boy, that's— I can't really think of who I would like to have own it.
Jeff Massie [00:49:01]:
I know I'm not sure I'd want IBM to own it. I—
Ken McDonald [00:49:09]:
well, IBM has already got one. Yeah, Red Hat. Yep.
Jonathan Bennett [00:49:13]:
And you'd probably see some pushback if they did try to buy it.
Jeff Massie [00:49:18]:
Yeah, probably regulatory pushback. I could see maybe Microsoft being a little interested because then they get a complete Linux package that they can do whatever they want with and just suck it right up.
Jonathan Bennett [00:49:33]:
Or even Amazon. Amazon also came to mind. I could see Amazon buying it. Uh, there are, there are 3 customers that this Reuters story mentions that actually I could see any of these three wanting to just outright own, um, and that's Walmart, Deutsche Bank, and Intel.
Ken McDonald [00:49:58]:
That's actually an interesting idea.
Jonathan Bennett [00:49:59]:
Why would— does Intel have the money right now? No, it doesn't.
Jeff Massie [00:50:03]:
That's the thing. Intel, Intel does not have that. They'd have, they'd have to give them stock or something, right? But, but there's ways around that.
Jonathan Bennett [00:50:08]:
You could, you can, you know, you can do stock. When you're a corporation, you don't have to have money to buy things.
Jeff Massie [00:50:15]:
Yeah, yeah, you really don't. There's all sorts of smoke and mirrors you can use. Absolutely. But I don't see why Walmart would want it. They could just grab whatever distro they want, unless Walmart is trying to get into the cloud realm.
Jonathan Bennett [00:50:32]:
And I've heard some, I've heard some mumblings Walmart is trying to chase Amazon in almost everything that they do.
Ken McDonald [00:50:39]:
So that would not terribly surprise me.
Jonathan Bennett [00:50:42]:
I think the only thing Walmart wouldn't want to get into is one-hour delivery nationwide. Aren't they?
Jeff Massie [00:50:51]:
But they have delivery to the door. They are starting to move into that. They're there for delivery stuff. But cloud, Walmart does have the name recognition, the pocketbook. You know, if they wanted to, I could see them being a viable competitor, or at least giving it a, a true, uh, competitive effort.
Jonathan Bennett [00:51:16]:
Yeah, I, I will say though that when I first asked this question, Microsoft and Amazon were the two names that really came to mind as, as potential suitors.
Ken McDonald [00:51:25]:
And I know it— go ahead, Ken. I was just going to say that SUSE has got a history of being, uh, sold, either, uh, all by itself or along with whatever its parent company at the time was.
Jeff Massie [00:51:43]:
Yeah, it's changed hands quite a few times. Now I'm going to throw something out here a little bit. I could see, when you mentioned Deutsche Bank, because okay, its value went up, but right now a lot of Europe is trying to get out from underneath American software, and some of them have picked Red Hat, but SUSE— that's European, it's a German company. And as everybody tries to jump on, or a lot of people are jumping on Linux in Europe and they want home-based software, I could see the potential market for SUSE going up tremendously. So I could see where the current holders are going, "We see the potential here. There's a lot of upside on this market, but we're just going to get out of it now." And then, because a lot of times those funds, they're not in it for super long-term anyway.
Jonathan Bennett [00:52:48]:
They hold stuff for a while and then sell, in general; there's exceptions. But there is yet another German company that comes to mind that is large and could have use for something in this space, uh, and that's Siemens. I could see somebody like Siemens, a dark horse, come in and say, we'll take, we'll take that, thank you very much. Siemens, Siemens could do this without breaking a sweat. They are huge. Um, and there's other, there's other European companies that sort of fit into that. Uh, I don't, I don't know if I technically need to make this disclosure, uh, but technically the parent company of the parent company that— see, let me put it this way. Siemens is the parent company of the parent company of one of the places that writes me a paycheck each month. They are involved with Hackaday.
Jonathan Bennett [00:53:34]:
I have no insider knowledge of anything going on at Siemens. Uh, I didn't even go by the Siemens booth at, uh, at Embedded World. I went by the, the Supplyframe booth, but not Siemens. And they didn't know who I was at the Supplyframe booth anyway, so I'm not that big of a deal to Siemens.
Jeff Massie [00:53:52]:
Yeah, for those that don't know, Siemens makes a lot of stuff that goes into things.
Jonathan Bennett [00:53:59]:
They don't have as much consumer-facing things as— yeah, very few consumer-facing things, but industrial— that the factory that makes your consumer-facing things almost certainly has something made by Siemens in it. Factory controllers.
Jeff Massie [00:54:13]:
Um, oh yeah, that's a lot of what they do is like factory controllers.
Jonathan Bennett [00:54:22]:
Semiconductors. Another company that we hadn't mentioned is SAP. Yeah. Also a very large European company. I don't know that there's as much of a business case for them to own a Linux supplier, but it's definitely not outside the realm of possibility. It's a strategic investment for them. Yeah, that is an interesting thought. Again, we don't know anything.
Jonathan Bennett [00:54:50]:
We don't know anything.
Jeff Massie [00:54:51]:
We're just guessing.
Ken McDonald [00:54:52]:
We're just talking. Yeah, we're thinking about speculation.
Jeff Massie [00:54:56]:
Think of it as hallucinating by AIs. Yeah, exactly. No, this is like a user group and we're all just throwing in our two cents while we drink our beer.
Jonathan Bennett [00:55:06]:
Or one cent in some cases. I get to do that in Germany. We stopped at an Italian pizza place that made like actual authentic Italian pizza. And I teased the guys that Americans perfected pizza, but it was a very different sort of pizza than we get here in the US.
Jeff Massie [00:55:26]:
Now you're making me want to take a break. Well, before anything, I do want to say, so Wizardling had a comment. He said, "There is a significant concern overseas about the US tech having too much power." He said, "No offense, but maybe there isn't much awareness of the depth of the feeling about this issue outside the US." Yeah, especially for people like us, I don't deal with Europe a lot. I do some, but not a ton, and everything I do is hardware. We don't have a good barometer on how, how strong that feeling is.
Jonathan Bennett [00:56:09]:
Yeah, I, I have a little bit. Um, I've got a couple of my partners that are in— two partners that are in Europe, and then one partner is a Frenchman living in Hong Kong, which is an interesting sentence to say. Um, so I, I do get to hear a little bit about that from them. I have conversations with folks overseas quite a bit. Uh, but it, it is a thing. There is some common sentiment, but I will also tell you that it is something that European governments are at least thinking about, in, in the same way that the U.S. is thinking about, and I hope Europe's thinking about this too, um, diversifying its semiconductor sourcing so that not as much of it is made in China and Taiwan. Um, Europe is thinking about diversifying its software stack so that not all of it comes from the U.S.
Jonathan Bennett [00:56:58]:
Uh, one other thing to throw in here that, uh, uh, Keith512 says is, uh, he says maybe a woman called Sue will buy it. And that, of course, is Lisa Su at AMD, which is another interesting thought. Um, they definitely have the market cap to be able to do it right now. I don't know if that makes sense to their business model, but it's, it's definitely another, another player that could be mentioned in the same conversation, let's say.
Jeff Massie [00:57:22]:
I could, I could at least say it could be reasonable because they're trying so hard to get into the AI and all the, the enterprise, which their CPUs are, but their GPUs are not near the player that NVIDIA is. I could see them going, here, we're, we're building you the operating system that you just load up and it will work.
Jonathan Bennett [00:57:44]:
It's, it's more, it's more of a vertical stack if they had, if they had the Linux OS as part of their portfolio. They can say, look, here's, here's our vertical Slack, uh, our vertical Slack. No, no, no, no, our vertical software. Yeah, vertical hardware and software stack. You know, buy our OS, buy our hardware, we guarantee that it works together. You can get an AMD CPU and AMD GPU, uh, and an AMD operating system. You put it all together and we guarantee it's going to work.
Jeff Massie [00:58:11]:
That's actually, that's an interesting idea. I can see that. And it will have AMD support so that you are not left without support to call. That's sometimes a lot of what hardware companies want because they don't want to try to figure all this stuff out and they go, you know what, this ROI on this investment is going to do good for us.
Jonathan Bennett [00:58:36]:
In fact, I think this week Michael Larabel wrote an article about AMD AI NPUs. I mean, it's definitely a thing that they're, they're pushing. They're trying to get it, continue to break into that market.
Jeff Massie [00:58:50]:
The reason we're seeing— go ahead. Oh, I was gonna say, the reason we're seeing so much on the consumer side on Linux and for GPUs is because it— how it ties into the enterprise AI compute market. It's, it's very, very similar. So then they can leverage it for gaming and other things when it's like, oh, well, we're 99% of the way there. Okay, we dot an I, cross a T, and there we just opened up another little market for almost no effort.
Ken McDonald [00:59:22]:
Yeah, absolutely.
Jonathan Bennett [00:59:23]:
All right, they've got a great community to help support it. We best move on, and Ken has a story about KeePassXC. That's one of the open source password managers. And we will get to that.
Ken McDonald [00:59:36]:
But after a quick break, we'll be right back. Well, Jonathan, it's been over 4 months since KeePassXC released an update. According to Marius Nestor, KeePassXC 2.7.12 was released this week, adding support for nested folders when importing passwords from Bitwarden. It also adds support for TIMEOTP as an Auto-Type and entry placeholder, and for setting the BE and BS flags to true for passkeys. Now, KeePassXC 2.7.12 also prevents exploits through OpenSSL configurations, fixes showing the correct checkbox value in entry browser integration settings, and adds the public key to the register response. As always, you can get more details from Marius's article. And, and plus, I don't want to be tripping all over a lot of those, uh, acronyms or synonyms that they use.
Jonathan Bennett [01:00:45]:
Yeah, I wonder if TIMEOTP is the same as TOTP. Obviously they're both time-based one-time passwords, but there, there's, uh, the TOTP is an actual implementation of it.
Ken McDonald [01:00:53]:
It is, in fact. One of the things that I'm looking forward to using when I get the upgrade is the fact that, if you are setting up a new password for an account and you've set up two-factor authentication, it'll automatically prompt you to create that.
Jonathan Bennett [01:01:21]:
The TIMEOTP placeholder generates a time-based one-time password, a TOTP, according to RFC 6238. So yeah, it is the same thing. That is what Google Authenticator gives you as well. And so this allows you to put that TOTP secret into your—
Jeff Massie [01:01:40]:
Both.
Ken McDonald [01:01:41]:
Into both, yeah, into both places, which is obvious reason for that. I've done that with several two-factor authentications I use. Once I figured out how to do it in KeePass years ago.
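For the curious, the RFC 6238 TOTP algorithm Jonathan mentioned, the same thing KeePassXC and Google Authenticator implement, is small enough to sketch in a few lines of Python. This is a minimal illustration of the standard, not KeePassXC's actual code:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = unix_time // step                      # time steps since the Unix epoch
    msg = struct.pack(">Q", counter)                 # counter as 8-byte big-endian int
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)    # keep leading zeros

# RFC 6238 Appendix B test vector: ASCII secret "1234567890..." at T=59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # → 94287082
```

In real use the shared secret is the base32 string you scan from the QR code, and "both places" works precisely because the app and the server run this same deterministic calculation on the same secret and clock.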
Jonathan Bennett [01:01:51]:
Yeah, I, I always— that was always one of the things that concerned me. It's like, if I put all of this stuff into, say, my cell phone, what happens when the cell phone dies or the screen breaks and I no longer have access to it? Now apparently people at Google have the same thought because you can sync all of that stuff up to your Google account, which is terrifying as well, but in a different way. But it does make a lot of sense for, uh, being able to keep access to it.
Ken McDonald [01:02:25]:
For sure. All right. But I actually use KeePass on a daily basis, or KeePassXC on a daily basis, because that's my go-to. I just keep a copy of it in the cloud, on my Google Drive, for using with my phone.
Jonathan Bennett [01:02:42]:
Speaking of which, I actually just received one of these guys in the mail. I've not even plugged it in yet, but I have it. I have it in hand. Thanks, Robert, for sending it to me. This is a Google Titan security key.
Ken McDonald [01:03:01]:
Fancy, fancy stuff. How much are those now?
Jonathan Bennett [01:03:04]:
What's that? How much are those now? I don't know. I got it for free. I bet it may not be cheap. What is the Titan key? It's, it's, it hosts like your, your password stuff and your passkeys.
Jeff Massie [01:03:20]:
Um, I am curious how much it costs, but if you were to buy it—
Ken McDonald [01:03:24]:
I've heard of it before.
Jonathan Bennett [01:03:26]:
Uh, why am I saying, thinking FIDO? That's only like $35.
Jeff Massie [01:03:30]:
It's not that much. It's, it's going to be—
Ken McDonald [01:03:32]:
you're going way back in time if you're thinking FidoNet.
Jonathan Bennett [01:03:36]:
No, FIDO2 security keys. I was going to say there is, there is a FIDO and security key. Yeah. I was teasing Ken. The security key is indeed built on FIDO open standards. I don't remember what FIDO stands for, but it, it is one of the security key standards. Yeah, it's actually very, very similar to a YubiKey. Um, okay, so yeah, I'll get that set up on some of my accounts and start using it.
Jonathan Bennett [01:04:01]:
Uh, FIDO2 basically is referring to FIDO version 2. Yes. All right, let's talk TrueNAS. So this is the open source, um, open source NAS, open source network-attached storage system. It's an enterprise-ready Linux-based NAS solution, and they do a lot of stuff in the open on GitHub, except now it no longer hosts its public build repository there. I saw this on Twitter, actually, the X, the social media network formerly known as Twitter, where Jeff Geerling actually posted about it and said, you know, is this true? and linked to this particular news report. And there was actually a response there directly from TrueNAS who said, yes, you can read some deeper discussion at, and then a link to the forum. And then there's also a podcast by Chris and Chris on the T3 podcast, which I've not gotten a chance to go and listen, uh, listen to that to see what they have to say about it.
Jonathan Bennett [01:05:12]:
But the, the idea is that for security reasons and to be able to support Secure Boot better, they see the need to close the build scripts, pull them internal, and not make them, uh, not make them open the way that they are. So I'll read to you the CTO's statement here, or at least parts of it. Why we did it, he says: we had a growing problem with bad actors forking TrueNAS, selling closed-source commercial derivatives under their own brands, and ignoring the GPL and other licensing obligations with no attribution, no contribution back to the project, no supporting the community or the engineering effort that built what they're reselling. And then here's the kicker. Unfortunately, many of these are in regions where we have little to no legal recourse. If you don't know where that is, that would be places like China where it's very, very difficult to, um, to go after someone for a license violation. And other places— China's not the only one, but that's probably where they're talking about. Um, to address this challenge, we were already planning to take the build scripts internal.
Jonathan Bennett [01:06:19]:
With the upcoming refactor of the new Secure Boot feature, along with myriad other changes we wanted to make to the build infrastructure, TrueNAS 27 was a natural time to make this change. And what it does not mean: we are not paywalling existing free features, period. If it's free today, it stays free. And then he also said what hasn't changed: we've always made decisions about which new features are fully open source, as in GPL or BSD, which are proprietary, and which land in the free edition versus TrueNAS Enterprise. He says that's how we fund the engineering that builds TrueNAS for everyone. That model isn't new and it isn't changing. And he says he's happy to answer questions.
Jonathan Bennett [01:07:07]:
Um, I get all of that, but at the same time, if you don't have access to the build scripts, then you really can't build your own TrueNAS. And so the whole thing is sort of inaccessible now. You could download their version of it, but not being able to do a build, I don't know, that, that does feel a little gross, icky. The open source part of me really kind of hates that. The business side of me understands that sometimes that is just the reality of a situation. It's that sometimes you have to do the difficult thing because you're otherwise just getting killed on the business side of it. It'll have, obviously it has had and it will have fallout with users. Yeah, I don't know.
Jonathan Bennett [01:07:51]:
You hate to see it, right? You hate to see something like this, something that was developed out in the open, now being said, okay, we're gonna have to take this internal where you can't look at it anymore.
Ken McDonald [01:08:05]:
What's that about, uh, compiling your own? They never— the link I just posted in Discord has, uh, where, uh, one of the TrueNAS staff says they've never had reproducible builds.
Jonathan Bennett [01:08:21]:
Not reproducible builds, but it, uh, well, let's go take a look at the link. Bottom line, the open source, the build system is another matter. It's currently changing fairly radically internally for a variety of reasons, blah blah blah blah blah. Uh, the repo is still there, folks can fork and redo. Also, okay, they're saying the stuff that is GPL is still out there, they're just not continuing to push to it. All the open source bits can be built if the community desires. 99%. Never done a build from source before.
Jonathan Bennett [01:08:55]:
Yeah, 99% of the folks commenting on this thread have never done a build from source before. That's absolutely true. It's something that you, you know, why would you want to? It's essentially running your own Linux from scratch to run their build scripts. Yeah, so not, not fully reproducible builds. Um, but, uh, yeah, they have had the ability for people to go through and build their own. And now, now that is essentially going away. You can't do their, their newest builds completely, uh, completely apart from using their stuff. So I think something is lost here.
Jonathan Bennett [01:09:26]:
I, I think it's, I think it's wrong to suggest that nothing is lost here. But on the other hand, I think it's fair to say that it's not going to affect it's, you know, like he says, 99% of the people that would use TrueNAS.
Ken McDonald [01:09:40]:
So it's kind of a— it's that 1% that could help improve it from the community that they're losing.
Jonathan Bennett [01:09:47]:
Yeah, although they probably— the build scripts, they probably got very few pull requests into. And oh, I hate to say this, but nowadays it's so easy to write AI pull requests, it's almost better to just turn them off and do it all internally.
Ken McDonald [01:09:59]:
It's probably—
Jonathan Bennett [01:10:01]:
Maybe that's why they're moving off of GitHub. You know, it would not surprise me if that was actually a consideration with this. But let's just, you know, let's just not mess with it because of the AI pull requests. Keith512 says, most people turn off Secure Boot as it is a pain. Most home users might turn off Secure Boot as it is a pain. But you get into commercial environments, you get into things that are underneath the, uh, oh, what's the new European security law, the Cyber Resilience Act, and all those things, uh, especially if you get into government work where you're under FAR and DFARS here in the US. I'm sure countries around the world have their equivalents.
Ken McDonald [01:10:44]:
Uh, you leave Secure Boot on because it's in the contract and you're not going to be worrying about somebody dual booting on it.
Jonathan Bennett [01:10:54]:
Well, no, sometimes you do boot when you're doing this stuff. You just, you're going to be running something like Red Hat that has all the certifications and also has its secure boot stuff already sorted.
Jeff Massie [01:11:04]:
Well, but I wouldn't be able to just dual boot from it into openSUSE Tumbleweed. No, because a lot of times in these environments where you're really trying to lock down information and IP, your USB ports are locked out. The firmware's password protected and encrypted. So even if you, you know, you're sitting at the machine, it takes some stuff. I mean, okay, you physically are at the machine, you can get around a lot of stuff, but it is not trivial— you're not just, oh, I'm gonna pop it open and throw this in.
Jonathan Bennett [01:11:45]:
And no, it's— yeah, there's more to it. And, and, you know, those, those mitigations, they— one of the big things that they're trying to do is make it very difficult to do it either very rapidly or to do it accidentally. And so you see things like Stuxnet, that was— well, it was, it was intentional by the people that wrote the, the malware, but that was, you know, hey, I found a USB key, let's plug it in and see what's on it. And well, the next thing you know, your machine's hosed and all the rest of the machines in the building.
Ken McDonald [01:12:13]:
And then, of course, that one escaped out into the wild because, of course, it did.
Jonathan Bennett [01:12:19]:
It was so viral. Because it plugged into the wrong machine. Yeah. Somebody took a USB key out of the target building and then it just went everywhere. Anyway, let's move on and let's talk about, well, more security stuff.
Jeff Massie [01:12:39]:
But this time, the cost of security. Jeff has the scoop. So it's been a few years now since a lot of the hardware speculation issues for CPUs with hyperthreading have come up. You know, when the first issues came out, turning on the hardware mitigation caused slowdowns on your CPU. Now there are a lot of people who would disable those security features because they wanted the most speed they could get out of one of those chips, uh, CPUs, and you know, a lot of the security issues didn't really apply to home users. They were more of a concern for the cloud and enterprise markets. Not that somebody couldn't leverage them, but the average home user was not the primary, uh, target for a lot of that stuff. Well, fast forward a few years, and today, you know, chips have been designed from the ground up to take care of the issues.
Jeff Massie [01:13:27]:
Now, we've talked about in the past there's a silicon pipeline, so whenever someone says, oh, we've got to make this design change, it might be 3 years before it actually hits a consumer market, just from the design to testing, fabrication, and so on. I won't go into that again here, but, you know, suffice to say it takes a while. Now, there still is a switch to turn those security features on and off, and that's exactly what Michael Larabel over at Phoronix did to see if there's still a performance issue. Now, the article in the show notes is broken up into two sections. The first section, he just benchmarks with a Panther Lake CPU, specifically a Core Ultra X7 358H, which can be found in current laptops. Side note, did everybody who can name a product reasonably, like, retire or something? I apparently— I don't know. Yeah, anyway. So looking through there, the results showed no difference or very small differences between, you know, security mitigations on and off, with a couple exceptions: the RocksDB database 10.0.1 update random benchmark, where with the mitigations turned off it was quite a bit faster, but other databases were just fine.
Jeff Massie [01:14:51]:
And it was, I mean, it was just that one specific test. And other RocksDB benchmarks that did other things, they were fine as well. It was just that specific one. Now there was one other standout, and that was RawTherapee. It's raw photo processing software. It also showed a big difference. I don't know why, I don't know what it was hitting that it didn't like, but it was, it was slower as well. Everything else, other database tests, you know, code compiles, other things, if there was a difference, it was just noise, you know, it was so small that you're never going to notice it.
Jeff Massie [01:15:40]:
Now the second part of the article, Michael did benchmarking across several generations of laptop CPUs to find the performance differences with the mitigations on and off. So this one we had several generations. Now, you know, while some users swear by running their systems with mitigations turned off for better performance, realistically, looking at all the generations, there's little benefit in doing so for the Core Ultra Series 3 Panther Lake, or even other recent Intel CPU generations for that matter. Only by going back several generations is there really anything to gain from turning security mitigations off. So for anything from the last few generations, it doesn't really matter. And realistically, if you've got an older generation and you want to leave it off, you know, at that point, if performance is really that critical, you probably should start looking at some newer hardware, because you're going to get a lot greater jump in performance than just turning your mitigations off. But you can take a look at the article linked in the show notes for full details and the ability to dive deep into the results to see if your specific situation is going to be affected.
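[Editor's note: for listeners who want to see what their own kernel reports, here's a quick sketch. The sysfs directory below is standard on modern Linux kernels, though the exact list of files varies by CPU; treat the `mitigations=off` parameter as a benchmarking experiment, not a recommendation.]

```shell
# List every CPU vulnerability the kernel knows about and its current
# mitigation status (one file per issue: spectre_v2, mds, and so on).
grep -r . /sys/devices/system/cpu/vulnerabilities/

# To reproduce the "mitigations off" side of the benchmarks, you would
# boot with the kernel command-line parameter:
#   mitigations=off
```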
Jonathan Bennett [01:17:03]:
Yeah. Very, very interesting stuff. I know that each of these have had— at least some of them have been big, but all of them have had some performance penalty.
Jeff Massie [01:17:12]:
Some of those penalties they've been able to mitigate in hardware by, like, fixing the actual problem. And that's why some of these you're not really seeing much: it just took a couple generations. And going back to the silicon pipeline we've talked about before: okay, Panther Lake's out being sold right now. There's one that's being tested right now internally at Intel, and all chip companies are like this. They've got an internal one that's not going to be out for another year or something like that, and then they're designing the generation after that. That's, you know, doing the circuit layouts right now and probably hasn't even hit test chip fabrication yet. So when they say you've got to fix this in hardware, well, you've got to go a couple generations into the future before you can go, oh, we can change the actual layout, fix this in hardware, and then get it through the pipeline.
Jonathan Bennett [01:18:14]:
Yeah, yeah, absolutely. Interesting stuff. All right, well, that is the news for the week. We are about to move into some command line tips, but first, we're going to take a real quick break.
Ken McDonald [01:18:26]:
We'll be right back. Yep, Jonathan, this week I am introducing you to a command line tool for controlling the runtime behavior of systemd-udevd, requesting kernel events, managing the event queue, and providing simple debugging mechanisms. Now I'm going to show you how you can use udevadm to query the udev database for device information. So let me go ahead and bring up my screen here. And the basic command I'm going to demonstrate is udevadm with its action verb of info. I'm going to tell it no pager. That means it doesn't automatically get paged into something where you can scroll up and down through it. And then I'm going to tell it to query all the information for my NVMe drive.
Ken McDonald [01:19:29]:
And for those of y'all listening, I just hit enter and it came back with the information about it, the device path: /devices/pci0000:00/0000:00:01.2/0000:01:00.0/nvme/nvme0/nvme0n1.
Jonathan Bennett [01:20:08]:
And that's just for the only NVMe drive on this system. And again, this is, and this is why we have udev so that it can get mapped to a simple /dev/nvme0n1.
Ken McDonald [01:20:23]:
And then what's interesting though is you've also got some symbolic links that are created by ID, by path, or by disk sequence. For those of y'all listening, those are broken out. I'm not going to read those, that'd take too long. But you can use the output to grep for some of the information by piping it to grep and, say, searching for ID_SERIAL. You could get the short serial or the long serial.
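[Editor's note: for those following along at home, the pattern Ken describes looks roughly like this. The device node and the sample property lines below are illustrative stand-ins, not output from his machine.]

```shell
# Real usage on a udev-managed system (device node is an example):
#   udevadm info --no-pager --query=all --name=/dev/nvme0n1 | grep ID_SERIAL
#
# Illustrative udevadm-style property lines ("E:" marks a property;
# model and serial here are made up for the example):
sample='E: ID_MODEL=ExampleDisk
E: ID_SERIAL=ExampleDisk_50026B7300ABCDEF
E: ID_SERIAL_SHORT=50026B7300ABCDEF'

# Grepping for ID_SERIAL pulls out both the long and short serials:
printf '%s\n' "$sample" | grep ID_SERIAL
```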
Jonathan Bennett [01:20:59]:
And Jonathan, can you tell who the manufacturer of this particular NVMe is?
Ken McDonald [01:21:07]:
It's a Kingston. Yes, it is. Now you can also change the device you look at to say SDA. And that's going to give you a little bit more information. And by looking at the information that I get for my, uh, drive that's, uh, mounted to SDA, what would you say it is?
Jonathan Bennett [01:21:36]:
Well, so it's obviously something connected over SATA, which is, I believe, what the S in SDA stands for.
Ken McDonald [01:21:45]:
Uh, yes, ST—
Jonathan Bennett [01:21:46]:
it's an ST4000, um, which is a Seagate model. Yes, Seagate. Seagate Technology, that's what ST stands for.
Ken McDonald [01:21:54]:
I've, again, I've been off at Embedded World, so ST there stands for like STMicro, different company, totally different. Now I've actually got two, uh, devices with Rust on them in this system, and this one's also an ST model.
Jonathan Bennett [01:22:18]:
That's cool. I like that you can use the short name and get all the long name info about something. I've not made use of this a whole lot in my Linux career, and I will definitely have to.
Ken McDonald [01:22:32]:
This is actually really useful. I actually came across how to use it while I was looking into what was causing Dolphin to lock up for about 10 seconds to sometimes a minute. Ew. Yeah, that's no fun. Did you figure it out? And everything, uh, it says that it could be that I'm starting to see a lot of, uh, sector failures on one or both of these drives. It's possible.
Jonathan Bennett [01:23:04]:
The first one I did, was that the 4 terabyte one? If you, if you run one of the top programs, it'll tell you whether you're waiting on, um, like input or, uh, if something is actually running the CPU. I forget the terms that it uses, but like top and htop, they'll all let you know. Yeah, yeah, it could very well be a drive giving you problems. All right.
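[Editor's note: the term Jonathan is reaching for is iowait, which procps top abbreviates as "wa" in its %Cpu summary line. The raw counter is always available in /proc/stat, which this sketch reads directly.]

```shell
# Field 6 of the "cpu" line in /proc/stat is cumulative iowait time
# in jiffies — the time the CPU spent idle waiting on disk I/O.
# top and htop derive their "wa" percentage from this counter.
awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat
```

A drive that is failing or retrying bad sectors will often show this number climbing while a program like Dolphin appears frozen.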
Jeff Massie [01:23:35]:
Very cool tip. Jeff, what do you have for us? Well, nothing that serious. I just, I just figured I'd throw in something fun for the Steam Deck fans. And it's ArchDeckify. Basically, it's a script to set up a SteamOS-like gaming environment. And you will need Arch or an Arch-based distribution, the SDDM display manager, and a compatible GPU. They do make some notes that NVIDIA hardware might have a few more issues to overcome, because Steam Deck is normally AMD, but it doesn't say you can't use NVIDIA.
Jeff Massie [01:24:14]:
It's just, be aware there might be a couple more little hoops to jump through. A gamepad is best for the UI experience because, again, it's a Steam Deck. And KDE Plasma is also recommended for the best experience. You can use other desktops, but again, to optimize your experience, go with KDE. Uh, Deckify lets you choose your desktop session, and you can switch to a full-screen gaming experience just like you'd find on SteamOS, like on the Steam Deck. You can get auto-login through SDDM, so it pops right in, and it allows easy switching between game mode and desktop mode. To install it, there's a one-line command which I won't bore our listeners by going through, since it contains a longer URL and a few layers of directory, but basically it's pretty easy to install and run.
Jeff Massie [01:25:11]:
Now they do add the typical warning that, you know, okay, you're installing this, you could change some important system configurations and you could have some instability, but, you know, basically you're on a non-distribution piece of software. So be, be aware that, you know, in the off chance something goes off the rails, it's not their fault if it does. But people seem to have pretty good luck with it. So I won't go into all the details on usage, but it basically turns your PC into like a Steam Deck. And if you follow if you follow the link in the show notes to the GitHub page for the Arch Deckify, you'll find the install link, the rest of the documentation, and you'll have your very own powerful Steam Deck-ish system.
Jonathan Bennett [01:25:59]:
Yeah. Also useful for like a home theater PC setup, I would think.
Ken McDonald [01:26:04]:
Doing a game on it. Yeah. Pretty cool.
Jeff Massie [01:26:06]:
Yeah.
Jonathan Bennett [01:26:06]:
Did you actually want to do this on bare metal or in a VM?
Jeff Massie [01:26:11]:
On bare metal. Yeah, you're gonna be gaming for sure. Yep, yep. Because then, then if you, if you do it in a VM, then you're going to deal with all the GPU pass-throughs, and there's a whole lot of extra heavy lifting you'd have to do to make it work right. Yeah. And this is, this is a script to set up the configuration, so it's not like it's totally redoing your system or loading a ton of things.
Jonathan Bennett [01:26:41]:
It's just configuring things so that it mimics somewhat a Steam Deck. Yep. So I've got a tip for you today as well. And that is Control+R of all things. And I didn't know about this until just recently, but this is a search function. It's the reverse search built into Bash. And it will, let's see if I can hide the logo. I don't know if I can easily hide the logo, but it will, you see, you say, hey, I remember I did something with PIO, but I don't remember what the PIO command was.
Jonathan Bennett [01:27:20]:
Well, so you can just start typing PIO and it'll show you the most recent command that you ran that had PIO in it. If you want to find the next older one while you're right here, you just hold Ctrl and hit R again, and it will take you back a step in history. So you can do that and look through all of them. And so in this case, I was trying flags, and eventually you get to the very, very end that it says, that's it, kiddo, I don't got any more to show you. Um, and I believe Escape will drop you out of that and happens to put the, uh, the, the most recent command on the screen. So let's see. Um, oh, interesting. Yeah, Ctrl+C then will, uh, drop, drop you out without putting it there.
Jonathan Bennett [01:28:10]:
Um, and so essentially what this is doing is it's moving you back through history in exactly the same way that like hitting the up arrow does. It just, it lets you search instead of stepping through them one at a time.
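[Editor's note: a tiny non-interactive sketch of what Ctrl+R is doing. The history lines here are made up for the example; interactively, Bash searches your real history the same way, substring match, most recent first.]

```shell
# A stand-in for shell history, oldest first:
hist='ls -la
pio run --target upload
git status
pio device monitor'

# Like Ctrl+R, the match can fall anywhere in the line; tail -n 1
# keeps the most recent hit, which is what Ctrl+R shows first.
printf '%s\n' "$hist" | grep pio | tail -n 1
# → pio device monitor
```

Pressing Ctrl+R again steps to the next-older match, the equivalent of dropping the `tail` and walking up the grep results.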
Jeff Massie [01:28:23]:
Uh, this is not part of my Linux muscle memory, but it really should be. Yeah, how did we not talk about this before?
Jonathan Bennett [01:28:30]:
I use this all the time. I don't, and that's the thing.
Ken McDonald [01:28:33]:
And I'm not sure how I didn't know it was a thing. I just use an alias, srch, to search back through my history for something.
Jonathan Bennett [01:28:44]:
Yeah, and there are other ways that you can do it, but this is nice because it's so, well, it's actually, it's live. You're live interacting with it. And you can just continue hitting Ctrl+R until you get to the one that you actually want to run, and then you hit Enter and it's there.
Ken McDonald [01:29:00]:
So it's pretty cool.
Jonathan Bennett [01:29:01]:
What you start typing doesn't have to be at the beginning either. Correct.
Ken McDonald [01:29:05]:
It searches anywhere in the string. Yeah, that's a good call out for sure.
Jonathan Bennett [01:29:12]:
I started typing just "help", playing around with it when I had it open. My tendency is I will write history, the pipe symbol, and then a grep. And so I'm grepping for a particular thing. This does the exact same thing. It's just, it's faster, and you can just run it instead of having to copy and paste or retype it out. So yeah, it's cool. And I'm with Jeff. How have we never covered this before? Very standard.
Jonathan Bennett [01:29:41]:
All right. That is the show. I'm going to let each of the guys get the last word in.
Jeff Massie [01:29:45]:
I know both of them have something. We'll let Jeff go first. Well, nothing too major. Just a little bit of poetry. A file that big, it might be very useful, but now it's gone.
Jonathan Bennett [01:30:06]:
Have a great week, everybody. All right.
Ken McDonald [01:30:09]:
And same thing for Ken. Any last words for everybody? Yes. I came across a quote by Ted, and I hope I say this last name right, Ts'o, about what happens if we are sloppy about banning all code that has ever been built using AI-assisted technology. You want to read that quote?
Jonathan Bennett [01:30:33]:
Just follow the links in the show notes to actually read the quote.
Ken McDonald [01:30:36]:
Is it not pithy enough to read out for us? Is it pretty long? Uh, it's, uh, gonna take a good minute or two, but if you want, I can try to read it without stumbling over words. Yeah, you can read that. That's not too long. Okay, according to Ted Ts'o, quoting him: I will again note that LTS kernels have been created using machine learning, now here in quotes, we have AI models composed of neural networks, as early as 2018, to find kernel commits containing bug fixes that should be backported to the stable branches. Given that people seem to be throwing around AI slop without defining precisely what they mean by AI, if we are sloppy about banning all code that has ever been built using AI-assisted tooling, you'd have to start shipping the Linux kernel back to the version used in Debian 8 Jessie.
Jonathan Bennett [01:31:43]:
There you go. AI has been with us for quite a while.
Jeff Massie [01:31:47]:
It's just now having its moment, its moment in the sun. And we talked about that a little bit last week, where it was like, well, what does AI actually mean? Where do you exactly draw that line? Because I remember AI programs on the Commodore 64.
Jonathan Bennett [01:32:07]:
True, true. They weren't that great, but yeah, I mean, that's what the Lisp language was originally for. And, and one of the big things that they were doing at the MIT labs is trying to get artificial intelligence working.
Ken McDonald [01:32:18]:
So yeah, it's been around for a long time.
Jonathan Bennett [01:32:24]:
Then they changed to calling it machine learning. Yep, absolutely. All right, well, that is the show, and we sure appreciate the guys being here. Jeff and Ken, thank you both. And we, uh, we've had a lot of fun. Um, I will, I will say that if you want to find me, there is of course Hackaday. That is still where Floss Weekly lives.
Jonathan Bennett [01:32:43]:
And also, of course, Meshtastic and Meshtastic Solutions, that's actually my day job these days. Uh, having a lot of fun there, getting to go to cool and fun places like Embedded World this past week. Possibly going to make it to an Ubuntu conference in a couple of months, getting invited there to do a workshop. Not a keynote speaker or anything like that, but invited to do a 45-minute talk. And looking forward to that as well. I'll get some more information about that as we get closer and it gets all finalized and settled. And we appreciate everybody that's here, whether you watch or listen, if you get us live or on the download, we're glad you're here. And we will see you next week on the Untitled Linux Show.