Security Now 1066 transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
Leo Laporte [00:00:00]:
It's time for Security Now! Steve Gibson is here. Oh, we have a lot to talk about. ShinyHunters says they have a lot of personal information from a company that was not going to pay the ransom. Billions of US Social Security numbers leaked. How's that possible? Apple adding cameras to its gadgets. Is that a good idea? And the US's new freedom.gov website. Plus, we'll talk about that study that came out last week about password managers. Are they secure? TL;DR: don't worry, hair not on fire, but Steve will have the details next on Security Now.
Leo Laporte [00:00:43]:
Podcasts you love from people you trust.
TWiT.tv [00:00:48]:
This is TWiT.
Leo Laporte [00:00:53]:
This is Security Now with Steve Gibson. Episode 1066, recorded Tuesday, February 24th, 2026. Password leakage. It's time for Security Now! Oh, we wait all week for Tuesdays, but, uh, Tuesday has come. Congratulations, you made it. Here's our, our hero of the day, Mr. Steve Gibson, our guru in security, privacy, and all of the above. Hey, Steve.
Steve Gibson [00:01:21]:
Leo, great to be with you again as we wrap up February and head into March. We should explain— I should explain to the 20,000+ listeners whose email addresses I have, who have signed up for the weekly mailing, that they'll be getting a surprise this coming week, because you and I are going to be together in Florida on Tuesday, Wednesday, Thursday of next week. So we're pre-recording next week's Tuesday podcast on Sunday before the Sunday show, which means I will be working on it Friday and Saturday to get ready for Sunday. And I'm apt to send it off to everybody Saturday. Might as well if it's done. It's going to be done.
Leo Laporte [00:02:15]:
So, yeah, the upside of this is you're going to get two Security Nows next week, because you're going to get the regular Security Now— we'll record on Sunday and release as usual on Tuesday. But then the presentation Steve's doing at Zero Trust World late Tuesday will also be put out as a special podcast. And actually that one's going to be really interesting, I think. It's taking off on your— if you've listened to Security Now, you understand his thesis that the real threat these days is coming from inside the building.
Steve Gibson [00:02:48]:
Yeah, and it is the roughest thing to secure, because you're telling your own employees, who are well-meaning, that, sorry, we don't trust you, right? And it's like— no, it's like, we can't. We don't even trust our own CEO. He's going to press the wrong button, you know, sink the whole organization.
Leo Laporte [00:03:12]:
Guarantee you that's the case. Well, what else is coming up this week?
Steve Gibson [00:03:16]:
As promised, we're going to talk about what all that was about: ETH Zurich and the deep analysis focused on 3 prominent password managers. And they made a point in their 28-page research document of saying that they chose these because these were the three that had at least some of their client-side source available. Of course we know that in the case of Bitwarden, all of the source is available. But it's a point that I've made before, which is that it just seems wrong that researchers are forced to first reverse engineer some product whose security they want to verify, and then, after going through all that work, need to proceed to do the verification. It's just like, guys, we're asking a lot of them. Anyway, I guess it's worth it because they do it. So what I want to focus on is less the minutiae of the details, because they've been fixed now, but the nature of the problem. While yes, they found problems, it never was a pants-on-fire issue; but mostly, why were there problems? What was the source of these problems? Because isn't this supposed to be easy? We'll talk about why it's not.
Steve Gibson [00:05:00]:
So I guess I'm going to start off by sharing a piece of email that I received last week that just made me look and go, really? You're kidding me. My certificate authority is warning me to prepare for the inevitable, which of course they brought about. And then I'm going to drop this, I promise, for a while, although I do want to make sure that I don't forget to tell people that the 5 most downloaded pieces of freeware on GRC, on the freeware page, are now co-signed not only with the original DigiCert EV certificate but with that new one that I managed to finagle from IdenTrust. I found a beautiful $72 HSM called SmartCard-HSM, which I'm very pleased with. It's all supported by open source software. I did struggle to get it all working, and since I'm now not going to need to mess with it for 3 years and will promptly forget everything I just figured out, I'm going to spend some time to document the details of what I went through, and I will share that online. I know from all of the feedback that I've had from our listeners that there are many people out there who are interested in or need to do code signing. And I've worked out, I think, a very flexible, powerful, and inexpensive path for doing that, having just done so myself.
Steve Gibson [00:06:37]:
So good, I can't wait to hear that. This is a cute little thing, this little SmartCard-HSM, available internationally, and actually one of its retailers is locally here in Orange County. Anyway, so that's all done. Then I also want to talk about another piece of lunacy that we're going to have fun with, Leo. 3 U.S. states are attempting to ban 3D-printed firearms by, once again, legislating it. Even though— sorry, you can't get there from here. That's not going to deter anybody.
Steve Gibson [00:07:20]:
It was triggered by the third state to join. It's Washington State, New York State, and then most recently California that have decided, yeah, let's just not have those. We don't want 3D printers to be able to print guns, so we're just going to say you can't. Also, a company denied their ransom, and ShinyHunters has leaked just shy of a million— 967,000— people's personal details online. We'll touch on that. Also, in a different report, billions— literally, it said billions— of U.S. Social Security numbers have been leaked. The only problem is, only around 400 million or so have ever been issued. So I don't know how you have billions unless you have lots of duplicates.
Steve Gibson [00:08:10]:
Anyway, we'll look at that. I wanted to touch on Apple planning to add cameras to 3 new gadgets. Also, Firefox has hit another end-of-life event for Windows 7, 8, and 8.1. Russia made a mistake blocking some open-source software that they themselves need. And there's a weird site, Leo, freedom.gov, which our government is planning to put out. We'll talk about that. And I'm interested in knowing from our European listeners whether they see something different than I do when I go to freedom.gov here in Southern California. Apparently LLMs will be happy to give you a password if you ask them.
Leo Laporte [00:09:02]:
Don't, don't ask them. Well, even if you ask it, don't use it.
Steve Gibson [00:09:07]:
That's, I think, more important. Yeah, I mean, you can ask all you want. We're going to talk about that also. As predicted, that exploit that has had me so worried and upset, which is known as the ClickFix attack, turns out to be every bit as popular as I was worried it was going to be. We have a listener who was convinced, based on his NextDNS logs, that his computer had picked up a virus. And based on what he shared with me, I agreed— until I looked more closely, and then I had to smile when I saw what the cause was. And then we're going to look at how 3 popular password managers could get things so wrong. And then, of course, a picture of the week where everyone's first reaction is, well, Photoshop. Turns out, apparently not.
Leo Laporte [00:10:07]:
I haven't looked. I shall look. My eyes have been sealed. I'm going to unseal them. We'll all look at it together in just a bit.
Steve Gibson [00:10:14]:
But it is funny. Yeah.
Leo Laporte [00:10:15]:
First, a word from our sponsor. This episode of Security Now is brought to you by GuardSquare. Are you a mobile app developer? You need to know about GuardSquare. Mobile apps today have become an inescapable part of life, ranging from financial services to healthcare, retail, and entertainment. We trust mobile apps with our most sensitive personal data. That's what makes them useful. But a recent survey showed that 72% of organizations experienced a mobile application security incident last year. 92% of respondents reported rising threat levels over the last 2 years.
Leo Laporte [00:10:55]:
We see it in the headlines. Meanwhile, attackers who want your users' personal data are constantly finding new ways to attack your mobile app. They reverse engineer it, they repackage it, they distribute the modified app via phishing campaigns and via sideloading on third-party app stores. You don't want that to happen. By taking a proactive approach to mobile app security, you can stay one step ahead of these attacks and maintain the trust of your users. You owe it to us users. That's where GuardSquare comes in. GuardSquare delivers mobile app security without compromise, providing advanced protections for both Android and iOS apps, combined with automated mobile application security testing to find vulnerabilities and real-time threat monitoring to gain insights into attacks.
Leo Laporte [00:11:45]:
Discover more about how GuardSquare provides industry-leading security for your mobile apps at guardsquare.com. That's guardsquare.com. Do it for us, do it for yourself.
Steve Gibson [00:11:58]:
guardsquare.com.
Leo Laporte [00:11:58]:
We thank them so much for supporting Security Now. They're certainly in the right place. All right, picture of the week time.
Steve Gibson [00:12:06]:
So I gave this picture the caption, "But officer—"
Leo Laporte [00:12:11]:
All right, scrolling up. I think we've seen this kind of thing before. It does seem—
Steve Gibson [00:12:23]:
impossible. A traffic sign. It's like, what— what? Okay. So, several of our listeners have taken it upon themselves to try to locate this— like, you know, geolocate where this was taken. The clue is there's a Tim Hortons in the background there.
Leo Laporte [00:12:50]:
There you go.
Steve Gibson [00:12:51]:
Maybe it's Canada. And yeah, that's what I was thinking. And someone did say that there'd been some modifications to the street. So, okay, for those who can't see this, we're on a road— and this is one of our street sign pictures— approaching a T. You can see there's a guardrail on the far side of the intersecting road, so you can't go straight. You've got to turn left or right, right? Because you don't have a choice. There's a stop sign there, so yes, you'd certainly want to stop before you made your choice. The problem here is that up in front of all of that is one of those circle-red-slash signs.
Steve Gibson [00:13:41]:
Normally there would be, like, a right arrow with a slash through it telling you that you cannot turn right, or maybe a left arrow, depending. And you would expect that that would be reinforced with a one-way sign that you would also be confronting in the distance, just reminding you that when you get to the stop sign, this is a one-way street. Well, there is only one way, and that's straight up, apparently, because the sign that we're looking at has both direction arrows red-slashed. So you come up to this T-intersection, and if you've paid attention to the sign that you had to pass in order to get there, it says you can't turn left, you can't turn right. And we know that because of the guardrail there, you can't go straight. Thus, "but officer," when you, you know, get pulled over. But officer, yeah. Anyway, thank you.
Steve Gibson [00:14:37]:
Our, our listeners are sending these to me, and I do—
Leo Laporte [00:14:40]:
Do we have any theory as to why this exists?
Steve Gibson [00:14:45]:
No, no, no. I mean, as I said, the first thing you would think is that somebody photoshopped it. I don't know why it was that someone said that was not the case. What I heard— and it was just in passing as I was, like, scrolling through email— was that there had been some changes made to the road. Apparently they forgot to change the signage in order to stay synchronized. That makes sense. Okay, so I'm going to spend one more story on this, and then I'm going to leave it alone. I want to do it because the amount of feedback I've received from the range of our listeners whose lives are impacted by certificate authorities, one way or the other, has been extensive.
Steve Gibson [00:15:32]:
The subject line of the email I received from DigiCert, my certificate authority, last Wednesday just made me shake my head. The subject started off: Urgent, revalidate domains expiring February 24th due to new 199-day validity requirement. Okay, so here's what they wrote, with great urgency. Dear valued customer, we're reaching out with an urgent request to check your certificate domain validations before February 24th, 2026. As we communicated in previous emails, February 24th— by the way, that's today, right? We're recording this on February 24th. February 24th is the date when domain validation reuse periods will shorten to 199 days, down from 397 days, in accordance with the CA/Browser Forum's ballot SC-081 version 3. Our records indicate that you or your subaccounts have existing domains that will expire on February 24th because of this change. If your systems require immediate certificate issuance, your issuance could be delayed if you don't check and revalidate these domains before February 24th.
Steve Gibson [00:17:10]:
So what do you need to do? DigiCert CertCentral— with a circle-R, because that's registered— now displays which of your current domains will expire on February 24th, 2026 due to the change to 199-day domain validation reuse periods. Steps for revalidating domains that expire, and then they go through the rigmarole of how to march through their user interface. And then they finish with: we're here to help. We understand industry-driven— right, they had nothing to do with it, despite the fact that they're the biggest CA there is and voted for all of this— we understand industry-driven compliance changes pose significant challenges, and we're standing by to assist you. Please don't hesitate to contact your DigiCert account manager with any questions or concerns about the change to 199-day domain validation reuse periods. Thank you for trusting us with your digital security. Signed, the DigiCert team. Okay. So since DigiCert, a prominent, if not the most prominent, voting member of the CA/Browser Forum, themselves voted to bring about these changes, it seems a little odd for them to be sympathizing with their customers over the inconvenience that these changes create.
Steve Gibson [00:18:45]:
To be straightforward, they would state that these changes have been made in the interest of improved security. Okay, we might disagree with that— as we know I do— but at least then they would be genuine. What puzzled me in their note, which I read closely, was their statement that our records indicate that you or your sub-accounts have existing domains that will expire, blah blah blah, on February 24th. As we know, it's never the case that anything is ever done that causes existing certificates to suddenly become invalid. Right? They said, if your systems require immediate certificate issuance, your issuance could be delayed if you don't check and revalidate these domains before the 24th. Again, as I was saying, it's never the case, thank goodness, that anything is ever done that causes existing certificates to suddenly become invalid— obviously, revocation notwithstanding. It's always the case that the not-valid-after date continues to be honored.
Steve Gibson [00:19:59]:
When you think about that, it's clear that a certificate that contains a built-in not-valid-after date means valid until that date, and there's no way to, after the fact, have that no longer be true, because it's bound into the certificate. And this is why I went to all the trouble last week to establish a new code signing relationship with IdenTrust while 3-year certificates were still allowed by the CA/Browser Forum for code signing. As I mentioned at the top of the show, I am now the proud holder of a code signing certificate that will be valid until February of 2029. And nothing that happens between now and then, no matter what new insanity the CA/Browser Forum may enact, will change that. So the answer to the mystery of what DigiCert means here is the phenomenon I spoke about last week, which is the new need, which they created, to decouple certificate qualification from certificate issuance. Since a certificate, once issued, will always live out the duration of its valid life unless it's revoked, back when certificates were issued for 10 or even 5 years, the qualification for that certificate was determined at the time of the certificate's issuance. And that would be that.
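The point being made here, that validity is bound into the certificate itself, comes down to a simple date comparison against the notBefore and notAfter fields that every X.509 certificate carries. Here's a minimal illustrative sketch; the Cert class and function names are my own, not any real library's API:

```python
from datetime import datetime, timezone

# Hypothetical minimal stand-in for a certificate's validity fields.
# Real X.509 certificates carry these as notBefore / notAfter (RFC 5280).
class Cert:
    def __init__(self, not_valid_before, not_valid_after):
        self.not_valid_before = not_valid_before
        self.not_valid_after = not_valid_after

def is_time_valid(cert, now):
    # Valid exactly between the two dates bound into the certificate;
    # nothing a CA does later (revocation aside) can alter this window.
    return cert.not_valid_before <= now <= cert.not_valid_after

# A 3-year code signing certificate issued in February 2026:
cert = Cert(datetime(2026, 2, 20, tzinfo=timezone.utc),
            datetime(2029, 2, 20, tzinfo=timezone.utc))

print(is_time_valid(cert, datetime(2027, 6, 1, tzinfo=timezone.utc)))   # True
print(is_time_valid(cert, datetime(2029, 6, 1, tzinfo=timezone.utc)))   # False
```

Which is why no rule change at the CA/Browser Forum can retroactively shorten an already-issued certificate: the check only ever consults the dates inside the certificate.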
Steve Gibson [00:21:40]:
You verify that you qualify, here's your 10-year certificate, see you in 10 years. But with the changes that are bringing about this shortening of certificate lifetimes, automation is effectively required. And, you know, they're going to be seeing that more and more. As we know, the industry intends to keep marching certificate lifetimes downward until they reach a maximum of 47 days in 2029. We're about to drop to a 200-day maximum; that's happening in the next week or two. Then it will be 100 days, where it holds for a couple of years, and then it finally lands on 47 days.
Steve Gibson [00:22:25]:
So the issuance of organization validation certificates, which is what DigiCert produces, needed to be decoupled from the qualification to receive them— issuance decoupled from qualification. Until now, the CA/Browser Forum would allow organizations to go up to 825 days— that's around 27 months— between qualification intervals before an organization needed to be revalidated. But those 825 days now drop to 398 days, which is what DigiCert's letter was about. They're saying that one or more of the validations that they performed for Gibson Research Corporation's identity occurred more than 398 days ago. Until today— literally today, February 24th— that was just fine. But as of today, February 24th, that's no longer true. The validations that were valid yesterday are no longer valid today.
Steve Gibson [00:23:52]:
So anybody who needs to reissue a certificate soon needs to recognize that although they could have done so yesterday, they can't do so tomorrow. Again, just craziness, but that's the way this industry is being played. So, you know, don't just go thinking that all you need to do is push a button to issue yourself a certificate. Oh no, your button has been disabled. You no longer qualify until your authority has had the chance to look you up and down again and make sure you're still you. And what's more, that will now be annual. Oh yes, those wild times where you could go 825 days— and once upon a time 5 years, 10 years, no problem— are over. So this explains why, once my previously paid certification with DigiCert is over in 2 years, I'll be happily dropping OV— these organization validation certificates— in favor of the much cleaner and simpler DV— domain validation— certificates that Let's Encrypt's automation has been gleefully issuing to the vast majority of the internet, to anyone who wishes to bring a server online.
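The arithmetic behind the change is simple enough to sketch. Assuming a hypothetical last-validation date (the date below is made up purely for illustration), the gap between the old 825-day and new 398-day identity-validation reuse windows looks like this:

```python
from datetime import date, timedelta

OLD_REUSE_DAYS = 825   # roughly 27 months, the previous CA/Browser Forum limit
NEW_REUSE_DAYS = 398   # the limit described in DigiCert's email

def revalidation_due(last_validated, reuse_days):
    # The day after which a CA may no longer reuse a prior identity validation
    # and must put the organization through the vetting process again.
    return last_validated + timedelta(days=reuse_days)

last = date(2025, 1, 10)   # hypothetical date an organization was last validated
print(revalidation_due(last, OLD_REUSE_DAYS))   # 2027-04-15 under the old rule
print(revalidation_due(last, NEW_REUSE_DAYS))   # 2026-02-12 under the new rule
```

Under the new window, any validation performed more than 398 days ago has already lapsed, which is exactly the situation DigiCert's email was warning about.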
Steve Gibson [00:25:22]:
All we can do is go along with it, because we have no control. Okay. So I know you were up to speed on this, Leo, because you reacted to it when I said I was going to talk about this. We have another example of lawmakers apparently thinking, we don't know how you techies are going to do it, but that's not our problem. We're going to make it a law so that it becomes your problem. And I just wanted to point out— thanks to our listener Tom Minnick for bringing this little tidbit to my attention. Tom sent a link to the reporting which appeared on, and was covered by, the well-known and popular Adafruit website. Adafruit, A-D-A-F-R-U-I-T, for those who may not be aware, is a highly regarded hobbyist maker hardware electronics website and retailer.
Steve Gibson [00:26:27]:
They posted the news of this new numbskull legislation under their headline, California's new bill requires DOJ-approved 3D printers that report on themselves. So here's what Adafruit wrote. They said: California's new bill requires Department of Justice-approved 3D printers that report on themselves, targeting general-purpose machines. Assemblymember Bauer-Kahan introduced AB-2047, the, quote, California Firearm Printing Prevention Act, on February 17th— so just recently. The bill would ban the sale or transfer of any 3D printer in California unless it appears on a state-maintained roster of pre-approved makes and models certified by the U.S. Department of Justice as being equipped with, quote, firearm blocking technology, unquote. Manufacturers would need to submit attestations for every make and model. The DOJ would publish a list.
Steve Gibson [00:27:51]:
If your printer is not on the list by March 1st, 2029, it cannot be sold. In addition, knowingly disabling or circumventing the blocking software would be a misdemeanor. And it gets worse, huh? Much worse. Okay, is everybody sitting down? Adafruit continues: we've been tracking this pattern. Washington State's HB 2321 requires printers to include blocking features that cannot be defeated by users with significant technical skill. You know, good luck with that on open source firmware. New York's budget bill S.9005 buries similar requirements in Part C, sweeping in CNC mills and anything capable of subtractive manufacturing. California's version adds a certification bureaucracy on top.
Steve Gibson [00:28:55]:
State-approved platforms, state-approved software control processes, state-approved printer models, quarterly list updates, and civil penalties up to $25,000 per violation. As Michael Weinberg wrote after the New York and Washington proposals dropped, accurately identifying gun parts from geometry alone is incredibly difficult. Desktop printers lack the processing power to run this kind of analysis, and the open-source firmware that runs most machines makes any blocking requirement trivially easy to bypass. Okay, so I'll interrupt to note, once again: when printers that can print weaponry are outlawed, only outlaws who wish to print weaponry will own outlaw printers. Nothing will be accomplished to curtail the fact that a 3D printer can be used to print a dangerous machine. Adafruit continues: the Firearms Policy Coalition flagged AB 2047 on X, and the reactions tell you everything. John LaRoe called it stupidity on steroids, pointing out that a simple spring-shaped part has no way of revealing its intended use. The Foundry put it plainly, quote, AB 2047 would require 3D printers to run state-approved surveillance software and criminalize modifying your own hardware, unquote.
Steve Gibson [00:30:47]:
Adafruit continues, as we've said before on this blog when we covered Washington and New York, it doesn't matter if you're pro or anti-gun. The state should prosecute people who make illegal things, not add useless surveillance software to every tool in every classroom, library, and garage in the state. And as you can see, these bills spread. That's how a small group can push legislation into the entire country. First Washington proposed theirs, then New York, now California. Once these three states pass a law, that's 20 to 25% of the country by GDP and population, and thus every manufacturer is forced to comply with a bad decision in order to stay in business. If you're a maker, educator, or manufacturer anywhere in the U.S., even outside these states, this is a problem. It's a problem now.
Steve Gibson [00:31:49]:
Adafruit's article mentioned Michael Weinberg. Michael is the executive director of NYU's Engelberg Center on Innovation Law and Policy. He's a board member of the Open Source Hardware Association and, as he describes himself, a maker of poorly made things. He's also, however, an astute thinker. And since I think this topic is extremely interesting, and that our listeners are likely to find it so, I wanted to also share what Michael wrote in the wake of the New York and Washington State bills. The title of Michael's posting was 3D Printers Cannot Effectively Screen for Gun Parts. He wrote: this post is a handy reference for the technical reasons why requiring 3D printers to screen for gun parts is not an effective way to reduce guns or gun violence. I'm publishing it on the occasion of both New York and Washington State introducing bills to require this type of screening.
Steve Gibson [00:32:57]:
In addition to being a topic I've been researching for over a decade, the question of how to know if a 3D printer is printing a gun part is something I've spent a lot of time working on while overseeing trust and safety at a large 3D printing service provider. So consider that: he's been overseeing trust and safety at a large 3D printing service provider, where the question of what is it we are printing with our commercial-grade machines comes up. So, he said, this post is not about debating the larger legitimacy of gun control, in order to focus on the technical reasons why requiring 3D printers to identify and refuse to print gun parts does not work. It assumes that gun control is a reasonable and legitimate action of governments. Broadly speaking, it's responding to requirements that all 3D printers check prints to make sure they're not gun parts. If the part is a gun part, the printer would refuse to print it. The short version is that accurately identifying gun parts is incredibly difficult, and the hackable nature of desktop 3D printers makes it trivial to circumvent any requirements to even try. Here's the slightly longer version: matching files is fragile. And— you know, we've talked about hashes, right? The whole point of a hash is that file matching is fragile: any change to the file produces a different hash.
Steve Gibson [00:34:45]:
So this is working against you here. He said: the first reason that requiring 3D printers to identify gun parts is ineffective is because analyzing 3D files is complicated. Any attempt to identify gun parts will miss many parts that are actually for guns and may flag a number of parts that have nothing to do with guns— you know, they're just kind of gun-like. Expensive engineering design software is good at evaluating specific properties of a 3D file, like where mechanical stress will occur over a lifetime of use. However, even that software cannot tell you what a part actually does. Is that spring for a door, or a shock absorber, or a catapult? This challenge is exacerbated by the fact that guns are just mechanical objects.
Steve Gibson [00:35:44]:
That means that there are many ways to design any individual part, and many individual parts of guns will resemble mechanical parts with totally benign uses. Put another way, devoid of other context, a switch for a gun safety looks a lot like a switch for a door. Broadly speaking, there are two ways to think about doing file matching. Algorithmic analysis is one. This approach imagines a piece of software that can analyze a file and determine, with some level of certainty, if it is a gun part or just a hinge. Assuming that this software exists— which it does not at the time of this writing— it is reasonable to expect that such an analysis would be computationally expensive. 3D printers do not have the onboard processing power to do this kind of analysis. Requiring that they include chips capable of this kind of analysis would fundamentally change the economics of 3D printer design, akin to requiring that all bikes include jet engines.
Steve Gibson [00:37:01]:
And I'll interrupt Michael for a moment to comment that he's not exaggerating how totally inadequate any 3D printer is for performing any sort of complex analysis. 3D printers are extremely simple and inexpensive. I've owned a number of them. They have nearly no brain power themselves. They're extremely simple robots that read instructions from a USB stick or SD card. There are some that fix a liquid resin using an image, and others that move a plastic extruder around in 3-space— basically saying, move the plastic extrusion head from where it is now to coordinates X, Y, and Z. The resin-fixing images, or the instructions with their coordinates, were created outside the printer by a real computer that's running some sort of engineering design conversion software. Once the design is ready, it's converted into fabrication instructions, which are typically written to a storage device and then transferred to a standalone printer, which simply follows the instructions step by step without in any way understanding what it is that it's being asked to print.
Steve Gibson [00:38:29]:
That understanding just isn't there. So these printers are inexpensive— I mean, really inexpensive, hundreds of dollars only— because they could not be any more rudimentary. They've been stripped of anything that they don't need. Michael continues, of course, writing: the 3D printer could upload the file to a cloud somewhere and let the processing happen there. However, internet connectivity is not a default feature on desktop 3D printers. You could require that all 3D printers maintain a constant connection to the internet in order to operate, but again, that would fundamentally change how people use their printer. There are also many legitimate use environments where constant internet connectivity is neither possible nor desirable.
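To make the move-by-move point concrete, here's a toy sketch of the kind of instruction stream an FDM printer consumes. The G0/G1 commands are real G-code, but the tiny interpreter is my own illustration; notice that the machine only ever tracks a head position and has no model whatsoever of the finished object:

```python
# Toy interpreter for a few lines of G-code, the move-by-move instruction
# format most desktop FDM printers consume. The "printer" just tracks a
# head position; it has no notion of what the finished object is.
def run_gcode(lines):
    pos = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    for line in lines:
        parts = line.split(";")[0].split()        # strip trailing comments
        if parts and parts[0] in ("G0", "G1"):    # rapid / linear move
            for word in parts[1:]:
                axis, value = word[0], word[1:]
                if axis in pos:                   # ignore E (extrusion), F (feed)
                    pos[axis] = float(value)
    return pos

program = [
    "G1 X10 Y20 Z0.3 ; first layer move",
    "G1 X15 Y20 E0.5 ; extrude while moving",
]
print(run_gcode(program))  # {'X': 15.0, 'Y': 20.0, 'Z': 0.3}
```

That is essentially the entire "understanding" a printer needs: go here, then there. Any mandated analysis of what those moves add up to would require vastly more computation than this.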
Steve Gibson [00:39:20]:
Of course, we immediately think, what about malware downloading itself into our 3D printers because they now have internet connections? It's like, no, please, let's not go there. And he says: of course, this raises the question of who's responsible for maintaining that directory and keeping it secure— meaning in the cloud. So, what about blacklisting, he asks? If it's not possible to analyze the true purpose of each file, it might be possible to at least match them against a known database of gun parts, right? This approach also has some serious shortcomings. First, there's the question of keeping that database up to date on the printer. That would require constant, or at least regular, internet connectivity for the printer. That raises the same issues as discussed in the last section. Second, also as discussed above, analyzing and matching 3D files is computationally expensive. The most logical way to do that with the processing power of the 3D printer would be to use a hash table of known gun parts, comparing a hash of the file to be printed against the table.
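A hash-table blacklist of the sort Michael describes is nothing more than a set lookup, and its fragility is easy to demonstrate. In this sketch the "model file" bytes are made up for illustration; an exact match is caught, but flipping a single bit slips right past:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Hash the raw file bytes; any change to the input changes the digest.
    return hashlib.sha256(data).hexdigest()

# Hypothetical blacklist of known gun-part model files, stored by hash.
original = b"...stand-in bytes for a blacklisted 3D model file..."
blacklist = {sha256_hex(original)}

print(sha256_hex(original) in blacklist)           # True: the exact file is caught

# Flip one bit somewhere cosmetically irrelevant; the hash changes completely.
tweaked = bytearray(original)
tweaked[0] ^= 0x01
print(sha256_hex(bytes(tweaked)) in blacklist)     # False: trivially evaded
```

That avalanche behavior is a feature when you're verifying downloads, and exactly the wrong property when you're trying to recognize "the same part, slightly modified."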
Steve Gibson [00:40:33]:
The primary problem with both geometry matching and hash matching is that it's incredibly fragile. The smallest change that had no impact on the functioning of the part, right, one bit changed, would completely change its hash, effectively hiding it from the blacklist. That would make it trivial for anyone to circumvent. Identifying which changes are functional and which are merely aesthetic is not easy. That's especially true if people are making those changes with the specific goal of tricking the printer into printing a gun part. You know, make a cosmetic change, change a tiny little thing, a little tick somewhere, and now it looks like an entirely different file because it produces a completely different hash. As we know, that's the nature of hashes. He writes, 3D printers print themselves.
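To make the fragility Steve is describing concrete, here's a minimal Python sketch of the avalanche effect. The file contents below are a hypothetical stand-in for a 3D model file; the point is only that flipping a single bit of the input yields a completely unrelated SHA-256 digest, so an exact-hash blacklist lookup finds nothing.

```python
import hashlib

# Hypothetical contents of a blacklisted 3D model file.
original = b"solid widget ... facet normal 0 0 1 ..."

# Flip one bit of the first byte, a change no slicer would even notice.
tweaked = bytearray(original)
tweaked[0] ^= 0x01

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(tweaked)).hexdigest()

print(h1)
print(h2)
# The two digests share essentially nothing in common, so looking up h2
# in a table of known-bad hashes containing h1 produces no match.
```

Any screening scheme keyed on exact hashes is therefore defeated by the most trivial edit, which is exactly the circumvention problem the post describes.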
Steve Gibson [00:41:29]:
The second reason this proposal is ineffective is because 3D printers are made in an incredibly distributed way. There are dozens of ways to make your own 3D printer using open source, user-modifiable parts. Even non-open source printers are highly hackable. And the point he's making is that buying one is not the only way to get one. You can make one. He says, as a result, there's no way to mandate that a technology that starts in a 3D printer remains in a 3D printer. The software that runs most printers is open source, meaning a single update would circumvent any screening measures. This places 3D printers at the opposite end of the spectrum from 2D printers.
Steve Gibson [00:42:23]:
This was interesting, I hadn't thought about this before. He wrote, anti-counterfeit systems prevent 2D printers from printing currency. To the extent that these rules are effective, he says, parenthetically, I'm no expert, but they are often cited in these discussions as successful models, it is because the 2D printing industry is fairly concentrated and proprietary. 2D printer companies are actively hostile to users who want to modify their products, significantly raising the barrier to hacking around any countermeasures. Desktop 3D printers are the opposite. They all trace their heritage back to open source printers, and users expect to be able to modify, extend, and hack their own printers. That means that workarounds for a screening mandate would be easy to develop, distribute, and implement.
Steve Gibson [00:43:26]:
Many open-source software packages might even include the circumvention by default, meaning users would implement it without even actively intending to do so. 3D printers are general-purpose machines. This post is focused on the technical challenges with requiring 3D printers to screen every file they print for gun parts. Nonetheless, it would be incomplete without a brief mention of how potentially invasive this sort of requirement is. 3D printers are general-purpose machines that can be used for good or ill. Just as we do not require the phone company to monitor every phone call in order to prevent customers from using phones to commit bank fraud, we should be wary of requiring our 3D printers to monitor every print in order to prevent one possible type of print. That type of invasion might be reasonable if it was effective. However, for the reasons described, it is unlikely to prevent even a modestly motivated person from using their printer to create gun parts. If an intervention is both highly invasive and unlikely to be effective, it's probably not an ideal policy, which I think is putting it mildly. So I think Michael did a great job of detailing the specific 3D printing issues which would surround any attempt to manage or control what a 3D printer can and cannot print.
Steve Gibson [00:45:01]:
And while I have no interest in ever owning or printing a firearm, you know, there it is again. Yeah, we are— that sounded like a ground, uh, it did, didn't it?
Leo Laporte [00:45:17]:
Came and went like when you plug in a guitar to an amp.
Steve Gibson [00:45:22]:
Yeah, exactly. Yeah, anyway. I, I will continue. Yeah, yeah. Uh, while I have no interest in ever owning or printing a firearm, I'm a proud Californian, uh, and I'm annoyed by the fact that the state I love is enacting such moronic legislation. I mean, it isn't yet a law, but, you know, maybe the California Assembly is going to pass this. It would be nuts. But the broader concern is the large and growing degree to which modern technology appears to be outpacing legislators' ability to understand what they can and cannot have.
Steve Gibson [00:46:04]:
They cannot have a practical law, no matter how much they want one, to force 3D printers not to print gun parts. They just can't, no matter— as I said, no matter how much they want it. They also cannot have a law that absolutely preserves everyone's privacy while at the same time preventing child predators from abusing that privacy to commit their crimes. We all wish it were possible to have both, but we know it's not. I read some of the proposed new California bill, that's AB 2047. It's really quite awful. Um, I have the link to the bill's full text in the show notes for anyone who might be curious. A bad law that hits the books can usually be challenged by those who have a vested interest in the preservation of the status quo.
Steve Gibson [00:46:57]:
But in the case of 3D printing, which is mostly a hobby interest, it's unclear who might have the, you know, the deep pockets required to stand up against it, um, and fight it out in court. If that doesn't happen, it might be that the sale or transfer of 3D printers would be outlawed in those states which enact these dumb laws. Um, here are just two lines from California's proposed legislation. It says the bill, beginning on March 1st, 2029— so that's, you know, the same March 1st that's next week, just 3 years out— would prohibit the sale or transfer of three-dimensional printers that are not equipped with firearm blocking technology and that are not listed on the department's list of manufacturers with a certificate of compliance verification, except as specified. The bill would authorize a civil action to be brought against a person who sells, offers to sell, or transfers a printer without the firearm blocking technology. Okay, now as I said, we're approaching March 1st, 2026. So the purchase ban, if saner heads do not prevail and it does come into effect, is still a full 3 years away— March 1st, 2029. This means that no matter what, it will be possible for Californians to continue to purchase the 3D printer of their choice for the next 3 years.
Steve Gibson [00:48:35]:
If you live in an affected state, currently New York, Washington, or California— I don't know what the calendar, uh, states in the New York and Washington bills, but at least in California, if you've been thinking that you might like to explore 3D printing in your garage, you know, to print widgets of whatever sort, keep an eye on the date. There may be a deadline coming for being able to do that, insane as that is. Yeah, impossible. Leo, it's just so wrong-headed. It's like some, you know, some non-techie legislator heard that people had 3D printers in their garage and they were printing guns. So it's like, oh, let's have a law that makes printers unwilling to do that, right?
Leo Laporte [00:49:30]:
Well, I mean, it's— I mean, there is 3D gun printing going on.
Steve Gibson [00:49:36]:
They make those guns. Yes. Um, you know, we should have a law, Leo, that prevents resin from being willing to be formed into that shape, right? How's that?
Leo Laporte [00:49:45]:
That's the problem, is it's not prac— it's not technically possible, right? Um, Luigi Mangione's gun was partially 3D printed, for example. I mean, this has been an issue, but this isn't the solution, obviously.
Steve Gibson [00:50:00]:
Yeah, and I mean, I get it, right? I mean, I get it that, uh, printing a gun in a non-ferrous substance will render it undetectable, you know, it goes right through those metal detectors, right? It can no longer be detected— I mean, it's not a good thing. But we're back in the same problem we've often talked about. If crypto is made illegal, then only bad guys will use crypto. If 3D printing is made illegal, then only bad guys will— I mean, will be using 3D printers to print guns, and everybody else is—
Leo Laporte [00:50:43]:
I understand the motivation. The real issue is you can't do it technically, as you pointed out. A spring is a spring. It's not— it could be for a variety of purposes. You can't really identify parts that are going into a gun.
Steve Gibson [00:50:57]:
And imagine the frustration of designing, uh, a particular widget, you know, to allow your baby carriage, right? Yeah, well, to allow your baby carriage to roll better, and your printer says, oh, that looks like the barrel of a gun, sorry. Can't print that.
Leo Laporte [00:51:15]:
It's like, what? Darren also points out you can't 3D print bullets. They're still metal. So you're going to see the ammunition even if you—
Steve Gibson [00:51:26]:
And actually there's been a lot of dialogue in the past about, okay, well, maybe we need to control ammunition because that would be a better, more effective thing, right? Yeah, yeah, yeah. Oh well, as would following the advice of this next sponsor, Leo.
Leo Laporte [00:51:44]:
Yes. Now I, uh, I want to kind of a little bit bring you into this because the next sponsor is Bitwarden, who we love. Open source solution. And you're going to talk a little bit about this ETH Zurich, uh, finding about the risks of, uh, password managers if a bad guy could somehow get the vault, right?
Steve Gibson [00:52:03]:
Yes, that was one of the issues too. I mean, we spent a lot of time looking at client-side vulnerabilities, like, you know, while the password manager is unlocked, if malware was on your computer, what could it do? These guys did a very different thing. They said basically, if the cloud provider's entire server infrastructure was subverted, what could happen, right?
Leo Laporte [00:52:35]:
And, you know, okay, we'll talk about it in a lot more detail, of course.
Steve Gibson [00:52:39]:
Yes, later in the show.
Leo Laporte [00:52:39]:
I just wanted to bring it up because I was really impressed by Bitwarden's response, which is, thank you for this research, it's going to help us lock down what we do. And this is one of the advantages of being an open source, uh, project, is you welcome this kind of stuff, uh, and, and, and, and you have other eyes looking at the security. And I don't think anybody should be, uh, worried whether you're a LastPass, Bitwarden, or a, uh, Dashlane user about your security at this point.
Steve Gibson [00:53:08]:
In fact, I would be less worried today than you might have been a month ago, right? Because, you know, the whole point is these three just were deeply audited. Yeah, it's a good point. I mean, and some of these hacks were wacky. I mean, they were way out there. So if these three tools passed through this gauntlet— the other password managers haven't, because the researchers said, well, none of it's open source, we can't invest in reverse engineering these closed products, right? So we're going to look at what we can, and now they're way better for it.
Leo Laporte [00:53:51]:
Fully open source, fully GPL open source. You can inspect the source code, so can ETH Zurich and everybody else. And I think that's really important. The other thing I like about Bitwarden, our sponsor, is, uh, they enabled very early on this memory-hard key derivation function, Argon2. And this is another good solution, right? I set my Argon2 to the maximum number of, uh, iterations, and, uh, that really secures it as well. So our show today brought to you by Bitwarden. We love Bitwarden. I use Bitwarden.
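To illustrate what a memory-hard key derivation function buys you: Argon2 isn't in Python's standard library, so this sketch uses scrypt, another memory-hard KDF that ships in `hashlib`, to show the same idea Leo is describing. The master password is of course hypothetical; the point is that each derivation deliberately costs real memory as well as CPU, so offline guessing against a stolen vault is slow and hard to parallelize.

```python
import hashlib
import os
import time

password = b"correct horse battery staple"  # hypothetical master password
salt = os.urandom(16)                       # random per-vault salt

start = time.perf_counter()
# scrypt's cost knobs: n (CPU/memory cost), r (block size), p (parallelism).
# n=2**14 with r=8 forces roughly 16 MiB of RAM per guess, which is the
# point: an attacker can't cheaply run guesses in parallel on GPUs/ASICs.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
elapsed = time.perf_counter() - start

print(f"derived {len(key)}-byte key in {elapsed:.3f}s")
```

Argon2id works the same way conceptually, with explicit memory, iteration, and parallelism parameters; raising them, as Leo describes doing in Bitwarden, directly multiplies the cost of every offline guess against a stolen vault.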
Leo Laporte [00:54:23]:
I certainly never thought of stopping. Uh, but in fact, uh, Bitwarden does let you host the vault yourself if you want, if that's something you desire. Um, I'll talk about something they've just added, which is really cool for this purpose. But, uh, there are even third-party, uh, open-source solutions for hosting a Bitwarden vault. Personally, I'm not gonna do that because I think that their security is a lot better than mine. That's why I trust Bitwarden, the leader in passwords, passkeys, and secrets management, consistently ranked number 1 in user satisfaction, and not just by me, but by G2 Software Reviews. 10 million users love Bitwarden, 180 countries, more than 50,000 businesses. And whether you're protecting just one account, your own, or thousands for your business, Bitwarden keeps you secure all year long with consistent updates.
Leo Laporte [00:55:10]:
The new Bitwarden Access Intelligence will let organizations detect weak and reused and exposed credentials. Now this is legitimately a real problem. We were talking about the, the, the, the threat is coming from inside the house. This is it. Your users are using the same password over and over again frequently, or they're using a weak password. Bitwarden Access Intelligence will detect that, or a password that's been exposed in a breach, and then walk the user through remediation immediately. So the user replaces the risky passwords with strong, unique ones and understands why that's important. That closes one of the most important security gaps.
Leo Laporte [00:55:48]:
Credentials are a top cause of breaches, but with Access Intelligence from Bitwarden, they become visible, prioritized, and corrected before the exploitation can occur. Uh, another feature— I'm just picking them at random here— Bitwarden Lite. This is so cool. I was talking about self-hosting. Bitwarden Lite delivers a lightweight, self-hosted password manager. It's great for people with home labs, personal projects, or just an environment that wants a quick setup with minimal overhead. Bitwarden wants to work with you the way you want to work without compromising security. That real-time vault health, the alerts, and the password coaching features now are in every version of Bitwarden, so every user can identify weak, reused, and exposed credentials and take immediate action to strengthen their security.
Leo Laporte [00:56:34]:
They've also made it really easy if you're using, as many people do, I think, the password vault in your browser. Bitwarden now supports direct import from Chrome, Edge, Brave, Opera, and Vivaldi browsers. Direct import copies the credentials right from the browser into the encrypted vault without that separate plaintext export, which not only simplifies migration, it reduces the exposure associated with the manual export and deletion steps, you know, if you don't delete that in-the-clear password export. That's not good, but this way you don't even have one, which is fantastic. G2 Winter 2025 reports Bitwarden continues to hold strong, number one in every enterprise category for 6 straight quarters. Bitwarden setup is easy. Uh, Steve and I moved over from those other guys in minutes. It supports importing from most password management solutions. And again, I want to underscore this because it's open source.
Leo Laporte [00:57:35]:
You can look at it, but it's also regularly audited by third-party experts. And when something comes up, Bitwarden fixes it. That's what's great about Bitwarden. Bitwarden meets SOC 2 Type 2, GDPR, HIPAA, CCPA standards. It's ISO 27001:2022 certified. Bottom line, get started today with Bitwarden's free trial of a Teams or Enterprise plan, or get started for free across all devices as an individual user at bitwarden.com/twit. That's bitwarden.com/twit.
Leo Laporte [00:58:08]:
And stay tuned because Steve will explain what that ETH Zurich report meant and what it means for, uh, Bitwarden users. I don't think we're afraid. We're not switching. No, I was never worried. Never worried.
Steve Gibson [00:58:24]:
On we go. Uh, Friday before last, under the headline Fintech Lending Giant Figure Confirms Data Breach— Figure, that's the name of the company, Figure Technology— uh, TechCrunch reported Figure Technology, a blockchain-based lending company, confirmed it experienced a data breach on Friday. Figure spokesperson, um, uh, Althea Jadic told TechCrunch in a statement that the breach originated when an employee— and get this— when an employee was tricked with a social engineering attack. Yeah, imagine that. That allowed the hackers to steal, quote, a limited number of files. I love that.
Steve Gibson [00:59:05]:
We'll get back to that in a second. The statement said, uh, the company is communicating, quote, with partners and those impacted, unquote, and offering free credit monitoring to all individuals who receive a notice. Oh, thank you. Joy. Figure's spokesperson did not respond to a series of specific questions about the breach. This is, you know, from TechCrunch, a legitimate reporting, uh, group. Uh, the hacking group— guess who— Shiny Hunters took responsibility for the hack on its official dark web leak website, saying that after the company refused to pay a ransom, they published 2.5 gigabytes of allegedly stolen data. TechCrunch saw a portion of the data, which included customers' full names, home addresses, dates of birth, and phone numbers.
Steve Gibson [00:59:58]:
A member of Shiny Hunters told TechCrunch— notice that, uh, Shiny Hunters is happy to talk to TechCrunch, but Figure Technology, no. Shiny Hunters told TechCrunch that Figure was among the victims of a hacking campaign that targeted customers who rely on the single sign-on provider Okta. Other victims of the campaign include Harvard University and the University of Pennsylvania, you know, UPenn. Okay, so first of all, I love the quote from their spokesperson, quote, a limited number of files, right? Who cares how many files escaped? It's—
Leo Laporte [01:00:36]:
is it limited to 1 or 1,000 or a million?
Steve Gibson [01:00:39]:
It's still limited, right? Right, well, as they say, size matters. In this case, 2.5 gigabytes of customer personal data could do plenty of damage, right? Even if it's contained in one file. So all it takes is one. Okay, then last Wednesday, Troy Hunt's Have I Been Pwned site scooped up the deliberately posted leaked breach data and examined what had been exposed. 967,200. So nearly 1 million of Figure Technology's customers.
Leo Laporte [01:01:19]:
So first of all, it's a limited number of customers, Steve.
Steve Gibson [01:01:22]:
That's a lot, right? It's not everybody on the planet, come on. But Leo, uh, well, I think you and I are in the wrong business. A blockchain-based lending company has 967,200 customers. What? I don't even— you know, okay, fine, whatever. They do. Nearly a million customers. Yes. Per Troy's Have I Been Pwned site, all 967,200 of Figure Technology's customers had their names, physical addresses, dates of birth, email addresses, and phone numbers released after Figure Technology refused to pay up.
Steve Gibson [01:02:14]:
Now, we know that not paying is the right thing for Figure to do, right? I mean, you know, the rightest thing is not to get breached by an employee being tricked by Shiny Hunters in the first place in a social engineering attack. But if you've been breached and you're being ransomed, not paying is the right thing to do. But it makes you wonder what the Shiny Hunters group are themselves thinking now. You know, presumably as a result of asking for too much money, they got nothing in return for their efforts, right? Figure Technology did the right thing, said no, uh, and decided to just, you know, pay the price in reputational damage. But as I've noted recently, Shiny Hunters have zero interest in this data. They don't care at all. Its only value to them is the value that Figure may place on keeping it private. And once Figure said no deal, the Shiny Hunters group had to release it, otherwise their threat to do so would be meaningless.
Steve Gibson [01:03:38]:
That means that they're now unable to even resell the data since it's now freely available on the internet, and it needed to be made freely available in order for them to follow through on their threat that they would do that. And we've seen reports that in general, more victims in the last year, compared to the last 5 years, are declining to pay the ransom, deciding instead to take it on the chin and just saying, no, sorry, we're not going to pay your ransom. So I think partly this could be because these days being attacked and extorted is no longer a shocking announcement, right? I mean, I skip over so many of these every week because they're just boring at this point. Well, if they're boring to our listeners, they're boring to the world. The world's just sort of like, okay, they got breached and now they're being ransomed and blah blah blah. So you don't need to pay the ransom to save face to the same degree, and that also means it's something companies can recover from. They just say oops, they apologize to their affected customers, offer them, as we saw, a free year of credit monitoring, uh, and then just get on with their business as usual. And as you know, Leo, you and I both discovered that through no fault of ours, all of our data, including our Social Security numbers, was already out there swimming around in that big internet ocean.
Steve Gibson [01:05:33]:
So Shiny Hunters' failure to obtain anything of value suggests that perhaps the value of stolen data as property is falling, and that if they don't wish to come up empty as they just did here, they may need to drop the price of their ask. Because right now, you know, they're going through the trouble of doing this, they're saying pay up or else, and people are, way more often than before, just saying no, we're not going to pay you. We're just going to give our customers a year of free credit monitoring. And of course, what this tells us— and our listeners— is freeze your credit. I mean, that's really what you want to do, is get it frozen. Um, last Wednesday, UpGuard posted a curious headline. They— and I mentioned this at the top— their headline was Social Insecurity, colon, Billions of Social Security Numbers and Passwords. That's all they said.
Steve Gibson [01:06:41]:
Their headline was sort of a fragmentary sentence, but okay. Uh, now, okay, billions, that seems really bad, right? But given that Social Security numbers are, first of all, specific to the United States, whose current population is around 342 million, and that a grand total of around 450 million Social Security numbers have ever been issued since 1936, the claim of 2.7 billion Social Security numbers seems somewhat sketchy. But their posting explains what's going on, and it does seem legit. They wrote— because they're, uh, you know, a legitimate security firm— they said, the week of January 12th, 2026, so, you know, what, maybe 6 weeks ago, the UpGuard research team detected an exposed Elastic database with around 3 billion email addresses and passwords and 2.7 billion records with Social Security numbers. That amount of data suggests it was created by recombining prior Social Security number breaches like the OPM breach in 2015 or the one we're all, uh, aware of recently, the National Public Data breach in 2024. They said, on the other hand, if even a fraction of the records were real, if only 10%, or 270 million records, or even 1% were real, the exposure would be a dire bellwether for the state of privacy in America. And on this point, I say, uh, you're getting to the party a little bit late, you know.
Steve Gibson [01:08:36]:
Like I said, Leo and I— our Social Security numbers and everybody else's are already out there. Um, and here was an Elastic database exposed with 2.7 billion Social Security numbers in it. I think that probably is everybody several times over. They said, with the help of some unfortunate friends, we were able to confirm that at least some of it was real. And with the help of K-pop and some American presidents, we were able to approximate when the passwords were collected. Okay, while most exposed databases require investigation to determine if they contain sensitive data, this one was obvious. The database had one index named SSN— oh good, so you can look it up by Social Security number— and another named SSN2, each containing millions of records with 9-digit numbers in a field labeled SSN.
Steve Gibson [01:09:39]:
What could that be? The database also had several indices that were collections of emails and associated passwords. On January 16th, we submitted the IP address and an explanation of the issue to the FBI's IC3. We also submitted an abuse report to Hetzner, the hosting company. They replied saying they would forward the issue to the customer. Then, after we clarified that their customer was in gross violation of privacy laws, all public access to the database was removed on January 21st. Hetzner replied once more, dear sir or madam, thank you for your report. This is our customer's statement. And then they quote their customer's response: hello, we contacted our client and explained what SSN database host— and yes, it actually does say "what"— we explained what SSN database hosting not acceptable.
Steve Gibson [01:10:49]:
Client now deleted this file from server, so problem solver for now.
Leo Laporte [01:10:59]:
Okay, this is not their native language, obviously.
Steve Gibson [01:11:03]:
Okay. UpGuard's report continues. Anyway, I'm not going to go into it all. They poked around inside the database. Apparently they got a copy of it because they did a lot of, like, research. They poked around, locating some people they knew closely enough to confirm their Social Security numbers. The data is authentic. Unfortunately, since one of them whose data is present, uh, happened to also have her identity stolen in the past, these researchers drew the entirely unwarranted conclusion that this breach was the source of the identity theft.
Steve Gibson [01:11:46]:
No, or unlikely at least. We know there's really not much personally identifiable information that is not by now loose on the internet and available online. I wanted to share this specific story to drive home the point that there has been so much prior leakage of our personal data that we really have very little to no control over it any longer. Um, that control is all just an illusion. It's certainly the case that the use of services like one of TWiT's sponsors, DeleteMe, makes a lot of sense for anyone who wishes to be as proactive as possible. But my feeling is that beyond that, doing everything that's within our power to minimize the impact of the use of any personal data of ours that has almost certainly already been lost is what makes the most sense. These guys did recognize that the database appeared to contain a great deal of redundancy and also a fair amount of incorrect noise. So it wasn't the highest quality, uh, which is what we'd assume when we learned that it was 2.7 billion records containing Social Security numbers when only 450 million have ever been issued since 1936 in the entire history of Social Security.
Steve Gibson [01:13:18]:
At this point, protecting ourselves is the best we can do. I assumed that the GRC shortcut I would have created years ago would be grc.sc/credit, and sure enough, that bounced me directly to the Investopedia page, which talks about freezing credit and provides links to the credit freeze pages of the three main credit bureaus. Anytime you are not actively needing to have your credit queried, because you're applying for credit or purchasing something or whatever, the best advice that exists is to keep it frozen. Because the sad truth is all of our data is loose. It's out there, as a consequence of previous irresponsibility on the part of entities that we gave it to, including the credit bureaus.
Leo Laporte [01:14:25]:
You know, I think there are only a billion possible— am I right?
Steve Gibson [01:14:31]:
Social Security numbers. How many? Uh, you're right, 9 digits.
Leo Laporte [01:14:37]:
Yeah. So they got them all. The problem is if you don't have a name associated with it, it's just a number.
Steve Gibson [01:14:45]:
That's a very good point, Leo.
Leo Laporte [01:14:48]:
Start at 000. I know them all. I know every single Social Security number. Every one of them. You do. I just don't know whose they are.
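Leo's joke is arithmetically sound, and it's worth spelling out. This is a sketch, not anything from the show: an SSN is 9 decimal digits, so the entire raw space is only 10^9 values, small enough to enumerate trivially. The `format_ssn` helper is hypothetical, just to render the familiar grouping.

```python
# Leo's point, in arithmetic: an SSN is just 9 decimal digits,
# so the entire raw space is only 10**9 = 1 billion values.
total = 10 ** 9

def format_ssn(n: int) -> str:
    """Render the nth value in the familiar AAA-GG-SSSS grouping."""
    s = f"{n:09d}"
    return f"{s[:3]}-{s[3:5]}-{s[5:]}"

print(total)                    # 1000000000 -- "I know them all"
print(format_ssn(0))            # 000-00-0000
print(format_ssn(999_999_999))  # 999-99-9999
# In practice the SSA never assigns some of these (area 000 or 666,
# areas 900 and up, group 00, serial 0000), so the valid space is
# even smaller than a billion.
```

Which is exactly why, as Leo says, a bare number is worthless: the secrecy of an SSN was never in the number itself, only in the association with a name.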
Steve Gibson [01:14:56]:
You brute forcer, you. Oh Lord. Okay, before I go on, let's take another break since we had a long run the first time.
Leo Laporte [01:15:08]:
Yes, uh, and I think we should talk about Zscaler now. This would be a good time. Actually, any time on Security Now is a good time to talk about the largest cloud security provider. Uh, basically that's the whole show right there in a nutshell. This episode of Security Now brought to you by Zscaler, the world's largest cloud security platform. Actually, uh, Zscaler, uh, is very timely because Zscaler works with AI. Now, so do you, probably, right? The rewards in business of AI are frankly too great to ignore. Every business is looking at it.
Leo Laporte [01:15:43]:
We are. Everybody is. But let's not forget the risks: loss of sensitive data, attacks against enterprise-managed AI, and then of course the fact that the bad guys are using AI as well. Generative AI increases the opportunities for threat actors, lets them rapidly create phishing lures that are impeccable, really good, much better than that email from Hetzner anyway. Uh, in fact, maybe that's why that Hetzner statement had all those typos, just to make it look like a human did it. That was probably the reason. Uh, the problem with these AI-created phishing lures is they're good. Bad guys are using AI not only to create the phishing emails, but also to write malicious code.
Leo Laporte [01:16:26]:
We've seen that, we've talked about it. They automate data extraction. The speed with which data extraction is happening is increasing dramatically thanks to this. There were 1.3 million instances of Social Security numbers, real ones, linked to AI applications last year. ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations. I'm not trying to scare you, just trying to remind you that while you're using AI, you've got to also protect yourself, and that's why it's time for a modern approach with Zscaler's Zero Trust plus AI. Zero Trust removes your attack surface, right? You're not putting out VPN addresses that give people something to hook on to. And you don't have to worry about where your data lives, because Zero Trust secures your data no matter where it lives, in the cloud, on-prem, everywhere.
Leo Laporte [01:17:20]:
Zscaler safeguards your use of public and private AI. It protects you against ransomware, and it protects you against AI-powered phishing attacks. But don't just listen to what I have to say about it. Check out what Siva, the Director of Security and Infrastructure at Zuora, says about using Zscaler. Watch. AI provides tremendous opportunities, but it also brings tremendous security concerns when it comes to data privacy and data security. The benefit of Zscaler with ZIA rolled out for us right now is giving us the insights of how our employees are using various GenAI tools. So ability to monitor the activity, make sure that what we consider confidential and sensitive information according to, you know, company's data classification does not get fed into the public LLM models, et cetera.
Leo Laporte [01:18:07]:
Thank you, Siva. With Zero Trust plus AI, you can thrive in the AI era. You can stay ahead of the competition. You can remain resilient even as those threats and risks evolve. Learn more at zscaler.com/security. That's zscaler.com/security. We thank them so much for their support of Security Now.
Steve Gibson [01:18:31]:
And now back to Steve. Okay, so just a quickie, um, Apple watcher and insider Mark Gurman has reported that Apple is believed to be working on a smart pendant, smart glasses, and new AI-based AirPods, and that all of those products will be equipped with a camera that will feed data into an AI system. And I'm sure you're up on this more than I am, Leo, since you've spent a lot of time with your Mac guys.
Leo Laporte [01:19:00]:
As long as it doesn't feed it to Siri, I'm okay.
Steve Gibson [01:19:04]:
Well, uh, apparently, uh, he said it's unclear what the AI will be doing for its user, and it does seem like, you know, a strange thing for Apple to be doing, since people almost universally object to being surreptitiously recorded. I mean, I loved listening to Alex when he realized that some guy he'd been talking to for half an hour had a camera in his glasses, and he said, hey, are you— are you recording this? The guy said, well, yeah.
Leo Laporte [01:19:38]:
Of course, I think Apple recognizes that this is just going to be the next thing and everybody's going to do it and they need to develop it. And then I think their hope is that people will trust Apple to keep it private.
Steve Gibson [01:19:50]:
Of any company out there, as I said. And, as an aside, it was nice to hear you guys on MacBreak talking about, as I had been saying, how frustrating Apple's upselling is. And it's just, you know, everybody does it now. Yes, you're right, it is everybody. And the problem is it works on some percentage of people, and so that encourages everybody else to do it. Oh yeah, okay, turn into Amazon. Next item: anyone who has continued to use Firefox on a Windows 7 or 8 machine will no longer receive security updates after this month. This month is it. Mainstream Firefox support for those operating systems officially ended 3 years ago, you know, in terms of regular updates to the browser itself.
Steve Gibson [01:20:49]:
Um, that was in January of 2023, but security fixes have continued to be provided. Those end now, with the end of this month; that's it for Firefox on those systems. So, uh, it's good that I'm leaving Windows 7. Firefox there is, I think, version 115 ESR, while mainline Firefox is up to 145 now.
Leo Laporte [01:21:18]:
By the way, good news, they've added a switch in 145 that says disable all AI features. You can just click that switch, you're golden. Nice. They were smart. I think that's— that they listened to their customers on that one.
Steve Gibson [01:21:34]:
Yeah. And of course, I'm sure it was Vivaldi who made a marketing point. That's right. Saying, we're not doing AI, period. And then they said, well, until it shows its value.
Leo Laporte [01:21:46]:
It's like, oh, okay, fine. Well, I'm sorry, 148. Thank you, David. David in our Twitch says it's 148. That's the one. Just came out today with the, with the no AI switch.
Steve Gibson [01:21:58]:
Wow. Uh, Roskomnadzor, our favorite Russian group, apparently got a bit trigger happy recently as part of its recent accelerated internet crackdown, which we've been talking about the last couple weeks. This time it appears that Russia's internet watchdog blocked the official website of the Linux kernel. The block was quickly lifted after upset Russian IT engineers reminded Roskomnadzor that all of the country's native OS distros run on Linux. So, yep, can't disconnect from that one, guys. Even if they do disconnect from the internet, they're gonna have to have a little back door there where they're still able to be in touch with linux.org, apparently. So, oh, Leo, this next one— if it wasn't Reuters, I would wonder. Washington, February 18, Reuters.
Steve Gibson [01:23:07]:
The U.S. State Department is developing an online portal that will enable people in Europe and elsewhere to see content banned by their governments, including alleged hate speech and terrorist propaganda, a move Washington views as a way to counter European censorship, three sources familiar with the plan said. Okay, so wait, what? Triple-sourced reporting says that the U.S. is planning to do what exactly? Yes— Leo, move your cursor over that blurred-out area. It—
Leo Laporte [01:23:51]:
oh yeah, there we go. Uh, freedom is coming, and there's Paul Revere bringing freedom to those poor unfree people in France and Germany.
Steve Gibson [01:24:07]:
So yes, freedom.gov. So I get this, you get this. I'm wondering what our, what our listeners in the UK— I've got— we know we have a bunch of them.
Leo Laporte [01:24:17]:
Wonder what they're gonna, uh, see. It's gonna be an X feed, isn't it?
Steve Gibson [01:24:21]:
That's what it's gonna be. Essentially, that's what we're talking about. So here's what Reuters reported last Wednesday. They said the site will be hosted at freedom.gov. One source said officials had discussed including a virtual private network function to make a user's traffic appear to originate in the US and added that user activity on the site will not be tracked. Headed by Undersecretary for Public Diplomacy Sarah Rogers, the project was expected to be unveiled at last week's Munich Security Conference but was delayed, the sources said. Again, triply sourced reporting. Reuters could not determine why the launch did not happen, but some State Department officials, including attorneys, have raised concern— concerns about the plan.
Steve Gibson [01:25:13]:
Imagine that. Two of the sources said, without detailing what those concerns were, the project could further strain ties between the Trump administration and traditional U.S. allies in Europe, already strained by disputes over trade, Russia's war with Ukraine, and President Donald Trump's push to assert control over Greenland. The portal could also put Washington in the unfamiliar position of appearing to encourage citizens to flout local laws. In a statement to Reuters, a State Department spokesperson said the US government does not have a censorship circumvention program specific to Europe, but added, quote, digital freedom is a priority for the State Department, however, and that includes the proliferation of privacy and censorship circumvention technologies like VPNs. The spokesperson denied any announcement had been delayed and said it was inaccurate that State Department attorneys had raised concerns, despite, again, multiply sourced reporting. I think that one was dual-sourced. The Trump administration has made free speech, particularly what it sees as the stifling of conservative voices online, a focus of its foreign policy, including in Europe and Brazil.
Steve Gibson [01:26:41]:
Europe's approach to free speech differs from the US, where the Constitution protects virtually all expression. Uh-huh. The European Union's limits grew from efforts to fight any resurgence of extremist propaganda that fueled Nazism, including its vilification of Jews, foreigners, and minorities. U.S. officials have denounced EU policies that they say are suppressing right-wing politicians, including in Romania, Germany, and France, and have claimed rules like the EU's Digital Services Act and Britain's Online Safety Act limit free speech. The EU delegation in Washington, which acts like an embassy for the 27-country bloc, did not immediately respond to a request for comment about the US plan. In rules that fall most heavily on social media sites and large platforms like Meta's Facebook and X, the EU restricts the availability, and in some cases requires rapid removal, of content classified as illegal hate speech, terrorist propaganda, or harmful disinformation under a group of rules, laws, and decisions since 2008. Rogers, the State Department official we spoke of before, has emerged as an outspoken advocate of the Trump administration's position on EU content policies. She's visited more than half a dozen European countries since taking office in October and met with representatives of right-wing groups that the administration says are being oppressed.
Steve Gibson [01:28:17]:
The department did not make Rogers available for an interview to Reuters. In a national security strategy published in December, the Trump administration warned that Europe faced, quote, civilizational erasure, unquote, because of its migration policies. It said the US would prioritize, quote, cultivating resistance to Europe's current trajectory within European nations, unquote. EU regulators regularly require US-based sites to remove content and can impose bans as a measure of last resort. X, which is owned by Trump ally Elon Musk, was hit with a €120 million fine in December for non-compliance. On the other hand, last week we talked about how 2.2 billion of the 2.4 billion euros that had been fined remained unpaid. Anyway, it's going to be interesting to see what happens next. As I said, the site does not currently show me what's described there. There was some language in their reporting that suggested that what you and I see, Leo, is not what Europeans see currently. So I'm sure that our listeners will let us know when they see this.
Steve Gibson [01:29:42]:
Um, I, I don't know what to make of this. Uh, I guess we'll follow it and see.
Leo Laporte [01:29:48]:
Um, I just love it that they're spending my tax dollars on such important initiatives. By the way, they killed Radio Free Europe.
Steve Gibson [01:29:57]:
I was gonna say, I— that had occurred to me also. It's like, wait a minute, uh, this is in lieu of— okay, so I hope I don't need to tell anyone listening not to ever, ever use an LLM to directly generate a password. In other words, never ask an LLM for a password. Never say, could you please generate a highly secure long password with 20 characters of all kinds, including a mixture of upper and lowercase alphabetic numbers and special characters?
Leo Laporte [01:30:48]:
No, don't do that.
Steve Gibson [01:30:48]:
Oh, it seems like such a good idea. You will get one. It'll look wonderfully strong, but the LLM is quite likely to give the same password to others, because this is not what they're for, right? Gosh, we've spent so much time through the years on this podcast examining just how very difficult it is to actually generate and obtain high-quality passwords. I even have a page on GRC, grc.com/passwords— it's very popular— that does this, because it's difficult. So the idea of asking a parrot for a password is almost painfully bad. Monkey123, monkey123, everybody use monkey123. Having apparently run out of useful things to explore, the site Irregular— that's the name of the site— did a detailed, in-depth exploration of large language model password generation under the headline, Vibe password generation: predictable by design.
Steve Gibson [01:32:08]:
Well, at least they got the headline right. Okay, so this is so nuts that I'm not going to spend much time on it, but their posting is long and they've got charts and graphs and stats and blah blah blah, but they were kind enough to give us an executive summary at the beginning. They wrote: LLM-generated passwords— which, you know, should just be outlawed, an oxymoron— meaning passwords generated directly by the LLM rather than by an agent using a tool— that clause is key, by the way— appear strong but are fundamentally insecure, because LLMs are designed to predict tokens, the opposite of securely and uniformly sampling random characters. The exact opposite. Despite this, LLM-generated passwords appear in the real world, used by real users, and invisibly chosen by coding agents as part of code development tasks.
Steve Gibson [01:33:20]:
Instead of relying on traditional secure password generation methods. So think about that. You vibe code something, and part of that is the need to generate a password, and the LLM says, oh, here's a password, and plugs it into your code somewhere deep. They said, we tested state-of-the-art models and agents and analyzed the strength of the passwords they generate. Our results include predictable patterns in password characters, repeated passwords, and passwords that are much weaker than they seem, as described in detail in this publication. We recommend that users avoid using passwords generated by LLMs, that developers direct coding agents to use secure password generation methods when needed— have them come to grc.com/passwords— and that AI labs train their models and direct their coding agents to prefer secure password generation out of the box.
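What "secure password generation out of the box" looks like can be sketched in a few lines. This is a minimal illustration, not the report's code or GRC's: Python's standard-library secrets module uniformly samples a cryptographically secure random number generator, which is exactly the property an LLM's token prediction lacks.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Uniformly sample each character from a CSPRNG -- the opposite
    of an LLM predicting the statistically likeliest next token."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Two calls will essentially never produce the same password, because each character carries roughly 6.5 bits of real entropy rather than being drawn from a model's learned distribution.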
Steve Gibson [01:34:30]:
And what I loved, so somewhere down in the text, Leo, it, it actually— I mean, it talked about how nonce— I mean, I, I know Not— no one of our listeners would do this, but think about the common Joe. They're, they're chatting to ChatGPT and they say, hey, you know, I'm trying to log into this site and it keeps complaining about the passwords I'm using. Could you give me, you know, a good long strong password? And it'll, it'll probably say, yeah, here you go. Doesn't—
Leo Laporte [01:35:04]:
of course it will. Yes.
Steve Gibson [01:35:07]:
Yep.
Leo Laporte [01:35:08]:
Uh, but, and that's the thing is that it's an, it's an, it's a kind of naive, uh, and I understand, I mean, oh yeah, it's the computer is going to generate a really good password.
Steve Gibson [01:35:18]:
Oh, it's a— Leo, it's artificial intelligence.
Leo Laporte [01:35:23]:
What could possibly—
Steve Gibson [01:35:24]:
it's, it's probably— it's generative. It's going to generate.
Leo Laporte [01:35:28]:
That's what generators do. It would actually be fairly trivial. I could write it right now in Claude and say, I would like a strong password; go to grc.com/passwords and get one. And it would use your code to generate the password, and it would be a good password. Yeah, I mean, that's probably preferable to saying, write a Python script that will generate a truly random password. It's not easy— the entropy you'd get from that— it's hard to do. Let Steve do it.
Leo Laporte [01:35:56]:
And I mean, you could, you know, you could do it, but it would actually be trivial to say, just go and get it from grc.com/passwords. The problem is that most people are using chatbots, and they're just going to ask it and think it's just smart.
Steve Gibson [01:36:13]:
Yes, they're just going to ask it. I, I, you know, this website needs me to give it a long password.
Leo Laporte [01:36:18]:
Uh, what do you recommend? Right. What do you do for entropy in your password generator?
Steve Gibson [01:36:26]:
Um, I've got an algorithm that's been running for 10 years or something, which uses crypto in order to roll passwords forward.
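Steve doesn't detail his implementation here, so the following is a purely hypothetical sketch of the general "roll forward" idea: a secret internal state is advanced by hashing on every request, and each password is derived from the fresh state, so outputs never repeat and the state never moves backward. All names and choices below are illustrative, not GRC's actual code.

```python
import hashlib
import secrets

ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

class RollingGenerator:
    """Hypothetical sketch: hash a secret state forward on every request,
    then derive password characters from the new state."""

    def __init__(self, seed=None):
        # A random 256-bit secret state unless a seed is supplied.
        self.state = seed if seed is not None else secrets.token_bytes(32)

    def next_password(self, length: int = 24) -> str:
        # Roll the state forward one step; the old state is discarded.
        self.state = hashlib.sha256(self.state).digest()
        # Expand the fresh state into as many bytes as we need.
        out = []
        counter = 0
        while len(out) < length:
            block = hashlib.sha256(
                self.state + counter.to_bytes(4, "big")).digest()
            out.extend(ALPHABET[b % len(ALPHABET)] for b in block)
            counter += 1
        return "".join(out[:length])
```

A production generator would also reject out-of-range bytes rather than use a plain modulo, since 256 is not a multiple of 62 and the modulo introduces a slight character bias.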
Leo Laporte [01:36:38]:
Okay, yeah, so I'm just gonna— let's just, just see. I'm gonna ask Claude Code to go out. Can you fetch me a strong password from grc.com/passwords? And presumably it's going to actually run your page because it can do that. It can control a browser. And there you go. There are fresh passwords from GRC's Perfect Passwords. Skip— it calls me Skip because it's my buddy. And there— yes, and this is exactly the format you would get, right?
Steve Gibson [01:37:06]:
These are legit. This is not— nobody will ever get those again, although now you don't want to use those having shown them.
Leo Laporte [01:37:11]:
But, uh, but aside from that, yeah. By the way, it's smart. Look, it says GRC regenerates these on every page load, so these are unique to this fetch for a fresh set, just ask again or visit the site directly. Said that's, that's the intelligent way to use it. But I understand why, you know, naive users are just going to say, well, it's smart.
Steve Gibson [01:37:31]:
Yeah, they're just going to say it's AI. It's AI. It's smarter than I am, so, uh, just give me a password, right? Don't do that. Okay, I've been saying recently that the technique of asking a user to authenticate themselves by what they think is a CAPTCHA— where they're instructed to press the Windows+R key to open the Windows Run dialog, then press Ctrl+V to paste, followed by the Enter key— is terrifying, because it is so powerful and potent, and because I could see so many people falling for it. Just like asking ChatGPT for a password, most people have very little idea how their computers actually operate. So they just follow instructions, right?
Leo Laporte [01:38:25]:
They're following instructions in order to just get by.
Steve Gibson [01:38:28]:
It's all an incantation for them. Yes. They don't understand it. Yes. So this highly potent form of attack has been dubbed ClickFix. It's called the ClickFix attack. Recall that I recently shared exactly such a pop-up that one of our listeners had encountered and emailed to me, saying, I didn't do this, but I wanted to show it to you, Steve. And that set me off on a rant about how irresponsible I felt Microsoft was being about not tracking the source of anything pasted into the system's global clipboard.
Steve Gibson [01:39:06]:
A web browser is a very clear security boundary with all manner of creepy crawly things clamoring to escape from it. So it should be utterly impossible for automation in the browser to place anything onto the system's global clipboard that can then be pasted outside the browser's security perimeter, especially into the Windows Run dialog. Seeing how obviously dangerous and effective this form of attack promised to be, I wasn't surprised to read the report that Huntress Labs published one week ago, last Tuesday. Huntress set the stage for their lengthy report, and I'll just share the beginning. They wrote, Columbia, Maryland, February 17th, 2026. Cybercrime has become the world's third largest economy, with costs projected to reach $12.2 trillion annually by 2031. Today, Huntress exposes the tactics, techniques, and procedures— the TTPs— fuelling this multi-trillion-dollar illicit market in its 2026 Cyber Threat Report. The in-depth analysis sheds light on the playbook used by organized, profit-driven cybercriminals, uncovering how they weaponize legitimate tools, exploit everyday behaviors, and leverage a vast underground network to exploit people, businesses, and employees across the globe.
Steve Gibson [01:40:56]:
To produce this report, Huntress analyzed proprietary telemetry from over 4 million endpoints and 9 million identities across the 230,000+ organizations it protects worldwide. So again, Huntress has instrumentation on over 4 million endpoints and 9 million identities within more than 230,000 organizations that are under its protection as part of its services. They said this robust data set served as the foundation for uncovering critical insights into the evolving ransomware ecosystem, shifting adversary tradecraft, and actionable strategies to help organizations prepare for the year ahead. Okay, so that's where this all came from. Under the topic of key findings, the item that caught my eye was this. They wrote, over half of all malware loader activity came from ClickFix. Hear that? Over half of all malware loader activity came from that single exploit. The CAPTCHA— the fake CAPTCHA that tells people who don't really know what they're doing, but who follow instructions, which keys to press. You know, thank you very much.
Steve Gibson [01:42:42]:
To continue authenticating, press the Windows+R key, press Ctrl+V, press Enter. Over half of all malware loader activity. They wrote, in 2025, attackers did not need to break in when they could just trick users into giving them access. No technique did this more effectively than ClickFix, which fueled 53% of all malware loader activity. By masquerading as routine tasks like solving a CAPTCHA, ClickFix and its variants tricked users into becoming unwitting accomplices, facilitating the silent installation of info-stealers, ransomware, and remote access tools. So I've specifically and explicitly reached out to many of my friends to warn them of this attack, because it's so obvious to me that it's going to happen. It's just too diabolical and too likely to succeed. One of my friends, who works for a very large nonprofit charitable organization, receives regular employee-level security training as part of her role. When I told her about this, she commented that they had never been warned about this type of attack.
Steve Gibson [01:44:13]:
You know, there's likely a delay between the growth of an attack and its inclusion in a training program, but that leaves a very dangerous gap. And as Huntress found from their analysis, 53% of all malware loader activity during 2025 was attributable to just this one class of attacks. So please warn your friends to be careful. It is just such an obvious way for bad guys to get in, and Microsoft has got to do something about this. This is their responsibility: to stop allowing pasting into that Run dialog. Pasting something that came from the browser into Run— that is just insane.
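Until Microsoft addresses clipboard provenance, one blunt, long-standing stopgap an admin can apply is Windows' NoRun Explorer policy, which removes the Run command and disables the Win+R shortcut for a user. Here is that policy as a .reg fragment. Note that it is a heavy hammer, not a complete fix, since ClickFix variants also abuse other paste targets such as the terminal, and the change may require signing out and back in to take effect:

```reg
Windows Registry Editor Version 5.00

; Per-user Explorer policy: removes the Run command and the Win+R
; hotkey for this user. Delete the value (or set it to 0) to undo.
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoRun"=dword:00000001
```

In managed environments the same setting is normally deployed through Group Policy rather than a raw registry import.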
Steve Gibson [01:45:07]:
But you know, Leo, what's not insane— not insane— I knew where you were going with that— as we head into our listener feedback, is for us to take a break. It's Hoxhunt. That's not insane. Ah, good.
Leo Laporte [01:45:20]:
No, not at all. In fact, It, uh, again ties right into our conversation that we're gonna have at Zero Trust World next week. The, uh, the, the problem is inside the house. This episode of Security Now brought to you by Hoxhunt. You, you know, uh, many of our listeners are security leaders. As security leaders, you are paid to protect your company against cyberattacks. It's getting harder and harder with more cyberattacks than ever. And lately these phishing, uh, emails generated with AI.
Leo Laporte [01:45:52]:
They're so good. They're so persuasive. I fell for one just the other day. Legacy one-size-fits-all awareness programs don't stand a chance in this environment. They send at most 4 generic trainings a year, and most employees ignore them. When somebody actually clicks, they're forced into embarrassing training programs that feel more like punishment, and that's no way to learn. Right? That's when you get malicious compliance.
Leo Laporte [01:46:18]:
You get people actively not learning, right? That's why more and more organizations are trying Hoxhunt. Hoxhunt, so much better. H-O-X-H-U-N-T, like fox hunt with an H, goes beyond security awareness and changes behaviors by rewarding good clicks and coaching away the bad in a way that employees love. It's fun. It's gamified. Whenever an employee suspects an email might be a scam, they click that Hoxhunt button and Hoxhunt will tell them instantly. And in a way that gives them a dopamine rush, which gets your people to click, learn, and protect your company. They, they— it's a game to them, it's fun.
Leo Laporte [01:46:56]:
And for you as an admin, Hoxhunt makes it easy to automatically deliver phishing simulations, not just email, Slack, Teams, everywhere attackers are going, right? Using AI to mimic the latest real-world attacks. By the way, you can, just as the bad guys do, personalize these simulations to each employee with knowledge you have, like department, location, and more. And, and the trainings, instead of being this long drawn-out thing, are instant micro trainings which solidify understanding and drive lasting safe behaviors. They're not punishment, they're fun. You can trigger gamified security awareness training that awards employees with stars and badges. They get— I got a gold star. You— it sounds dumb, but you know what? It works. And this really boosts completion rates.
Leo Laporte [01:47:45]:
It ensures compliance and it really makes your employees learn. They're really learning. Choose from a huge library of customizable training packages. You can use AI to generate your own too. They've got all the tools there you need. Hoxhunt has everything you need to run effective security training in one platform, meaning it's easy to measurably reduce your human cyber risk at scale. But you don't have to take my word for it.
Leo Laporte [01:48:08]:
Just check G2, over 3,000 user reviews. They make Hoxhunt the top-rated security training platform for the enterprise. Easiest to use, best results. Also recognized as customer's choice by Gartner. And it's used by thousands of companies. Qualcomm uses Hoxhunt, AES, Nokia. These companies use it to train millions of employees all over the globe. They, they use it because it works.
Leo Laporte [01:48:36]:
Visit hoxhunt.com/securitynow right now to learn why modern secure companies are making the switch to Hoxhunt. That's hoxhunt.com/securitynow. H-O-X-H-U-N-T.com/securitynow. We thank them so much for their support of Security Now.
Steve Gibson [01:48:54]:
Now back to Steve. So Doug Smith wrote, I have enjoyed you touching on AI coding topics over the last few episodes of the podcast, although the capabilities of Aisle— that's the firm we talked about that had developed that really amazing agentic coding system, the one that found all the bugs in OpenSSL that had already been, you know, deeply scrutinized. He said, although the capabilities of Aisle that you covered recently sound fantastic, they aren't available to everyone yet. However, some aspects are available in other forms. For example, Claude Code has a built-in security-review command that does a really great job. Although it's good at checking the latest changes before a git commit and push, I've taken to using it for things like checking WordPress plugins before installing them on my sites. In one case, this turned up multiple severe security issues in a plugin for connecting to a specific service I required. I was able to present the results and a working test exploit to the vendor and work with them toward fixes.
Steve Gibson [01:50:05]:
That is— that's very cool. He said, I also saw that Anthropic has a new Claude Code security feature in limited testing right now that looks like it will continue to move security reviews significantly forward. You recently suggested that the way to work with AI coding might be with test-driven development. That and more is exactly what the Superpowers add-on for Claude Code does, the one Leo mentioned a few episodes ago. It forces good planning, test-driven development, and code reviews by multiple AI agents, each with particular specialties. Here's the description of the workflow from the GitHub README, and he goes into detail— brainstorming, using git worktrees, writing plans, sub-agent-driven development or executing plans, test-driven development, requesting code review, and finishing a development branch. Anyway, so very cool to see that this is happening and evolving. I wanted to share this because it so nicely chronicles, I think, the evolution that we're seeing in our understanding of how to employ AI.
Steve Gibson [01:51:17]:
What we have today will bear no resemblance to what we have a year from now. I think that's really clear to everyone. This is just happening so fast, and we arguably have quite a way to go. You know, we've learned that as an AI's context window nears full, its hallucinations increase. So now we work to prevent that. We've learned that rather than using a single AI with a single context window, we get far better results from using multiple AI agents, each with their own smaller contexts, and thus each bringing their own perspective. And I have no doubt that a year from now we'll have learned way more. You know, we didn't know this a year ago.
Steve Gibson [01:52:04]:
We know it now. Who knows what we're going to know next year? Eric wrote, hello Steve, I wanted to share an issue I recently ran into that makes me believe some malicious application has gotten into my PC. Now, understand— when he wrote this, he believed it. He said, I'd value your advice on whether I should completely reinstall Windows and all my applications. You're welcome to share this if you think others could benefit. Indeed, it turns out others could. He said, thanks for everything you do. Best wishes, Eric Richardson.
Steve Gibson [01:52:43]:
So he said, description of issue: last week while examining logs on NextDNS, I decided to download them for a better review of activity. Upon examining the logs, I saw my DNS queries for 26-character-long domains, and then he lists 4 of them. Xdu1xjw0lnfppq4zdtoz1brlh.com. That's one of them. And there are, in his email, 3 more, just gibberish like that. And he said, these DNS queries only came from my PC. My wife's and daughter's laptops were not affected. I looked up some of the domains on ICANN's DNS lookup but found no entries.
Steve Gibson [01:53:40]:
Google confirmed my suspicion that these lookups were likely malicious DGA, Domain Generation Algorithm, activity. I checked the entries in my NextDNS logs and noticed these queries were not blocked! I confirmed that Domain Generation Algorithm (DGA) protection was enabled, so I don't know why the queries would not have been blocked. Okay, so as I'm reading along, so far I'm looking at Eric's evidence and I'm in complete agreement with everything he's seeing.
Leo Laporte [01:54:17]:
By the way, this is one of the great things about NextDNS is these logs. Yeah. Because you can see exactly what do you— like, look at that DNS query. What the hell was that? Uh-huh. You better explain this, Mr. Gibson. Explain yourself.
Steve Gibson [01:54:36]:
Explain to me. So I'm thinking, yeah, this really does look pretty bad and quite suspicious. Then I get to his next sentence: tracing entries in my NextDNS log, I see the queries to isc.org seem to precede queries to these 26-character domains. I also see several queries to rebindtest.com, which do appear to be blocked by NextDNS. Okay, his mention of isc.org stopped me in my tracks, because I suddenly knew exactly what was going on with Eric's machine and his NextDNS logs.
Steve Gibson [01:55:23]:
And this was further confirmed by his mention of rebindtest.com. Since Eric was understandably concerned and wondering whether he would need to wipe his machine and reinstall Windows, I immediately wrote back saying, oh, Eric, that's the DNS benchmark. Familiar, hey? I said, those are the queries generated by running the benchmark.
Leo Laporte [01:55:55]:
Oh, that's— your machine— those are random—
Steve Gibson [01:55:56]:
in other words, random queries. Yes, your machine is not infected.
Leo Laporte [01:56:02]:
And you do that so they won't be cached, probably, right?
Steve Gibson [01:56:06]:
Yes, yes. I said, instead, you have great DNS. I said, the tip-off for me was your mention that queries to isc.org appear to precede them. And the clincher, though we already had sufficient evidence, was the queries to rebindtest.com, which is my domain that I maintain for the benchmark's use. There you go. Eric replied, oh, thank goodness. Thank you for replying so quickly. Okay, so the first thing that the DNS Benchmark does, in the process that I call characterizing any DNS resolver, which you may then want to benchmark, is to check whether it's online at all by asking it for the IP of the isc.org domain.
Steve Gibson [01:57:02]:
ISC is the Internet Systems Consortium. The ISC has been around since 1994 and basically the birth of the internet. I chose to have the DNS Benchmark check for a resolver's online status by querying for the IP of isc.org, since even Roskomnadzor would not have any problem with isc.org nor feel any need to block it. Oh, maybe I should switch over to, uh, linux.org, because we know that Roskomnadzor cannot block that. Anyway, as you guessed, Leo, those wacky 26-character-long .com domains are randomly generated, though not one of them will ever exist, and that's the point. They are therefore prevented from ever being in any DNS cache, since none of them will have ever been seen before. More importantly, queries for the IP address of each of them will be guaranteed to generate an NXDOMAIN, a non-existent domain error status. The benchmark absolutely knows that's the result it's going to obtain, but the resolver it's asking, that is, the one being tested, has no way of knowing that.
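The cache-busting trick Steve describes, random names that cannot exist and therefore can never be answered from cache, can be sketched in a few lines. This is an illustration, not GRC's actual code; a real benchmark would send each generated name to the resolver under test and time its NXDOMAIN reply:

```python
import random
import string

def random_test_domain(length: int = 26) -> str:
    """Build a random .com name that has almost certainly never been queried
    before, so no resolver anywhere can answer it from cache.  Because the
    name doesn't exist, the resolver must ask the .com servers, which reply
    NXDOMAIN; the round-trip time measures the resolver's connectivity."""
    label = "".join(random.choice(string.ascii_lowercase) for _ in range(length))
    return label + ".com"
```

With a 26-lowercase-letter label there are 26^26 possible names, so a collision with any name a cache has ever seen is effectively impossible.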
Steve Gibson [01:58:30]:
So it must forward each and every one of those queries to the internet's upstream .com servers to ask the IP of that. When the .com name server receives the resolver's query, it's going to think, what the heck are you talking about? That's not a valid domain, and send back the expected non-existent domain reply. But it's the length of time that's required for us to receive that reply from the resolver that's being benchmarked, and that's what we care about. This tells us how well connected the resolver we're testing is to the internet's .com name servers, how quickly it could resolve any .com name, since a non-existent name takes the same round trip as a valid one. Basically, it's connectivity. Um, the ICANN registry also shows that I registered rebindtest.com nearly 16 years ago, in August of 2010. As I said, it's my own domain, which returns IP addresses for the various private networks such as the 10.
Steve Gibson [01:59:54]:
network and 192.168.something.something. A DNS resolver should really never return a private network address for a publicly queried domain. We've talked about this in the past; it's called a rebinding failure. Bad guys can use that to probe around inside a user's local LAN. Their browser will believe that it's connecting to a server, for example, at the domain trickybadguy.com. But if the DNS for trickybadguy.com resolves to 192.168.0.1, then the browser may actually be connecting to the LAN's internal gateway router, which is not what you want a bad guy's JavaScript to be able to do. So this is just one of the many things the DNS Benchmark is able to show its users about the DNS resolvers they're currently using and others they may be considering switching over to. So in any event, if anyone else might think to look at their DNS provider's logs and see the sorts of admittedly suspicious-looking DNS lookups that Eric spotted, if you do not own and have run GRC's DNS Benchmark, then I would agree that is definitely a cause for concern.
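The rebind check itself is easy to express. This is a hypothetical sketch using Python's ipaddress module, not the benchmark's real code:

```python
import ipaddress

def fails_rebind_test(answer_ip: str) -> bool:
    """Return True if a resolver handed back a private or loopback address
    for a publicly queried domain -- the rebinding failure described above,
    which would let a malicious site's JavaScript reach devices on the LAN."""
    ip = ipaddress.ip_address(answer_ip)
    return ip.is_private or ip.is_loopback
```

A resolver that answers a rebindtest.com-style probe with something like 192.168.0.1 instead of refusing it would trip this check.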
Steve Gibson [02:01:24]:
But assuming that you're an owner of my latest utility software, you have no cause for concern. Very, very cool false positive.
Leo Laporte [02:01:34]:
I love that. Um, he wrote to the right place, at least.
Steve Gibson [02:01:38]:
Yes, he did. What is this? Yeah, because anybody else would have said, oh, Well, that looks really bad.
Leo Laporte [02:01:44]:
Yeah, suspicious.
Steve Gibson [02:01:45]:
Yeah. So Stephen Clark Wilson said, I was reading this ACM article and hit a paragraph that made me instantly think of you and defaults. The paragraph says, before version 4.0.0, published in 2017, Redis, the extremely popular key-value store, offered no access controls in its default configuration. Oh God. Frequently, new users of Redis would unintentionally expose their instance publicly, and this insecurity would result in data spills or become a vector for host exploitation. Yes, that's where all of our Social Security numbers got leaked. As of version 4.0.0, Redis enters a protected mode when run with its default configuration and without password protection. This limits access to loopback interfaces.
Steve Gibson [02:02:47]:
Which is to say, this limits access to the loopback interface, meaning not an interface with a public IP, just 127.0.0.1. It says, as the Redis company itself has since touted, the introduction of protected mode has caused the number of publicly accessible Redis instances tracked on shodan.io, a popular internet host aggregator, to decline substantially. We would hope so. In 2017, it had identified roughly 17,000 exposed Redis instances, right? Because that was the default if you didn't do something. By 2020, that number had declined to 8,000, still a lot, in an audit by security company Trend Micro. And this person said, I like how simple the solution was: limit access to the loopback interface. Very nice. So, okay.
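Redis's protected-mode idea, listen only on loopback unless explicitly configured otherwise, is the same in any socket program. A minimal sketch of the principle (the function name is mine, not Redis code):

```python
import socket

def open_protected_listener(port: int = 0) -> socket.socket:
    """Bind only to the loopback interface, as Redis's protected mode does.
    Machines elsewhere on the network cannot reach this socket at all, with
    no authentication required; the insecure alternative binds "0.0.0.0"."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", port))  # port 0 lets the OS pick a free port
    s.listen()
    return s
```

The point is that reachability, not authentication, does the blocking: a service bound to 127.0.0.1 simply has no public attack surface.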
Steve Gibson [02:03:57]:
Oh yes, they've certainly improved the situation by dropping the clearly exposed instances to 47% of what they were. So at this point, either those still-exposed Redis key-value database stores have been sitting there for the past 9 years, since before the introduction of version 4, or they were configured with some authentication and therefore, again, misconfigured. As we all know, authentication should never be solely depended upon to block malicious access, and I believe that a misplaced reliance upon authentication, and a lack of adoption of backup measures such as never binding to a public-facing interface unless it's truly necessary, remains an easily remedied source of insecurity. And it occurs to me that it is also a shame that one of my favorite tricks has never been adopted. One of the most ironclad rules of internet routing is that any packet which is received by an internet router will have its incoming TTL, its time to live, decremented. It's an 8-bit byte with a maximum value of 255; typically it starts out at 64 or 128. The first thing that the router does upon receiving an incoming packet is decrement that TTL byte.
Steve Gibson [02:05:42]:
In the packet header. And if in doing so that value is decremented to zero, that packet will never be forwarded toward its destination. A router might simply drop it, you know, like a dead packet, which is essentially what has happened. Or it might elect to send an ICMP Time Exceeded message back to the packet's originator to let it know that for whatever reason that packet died while it was en route to its destination. Maybe it stumbled into a routing loop, or the TTL was too short and so it couldn't make it to its destination, because the internet's diameter, as it's called, has grown over time. There are many, many more routers, and a packet may have to cross many more router hops in order to get to its destination. Some of the early protocol stacks used a TTL of 32, and at some point there were places they couldn't get because there were more than 32 hops between the source and the destination. So now all of today's stacks typically use 64, 128, or 255.
Steve Gibson [02:06:58]:
So the point, however, is: if this rule were not absolutely obeyed by every router, the internet could conceivably fill up with zombie packets that live forever, refusing to die, just being circulated around. And that would be a big problem. Thus, this rule is absolutely obeyed. So, as a security tool, if there was some need to expose a server to the internet, in, for example, some sort of cloud-hosted configuration, as a listener of ours recently shared, he had to do that, and he did so deliberately, with foreknowledge, but because he had no choice. And if it were possible to set the TTL for that publicly exposed server's outbound packets to a low number like 2 or 3, then any other nearby clients of that public server, for example within the same cloud infrastructure that they were sharing, could connect and use it without any trouble, while at the same time no one in faraway China or North Korea or wherever could possibly get to it. Since a TCP connection requires round-trip verification from each end, any of the packets sent from the server would die after 2 or 3 router hops. No one probing that server from a distance would ever even be able to detect that its services were available.
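For what it's worth, BSD-style socket stacks do expose the outbound TTL as a per-socket option, so the scheme Steve sketches can at least be prototyped without raw packets, even though, as he notes, nobody deploys it as a security measure. A hedged sketch:

```python
import socket

def make_low_ttl_server_socket(ttl: int = 3) -> socket.socket:
    """Create a TCP socket whose outbound packets carry a tiny TTL, so any
    reply dies after a few router hops.  Nearby clients (say, in the same
    cloud region) can complete the TCP handshake; distant scanners never
    receive the SYN-ACK and so can't even tell the service exists."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    return s
```

Picking the right TTL value would require knowing the hop count to the legitimate clients, which is one practical reason the trick never caught on.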
Steve Gibson [02:08:49]:
Unfortunately, packet TTL has never been adopted, to my knowledge, as a security measure. It's considered to be part of deeper internet infrastructure, thus not something to be messed around with and not subject to application-level manipulation. You can use raw packets to do it, but it's not an interface that's commonly surfaced to applications, even when they might have an interest in employing it, which I think is too bad. So anyway, that's feedback from our listeners. And Leo, after this last message from a sponsor, we're going to get into what the researchers found about Dashlane, LastPass, and Bitwarden when they took a very deep dive into what would happen if the infrastructure at the cloud end were to be subverted.
Leo Laporte [02:09:54]:
Very good. Coming up on Security Now— while Steve hydrates, I'm going to tell you about Material, our sponsor for this section of Security Now, the cloud workspace security platform built for lean security teams. You know, a lot of security assumes that you're on-prem, right? Your emails are on-prem, your files are on-prem. But so many of us, including, by the way, TWiT, are not. We use Google Workspace for everything we do. And managing security in the cloud workspace, we know, is tough. Phishing is just not the only way in, but today's email security typically stops at the perimeter. New attacks are hard to detect.
Leo Laporte [02:10:34]:
Your email's siloed. Your data, your identity security tools, all siloed. Uh, oh, Material can solve this. It could protect the email, it could protect the files, it can protect the accounts that live in Google Workspace or Microsoft 365. Works with both. Because effective email security today needs to do a lot more than just blocking phishing and other inbound attacks. It needs to provide visibility and defense across the entire workspace threat surface. So Material works by ingesting your settings, your contents, your logs, does this all automatically.
Leo Laporte [02:11:08]:
To provide holistic visibility into threats and risk across the entire workspace, and then gives you the tools to automatically remediate them. Material delivers comprehensive workspace security by correlating signals and driving automated remediations across the environment. It's fixed before you even know it. Phishing protection and email security combining advanced AI detections with threat research end-user report automation. They do detection and protection of sensitive data. You bet there's a lot of sensitive data in your, in your cloud, right? There is in ours. Across not just the email inboxes but shared files as well. Account threat detection and response with comprehensive control over access and authentication of people and third-party apps.
Leo Laporte [02:11:55]:
Nowadays, you know, I know this is our— the case with our Google Workspace where we've got a lot of third-party apps hooked in to the workspace. Same with Microsoft 365. Material empowers organizations to rapidly mature their ability to detect and stop breaches with step-up authentication for particularly sensitive content. That's really nice. Blast radius visualization for accounts and the ability to detect and respond to threats and risk across the entire cloud workspace. Material enables organizations to scale their security without scaling their team. Material drives operational efficiency with its simple API-based implementation and flexible, automated, and one-click remediations for email, file, and account issues, including an AI agent that automates user report triaging and response. Material protects the entire workspace for the cost of email security alone, and with a simple and transparent pricing model.
Leo Laporte [02:12:54]:
Secure your inbox and your entire cloud workspace without adding more toil to your day or costs to your balance sheet. See material.security to learn more or to book a demo. That's material.security. We thank them so much for supporting Security Now. Material.security. I love— you know, I love our advertisers because I can tell people about something that's really of real, uh, value and use to them. You know, it's not just another toothpaste. It's great.
Leo Laporte [02:13:23]:
I love it. All right, Steve, let's find out about this ETH security report, because I have to admit, when I, I sent this to you the day before the show last week, and I was a little nervous, I was a little worried.
Steve Gibson [02:13:36]:
Okay, so way back in the early days of this podcast, we talked about the technology to securely back up and securely store our data in the cloud. Of course, back then what we had were remote storage providers, and clouds were white puffy things that slowly drifted across the sky. No one was calling anything and everything that was remote a cloud back then, but that's what we have today. At the time, I crystallized the concepts surrounding the only sort of encryption that made sense using the abbreviation which has kind of become famous on the podcast, TNO, which was short for Trust No One. This was repurposed from a prominent poster on the wall of X-Files agent Fox Mulder's office. Of course, Mulder was famously paranoid, so a poster reminding him to trust no one made sense. It also made sense for anyone who might be considering sending the personal and private contents of their PC off to a remote server. And of course, these days, what could be more personal and private than our passwords? Like, all of our passwords.
Steve Gibson [02:14:57]:
But the underlying concept behind TNO encryption was simplicity itself, which was, you know, part of the reason that it took hold; there was some real appeal there. The idea was that any and all data that was going to be sent off-site would first be encrypted using a secret key which would never be shared, so that all the remote storage provider would be receiving and storing on our behalf would be a massive blob of pseudo-random data. You know, the alternative was the simpler approach, right? We send our data and we trust them to encrypt it for us. It's like, oh no, we'll be storing it encrypted, don't you worry. No, no, we're going to encrypt it here, and then we're going to send you a blob of noise, and you just hold that for us in case we need it later. So as we know, regardless of what is fed into properly designed encryption, what emerges is indistinguishable from pseudo-random noise. Then later we used another abbreviation, PIE, P-I-E, which stood for Pre-Internet Encryption. Same concept: you would always encrypt anything you cared about before it ever left the domain of your machine to be sent out over the internet.
Steve Gibson [02:16:31]:
And along the way, we also examined the more technical details of how all of this should be done. We looked at the need for the user's password to be strong, and at the use of PBKDFs, password-based key derivation functions, to significantly impede the use of brute-force password cracking technologies and techniques. What I want to point out is that all of this is extremely straightforward. We talked about it 20 years ago. It is simple to do, and it is utterly bulletproof. It works, and it works perfectly. Nothing we talked about back then was difficult to implement then or now. So what's the problem? How can today's contemporary password managers, which all rightly require the most state-of-the-art security available, still be having trouble of some sort today with something as simple as those concepts of TNO and PIE? That question has two answers: practicality and feature-itis.
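The client-side key stretching Steve mentions is a one-liner with Python's standard library. The iteration count here is illustrative; OWASP currently suggests on the order of 600,000 iterations for PBKDF2-HMAC-SHA256:

```python
import hashlib

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    """Stretch a master password into a 256-bit vault key entirely on the
    client, so only pseudo-random ciphertext ever leaves the machine (the
    TNO/PIE idea).  High iteration counts make offline brute-forcing of a
    stolen vault proportionally more expensive for the attacker."""
    return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"),
                               salt, iterations)
```

The salt must be random per user and stored alongside the vault; it's the iteration count, not the salt, that has to keep rising as cracking hardware improves, which is exactly where LastPass stumbled.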
Steve Gibson [02:17:54]:
In the case of today's password managers, it's the need to go from a dead simple, rudimentary, and utterly secure system concept, which is what we had with our TNO/PIE, and evolve it into a workable and practical solution. Suddenly it's not so simple. For example, in the pre-internet encryption, trust-no-one backup solution which we discussed in the early days, what would happen if our user's hard drive crashed, they needed their backup, but they'd forgotten their password? Trust no one cuts both ways. If you have truly trusted no one else with anything, then the well-known abbreviation that comes into play is SOL. You know, Leo, you'll be able to relate to this. You have a Bitcoin wallet containing a now-valuable Bitcoin that's protected by a long-forgotten password. The good news is that it is super secure. Yeah.
Steve Gibson [02:19:09]:
Oh yeah. No one is going to open the wallet without its password. And that's also the bad news, since that no one includes you, exactly. So what our original super-secure system back then is missing is any form of password recovery. Yes, this super simple system is completely secure, but it is also completely unforgiving. We know that any practical password manager for the masses must necessarily provide some means for dealing with the inevitable I-forgot-my-password-for-my-passwords. But what's also inevitable is that the moment we start adding such get-out-of-jail features, we invariably start chipping away at the pristine security we originally enjoyed. It is exceedingly difficult to have it both ways.
Steve Gibson [02:20:18]:
There's also the pressure to maintain feature parity among the competing password managers by offering some form of friends-and-family sharing. And if all that wasn't challenging enough, the password managers have also been confronted with rapidly evolving cryptographic cracking technology. This often requires backward compatibility with earlier releases. We saw LastPass stumble badly over this with the need to increase their client-side PBKDF iteration count while being reluctant to force their original users to keep up with the times; some of those original vaults were left at just one iteration. Yeah, every additional feature increases the complexity of the system, and we know that complexity is the enemy of security. Today's password managers are not only bristling with features, but they're also under continual pressure to match each other's features, since many users will make their choice of password manager from a feature comparison grid while considering little else. All of this made password managers a terrific subject for the group of Swiss security researchers who decided to dig into the operation of three password managers to learn whether, and to what degree, the addition of all these extra bells and whistles may have come at the cost of their users' security.
Steve Gibson [02:21:51]:
So here's what the team wrote in the overview abstract of their 28-page research findings. They said, zero-knowledge encryption is a term widely used by vendors of cloud-based password managers. Although it has no strict technical meaning, the term conveys the idea that the server which stores encrypted password vaults on behalf of its users is unable to learn anything about the contents of those vaults. The security claims made by vendors imply that this should hold even if the server is fully malicious. This threat model is justified in practice by the high sensitivity of vault data, which makes password manager servers an attractive target for breaches, as evidenced by the history of attacks upon them. And we saw that LastPass lost control of theirs, right? Mm-hmm. They wrote, we examined the extent to which security against a fully malicious server holds true for 3 leading vendors who make the zero-knowledge encryption claim: Bitwarden, LastPass, and Dashlane. Collectively, they have more than 60, that's 6-0, million users and a 23% market share.
Steve Gibson [02:23:22]:
We present 12 distinct attacks against Bitwarden, 7 against LastPass, and 6 against Dashlane. The attacks range in severity from integrity violations of targeted user vaults to the complete compromise of all the vaults associated with an organization. And I need to say, with lots of conditions, which, you know, they don't want to talk about in their abstract; it required a whole bunch of other things to be true. They said, the majority of the attacks allow recovery of passwords. We've disclosed our findings to the vendors and remediation is underway. Our attacks showcase the importance of considering the malicious-server threat model for cloud-based password managers, despite vendors' attempts to achieve security in this setting, which, again, I've said is difficult because we're asking so much of them. They said, we uncover several common design anti-patterns and cryptographic misconceptions that resulted in vulnerabilities. We discuss possible mitigations, and also reflect more broadly on what can be learned from our analysis by developers of end-to-end encrypted systems.
Steve Gibson [02:24:42]:
Okay, so the malicious-server model is certainly the one we want. It's the model that was explicit in our original foray into TNO. The no one who we were not trusting was the entity who was holding our encrypted data backed up. Although all of the responsibility for not losing the decryption key was ours, in return for that responsibility we obtained the warranted guarantee of our invulnerability. The beginning of their introduction sets the stage and also shares some additional statistics about the market share of the native, built-in, browser-based solutions which these guys are competing with, right? They wrote, despite the rise of alternative authentication methods, meaning for websites, users today still have to deal with passwords, often numbering in the hundreds. Password managers help to tame the problem by providing a tool to securely store passwords, reducing the challenge of remembering many passwords to remembering just the one master password for the password manager. Cloud-based password managers outsource the storage to a remote server under the control of a service provider. At an abstract level, a user's passwords are collected in a single object, which is then encrypted by the user's client under a cryptographic key derived from the user's master password, creating an encrypted vault.
Steve Gibson [02:26:28]:
The client then uploads the encrypted vault to the server. When a user wishes to access a password for a particular service, their client authenticates to the service, retrieves the encrypted vault, and decrypts it locally with a user-provided copy of the master password. Importantly, in solutions of this type, the service provider does not see the vault plaintext and therefore does not immediately learn the user's passwords or other sensitive data. This is akin to the situation with end-to-end encrypted cloud storage. And while the terms end-to-end encrypted or client-side encryption are sometimes used by vendors in this space, the most commonly used term is zero knowledge encryption. The term zero knowledge, of course, has a specific technical meaning in the context of interactive protocols. But here the term is being used with a different meaning, as we shall see. The cloud-based approach has multiple advantages.
Steve Gibson [02:27:32]:
Users can access their encrypted vaults from multiple devices. Vaults can store other sensitive information beyond passwords, for example, credit card data, personal documents, and so on. And the service can be extended to allow sharing of sensitive data within a family group or organization. The access-from-anywhere feature creates work for vendors, who have to support access from web browsers as well as standalone applications running on different operating systems. Many vendors have offerings which allow the cloud storage element to be self-hosted by an organization instead of by the vendor. Three prominent providers in this space are Bitwarden, Dashlane, and LastPass. At the time of writing, Bitwarden claims to have 10 million users, Dashlane 19 million users and 24,000 business customers, and LastPass 33 million users and 100,000 business customers. A 2024 report based on a survey of 1,000 US consumers gives further insight into the popularity and market share of password managers.
Steve Gibson [02:28:50]:
The built-in password managers of Google and Apple, right, meaning Chrome and Safari, now represent 55% of the market, up from a combined share of only 15% in 2021. So the built-in browser password market has, in just a few years, gone from 15% to 55%. Bitwarden and LastPass were the next 2 largest according to the study, with 11% and 10% market share respectively. Dashlane now has only 2% market share, down from 7% in 2021. So it's dropped 5 points from when it was among the market leaders. There's a long tail, they write, of smaller players in the market. So I thought it was interesting to see that the password managers built into Safari and Chrome are enjoying a 55% share of the market. And that makes sense to me, right? While I require strong cross-platform support from my chosen password manager, and the ability to store all kinds of other things, my wife doesn't. You know, she lives in Chrome on both her PC and her iPhone.
Steve Gibson [02:30:09]:
I don't think she ever uses Safari, and she despises Edge. So her needs are fully met without the use of any additional password manager. But I use many more features of my third-party password manager, and I can't imagine operating without it. Okay, so what did their detailed research reveal? They wrote, we give a detailed analysis of Bitwarden, Dashlane, and LastPass, presenting a cornucopia of practical attacks. In the artifacts that accompany our paper, we give proof-of-concept implementations of all of these attacks, demonstrating their feasibility. The attacks allow us to downgrade security guarantees, violate security expectations, and even fully compromise users' accounts. We provide a table listing the various attacks and their impacts. Worryingly, the majority of the attacks allow recovery of passwords, the very thing that a password manager is meant to protect.
Steve Gibson [02:31:12]:
We group the attacks into 4 categories: attacks exploiting the key-escrow features used for account recovery and single sign-on login; attacks based on lack of integrity of the vault as a whole; attacks enabled by the sharing features; and finally, attacks exploiting backwards compatibility. You know, basically those are the categories of things I talked about, features that practical users want and that the password managers need to provide. They said these attacks reveal common design anti-patterns and cryptographic misconceptions. Lack of authentication of public keys is widespread. When combined with key escrow and sharing features— key escrow meaning account recovery— this results in the adversary being able to fully compromise vaults. Another recurring failure mode is wrongly assuming origin authentication of public-key ciphertexts, leading to key substitution attacks against Bitwarden, which have been fixed. LastPass stands out for lacking any form of ciphertext integrity, using AES-CBC as its main encryption mode.
Steve Gibson [02:32:30]:
Okay, so by that they mean that LastPass is not authenticating its decrypted results. AES in CBC, cipher block chaining, which we talked about years ago, provides state-of-the-art encryption, but after decrypting there's no means for authenticating the decrypted result. That is, for essentially verifying that the password you used decrypted something back into its original form. Our longtime listeners will recall the early days when we talked about the importance of authenticating, and assumed that decryption and authentication would be separate steps. The question was, in which order should decryption and authentication be performed? Today there are very good solutions for this. For example, for SQRL's design, I chose to use AES in GCM mode, which is a lovely protocol that simultaneously provides encryption and authentication at the same time. But today LastPass may be stuck with their original decisions, way in the past. The researchers finished their introduction by writing, thanks to legacy code and backwards-compatibility exploits, we can downgrade Bitwarden and Dashlane to similarly hazardous states.
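The difference Steve is describing, raw CBC versus authenticated encryption, comes down to verifying a tag before trusting the decryption. This toy sketch (a throwaway SHA-256 keystream, absolutely not real crypto) shows the encrypt-then-MAC pattern that modes like GCM bundle into a single operation:

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256 -- illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, nonce: bytes, plaintext: bytes):
    """Encrypt, then MAC the ciphertext; the tag travels with the ciphertext."""
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def unseal(enc_key: bytes, mac_key: bytes, nonce: bytes,
           ct: bytes, tag: bytes) -> bytes:
    """Verify the tag BEFORE decrypting, so any tampering is rejected outright,
    which is exactly the check a bare CBC vault is missing."""
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("ciphertext failed authentication")
    return bytes(c ^ k for c, k in
                 zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Without the tag check, a malicious server could flip ciphertext bits and the client would happily "decrypt" altered data; with it, tampering raises an error before any plaintext is produced.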
Steve Gibson [02:34:00]:
We also show that integrity is only achieved for single fields in individual items, instead of at the vault level. This enables cut-and-paste attacks within items and across the vault. Such attacks can often be chained to compromise the confidentiality of the vault as well. These attacks work even when proper authenticated encryption is used. They're possible because of insufficient key separation in vaults with complex structures, and/or a lack of cryptographic binding between data and metadata. So what all that means is that, no matter how much you may want to, and no matter how well-intentioned you may be, it's just not possible to check your own work. It truly is necessary to have highly motivated, highly skilled, and highly creative security researchers who want to find problems, and who have no ego stake in not finding any problems. Scrutinizing products that have become as complex and feature-laden as today's password managers requires an extreme level of focus and desire to find problems. The problems they found were extremely complex, and I don't see any point in spending more time digging more deeply into the specifics, since they've been corrected.
Steve Gibson [02:35:44]:
But we're all better off as a consequence of that. The problems were always predominantly theoretical in the first place, since they depended upon some form of deep compromise of the provider's server-side infrastructure. We know that that did happen with LastPass, so it's not like it's impossible, but even those issues have now been addressed. To my mind, this is a classic case of: the safest security solution is the one that's been heavily challenged and audited by the industry's top security researchers. So I feel more confident than ever with my choice of Bitwarden as my password manager. All three of these password managers are better today as a result. And I should mention that the reason those three were chosen, as I said before, was the availability of at least some of their client-side code from their publishers. That's one of the things that they mentioned in their 28-page paper: that's why these three were chosen.
Steve Gibson [02:36:55]:
So, you know, Leo, I agree with you completely. It absolutely does make sense to use a password manager that makes it easy for security researchers to deeply and fully understand and scrutinize it, and none of them does that more than Bitwarden.
Leo Laporte [02:37:17]:
So what kind of remediation can they pursue for this? I mean, is it obvious how to fix this problem? It seems to me that if somebody has a malicious server with your vault on it, there's all sorts of mischief, right?
Steve Gibson [02:37:36]:
So they're making changes in the way that their lower-level protocols were working. They were not doing some verifications that they could have been doing. So what they were doing was secure, but they themselves weren't thinking of being an adversarial server, because they're not, right? It's very much like the interpreter problem. We've always talked about how dangerous, how difficult it is to deserialize a JSON object, for example, or to decode an MP4. It is an interpreter, and the people writing it are assuming that you're feeding in a valid file, not something that's malicious. So they implemented their server-side infrastructure knowing they weren't the bad guys. And so it's just impossible for them to imagine: but what if we were the bad guys? It took a third-party research group to say, okay, we are the bad guys; what kind of mischief can we get up to? And so what was needed was additional steps of validation and verification to prevent something that Bitwarden knew they would never do, right? But they didn't ever consider, well, what if somebody else did?
Leo Laporte [02:39:13]:
It's the same thing as zero trust, right? You should never assume that you have full control of the environment. But it's—
Steve Gibson [02:39:23]:
Yeah, what if you did? It's so difficult to put yourself in that mindset, right? I've talked about debugging my own code, where my code has a bug. I'm staring at it, and there's not that much there, and I cannot see it. And it's not until I step to the problem and it goes, eh, that I go, oh, then I see it. There's just a weird mental block. So it really does take a third party. And I'm delighted these guys went to all the trouble, and more power to them. Thank you for the research.
Steve Gibson [02:40:02]:
I hope you get lots of credit, because you did the industry a favor. You did all of us who are using Bitwarden a great service. And Bitwarden, as you said, stepped right up, thanked them for the research, and is implementing fixes for the verification they didn't need themselves, but could understand would be necessary if a bad guy took their place.
Leo Laporte [02:40:37]:
Right. Good. So there's not cause for concern. In fact, this is cause for celebration. This is a useful result that I hope all three companies will act on and improve. Is there anything I, as a user, can do to protect myself? Using Argon2, or increasing the iterations, or anything like that?
Steve Gibson [02:41:00]:
None of that's going to help. And it's interesting, too. I wonder if you'd be better off self-hosting, like running your own server infrastructure.
Leo Laporte [02:41:11]:
I don't think so, only because this is not my full-time job, and presumably, for the people running the networks at these companies, it is their full-time job and they know what they're doing.
Steve Gibson [02:41:23]:
Yes, you have to imagine that with the security they have surrounding their infrastructure, they've thought of everything they could possibly do.
Leo Laporte [02:41:36]:
And now even more. Yeah. I mean, self-hosting— in effect, that's what you're doing if you're using the browser's password manager. You're assuming that your system is secure. For a long time, Chrome didn't even encrypt the passwords. They said, well, if somebody has access to your system, it's game over anyway. They do now. It's an interesting question.
Leo Laporte [02:41:59]:
I mean, honestly, if you know nobody's ever going to be in your house, writing them down in a little book is probably the best thing to do. But you can't write those down anymore; they're too long. Yeah, exactly. That just tempts you to make something easy to write, which isn't going to be a good password. But can we just get rid of passwords, Steve? What about that SQRL thing?
Steve Gibson [02:42:17]:
I think we all need to implement that. Yeah, it's funny how passkeys have just kind of half-arrived. I mean, they're around.
Leo Laporte [02:42:25]:
I use them whenever I can.
Steve Gibson [02:42:27]:
They're so convenient when they're in place. I love their magic. But, you know, they're still not the standard.
Leo Laporte [02:42:36]:
You know, now you'd have the same problem as with a password if the password vault were compromised. A passkey isn't inherently more secure than a password, is it? Because there's a secret stored there, or is that not the case?
Steve Gibson [02:42:48]:
No, you do have a secret, but you're not having to share it with the server. In the case of a password, you're giving the server a secret that you want it to keep. With a passkey, all the server can do is verify that you know the secret.
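That property, proving you know a secret without ever transmitting it, is the heart of public-key authentication. Here is a toy Schnorr-style identification round using deliberately tiny parameters, purely for illustration; real passkeys use the WebAuthn standard with curves such as P-256 or Ed25519, not this scheme.

```python
import secrets

# Toy Schnorr identification protocol: the server stores only a public
# key and verifies, via a random challenge, that the client knows the
# private key. The secret itself never crosses the wire. These tiny
# parameters are for illustration only and offer no real security.

P = 2039   # safe prime, p = 2q + 1
Q = 1019   # prime order of the subgroup
G = 4      # generator of the order-q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1   # private key (stays on the client)
    y = pow(G, x, P)                   # public key (what the server stores)
    return x, y

def prove(x, challenge_fn):
    r = secrets.randbelow(Q - 1) + 1
    t = pow(G, r, P)                   # commitment sent to the server
    c = challenge_fn(t)                # server-chosen random challenge
    s = (r + c * x) % Q                # response; reveals nothing about x alone
    return t, c, s

def verify(y, t, c, s):
    # Server checks g^s == t * y^c (mod p) without ever seeing x.
    return pow(G, s, P) == (t * pow(y, c, P)) % P

x, y = keygen()
t, c, s = prove(x, lambda t: secrets.randbelow(Q))
print(verify(y, t, c, s))   # True
```

Contrast this with a password login, where the secret itself travels to the server and must be stored (at best, hashed) there; here the server stores only the public key y and learns nothing that would let it impersonate the client anywhere else.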
Leo Laporte [02:43:05]:
But what I'm saying is, if I store it in the password manager, as I do— oh yeah, it's vulnerable just like a password would be. Yeah. Okay, would SQRL have had that same issue? Yeah, I guess it would, because there's a secret. The password manager is storing a secret, in effect. Yeah.
Leo Laporte [02:43:24]:
Yeah.
Steve Gibson [02:43:24]:
And in fact, with passkeys the secrets are more distributed. With SQRL, you had one master key that ran the whole galaxy. But on the other hand, I went to extremes to protect it.
Leo Laporte [02:43:41]:
So, right. Steve and I are going on the road. So a couple of program notes. We will be doing the next Security Now Sunday, right before TWiT. Is that noon we're going to do that?
Steve Gibson [02:43:50]:
I don't remember what time. I think it's 1 PM. I better check. I think Lisa said 1 PM when I, when I asked her.
Leo Laporte [02:43:59]:
For Sunday? Yeah. Unless you make it a very short show, because TWiT's at 2, so I better check. Maybe it's 11. It might be 11. That would make more sense. That would give us time to do a full 2.5-hour show and still have time for TWiT.
Steve Gibson [02:44:13]:
So let's make it 11. She's gonna kick me because I asked her.
Leo Laporte [02:44:18]:
She said, we already decided this. I was like, oh, okay, you're right. Now I remember. Uh-oh. So we will stream it. We'll do it just like we do Security Now. Just—
Steve Gibson [02:44:28]:
it'll be Sunday, March 1st at 11:00 AM.
Leo Laporte [02:44:30]:
There it is, March 1st, 11:00 AM.
Steve Gibson [02:44:32]:
We'll celebrate a new month. Just kidding, Lisa.
Leo Laporte [02:44:37]:
I'm not worried about Lisa, I'm worried about Lori. But anyway, thank you, Lori, for letting us steal Steve's brunch time. Okay, I'll have mimosas ready for you, Steve.
Steve Gibson [02:44:48]:
And then email will go out to all 20,000+ listeners who are on the Security Now mailing list before that, right?
Leo Laporte [02:44:55]:
And then we're gonna get in an airplane, a big old jet airplane, and go to Florida, to Orlando for Zero Trust World, ThreatLocker's really great security conference. I'm looking forward to it. Some great keynote speakers, including— I'm really excited to see a guy we spent a lot of time talking about. Oh, what's his name? Well, first of all, Adam Savage. Marcus Hutchins will be there, so we can talk to Marcus Hutchins. Adam Savage is speaking.
Leo Laporte [02:45:28]:
My friend David Spark. Linus of Linus Tech Tips. In fact, the whole Linus Media Group, I think, is going to be there. So now I'm thinking maybe we aren't going to be the stars of the show, but that's fine. We will be doing a presentation, the last event on Tuesday, March— or sorry, Wednesday, March 4th. Yep, we're the last event of the day, and then there's a nice cocktail party afterwards. So we will see you at Zero Trust World. I hope you are going.
Leo Laporte [02:45:59]:
And otherwise, we will be back here a week from Tuesday, or rather the next Security Now after the next one. So 1068 will be back on Tuesdays. We normally do these Tuesdays, right after MacBreak Weekly, which is usually 1:30 Pacific, 4:30 Eastern, 21:30 UTC. Although now that I think about it, 1068 will be on March 11th, which will be after we switch back to daylight saving time. So it'll be 20:30 UTC. I know.
Steve Gibson [02:46:39]:
And are we going to increment the podcast number for the, uh, ThreatLocker?
Leo Laporte [02:46:45]:
Oh, maybe it'll be 1069. I don't know. Yes, because we are going to make a podcast out of the presentation Steve's doing at Zero Trust World. So you get to hear it even if you're not at Zero Trust World. I don't think we'll give it a Security Now number.
Steve Gibson [02:46:58]:
We'll give it a twit—
Leo Laporte [02:47:02]:
everybody or club only? Uh, you know, we should have probably had the conversation. I'm gonna say everybody. I'm gonna make an executive decision.
Steve Gibson [02:47:12]:
I think I have the power to do that.
Leo Laporte [02:47:14]:
That should make everybody happy. Yeah, everybody should get to hear it. I'll tell you next week where it's going to be so you can hear it. You can also get the show after the fact. Steve's got unique versions at his website: the 16-kilobit audio, the 64-kilobit audio, and the show notes that he writes, very extensive show notes. That's a great thing to have.
Leo Laporte [02:47:38]:
He's also got the transcripts, very nice transcripts written by a human, in particular Elaine Ferris. So thank you, Elaine, for doing that. All of that at grc.com. While you're there, pick up a copy of SpinRite, the world's best mass storage maintenance, recovery, and performance-enhancing utility. Version 6.1 is out. You'll also be able to stuff your NextDNS with weird, long, random numbers if you get the DNS Benchmark Pro, which is now available. A great way to see if you're using the fastest DNS server available to you, and it's different for everybody. That's why you need a copy of it on your own.
Leo Laporte [02:48:14]:
Depends upon where you are. Yep. Uh, all of that at grc.com. You can come to our site for, uh, the 128-kilobit audio or the video of the show. That's twit.tv/sn. There is a YouTube channel dedicated to the show. And if you do want to share a clip of the show with friends, family, coworkers, bosses, that's probably the easiest way to do it. You just clip it on YouTube, you know, send them a link with that time code.
Leo Laporte [02:48:39]:
Makes it very easy for them to watch it. So we encourage you to do that, spread the word, and of course you can subscribe. The best thing to do is subscribe in your favorite podcast client; that way you'll get it automatically the minute it's available. And if you do that, please leave us a nice review. Tell the world about Security Now. Yes, Mr. Laporte.
Leo Laporte [02:48:59]:
So it's 11 AM.
Steve Gibson [02:49:00]:
Yes, we figured that out.
Leo Laporte [02:49:03]:
And 1068 will be the special. Ah, so Lisa has clarified 1068 will be the special. 1069 will be the next episode on March 11th.
Steve Gibson [02:49:14]:
And we do need a mic check on Tuesday.
Leo Laporte [02:49:19]:
Tuesday mic check. Yeah, I figure she'll drag me to that; I don't need to remember it. Thank you, Steve. Have a wonderful night. We'll see you all, everybody, next time on Security Now. Bye-bye.
Leo Laporte [02:49:32]:
Hello everybody, Leo Laporte here. You know what a great gift would be, whether for the holidays or just any time, a birthday? A membership in Club Twit. If you have a Twit listener in your family, somebody who enjoys our programming, and you want to give them a nice gift and support what we do, visit twit.tv/clubtwit. They'll really appreciate it, and so will we. Thank you.
Steve Gibson [02:50:01]:
Twit.tv/clubtwit. Security now!