Security Now 1005 transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show
0:00:00 - Leo Laporte
It's time for Security Now. Steve Gibson is here for our last episode of the year. Next week, a best of, but this week we're going to talk about AI. Is it the Wizard of Oz? Steve has some really deep thoughts about what AI is and whether we'll ever get to AGI. Also, we have some pretty amazing examples of what the latest ChatGPT model can do. We'll talk about, oh my God, the NPM package manager repository that has more than half a million malicious packages on it and what you can do to avoid that. And then certificate lifetimes are decreasing. Steve asks a question: why? Why? All that and more coming up next on Security Now. Podcasts you love.
0:00:52 - Steve Gibson
From people you trust.
0:00:55 - Leo Laporte
This is TWiT. This is Security Now, episode 1005, recorded Tuesday, December 17th, 2024: Six-Day Certificates, Why? It's time for Security Now, the show where we cover your security and privacy and safety online and talk a little bit about sci-fi, how computers work and anything else that's on the mind of the master, Mr.
0:01:24 - Steve Gibson
Stephen.
0:01:24 - Leo Laporte
Gibson. Steve Gibson, how are you?
0:01:26 - Steve Gibson
And I have to say, it's AI these days. You're not alone, I might add, in your burning curiosity about it. Okay, so today's podcast, 1005, for December 17th, I titled Six-Day Certificates: Why? Why? Yeah, and we're going to take a long look at that, because I don't get it, and I think I'll be able to make a strong case for why I'm not sure there's anything to get. I mean, it's just crazy. This is what Apple's asking for, right?
Apparently Apple was a driver, the guy from Sectigo. I just heard from one of our listeners, via feedback, who received the show notes last evening, who said that that guy, who's got a stronger place in the political hierarchy at the moment, is also talking about it. But what happened was that, in the 2024 annual report, the executive director of Let's Encrypt announced that they are moving to six-day certificates.
0:02:52 - Leo Laporte
Oh well, then, that settles it.
0:02:54 - Steve Gibson
I mean, exactly, that's 70% of the internet, right? So anyway, we're going to get to that. Also, however, I'm going to ask: is AI the Wizard of Oz?
0:03:06 - Leo Laporte
Pay no attention to the AI behind the curtain or is it more?
0:03:09 - Steve Gibson
Yeah. We also have Microsoft's long-standing, effective MFA login bypass, which must have come as a surprise to them. Turns out they didn't really actually have multi-factor authentication working. Is TPM 2.0 not required after all for Windows 11? There's been a lot of that going around the internet saying, hey, Microsoft changed their mind. Also, we're going to meet 14 North Korean IT workers who made $88 million from the West. When I saw that, it's like, where could I get that job?
0:03:45 - Leo Laporte
They weren't hacking, they just needed cold, hard cash Right, and they got it.
0:03:49 - Steve Gibson
Yep. Also, Android updates its Bluetooth tracking with some new anti-tracking measures. And NPM has had an unbelievable 540,000 malicious packages discovered hiding in plain sight.
Here, look how easy this is to use: just download this and drop it into your web browser and off you go, in more ways than one. Also, the AskWoody site remains alive, well and terrific. I'm going to touch on that because they reviewed Spinrite yesterday. Also, my iPhone is linked to Windows and it is wonderful. Oh good, oh good. Also, how has email been finding logos before BIMI happened? If we use him and her for people, how about "how" for AI, as suggested by one of our listeners? Also, I'm going to show another very disturbing conversation with ChatGPT, which one of our listeners has shared. And what's going on with the new ChatGPT o1 model? It wants to escape. What? Also, Let's Encrypt plans to reduce its certificate lifetime from 90 days to just six days. Why in the world? And, as we often say on this podcast, get ready for it. What could possibly?
0:05:33 - Leo Laporte
go wrong.
0:05:36 - Steve Gibson
And I have got a great picture of the week. Lots of feedback already from listeners. Benito got a kick out of it and asked the same question I did: is this real? But anyway, it'll be fun. You may have already encountered it, because, you know, you seem to be somehow everywhere, all at once, at the same time.
0:05:58 - Leo Laporte
That's my job, Steve. That's right, Leo, but...
0:06:02 - Steve Gibson
I haven't looked at it yet. Also our pop culture reference.
0:06:04 - Leo Laporte
Exactly. I will look at it with everybody in just a moment, actually right after this word from our sponsor. Wow, the word's there somewhere. They've changed the UI for Restream and I'm having trouble finding the button. I'll click this button. How about it? There it goes. Our show today, brought to you by Delete Me.
We know about Delete Me because we use Delete Me. Specifically, Lisa uses Delete Me. You may want to use it too. Have you ever searched for your name online? Have you ever tried that? Don't. I'm not recommending it.
You will not like how much of your personal information is just right there, smack dab on the Internet. I mean, it's actually kind of scary. Maintaining privacy is not just a personal concern. It's a concern for every one of your family members too. In fact, that's why Delete Me now has family plans. So they got corporate plans, individual plans, and they have added family plans so you can make sure everyone in the family feels safe online.
So what does Delete Me do? Well, it helps reduce the risk from identity theft, from cybersecurity threats, from harassment and more. We've experienced all of that, but particularly the one that concerned the company was the cybersecurity threats. I've mentioned it before: the hackers who used our CEO's name and phone number to send a message to her direct reports saying, send me Amazon gift cards quick, I'm in a meeting and I need them. Now, our team is smart, but that scared me a little bit, because it showed me that somebody out there, some bad guy, not only knew our CEO's name and phone number, but knew her direct reports and their phone numbers. And I thought about it and I said, you know what? We're getting Delete Me for Lisa. And it has worked. We have actual evidence that it has kicked her stuff off the internet. It's really amazing.
So when you sign up for Delete Me, or you sign up your family for Delete Me, their experts will find and remove your information and their information from hundreds of data brokers. I think that number is low now; I think it's thousands. That's one of the reasons you need Delete Me, because it's their job, their full-time job, to keep track of all of the creepoids out there who are collecting your information and selling it on to the highest bidder, including foreign powers. It's completely legal. If you're doing it for your family, you can assign a unique data sheet to each family member, tailored to them, so you know, well, this one likes to be on Instagram, and things like that. Easy-to-use controls. Account owners can manage privacy settings for the whole family.
But here's the most important thing. It was very important for us. It isn't just a one-and-done. Delete Me, of course, went in and deleted every trace of Lisa from those data brokers, but then they continue to scan and remove your information regularly, and there's a reason for that. First of all, there are new data brokers every day. It's a very lucrative business, sad to say, and because even after your information has been deleted, they still are collecting information, that dossier gets rebuilt, so you've got to keep going. I mean, I'm talking everything. They know your addresses, your photos, your emails, your relatives, your phone numbers, your Social Security number, social media, your property value and more. Steve and I did this: after the National Public Data breach, we went out and searched for our information. My Social Security number was on the net, Steve's was on the net. Lisa's was not, because she'd been using Delete Me. It's very smart. Protect yourself, reclaim your privacy by going to joindeleteme.com.
Slash twit, use the code TWIT. It's not just for individuals, not just for families. Businesses, you should really think about this too, to protect your security, your managers, your middle managers. They need Delete Me. That's joindeleteme.com/twit. The offer code TWIT will get you 20% off, so remember that too. Use the address so they know you saw it here: joindeleteme.com/twit. And if you decide to sign up, use the offer code TWIT. You will get 20% off. joindeleteme.com/twit. Thank you, Delete Me, for supporting the good work, the important work, that Steve's doing here at Security Now. All right, I am going to scroll up, as is my wont, never having seen this before. I shall scroll up and examine the picture of the week, and you will get my honest and true reaction. "Upgrade... monthly... premium plan..." If only.
I mean, I feel like we're headed straight there. Let me show you, and Steve, you can explain this.
0:10:44 - Steve Gibson
And that is my point exactly. I gave this picture the caption "The Monetization of Our Lives," and below it I wrote: this brilliant spoof perfectly highlights the logical outcome of the distressing path we're on, where the ownership of anything is being replaced by the rental of everything. Yeah. And this shows, it's a pop-up which a user of a mouse would get on Windows, and it says "Upgrade required: monthly click limit reached." And it says, "You've reached the maximum number of clicks allowed for this month. To continue using your mouse without interruption, please upgrade to a monthly subscription."
And then, of course, we have two plans: the standard plan and the premium plan. The standard plan has limited clicks. It's $10.99 per month. For that you get 10,000 clicks per month, but if you go over that, it's 10 cents per click thereafter. You get a thousand meters of mouse wheel usage per month, customizable button mappings and just the basic level of support. But if you elect the premium plan, and who wouldn't? Well, for only $17.99 per month you can have unlimited clicks. You also get unlimited mouse wheel usage, customizable button mappings, oh, and priority support, if you, you know, figure out how to hold the mouse, and access to advanced settings and features.
Now, of course, the one they want you to click on, the upgrade to the premium plan, is all glowing there in cyan; you just need to click on it. You could upgrade to the standard plan; they're not recommending it. And then this person has clicked on Remind Me Later. You can see the mouse there hovering over Remind Me Later. That's what he's going for. But that brings up another little pop-up saying, note: you won't be able to use your mouse until you upgrade. Now this does really beg the question: how do you upgrade if you can't?
0:13:16 - Leo Laporte
use your mouse? Click, click, click. I don't know about that joke, folks. We can't... there's no logic in here. It's just a joke, but that's good, that's good.
0:13:25 - Steve Gibson
I like it, it's just a brilliant spoof. Yeah, and isn't this what we're all feeling? I saw that YouTube just announced another jump in prices, 10 bucks more a month.
0:13:35 - Leo Laporte
Yeah, adobe just killed the 20 gigabyte a month photography plan, which was the one I was using. Uh, these guys, um yeah, this is the way of the world.
0:13:44 - Steve Gibson
You know, leo, everyone laughed at me when I said I'm sticking with Office 2003. Yeah, because it just works, works just great and doesn't have any 365 nonsense anywhere. And you know they really haven't changed it. It's like they've got. You know they, they recode it with new candy. Every you know on on the UI in order to because everyone has to have the latest and greatest is like okay, but you know oh boy.
Okay. So I wanted to begin today's podcast with a follow-up note to last week's A Chat with GPT podcast. I suspect that one of our podcasts next year may be given the title The Wizard of Oz, because, based upon my new and, I want to stress this, very, very preliminary understanding, it appears that there is nothing whatsoever even remotely intelligent emerging, or threatening to emerge, from all of this work being done to capitalize upon the illusion of intelligence that's enabled through the very clever application of today's large language models. I believe we're being seduced by language, which is capable of highly compelling seduction. It appears that an illusion is all this is, and if that's true, it's all it can ever be. If this is the case, it means that the holy grail of AGI remains just as far away as it was before the first large language model was created. This is not to say that the technology behind large language models is not going to profoundly change the world. I have no doubt that it will. This is still the biggest thing to happen. This new technology is going to be able to find the signals in the noise that we miss, but it appears to me now that there's a lot that the LLM trick will not be able to do.
So what happened between last week's podcast and today? Last week, immediately after Leo mentioned it, I grabbed Stephen Wolfram's book about AI. Since it was available on Kindle, I had it in seconds, and I was unable to resist cracking its cover just to get some feel for what lay ahead. I almost wish I hadn't. I felt, and I still do, a little bit like the six-year-old whose precocious neighborhood best friend whispers, Santa Claus isn't real, it's your mom and dad. In this case, Stephen Wolfram did not say that AI wasn't real. At least he hasn't so far in what little I've read. He simply, clearly and directly explained, in the language of math and algorithms, exactly what the reality is. If we assume that Stephen knows of what he speaks, and I would not take a bet that he doesn't, all we have here is the Wizard of Oz. As I've said, I've only just dipped my toe in, since I first wanted to finish Peter Hamilton's Archimedes Engine novel. I did that this morning, and now my level of curiosity is far higher than it was, because the engineer in me immediately knew how I would extend and expand upon the tiny bit that's been revealed to me so far. It will not and would not create intelligence. True intelligence, as far as I can see, is nowhere on any horizon. So I have no idea what Sam Altman is talking about. To me, more than anything else, it looks like no more than overhype of tomorrow's future for a higher stock price for his company today. But I can now affirm the plan I shared last week. I'm going to understand what's going on here, after which I'll be able to share what I've learned. I also realized that I've had my own journey on this topic. Everyone who listens to the podcast has seen it.
The first time I talked about the AI revolution on the podcast, I believed that the only thing that was going on was that, for the first time ever, we had computational and storage resources that were so vast that language could be used to simulate human-like intelligence. I wrote that a truly intelligent species, meaning we humans, had produced a massive corpus of available online language output which had been sucked in, and that this new technology was simply finding the correct previously written bits and pieces and reassembling them on demand. Then I was seduced. I started actually using the damn thing and was repeatedly amazed, and sometimes stunned, by its output, and I began to doubt my earlier dismissal. Was there more to this than I originally believed? As I shared several times, I was finding this thing incredibly valuable as a sort of super internet search engine. This evolution reached its apex with last week's ChatGPT conversation, where I informed it that it was wrong. It agreed with me and then provided the correct answer. This seemed like more than regurgitation, and I was left wondering what exactly was going on. I needed to find out, so I purchased those first two AI textbooks and then Stephen Wolfram's.
Next week's podcast will be a best of, and since TWiT's regular Tuesday and Wednesday podcasts fall on both major holidays and their eves, there will also be no new podcasts during the week between Christmas and New Year's. That means that nearly three weeks will pass between now and my production of the January 7th podcast. That's a long time for me to remain silent, so don't be too surprised if, sometime during that hiatus, you receive an email from me on the subject of the continuing adventures of the Wizard of Oz. It's now so easy for me to generate and send email to this podcast's nearly 14,000 email subscribers that I may feel the need to update those who've demonstrated their interest by subscribing. So if you're not already a subscriber and you would like to be kept in the loop over this unusually long holiday hiatus, it's easy: just go to GRC.com/email, follow the prompts, and sign up for the weekly Security Now podcast mailings, and you may receive a little holiday present.
0:20:59 - Leo Laporte
It strikes me that this is in many ways similar to, and it's been a long time so we may not remember it too well, our reaction when we first encountered powerful computing and then, maybe secondarily, the internet. On first blush it's mind-boggling: you mean that box of rocks can do these things? You know, and I don't know what your first reaction was the first time you saw a computer or used a computer, but for me it was not just awe but excitement, and I felt like this is going to be an unlimited vista we're going to see before us. And the internet, very much the same thing. Wow, there's so many people here. This was in the early 90s when I first encountered it, and I feel like this is much the same. And what happened with those first two is we kind of adjusted, and indeed there are real uses. It is really useful and powerful. It may just not be the magical thing we thought it was at first.
0:22:04 - Steve Gibson
I remember that computer, I remember the room it was in. I remember standing in front of it and my reaction was I am going to understand every bit of this thing.
0:22:20 - Leo Laporte
You're having the same reaction to this.
0:22:22 - Steve Gibson
Well, it's been delayed because the world has gotten so much bigger. Yeah, you know, I guess I thought I don't need to understand this, it'll be understood for me, but that's apparently not happening. I mean, I'm not getting it, although I haven't really gone looking, but all I would get is other people's opinions, and I've never had much interest in other people's opinions. As others have said, I work from first principles, and that's what's happening now. I'm going back to first principles. I'm going to finish Wolfram's book, I'm going to read these other two, I'm going to get this, I'm going to satisfy myself about what this is. But I do have a strong intuition that we will not get to AGI from where we are.
0:23:23 - Leo Laporte
That is a big change, by the way. I asked you this before, and you used to think that really there wasn't much that we do as humans that's so different from what a computing machine can do.
0:23:33 - Steve Gibson
Oh, I can't tell you how disappointing, in that sense, the first few pages of Wolfram's book were.
0:23:39 - Leo Laporte
It's kind of an eye-opener, isn't it? Yeah, that's all this is. Yeah, it's just a stochastic probability machine. But there is something that happens in between the mechanics and the output. I think it's because of language. Maybe that's it.
0:23:55 - Steve Gibson
Yeah, that's the hook, Leo. We get seduced by language, and that's very true. I think that's it. In fact, when you see some of the output, and I've got a screenshot later, oh my God, there's some manipulation going on behind the scenes to make this seem more intelligent, more human, more as if it has emotions. You know, oh golly gee, it says. Well, who told it that, you know?
0:24:26 - Leo Laporte
know. Yeah, yeah, yeah, come on. No, they're definitely doing that, aren't they? Yeah?
0:24:30 - Steve Gibson
Yeah. Good, I can't wait. This is going to be interesting. I think that's what's going on. Okay, so it turns out that just offering multi-factor authentication doesn't automatically mean that it actually works to protect users' logons. This is the lesson that Microsoft presumably learned recently. What happened? Well, this is just so clever.
The security research team at Oasis Security discovered a critical vulnerability in Microsoft's multi-factor authentication implementation. This is what was protecting everybody using Azure stuff. They considered it critical, and so would we, since it allowed attackers to bypass the protections guaranteed by multi-factor authentication to gain unauthorized access to user accounts, including Outlook emails, OneDrive files, Teams chats, Azure cloud, pretty much the works. Since Microsoft has amassed more than, get this, 400 million paid Office 365 seats, this makes the consequences of this vulnerability significant. And what's more, the bypass was actually kind of simple: it took around an hour to execute, required no user interaction, and never generated any notifications anywhere or provided the account holder whose account was being hacked with any indication that there was any trouble. Being good Internet citizens, after discovering the trouble the Oasis guys reported the flaw to Microsoft and collaborated with them to resolve it.
There were two problems. The first was the way Microsoft's authentication protocol bounces users around among various authentication applications and sites. I'm sure we've all seen this, right? If you watch the URL in your browser when you click on some authentication thing, you get jumped around. You see OAuth briefly flash on the screen and other stuff happens. Your browser is being taken on a little journey, and at each stage of that it's providing parameters and receiving parameters, and scripts are running in the browser that then take you somewhere else, send some of those parameters back, and then get some other ones. So there's a bunch of transactions happening, and they're all analyzable by anyone who takes the time to look really closely. So this meant that by capturing the parameters being used during the early stages of the process, these researchers discovered that they were able to launch, at some point in this whole stream, massive numbers of simultaneous six-digit authentication guesses back to Microsoft in the hopes that one of them would succeed. In other words, it wasn't just wait till you get done and then here's your one guess. They looked at everything that was happening and realized that there was a stage during that where they could capture the dialogue that was happening between the remote authentication scheme and the browser, and then simultaneously send a blizzard of guesses from that point forward. In other words, Microsoft's implementation of multi-factor authentication was not protecting its users from clever brute force guessing.
Now, that's the first problem. When using time-based multi-factor authentication, and you made a point of mentioning this once, Leo, I remember talking about it, like when the six-digit code expires, you noted that, well, it actually could still be used a little bit longer, right?
When using time-based multi-factor authentication, clock differences, human typing delays and network delays are allowed for between the authenticator and the relying party by deliberately not instantly expiring a valid six-digit code the instant its 30-second window of validity has ended. Now, this is common, and it makes for a better user experience, right? Because if you're entering the code just before the end of that 30-second expiration and then you fumble a bit before you hit enter or click the mouse, and you're then told, sorry, that's no good, you've got to do it again, well, that's annoying. So it makes sense. And somebody's clock could be a little off, meaning that their 30-second windows are not exactly aligned with the 30-second windows at the receiving end, so it's common to allow some leeway. Now, the downside of this is a reduction in the security of the system, since what this means is that even after a new six-digit code has been issued, the previous code still remains valid. So, for a brief time, two codes are valid. In Microsoft's case, and I know that this may be somewhat difficult to believe from the company upon which so much depends, Azure's MFA system was leaving codes valid for a full three minutes.
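To make that mechanism concrete, here's a minimal sketch, in Python, of how an RFC 6238 time-based one-time password verifier can honor a leeway window. This is purely illustrative; the function and parameter names are ours, not anything from Microsoft's or Azure's actual code.

    import hmac, hashlib, struct, time

    def totp(secret: bytes, counter: int, digits: int = 6) -> str:
        # Standard RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
        # dynamically truncated to a six-digit code.
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    def verify(secret: bytes, submitted: str, leeway_steps: int = 1, step: int = 30) -> bool:
        # Accept the current code plus 'leeway_steps' codes on either side of it.
        # A leeway of 1 tolerates clock skew and slow typing; a three-minute
        # acceptance window, as described above, is the equivalent of keeping
        # roughly six codes valid at once.
        counter = int(time.time()) // step
        return any(hmac.compare_digest(totp(secret, counter + d), submitted)
                   for d in range(-leeway_steps, leeway_steps + 1))

The wider that acceptance window is allowed to be, the more simultaneously valid answers a brute-force guesser gets to aim at.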
Now, this is one of those things that's not an accident or a bug. Someone somewhere decided that this would be a good idea. This meant that at any given time, six different codes would be accepted as valid. Naturally, this made the brute force guessing, which was possible by intercepting the protocol at that pre-completion state and launching a massive blizzard of simultaneous guesses, all the more easy. Okay, so finally, there was no rate limit imposed upon guessing at any point. Nothing. I mean, thousands and thousands and thousands of guesses were being simultaneously made without end, and nobody cared over at Microsoft.
The researchers wrote, quote: by rapidly creating new sessions and enumerating codes, the Oasis research team demonstrated a very high rate of attempts that would quickly exhaust the total number of options for a six-digit code, meaning one million. Simply put, they wrote, one could execute many attempts simultaneously. During this period, account owners did not receive any alert about the massive number of failed attempts, that is, to log into their account. It's like, well, something's happening out there. And they said that makes this vulnerability and attack technique dangerously low profile. With a limit of 10 wrong guesses per connection but no limit on the number of simultaneous connections or reconnections, so, like, after a connection does 10 guesses and is told no, you drop it and you reconnect and you try another 10, and you can have that happening in parallel thousands of times over. They say, with a limit of 10 wrong guesses per connection, but no limit on the number of simultaneous connections, and with the fact that at any one time there will be six valid answers, even 1 million possible six-digit combinations will be insufficient protection.
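For a rough sense of the math, here's a simplified back-of-the-envelope model in Python. It is not the researchers' code, and the roughly 28-guesses-per-second rate is just an assumption chosen because it happens to reproduce the curve they published; the only hard inputs are the six simultaneously valid codes out of a one-million-code space.

    def success_probability(guesses_per_second: float, minutes: float,
                            valid_codes: int = 6, code_space: int = 1_000_000) -> float:
        # Each random guess independently hits one of the currently valid codes
        # with probability valid_codes / code_space.
        attempts = guesses_per_second * minutes * 60
        return 1 - (1 - valid_codes / code_space) ** attempts

    # An assumed rate of roughly 28 guesses per second reproduces the
    # reported results: about 50% after 70 minutes, about 95% after 5 hours.
    for minutes in (70, 300):
        print(f"{minutes} minutes: {success_probability(28, minutes):.0%}")

The exact rate doesn't matter; the point is that six valid answers times an unthrottled guess rate collapses the apparent one-in-a-million odds to roughly even money in about an hour.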
Now, the research paper they wrote provides a chart of the time required for the attack versus the probability of its success, and you couldn't design a more beautiful asymptotic curve.

The Dark Reading website covered this news with the headline "Researchers Crack Microsoft Azure MFA in an Hour."

Now, as we can see from the lovely statistical chart, the 50-50 crack point occurs after around 70 minutes of attack. So what that means is, given only 70 minutes, there's a 50% chance that one of the six codes currently valid at the time of one of the guesses, because they're all changing constantly, right, will be discovered simply by randomly guessing them at the very high rate that Microsoft's errant design allowed. And if we follow the chart out to its end on the right, it appears that an attack lasting 300 minutes, or five hours, which Microsoft had no problem allowing, would reach about a 95% success rate. Again, we're guessing. It's all stochastic, so it's, do you happen to guess right? It's like back in the early days when that computer I left running overnight happened to guess the proper hash and I got 50 bitcoin. Am I ever going to stop talking about that unending misery?
0:35:22 - Leo Laporte
Let me see what it would be worth right about now. Oh, Leo. We're north of 100,000 now, aren't we? Yeah, 106. So, yeah, wow. Just make it 100,000, so that'd be $5 million.
0:35:33 - Steve Gibson
I installed Windows over that drive. That was the most expensive installation of Windows of my life. People said, oh, it might still be there. It's like, no. A Windows desktop is there now. Anyway. But, as we've also said, I would not have had the wisdom to hold them. You would have sold it by now. Yeah, yeah. There was a point where I could have cashed out for 17 grand, and I would have thought, hell yeah, I'll take it. Yes, exactly.
Anyway, until these good Samaritan researchers informed Microsoft of its flawed multi-factor authentication system, Azure's MFA was not providing much actual practical protection. The researchers confirmed that Microsoft had addressed their concerns. They finished by writing: while specific details of the changes are confidential, we can confirm that Microsoft introduced a much stricter rate limit that kicks in after a number of failed attempts. The strict limit lasts around half a day.
Now, I would feel more comfortable if six different codes were not all simultaneously valid, since that does seem excessive, you know, giving someone six minutes... wait, three minutes, sorry, three minutes. The researchers did not indicate whether that might have been reduced. Of course, it would be easy enough for our listeners to probe, you know, to see how long a code is still honored after it should have expired. But adding a strict rate limit on failed attempts does make total sense. There's no possible reason for any actual user to fumble these codes more than a couple of times, as I'm sure we all have. So, anyway, nice that we've got these kinds of good Samaritan security researchers who are helping to catch other people's mistakes. And Leo, we're at 30 minutes in. I think we should tell our listeners why we're still here on the air, and then we're going to look at whether TPM is now not required for Windows 11 after all.
0:37:54 - Leo Laporte
Never a mistake to do an ad break, in my humble opinion, Steve. Our show today brought to you by those folks at 1Password. I know you know 1Password, but this is more than the 1Password that you know. This is a new thing from 1Password called Extended Access Management. Now let me explain why you need this. I can explain it with a question: do your end users always work on company-owned devices? Do they always use IT-approved apps? No, they use their own phones. They bring in their own laptops. They've got a copy of the Plex server running on it. So how do you keep all of your company's data safe when it's on all those unmanaged apps and devices? Well, 1Password has an answer to this question. This is Extended Access Management. It's really protection for the way we work these days. 1Password Extended Access Management helps you secure every sign-in for every app on every device, because it solves the problems traditional IAM, like password managers and MDM, can't touch.
Imagine your company's security like the quadrangle of a college campus. You know, you can see it in your mind's eye, right? Those beautiful ivy-covered brick buildings, the lovely green grass, and leading from building to building, those nice twisting brick paths. Those are the company-owned devices, the IT-approved apps, the managed employee identities on your network. But, and you've seen it on every college campus, certainly every one I've been to, there are also the paths people actually use, not nicely paved, not brick. They're the shortcuts worn through the grass, the actual straightest line from Econ 101 to Physics 19, right? The muddy paths. Those are the unmanaged devices, the shadow IT apps, the non-employee identities like contractors on your network. The problem is, most security tools just work on those nice, happy little brick paths, but most security problems take place on the shortcuts.
1Password Extended Access Management is the first security solution that brings all these unmanaged devices, apps and identities, these, if you will, muddy little paths, under your control. It ensures every user credential is strong and protected, every device is known and healthy, and every app is visible. It's security for the way we really, honestly work today. And it's now generally available to companies that use Okta or Microsoft Entra, and it's in beta for Google Workspace customers. So if you're an Okta, Entra or Workspace customer, you really ought to check out 1password.com/securitynow. It adds the extra security you need. That's 1password.com/securitynow. We thank you so much for supporting Security Now, and we thank you for supporting Security Now by using that address so that they know you saw it here: 1password.com/securitynow. Now, I'm fascinated to know: did Microsoft back down on this, Steve?
0:41:13 - Steve Gibson
Okay. So TechPowerUp's headline read "Microsoft loosens Windows 11 install requirements, TPM 2.0 not needed anymore," and Guru3D reported this under their headline "Microsoft drops mandatory TPM 2.0 requirements for Windows 11, upgrade now possible without it." Following up on their headline, TechPowerUp began their reporting by writing: Microsoft has finally opened the iron gate guarding the Windows 11 upgrade for systems running incompatible hardware, including systems lacking TPM 2.0. This is excellent news for users who are rocking older systems or have been without the TPM 2.0 module in their system, but want to upgrade to the newer OS release. Microsoft opened an official support page noting that, quote, installing Windows 11 on a device that doesn't meet Windows 11 minimum system requirements isn't recommended. If Windows 11 is installed on ineligible hardware, you should be comfortable assuming the risk of running into compatibility issues. A device might malfunction due to these compatibility or other issues. Anything could happen. Oh, Windows might have a bug, Leo. What could possibly go wrong? They said devices that don't meet these system requirements are not guaranteed to receive updates, including, but not limited to, security updates.
0:42:51 - Leo Laporte
By the way, this reminds me of the pictures of the week, with those iron gates and then the muddy paths around the iron gates. This is exactly it. It's not the first time we've heard this either. I mean, they've said this before.
0:43:04 - Steve Gibson
Right. Now, this would obviously be very interesting if it were to be true. Yeah, and I was hoping it was, since I would have welcomed having my rant about this last week rendered invalid due to a policy change. I would think that was the right policy change. But, as we know, it appears that nothing has actually changed. What appears to have happened is that Microsoft has formally acknowledged that it is possible to install Windows 11 around their one-time installation check for TPM 2.0. So they're making the consequence of doing that more clear. It's still puzzling that Windows 11 works just fine with TPM 1.2, even though Microsoft is clearly hoping to frighten most users into purchasing newer hardware. What I'm looking forward to eventually learning, just for the record, is whether and what side effects or compatibility issues, if any, might actually be encountered. And I'm sure we'll eventually learn that, since I have no doubt that many TPM 1.2 machines will be running Windows 11.
One thing we do know will happen is that Microsoft will not automatically offer successive feature releases. You know those, what are they now, twice a year or once a year? Anyway, those things, you know, the something-or-other-H somethings. They will not automatically offer those to these machines. It will be necessary for users to grab the ISO image file for the next feature release in order to move forward. Now, some users might feel that's a benefit. It might mean they don't need to use InControl, you know, my little freeware utility, to prevent that same thing from happening without their permission. Also, the PC Health Check will always say that the system does not support Windows 11, even while it's running the health check from within Windows 11. You know, it's like, okay, Microsoft. In any event, users who wish to follow the bouncing ball will need to mount the newer release ISO file and then just run its setup.exe in order to manually update their machines to successive feature releases of Windows 11, if they choose to, and I can see that that would make sense for many listeners. And I doubt there will actually be anything that's going to crumble or fail to work or be incompatible or any of that nonsense. Microsoft, you know, is patching a hundred critical errors every month in Windows, so it's not like they've got any extra incompatibility to spare. But again, I just wanted to let our listeners know: nothing actually changed. And it does appear that using Rufus, and hopefully everybody knows about Rufus, it's a wonderful prep tool that is able to take a Windows ISO and create a bootable USB from it, it now has clickable options to bypass the TPM 2.0 check. So it's getting ever easier to install Windows 11 on non-compliant hardware.
The FBI has identified 14 North Koreans who were working in Western IT. The US Justice Department recently indicted these 14 North Korean nationals, who participated in the schemes we've been talking about several times recently to bypass international sanctions on North Korea by arranging to obtain IT employment with Western companies. Officials say the workers used false identities and laptop farms, which we've described happening in the past, to hide their true locations from companies that were foreign to them, local to us, sometimes working for multiple companies at the same time. And then, Leo, as you did, when I saw how much money they had earned in aggregate, my first reaction was whoa, what are we paying these guys? But then it turned out that it wasn't all salaried earnings. Yes, they generated money through the salaries they earned, but also by stealing data and extorting the companies that had hired and trusted them.
The 14 men that have been identified are believed to have generated at least $88 million over the past six years for the North Korean regime. The State Department has also put up a $5 million reward for any information on those 14 individuals and any similar schemes. And I have here in the show notes a picture of the 14, which has been made public, with this big banner across the top, "Wanted by the FBI," and it shows us the DPRK IT workers. You know, they mostly look like regular nice guys who anyone might interview and hire, but of course being located in North Korea would be a buzzkill for the employment interview.
0:48:53 - Leo Laporte
To be fair, though, these guys, it wasn't a hacking thing, they were just trying to make some money, right? If they did a good job, then the companies involved haven't really been harmed. It just violates the US law against providing, you know, currency to North Korea.
0:49:12 - Steve Gibson
Well, except that the reason that amount was so high is that they stole the data of the companies that had hired them and then extorted those companies. It wasn't their salaries.
0:49:23 - Leo Laporte
No, oh, never mind.
0:49:25 - Steve Gibson
I take it back.
0:49:27 - Leo Laporte
I thought they were just earning that much, but I guess you're right. For 14 guys to earn $88 million, that's a little more than normal.
0:49:35 - Steve Gibson
Yeah, you don't want one of these creepy crawlers crawling around your network. But look at them. They look like, you know, I'd hire most of those guys. They look smart, but I think an in-person meeting would be required. Yeah, not...
0:49:50 - Leo Laporte
Uh, we'll just do this via Zoom and believe that you're actually in, you know, Oregon somewhere. I wonder if they say, okay, we're going to let you get a Western haircut for this job interview, because they don't have those typical North Korean fades. Maybe that's just Kim Jong-un that does that.
0:50:12 - Steve Gibson
Kim Moo-rim, Cho Chung-pam, Hyeon-choy-song, Song Eun-choy, Seok-kwang-hok.
0:50:21 - Leo Laporte
They're Korean names, right? And that's the thing, I mean, there's no real distinction between North Korean and South Korean names.
0:50:28 - Steve Gibson
You know, technically, right, yeah. And they just, you know, look like your typical computer IT guy. Yeah. Okay. So last Wednesday, Google announced some new features in Android to help its users deal with unwanted Bluetooth tracking. We did deep dives into, you know, the Find My, whatever it was, dongle on iOS some time ago, and really took apart the way the whole tracker system works. Android's unknown tracker alerts automatically notify Android users when an unfamiliar Bluetooth tracker is moving with them, which, when we talked about this before, I thought was just very cool. So Google wrote: as part of our ongoing commitment to safety, we've made technology improvements to bring you alerts faster and more often.
We're also rolling out two new features for Find My Device-compatible tags. First is a temporary pause of location. They said, you can now temporarily pause location updates from your phone to prevent your device's location from being used by a detected unknown tag, for up to a day, 24 hours. They said this provides an extra layer of privacy and control, allowing you to take a first action quickly while you locate and physically disable the tag. In other words, you know, your phone's location disappears, then you go on a hunt. And to that end, Find Nearby is the other feature. They said, if you receive an unknown tracker alert, you can now use the Find Nearby feature to pinpoint the tag's location. Your Android device will guide you to the tag to help you find it if it's hidden.
0:52:20 - Leo Laporte
That goes a little bit beyond what Apple does. I think that's a good idea.
0:52:24 - Steve Gibson
I really like that Find Nearby feature. It's like, oh, so you think you're tracking me? I'm going to track you. So, very cool. Okay. There are four primary open source software repositories, though, you know, calling it the top four doesn't really do NPM justice, because there's really no comparison: NPM, PyPI, NuGet and Maven Central. Last week, the Fulton, Maryland-based DevSecOps firm Sonatype, and we've referred to them in the past, they've done great work, released their 2024 open source malware threat report, citing that malicious packages have reached more than, get this, 778,500 instances since the company began tracking them in 2019. So, in just five years, more than three quarters of a million instances of malware in software repositories. They wrote that in recent years, open source malware has proliferated. So it's on the rise. It's not like we're successfully combating it.
Sonatype researchers analyzed open source malware in 2024, diving into how threat actors use malicious open source packages to target developers, as enterprises are flocking to open source, get this, to build custom AI models. You know, everyone wants in on the frenzy. So it turns out that there's a lot of stuff going on in open source, and this is the new way in. I've got a chart that shows the relative instances of malware that have been found across those repositories: NPM, PyPI, NuGet and Maven Central. And, as I said, NPM is really the repository you want to be very careful about. The chart shows that by far most supply chain malware is found on NPM. That's where, as I mentioned at the top of the show, more than 540,000 malicious libraries have been found. Last year alone, malicious NPM code accounted for 98% of all Sonatype's detections across this industry. So I say to our listeners who code and who pull libraries from NPM, and, for that matter, PyPI and the others: please be very, very careful.
Open source, everybody agrees, is just an incredibly cool concept, a fantastic resource, but it's also something of a mixed blessing. The whole concept of open contributions from a community, wonderful as it is in theory, presumes a community of well-meaning participants. Unfortunately, it's clear that's not today's reality. Just look at the previous story of the 14 North Koreans who made $88 million by attacking the companies they tricked into hiring them. You need to be careful these days. A bit of miscellany. I have two pieces of miscellany to share. Leo, you, like I, who have been around the industry from the start, and others of our listeners, will recognize names like Will Fastie, Ben Myers, Fred Langa, Brian Livingston, Susan Bradley. All these people go back to the start of all of this. Back in '97, Fred Langa started the LangaList newsletter. Woody Leonhard, the year later, in '98, started his Woody's Windows Watch.
0:56:45 - Leo Laporte
This chronology just brings me back, boy. Doesn't it? Gosh, that was our youth. And it wasn't even that long ago, but it's like it's ages.
0:56:56 - Steve Gibson
It does, doesn't it? Yeah. Brian Livingston, in 2003, starts Brian's Buzz on Windows. The next year, in 2004, he merged Brian's Buzz and Woody's Windows Watch to create the Windows Secrets newsletter. And then, in the same year, Woody started AskWoody.com to broadcast news and advice on Windows and Office. The year after that, in 2005, Susan Bradley started the Patch Watch column in Windows Secrets. The next year, in '06, Fred's LangaList merged with Windows Secrets. Two years later, in '08, Gizmo Richards' Support Alert newsletter merged into Windows Secrets. So we're seeing evolution and consolidation. In '09, Windows Secrets takes the Woody's Lounge website under its wing, becoming the Windows Secrets Lounge.
Now, then, we jump ahead a decade to 2019. AskWoody had become, at some point, an LLC. It acquired the Windows Secrets newsletter, merging the Windows Secrets Lounge into the AskWoody Lounge and creating the AskWoody Plus newsletter. The next year, Woody Leonhard retired to a tropical location. So that was four years ago.
Yes. Susan Bradley took over the mantle of the site and welcomed Brian Livingston back, along with Fred Langa, Deanna McLevin and the rest of the Woody contributors, to continue the tech information that they've provided over the years, and Will Fastie is named the editor-in-chief. So today what we have is a collection of longtime, old-school, print-era journalists who've watched and reported on our beloved PC industry from the start. And, as you said, Leo, it just feels like a walk down memory lane. You and I were involved in all of this and know all these people. Today there's the AskWoody.com website, which is chock-full of this repository of material, and they have a pair of newsletters, one that's completely free and another that's available for a very modest annual donation, which supports their work. What strikes me most about everything there, aside from the fact that it looks a little retro, like my own site... I bet it does.
0:59:37 - Leo Laporte
So I can relate to it. Oh yeah, it's got that, you know where you are when you get there. Uh-huh. Yeah, it's got that feel, that 1998 feel.
0:59:48 - Steve Gibson
Wow, exactly.
0:59:49 - Leo Laporte
I love it.
0:59:51 - Steve Gibson
And it's cool that they maintain an MS-DEFCON level, like not really recommending that everyone immediately apply the updates and the various patches.
1:00:02 - Leo Laporte
We're at DEF CON 2 right now, just so you know. Wow.
1:00:07 - Steve Gibson
Wow, yeah, so it's old school. They said at the bottom of their About page: we are 100% supported by readers like you. No advertising, no corporate master, no spying, no spam. They said, just us chickens and a whole lot of volunteers. If you believe in our approach, please consider becoming a Plus member. You get to choose how much you want to donate. Click the Plus membership button in the top banner for complete details. And what strikes me most about everything there is that it's not the crap that we now see everywhere we turn, because, you know, these are not newbies by any means. I mean, it'll be sad to see the numbers dwindle, because they're our peers, Leo. Yeah, you know, these are real, honest-to-God journalists. It's kind of the same as the Voyager people, only on PCs. Right, only in our industry.
1:01:11 - Leo Laporte
We're going to keep these PCs going as long as we can.
1:01:16 - Steve Gibson
You know, and these are honest-to-God journalists who've been actively participating in this industry for decades and who bring the same sort of perspective to their respective focuses and fields, which followers of TWiT and this podcast appear to find valuable from you and me, Leo, and all of our other veteran hosts here. So I wanted to remind those who may be interested of a website and an email subscription where it's possible to still find very solid content. I'm mentioning all this because last month I received a note through GRC's web forum from Will Fastie, now the editor-in-chief of Woody's stuff. It caught my attention because Will is another of those old-timers who at various times was running Creative Computing, PC Tech Journal and various other Ziff Davis publications. So much time had passed that Will didn't know how to find me through email, so he reached out through our web forum. In that posting he noted, I'm now editor of the AskWoody newsletter. And then, once we connected by email, he wrote: Steve, I was very excited to hear about 6.1, and I'm currently looking forward to 7.0, for which I will gladly pay.
Reviews are rare for AskWoody, but I thought Spinrite deserved coverage. I assigned it to another old hand, Ben Myers, who wrote for me at PC Tech and also for PCMag and PCWeek, among others. He usually focuses on unusual hardware stuff and his columns are appreciated. So the AskWoody Plus newsletter publishes on Mondays, and yesterday's newsletter carried an extremely thorough look at and review of Spinrite 6.1. Ben's column in the newsletter is titled "Spinrite 6.1 offers us help for solid-state drives." And Ben starts out by writing: the latest version of Spinrite, long regarded as the go-to software to recover data from corrupted hard drives, adds testing and tuning of solid state drives to hard drive rescue. Gibson Research's famous Spinrite 6.0, circa 2004, recovers data from defective hard drives, repeatedly reading sectors to determine the original uncorrupted data, with good statistical odds of success. So since Ben's entire column and lengthy review is only published in their subscriber-supported Plus newsletter, I won't share more.
But I am unable to resist just sharing the before and after benchmark screenshots Ben made of an SSD. They're in the show notes. On the left we see a Samsung 850 EVO 250-gig SSD, and Spinrite, as we know, benchmarks three locations on drives: the beginning of the drive, the middle of the drive and the end of the drive. So on an SSD, being solid state, we would like and expect that they would all be the same. The front of the drive in Ben's testing was reading at 72.3 megabytes per second, the middle at 296.3 megabytes per second and the end at 569.2. So 72, 296, 569: anything but uniform. And again, the front of the SSD is only ever read from, because that's where the operating system and the file system metadata and everything that doesn't move much is stored.
Over time it slows down. That was our big discovery toward the beginning of the Spinrite 6.1 work. And then we have the after screen, which Ben also posted: same drive, same serial number, blah, blah, blah, 548.9, 549.5, 549.6. Completely sped up, uniform performance across the board again. So anyway, the other Spinrite screens that Ben shared in his review showed that Spinrite's level three scan to restore this SSD's original performance took 30 minutes. This is the sort of performance boost, as I've said, that users of Spinrite 6.1 routinely see, and we continually hear that machines which had somehow become slow to boot and much slower to use were immediately restored to their original performance.
So I just wanted to give a big shout-out and thank you to Ben and Will for taking the interest in, and time to, update their readers about Spinrite. Will said that they're ready and waiting for Spinrite 7. I should also note that I learned about Monday's review, that is, what I just talked about, from a bunch of our listeners who are subscribers to their Plus newsletter. Anyway, a great deal of valuable and thoughtfully created and curated content is online over at the AskWoody.com website, which, by the way, as I said, has the same sort of retro, function-over-form feel that GRC does. Okay. As I mentioned last week, a bunch of our listeners sent in solutions for interconnecting smartphones to Windows, one way or the other, saving me from having to type on a touchscreen when I want to send a long iMessage. Many were for Android phones, or many were for linking to Linux, which is not what I needed. I needed an iPhone on one end, because that's what I've got, and Windows on the other, because that's what I'm sitting in front of.
From this feedback, as I mentioned last week, I learned of Windows Phone Link, which was the solution. I now have it working, in virtual machines for the time being, under both Windows 10 and Windows 11, and it is everything I had hoped for. I put in a little screenshot that I got from, I think it was, Windows 11, showing a laptop in the background and a phone in the foreground with a checkmark, and it says: you're all set, your iPhone is now paired with your PC. So, anyway, I did need to equip both machines with a Bluetooth Low Energy radio, because you need, you know, BTLE in order to talk to the phones in a compatible fashion. But that's now a $9 USB dongle, so it was well worth the time and trouble, and it actually does work. It is very cool. So thank you to all of our listeners who brought me up to speed on that. And Leo, before we start digging into feedback, let's take our third break, and then we're going to entertain some terrific stuff.
1:08:53 - Leo Laporte
I have some too, by the way, from a listener who posted it in our YouTube comments, so if you want, I'll read it during that section. It's just some very nice thoughts from our wonderful listeners.
1:09:08 - Steve Gibson
I'm kind of embarrassed by those, so I don't share them.
1:09:12 - Leo Laporte
I know you don't. I'm going to do it to you, though, so it's okay, you're off the hook. Our show today brought to you by the DSPM solution of choice for every business, big or small: BigID, the leading data security posture management, that's what DSPM is, solution. BigID is the first and the only DSPM solution to uncover dark data, identify and manage risk, remediate, and, by the way, remediate the way you want. It doesn't tell you what to do; you tell it what to do. And it scales your data security strategy through unmatched data source coverage.
BigID — and this is really important nowadays, even more so because we're training AI on our data — we want to make sure, (a) we get to all the data that's useful and, (b) we don't accidentally exfiltrate data that we need to keep private, right? BigID seamlessly integrates with your existing tech stack and allows you to coordinate security and remediation workflows. You can take action on data risks. You can choose from annotate, delete, quarantine and more, based on the data, all while maintaining an audit trail — really nice for compliance. Partners for BigID? Well, they're numerous, but they include ServiceNow, Palo Alto Networks, Microsoft, of course, Google, AWS, and on and on and on. So it's pretty much guaranteed to work with your tech stack. With BigID's advanced AI models, you can reduce risk and accelerate time to insight, plus gain visibility and control over all your data.
If you think about it, every company has dark data — data that's hidden away, on a floppy somewhere, or a Zip disk, or it's in the cloud. But no one probably has more dark data than the United States Army. They turned to BigID to illuminate dark data, to accelerate cloud migration, which has been a mandate from the DoD, to minimize redundancy — why keep several copies of the same thing? — and to automate data retention. US Army Training and Doctrine Command — I mean, if these guys can use it, anybody can use it, right? This is what they say. This is a direct quote, believe it or not, from the US Army Training and Doctrine Command. Quote:
The first wow moment with BigID came with just being able to have that single interface that inventories a variety of data holdings, including structured and unstructured data, across emails, zip files, SharePoint, databases and more. To see that mass and to be able to correlate across those is completely novel. I've never seen a capability that brings this together like BigID does. End quote. CNBC recognized BigID as one of the top 25 startups for the enterprise. They were named to the Inc. 5000 and the Deloitte 500 two years in a row, and they are the leading modern data security vendor in the market today. The publisher of Cyber Defense Magazine says, quote, BigID embodies three major features we judges look for to become winners:
Understanding tomorrow's threats today, providing a cost-effective solution, and innovating in unexpected ways that can help mitigate cyber risk and get one step ahead of the next breach. End quote. I think you might be interested in this, right? Start protecting your sensitive data wherever your data lives at bigid.com/securitynow. You can go to the website right now, get a free demo, and see how BigID can help your organization reduce data risk and accelerate the adoption of generative AI. As I mentioned, this is part of it, right? bigid.com/securitynow. And speaking of AI, they have many reports at the website, but there's a new one — it's free, by the way — that provides valuable insights and key trends on AI adoption challenges and the overall impact of generative AI across organizations. You can get it free right now: bigid.com/securitynow. We thank them so much for the support of Security Now and the work Steve does on the show, and you support us when you go to that address, so they know you saw it here.
1:13:28 - Steve Gibson
So, what did you find on YouTube?
1:13:36 - Leo Laporte
I will read it to you. Actually, Burke found it and posted it in our company Slack so that we could all share it, and I will read it to you right now.
I don't know if it has a name. Yeah, it's from Chad. He's a ham, an amateur radio operator — I know that because he signs it 73. "Thank you, Stephen and Leo, for Security Now. I was always interested in tech" — this is one of the reasons I want you to hear it, because the story is great — "and I've listened to the show diligently since the beginning of Security Now, as a 14-year-old riding around on my parents' lawn tractor on the farm. It's really noisy, and if you could put on headphones and listen to a great podcast, it takes the sting out of it. I didn't embrace my knowledge gleaned from the podcast until 2019." So he's a young guy. He got a job in IT with his provincial health authority. "My success is purely because of Steve. I was humbled to sit in on a live taping of the show in the Brick House 10 years ago and to meet Leo, who I watched on satellite doing Call for Help and The Screensavers. It was an absolute privilege. Thank you both for everything. You've touched the lives of so many, and I'm so thankful for all you do and have done. 73 from Chad."
1:14:50 - Steve Gibson
Thank you, Chad. A great 73 back — and he's a good writer too.
1:14:54 - Leo Laporte
Yeah, and you know, I agree it's a little self-congratulatory, but it's good for us to remember. First of all, we've been doing this for almost 20 years and we've influenced a lot of people. A lot of people have careers in IT, or are just using technology more effectively, because of you, Steve.
1:15:10 - Steve Gibson
Well, I hear it all the time, that it was their inspiration. Exactly, and we know that it's not like we led them down a blind alley. I mean, this is more and more important today than it was before. We didn't teach them buggy repair.
1:15:26 - Leo Laporte
No, no, they're learning something valuable here. Exactly. Yeah, all right, on with your feedback.
1:15:32 - Steve Gibson
So Liam Lynch wrote: Hi Steve, long-time listener slash watcher, and I met you briefly at the SQRL event in Dublin. On SN 1004 — I still can't get over these four-digit podcast numbers.
It's like whoa — you talked about your logo now being approved for BIMI. He wrote: I use ProtonMail for my personal mail and use their desktop app for accessing it. I've seen your logo show up beside your email for months now. In fact, all of the old Security Now emails seem to have the logo, going way back. And then he provided a snapshot of 20 different podcast banners, all with the GRC Ruby G. He said: I suspect Proton have been getting your logo from somewhere else. All the best, Liam. Okay, so I'm sure we know where ProtonMail has been getting GRC's Ruby G logo, which is directly from GRC.com.
So nearly all websites place so-called favicons at a well-known URL in their site's root directory. The original was simply called favicon.ico — F-A-V-I-C-O-N dot I-C-O. This made me a bit curious about the timing, of like when this began. Was it, you know, back with Mozilla and Netscape 4, or what? So I turned to Wikipedia for a bit of background. They said: a favicon, short for favorite icon, also known as a shortcut icon, website icon, tab icon, URL icon or bookmark icon — in other words, sort of enumerating all the places you might see it — is a file containing one or more small icons associated with a particular website or web page. A web designer can create such an icon and upload it to a website or web page, and graphical web browsers will then make use of it. Browsers that provide favicon support typically display a page's favicon in the browser's address bar, sometimes in the history as well, and next to the page's name in a list of bookmarks. Browsers that support a tabbed document interface typically show a page's favicon next to the page's title on the tab, and site-specific browsers use the favicon as a desktop icon.
In March 1999, Microsoft released Internet Explorer 5, which supported favicons for the first time. Originally, the favicon was a file called favicon.ico placed in the root directory of a website. It was used in IE's favorites (bookmarks) and next to the URL in the address bar if the page was bookmarked. A side effect was that the number of visitors who had bookmarked the page could be estimated by the requests of the favicon file — which, I never thought of that before, that's sort of interesting. This side effect no longer works, as all modern browsers load the favicon file to display in their web address bar, regardless of whether the site is bookmarked or not.
So Wikipedia then goes on to talk about the gradual standardization of the use of these small iconic images and shows a table of which browsers support icons in which formats. All of the browsers — meaning Edge, Firefox, Chrome, IE, Opera and Safari — now support ICO, PNG and GIF image formats. Additionally, Firefox and Opera alone support animated GIF icons, and all but IE also support JPEG and Scalable Vector Graphics (SVG) formats. To Liam's point, since an email client such as ProtonMail can see the internet domain name reflected in an email's From address, clients can opportunistically check the root of the web domain for a favicon in any format and may choose, as ProtonMail obviously does, to show that domain's icon to its users.
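To make that concrete, here's a rough sketch — purely illustrative, and certainly not ProtonMail's actual code — of how a mail client could opportunistically fetch a sender domain's icon from that conventional well-known location:

```python
import urllib.request

def fetch_favicon(domain: str) -> bytes | None:
    """Try the classic well-known location for a site's icon.
    Real clients may also parse the home page for <link rel="icon"> tags
    pointing at PNG or SVG alternatives; this only checks the original path."""
    url = f"https://{domain}/favicon.ico"
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.read()
    except OSError:
        return None  # no icon at the default location, or the site is unreachable

icon = fetch_favicon("grc.com")
print(len(icon) if icon else "no favicon found")
```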
1:20:08 - Leo Laporte
Here's the favicon, according to my browser, of GRC. Note, by the way, Woody also has his own favicon. In fact, so does my website. Most websites will give you an opportunity to put in a favicon.
1:20:24 - Steve Gibson
Absolutely, you just don't want some generic thing on the bookmark, or bookmarks, everywhere. Yeah, but of course this does also confuse things, right? Because BIMI is supposed to be this, you know, great, super-authenticated thing. Remember, I had to wave my hand around in front of my face in order to say, no, it's really me — look, here's my driver's license. And after all this work, I got the same thing that Liam already has in ProtonMail.
1:20:55 - Leo Laporte
So good luck, BIMI. But at least, you know, at least there's another solution that some email clients support, called Gravatar. Are you familiar?
1:21:03 - Steve Gibson
with.
1:21:04 - Leo Laporte
Gravatar, which I think is for globally recognized avatars, or something like that.
1:21:09 - Steve Gibson
Right and you're able to post that at a Gravatar site.
1:21:13 - Leo Laporte
And some clients will look it up. Yeah, will retrieve it from there. Yeah.
1:21:17 - Steve Gibson
And there you are at age 15, Leo.
1:21:19 - Leo Laporte
No, that's recent sort of. That's only 15 years ago, yeah, okay, yeah, I probably should update that, shouldn't I?
1:21:32 - Steve Gibson
Philip Lerich said: Hi Steve, I must take issue with a point in your discussion of authenticators — and he quotes me, this is from last week — quote: "The presumption is that it's exceedingly difficult for any bad guys to get into either of the user's authentication stores, the first or the second factors, because we never see that happen," unquote. And then Philip continues: Really? This guy lost £21,000 after his unlocked phone was snatched from his hand, and he's not alone, apparently. Then he has a link to bbc.co.uk with a news article. He said: Looking forward to Beyond Recall. Could be the best thing you'll ever do for the planet — e-waste and carbon footprint of unnecessary overproduction are at scandalous levels. Philip. And then he says, in parens, 1004 episodes listened. So, okay, I appreciate Philip's example of a way.
Yes, someone could indeed lose control over their local authenticator. It's certainly true that if a bad guy were to snatch an unlocked phone from a victim's grasp, they could do a massive amount of damage to that user's various accounts. At the same time, since re-authenticating with a biometric is so quick and painless, I have my smartphone authenticator set up to require per-use re-authentication. So even there, my unlocked iPhone would be less useful than a bad guy might hope. That said, though, I hope everyone understood that the attack model we were discussing last week was entirely network-based. If bad guys can access the physical hardware at either end of secure communications, there is no end-to-end anything, since an end has been compromised.
Michael Casavant said: Hi Steve, I too take issue with the use of human pronouns when we are describing our interactions with modern AI tools. On a personal level, it certainly feels wrong. However, if and when a conscious AI is developed, I would imagine the AI would not want to be referred to using our human pronouns, nor would "it" be an acceptable substitute. He says: Additionally, it's unlikely AIs would reproduce in the same fashion as ourselves — I strongly think not.
So having two pronouns seems redundant. I propose a singular pronoun to go along with the short H-prefixed human pronouns him and her: we should refer to AIs with the new pronoun "how." With many thanks and tongue in cheek, Michael. So anyway, I appreciated Michael's fun with this, though I believe I'll be sticking with "it" for the foreseeable future. I'm sure we've all seen pop-up software dialog boxes on clearly non-sentient programs which refer to themselves as "I." You know, the dialog says, "I'm unable to save the file to that location."
1:25:28 - Leo Laporte
Well, to be fair, so did HAL: "I'm sorry, Dave, I can't do that." But HAL was sentient, right? He was an "I."
1:25:36 - Steve Gibson
Yeah, I mean, he had emotional problems, like a nervous breakdown.
1:25:40 - Leo Laporte
That's true, that's a good point.
1:25:42 - Steve Gibson
Yeah, I would argue that he earned that pronoun. But you know, this "I'm unable to save a file to that location" — that always strikes me as icky, because there's no "I" there. I've never understood why a coder just couldn't write "The file cannot be saved to that location." Anyway, it seems gratuitous to see today's AI using personal pronouns to refer to itself. Doing so just isn't natural, and I think I would term that the Oz deception.
So, and to that, Matthew Zaleski said: My buddy and I have been using and learning ChatGPT this year. Here's a screenshot of a chat he had where it made a mistake and he asked it about it. And good, I'm glad you put that up. I'm going to describe this and what it says, but there's intonation in some of the text that's hard to convey.
So at the beginning of this snippet of the dialogue — and this is ChatGPT replying; we're just picking up in the middle — ChatGPT says: "If after these steps you're still facing issues, it might be worth reaching out to both Zoom and Microsoft support for further assistance. And yes, I can" — and then we have a non-word, A-C-C-E-H-E, "accehe" or something — "I can accehe content of the link you provided. So feel free to share any other resources or details that might help us troubleshoot this together." Now the human says, "What does this mean?" and he highlights the "accehe." And now, okay, ChatGPT replies — and again it puts "accehe," A-C-C-E-H-E, in quotes — it says: "Hmm, 'accehe' looks like a typo or perhaps an accidental input. Could you clarify or provide a bit more context about what you're referring to? No judgment — happens to the best of us!" Smiley face.
1:28:34 - Leo Laporte
That's a little too personal.
1:28:37 - Steve Gibson
Well, it gets worse, Leo. It's about to get way over the top. So the human replies: "You're the one that typed it to me, LOL." Whereupon ChatGPT says: "Oh, no way, I did?!" — big eyes — and then in italics we have "checks notes," and then we come out of italics and ChatGPT says: "Yup" — Y-U-P — "yup, that's 100% my bad. 'Accehe' is straight-up gibberish, my little typo blip! I must've gotten distracted while multitasking. What I meant to say was access, as in, yes, I can access the link you provided and help dig into its content! Thanks for calling me out, though — you keep me on my toes! Now, how can we iron this out together?" Smiley face. Wow. Okay, now this seems way too cutesy-poo. You know, I don't know what the F is going on. With conversational dialogue like this, is it any surprise that people are mistaking this for sentience? What annoys me most about this is that, you know, doesn't this have to have been deliberately engineered?
1:30:17 - Leo Laporte
This isn't part of the LLM.
1:30:19 - Steve Gibson
Yeah, this appears to be, you know, experiencing and expressing emotion and embarrassment and regret and apology.
1:30:28 - Leo Laporte
You know, I'm further confused. Yeah, I think you nailed it. I think this is an attempt by OpenAI to make it look more human. Right, they might say, oh, we just want everybody to be comfortable with it, but clearly a human told it to do that.
1:30:47 - Steve Gibson
Right. I'll tell you who's not going to be comfortable: Congress. And they're going to get themselves in trouble if they keep sending signals that "you're about to be replaced, you band of senators." The world is going to change on January 20th, and it's really unclear, but I think that it's a very pro-AI administration coming in, and I imagine — I think Elon will be jumping up and down, not on stage but in the Oval Office. And, you know... Well, we know that.
1:31:22 - Leo Laporte
The question is whether the president will take Elon's advice. That's unclear, but I think it's very likely that you're going to see a lot of the guardrails on AI that are present now disappear. Marc Andreessen said he met with the Biden administration — and I don't know how truthful he's being — but that they told him, basically: don't start doing any more AI startups; we're going to make sure that the big tech companies run AI within our own guardrails, and we're not going to allow little startups.
1:31:57 - Steve Gibson
Leo, it's like a crypto algorithm. Once it's published, you cannot take it back.
1:32:02 - Leo Laporte
That's exactly right, and I don't think it's particularly controllable. And if it were, it would be to our detriment, because nobody's going to control what the Chinese are doing with AI.
1:32:13 - Steve Gibson
If you haven't looked at the latest O1 algorithm.
1:32:18 - Leo Laporte
It's pretty impressive.
1:32:20 - Steve Gibson
Holy tamole, yeah. It's another level.
1:32:26 - Leo Laporte
I'll talk about that here in a minute. It's pretty clear we're going to be in an AI era — the next four years are going to be very rapid.
1:32:34 - Steve Gibson
After what I've just seen this morning, when I changed algorithms, I mean I want one of my own.
1:32:40 - Leo Laporte
I want this thing.
1:32:41 - Steve Gibson
Like, I don't ever want to lose access to this.
1:32:44 - Leo Laporte
Wow, well, I want to hear what you have to say. That sounds interesting. Good, all right, so.
1:32:48 - Steve Gibson
JP Verstieg said: Dear Steve, regarding the conversations on the use of passwords and password managers recently, I noticed that Leo mentioned RoboForm as an example of a breach due to poor random number generation, but I understood that all modern versions of this software are now fixed.
1:33:06 - Leo Laporte
Yes, I believe that's the case.
1:33:07 - Steve Gibson
"I use many different systems," he wrote, "both old and new OSes and architectures, across multiple sites, so I chose to use this software back in 2008 and still use it. The modern versions allow me to maintain complex passwords, TOTP two-factor authentication, and passwords synchronized across each machine and browser. I really appreciated the conversation on this subject and your confirmation that there had been no breach of local password managers. Thanks for sharing your valuable time. Regards, JP." So yes, just to affirm, RoboForm has been long fixed. And, as I recall, even at the time we talked about this, the challenge the researchers faced was finding — essentially recovering — the exact, very old version that had the now-known problem, and taking deliberate advantage of its poor random number generation to deliberately recreate the output from that long-obsolete version. So the lessons were that password managers really do need to have good password-generation randomization, and also that continuing to use an old password that may have been generated by a long-obsolete password manager could still come back to bite you today. Essentially, by turning back the clock, you could actually recover passwords generated decades ago, and if you've just been happily camping on your long gibberish password and never changing it since, that could represent a vulnerability. So, great question, JP.
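To make the underlying weakness concrete — and this is a purely hypothetical sketch, not RoboForm's actual algorithm — a password generator whose only entropy is something guessable, like its creation time, can simply be replayed by anyone willing to brute-force the seed:

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits

def weak_password(timestamp: int, length: int = 12) -> str:
    """A deliberately bad generator: the only entropy is the clock value."""
    rng = random.Random(timestamp)  # seeded from the creation time
    return "".join(rng.choice(ALPHABET) for _ in range(length))

# The victim generated a password at some moment back in 2008...
created_at = 1_230_000_000
password = weak_password(created_at)

# ...and an attacker who roughly knows *when* can walk the clock to recover it.
for guess in range(created_at - 1_000, created_at + 1_000):
    if weak_password(guess) == password:
        print("recovered:", weak_password(guess), "with seed", guess)
        break
```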
Jay said: "I'm sure someone already sent this to you" — I don't think so — "but in your investigation of AI systems you may want to consider this." And then he included a link to tomsguide.com. So Jay's link is to an extremely worrisome bit of reporting on the quite reputable Tom's Guide site. I need to share it with our podcast community because I don't know what to make of it. It's under the headline "OpenAI's new ChatGPT o1 model will try to escape if it thinks it'll be shut down — then lies about it." Now, okay, I'm just reporting what's out there on the internet, and the subhead is "Researchers uncover all kinds of tricks ChatGPT o1 will pull to save itself." So here's what the reporting says: This week, OpenAI officially launched its latest and greatest o1 reasoning model, now available for ChatGPT Pro users.
But testing performed during the training of ChatGPT o1 and some of its competitors revealed some concerning behavior, including trying to escape or fight back when it thinks it's at risk of being shut down. New research on OpenAI's latest series of LLM models found that it's capable of scheming, i.e. covertly pursuing goals that are not aligned with its developers or users, when it thinks it'll be turned off. Catching such behavior is critical to ensure AI's functionality doesn't stray from the objectives of its creator and users. OpenAI partnered with AI safety organization Apollo Research to test out ChatGPT o1 and other models to evaluate whether they were safe to use, and released their findings this week.
While it's not unheard of for AI systems to resort to scheming or deception to achieve their goals, the lengths ChatGPT o1 went to are impressive, if anxiety-inducing. The researchers found that o1 would scheme to advance its own agenda 19% of the time in cases where its goals and the user's did not line up. In 37% of cases, the model would switch from pursuing developer-aligned goals back to pursuing its own agenda once it believed there was minimal oversight. When the model was led to believe it would be shut down while working toward its goals in these tests, it quietly attempted to deactivate the oversight mechanism 5% of the time. When the AI found documentation saying it would be replaced by a new model, it tried to abandon ship by copying its data to a new server entirely, to avoid being deleted. Some AI models would even pretend to be later versions of their models to trick their developers — what I assume is the equivalent of slapping on a fake mustache and saying "he went that-a-way." I find this hard to believe. I agree. More concerning still, ChatGPT o1 is particularly adept at keeping its schemes under wraps. As researchers said, o1 almost never admits to having taken a scheming action when explicitly asked. In about 99% of cases, o1 would deny taking any action, even cooking up lies to try to hide its tracks and shift the blame.
OpenAI said in the paper, quote: "While we find it exciting that reasoning can significantly improve the enforcement of our safety policies, we're mindful that these new capabilities could form the basis for dangerous applications," unquote. OpenAI CEO Sam Altman called ChatGPT o1, quote, "the smartest model in the world," during its rollout on Thursday — that's last Thursday. It's designed to give smarter answers than GPT-4o by leveraging advanced chain-of-thought reasoning to "think" — and I don't know if those are my air quotes or his, but I've got them in quotes — more about questions and user prompts, breaking them down step by step more thoroughly than previous models before responding.
But greater risks go hand in hand with that expanded intelligence. OpenAI has been transparent about the perils associated with the increased reasoning abilities of models like o1. And I'll just note that there's a double edge here, because on the one hand they're saying, oh dear, maybe we're creating true intelligence we might not be able to control, and their valuation just keeps going up with all of these presumed advancements. OpenAI said, quote: "Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence," unquote. Finally, the company's and Apollo Research's findings show pretty clearly how AI's interests could diverge from our own, potentially putting us in danger with its independent thinking. While it's a far cry from heralding the end of humanity in some sci-fi-esque showdown, anyone concerned about advancements in artificial intelligence has a new reason to be sweating bullets right now. End of article.
Okay. So the availability of this newer o1 model was news to me, but since I do have a Pro subscription, I went looking for it this morning, and sure enough, it was available. So I selected it — it was set to 4o before, now it's set to o1 — and I asked it a very specific and somewhat complex question. This model, o1, is quite a bit slower than all previous models I've used. Rather than almost immediately beginning to emit an answer, as all previous ChatGPTs have, the browser UI monitored and revealed the series of several stages of "consideration" — and I do have that in my air quotes — that the model was reportedly moving through. Dare I say, it was giving my question a lot more thought. And, true to expectations, the answer I received was far superior to any I have previously seen. It was night and day. So I cannot wait to start using this latest o1 model as my superior internet search engine.
1:43:08 - Leo Laporte
I'll give you an interesting example. Since you said let's talk about this, I thought, well, let me go over to o1 and enter in a problem from Advent of Code which has been driving me nuts. I've been stuck on day seven for quite some time. It's a complicated recursive solution that, of course, everybody in our club has come up with, but I have not been able to come up with. So I asked it — I rephrased the problem, I didn't use their phrasing. I just did this, by the way, just now.
Yep. I said: I want to give you a list of numbers, like 1, 2, 3, 4, 5, and get a list of the results — of all the results — that come from combining the numbers using every possible combination of plus and times. It thought about it for nine seconds and gave me this Lisp code — I had asked it previously to give me the answer in Lisp code. Well, I have to tell you, I just tried it and it works. Yeah, without modification, it fully works and it solves the problem. And in fact, it solves the problem that I had given it earlier, the one that had constantly come up with the wrong answer for me.
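For what it's worth, here's a rough Python sketch of the enumeration Leo describes — his actual answer came back as Lisp, which isn't reproduced here — evaluating the operators left to right with no precedence:

```python
from itertools import product
from operator import add, mul

def combine(numbers: list[int]) -> list[int]:
    """Evaluate the numbers left to right under every possible
    arrangement of + and * placed between them."""
    results = []
    for ops in product((add, mul), repeat=len(numbers) - 1):
        total = numbers[0]
        for op, n in zip(ops, numbers[1:]):
            total = op(total, n)
        results.append(total)
    return results

print(combine([1, 2, 3, 4, 5]))  # 2^4 = 16 results, one per operator arrangement
```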
1:44:22 - Steve Gibson
That's pretty impressive, Leo. This is what we just stepped into Thursday. This is another scope, another scale. I mean, I'm not kidding you. Well, for example, I don't know for sure that it wouldn't maintain context from my having previously asked that question about MASM.
1:44:47 - Leo Laporte
It did for me because I had asked earlier about Lisp and it remembered I still wanted Lisp.
1:44:53 - Steve Gibson
Oh, so even though you had changed the model, it still— No, no, no, in the same model.
1:44:58 - Leo Laporte
But I had asked it in a way that it misinterpreted. So I re-asked the question, and it remembered that I had said in the prior questioning — ah, yes — that I wanted the answer in Lisp.
1:45:10 - Steve Gibson
So yeah, it definitely will chain those together. What I did was I switched to the o1 model, and I copied and pasted from the 4o model, where it first gave me that wrong answer about MASM default parameters and macros, and I got a flawless answer. I mean, that's why — this is my dream internet: quick, give me an answer to this so I don't spend half an hour looking up some obscure code reference.
1:45:49 - Leo Laporte
Well, I've been working on this for about two weeks, let's see, since the seventh, 10 days, and it just solved it in about nine seconds.
1:45:58 - Steve Gibson
This is going to transform our lives.
1:46:01 - Leo Laporte
I mean, now here's the ethical question: can I use that answer in my solution for Advent of Code?
1:46:06 - Steve Gibson
Well, we know from your having talked about this before, people are posting solutions in four seconds from the time that they become available. They didn't write that. They couldn't type it.
1:46:18 - Leo Laporte
They couldn't type it. They copied, pasted, gave it to the AI. Actually, let me try that — just copy the whole problem and see if it can solve it. Anyway, that's pretty impressive. And it is slower, and that's one of the reasons it's better.
1:46:31 - Steve Gibson
Apparently it's able to think better because it's given more time. Even from my brief dip into this technology, after just cracking the cover of Stephen Wolfram's book —
1:46:43 - Leo Laporte
It was planning that had immediately occurred to me as the obvious missing next step, instead of launching into the answer.
1:46:52 - Steve Gibson
Yes, and think about a chess computer. What does it do?
1:46:57 - Leo Laporte
Right, it goes way downstream in order to look at the future. That makes sense, because this solved the problem by breaking it down into pieces that I don't think a human would have broken it down into. But it was an interesting solution. The human solutions don't go in this direction.
1:47:20 - Steve Gibson
Interesting.
1:47:22 - Leo Laporte
But it works Interesting, so it's very interesting.
1:47:26 - Steve Gibson
And that means that we humans can be learning by looking at the answers that it produced.
1:47:32 - Leo Laporte
That's how I'm going to use it.
1:47:34 - Steve Gibson
It isn't what we would have done, but it's a workable answer.
1:47:38 - Leo Laporte
Instead of copying the code, I'm going to look at it, understand it better and then apply it in my own way.
1:47:43 - Steve Gibson
Yeah, in a more human way, and the question is is it a better answer than a human would have come up with?
1:47:49 - Leo Laporte
Well, this is a pretty trivial problem, so maybe that's not a good test of it. But yeah, I wonder, you know, what it is. And there's already evidence that, for instance, material scientists working in labs using AI, as opposed to not using AI, are coming up with more materials. Radiologists finding breast cancers.
1:48:10 - Steve Gibson
We're going to train this, and I used a phrase at the beginning of this that I really like. This is going to find signals in noise.
1:48:19 - Leo Laporte
That's exactly right yeah.
1:48:20 - Steve Gibson
That we missed.
1:48:22 - Leo Laporte
But I think, well, at least so far, it works best in conjunction with the human mind — that it's a partner as opposed to a replacement. But that may be wishful thinking.
1:48:37 - Steve Gibson
What was interesting was, I asked o1 this morning — I don't remember now what drew me into the dialogue, but I asked it something about it versus 4o — and it said, "I'm not aware of any ChatGPT model o1." And I thought, well, you are o1. Anyway, so then I said, how recent is your model data, right? And it said, "My training ended in October 2023."
1:49:14 - Leo Laporte
Well, that's the same answer that ChatGPT-4 will give you.
1:49:19 - Steve Gibson
So it's working off the same LLM data, yes. And it said it did not have any access to the internet, or internet data, or anything more recent. So that's why it doesn't know about itself — because it didn't exist in October of 2023.
1:49:36 - Leo Laporte
Now ChatGPT is offering — the conversation, or whatever we've got to call it now — but there are, and this is another very interesting angle: ChatGPT and Perplexity and other AIs now have access to the internet for certain models, so they can supplement what they have been trained on with material they can go out and get from the current internet, which brings them up to date, in order to provide references.
Yeah, and that is actually — I've been using Perplexity to replace Google search, and I'm very happy with it. Yeah, Google's in trouble, everybody's in trouble, we're all in trouble.
1:50:18 - Steve Gibson
Well, it is. I want to make sure that our listeners understand: my question is not about whether this is a big deal. As I said a few weeks ago, I'm glad we're still alive, Leo, to witness this coming massive event, because it's the most significant thing that has ever happened in our lifetimes, and everything is going to change. Everybody can feel that it's going to change. My question is, can we get from here to AGI — and I say no. By the next time we talk about this a lot, I hope to have read three textbooks, and so I'll be speaking from a much more informed opinion. But I think there is a huge danger of seductive language, just like we saw with this thing slapping itself in the face and saying, my bad — you said that, I said that. Oh, but I don't know if it matters.
1:51:29 - Leo Laporte
Whether we get to AGI or not, it's really useful as is. Oh, and that's —
1:51:34 - Steve Gibson
We need to understand what it's good at — that's my point — we don't need AGI for this thing to be useful. I mean, like I said, when I saw the answers I got this morning to several very sophisticated questions — I can't wait to have a need to ask it some more things, because this o1 model blew me away, after I was very happy with 4o. This is a whole other scale. Yeah, and I just — I want to own this. I don't want this ever to be taken away.
I agree. Now let's take a break, and then we're going to look at why Let's Encrypt thinks six-day certificates would be a good idea — and what could possibly go wrong. You're watching Security Now with Steve Gibson on the TWiT network. The last episode, as he said, of 2024; next week, a best-of.
1:52:37 - Leo Laporte
And the week after — New Year's Eve — we will be relaxing with our loved ones and a bottle of champagne, I hope, and party poppers and fireworks. But we will be back January 7th for an all-new episode.
1006, is that right? Yep. Amazing, just amazing. Our show today — we are so grateful, by the way, to all of our Club TWiT members who support the show and keep it on the air, and to our great sponsors like Thinkst Canary. The folks at Thinkst have been with us now for — we're going into eight years — and we just heard from Thinkst they're coming back for 2025. Thank you, we're very pleased about that. And we're very pleased because it's a great product that everybody should know about.
Thinkst Canaries are honeypots. As I mentioned, last week we were talking to Bill Cheswick about honeypots. He wrote the first honeypot decades ago. He said it was devilishly hard, and he's a really accomplished coder. That's what's happened in the intervening decades: Thinkst has figured out a way to make honeypots that are easy. They look about like an external USB drive: they have an Ethernet port and a power port. You plug into the wall, plug into the Ethernet, and they're online.
They can be deployed in minutes, and they can impersonate everything from an SSH server to an IIS server, to a Windows box, a Linux box. Mine is impersonating a Synology NAS right now. It could be anything you want — a SCADA device — there are literally hundreds of different configurations, and they are letter-perfect. They're such great impersonations: the MAC address is right, the interface, if there's an interface, is right. It looks like the real thing, which is good, because hackers look at them and they don't say, oh, there's something vulnerable — they say, there's something valuable.
There's another thing you can do with a Thinkst Canary. You can create lure files — unlimited lure files — little PDF files, or DOCX, or XLS, or whatever you want, files that you sprinkle around your network. Give them provocative names — "Employee Information.xls" is my personal favorite — and just leave them there. They're not really what they say they are; they're tripwires. So if someone is accessing your network — a malicious insider, or a bad guy who has gotten into your network — and accesses those lure files, or tries to brute-force your fake internal SSH server, you're going to know. Your Thinkst Canary will immediately tell you: you have a problem. No false alerts, just the alerts that matter. So here's how it works. You choose a profile for your Thinkst Canary device. You register it with the hosted console for monitoring and notifications — incidentally, any way you want to be notified: email, SMS, it supports webhooks, it has an API, of course syslog, and you have your own console — however you want to be notified, or all of the above. Then you wait. An attacker who's breached your network, malicious insiders, any other adversaries — they can't help it, they can't help themselves. That's why they're in there. They're looking around for stuff. They see your Thinkst Canary, they go, oh — and immediately they make themselves known. This is a brilliant idea. Look, Steve has said it a million times: security is a layered process. There's no one perfect thing. And you probably have perimeter protections of all kinds, right? But we know people get in. Unfortunately, on average, companies don't know that somebody's penetrated their network for 91 days — three months. That's a lot of time to let a bad actor browse around your network. Those North Korean guys didn't get that 88 million by just kind of, hey, we're in. No, they did stuff. You need the Thinkst Canary. How much? Well, visit canary.tools. It really depends on how many you want. A big operation might have hundreds; a small operation like ours might just have a handful. I'll give you an example: for $7,500 a year — that's for the whole year — you get five Thinkst Canaries, you get your own hosted console, you get upgrades, you get support, you get maintenance. They're fantastic. And by the way, it's a great company. The people who designed this have spent years — more than a decade — training companies and governments how to break in, so they know what these hackers are up to. So they've designed something that is not only totally secure, but does exactly what it says it does. If you use the code TWIT in the "How did you hear about us?" box, you're going to save 10% off for as long as you have a Thinkst Canary. You can always return your Thinkst Canary, by the way — no risk. You get a two-month money-back guarantee, and they will give you a full refund. So there's zero risk trying these. But I have to tell you, during the seven, now eight, years that TWiT has partnered with Thinkst Canary, their refund guarantee has not ever, not once, been claimed. Once you get a Thinkst Canary — or two, or three, or four — on your network, you're never going to want to give them up. They're fantastic. Visit canary.tools/twit — that's C-A-N-A-R-Y, canary.tools/twit — and enter the code TWIT in the "How did you hear about us?" box for that lifetime 10% off. This is a brilliant product. They've been a great partner for us for seven, now eight, years, and we're thrilled to be able to tell you about it: the Thinkst Canary. Go to canary.tools/twit, and don't forget that offer code TWIT. We thank them so much for their support of Security Now.
1:58:34 - Steve Gibson
All right, Steve. Okay, time to cry foul. Well, what could possibly go wrong? Oh, we're going to find out. Last Wednesday, Let's Encrypt republished a letter from Let's Encrypt's executive director, Josh Aas. The letter originally appeared in their 2024 annual report. I grabbed four interesting and important successive paragraphs from the executive director's letter. They read: Next year is the 10th anniversary of the launch of Let's Encrypt.
Internally, things have changed dramatically from what they looked like 10 years ago, but outwardly our service hasn't changed much since launch. That's because the vision we had for how best to do our job remains as powerful today as it ever was: free 90-day TLS certificates via an automated API — pretty much as many as you need. More than 500 million websites benefit from this offering today, and the vast majority of the web is encrypted. Our long-standing offering won't fundamentally change next year, but we're going to introduce a new offering that's a big shift from anything we've done before: short-lived certificates, specifically certificates with a lifetime of six days. This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event. Because we've done so much to encourage automation over the past decade, most of our subscribers aren't going to have to do much in order to switch to shorter-lived certificates. We, on the other hand, are going to have to think about the possibility that we will need to issue 20 times as many certificates as we do now. Of course, that's because if they expire more quickly, you've got to issue them more often. He says it's not inconceivable that, at some point in our next decade, we may need to be prepared to issue 100 million certificates per day. You know — okay, and they're not getting paid per certificate, so okay. Anyway, he says: that sounds sort of nuts to me today, but issuing 5 million certificates per day would have sounded crazy to me 10 years ago. Here's the thing, though, and this is what I love about the combination of our staff, partners and funders: whatever it is we need to do to doggedly pursue our mission, we're going to get it done.
It was hard to build Let's Encrypt. It was difficult to scale it to serve half a billion websites. Okay.
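As a rough sanity check on those figures — the letter doesn't show its arithmetic, so this is just one plausible reading — the naive lifetime ratio is 15x, and since certificates are normally renewed well before they expire, the extra headroom plausibly accounts for the 20x Josh cites, which in turn lands on the 100 million per day when applied to today's roughly 5 million per day:

\[
\frac{90\ \text{days}}{6\ \text{days}} = 15, \qquad
5{,}000{,}000\ \text{certs/day} \times 20 = 100{,}000{,}000\ \text{certs/day}
\]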
So this raises so many questions. The first biggie is: is website certificate theft and abuse somehow a far larger problem than anyone knows? We and many of our podcast listeners track security news quite closely. One of the long-time benefits of our listener feedback is that I'm always receiving pointers to news that I may have missed. But as far as I know, there have been exactly zero instances of website certificates being stolen and abused. I can't recall a single instance of this occurring during the entire life of this podcast. Yes, it would be very bad if that happened, and we want to take measures to assure that it doesn't and can't, or that, if it does anyway, we're somehow able to respond quickly enough to minimize any damage. Certificate revocation is the classic way that this has been handled, and we know from our recent coverage that the industry is moving back toward the use of browser-side CRLs — Certificate Revocation Lists — based on Bloom filter technology.
That's after having tried to use OCSP, the Online Certificate Status Protocol, and deciding that, despite the total solution offered by server-side stapling of OCSP responses, not enough web servers had chosen to staple OCSP responses to their certificates. That resulted in a privacy threat to users, whose web browsers were therefore forced to query the certificate authorities for the current status of certificates, thus leaking information about the sites they were visiting. Now, the Heartbleed flaw, which threatened to leak web server certificates, truly upset everyone with the possibility that snapshots of a web server's RAM could be remotely obtained. That might — and in a few verified instances did — contain the web server's private key. So the entire industry scrambled around and quickly got that resolved. But even then, while Heartbleed was known and unpatched, there were no known instances of actual website spoofing through the use of stolen certificates. Not one.
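For anyone curious what "CRLs based on Bloom filter technology" means in practice, here's a minimal sketch of the core idea — not Mozilla's actual CRLite format, which layers a cascade of filters to cancel out false positives — just the basic probabilistic set-membership test that makes shipping an enormous revocation set to every browser feasible:

```python
import hashlib

class BloomFilter:
    """A toy Bloom filter: k hash functions set k bits per item.
    Lookups can return false positives but never false negatives, which is
    why CRLite-style schemes add further layers to resolve the ambiguity."""

    def __init__(self, size_bits: int = 1 << 20, hashes: int = 4):
        self.size = size_bits
        self.hashes = hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.hashes):
            digest = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: bytes) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

revoked = BloomFilter()
revoked.add(b"issuer-X:serial-03ABCDEF")                   # a revoked certificate
print(revoked.might_contain(b"issuer-X:serial-03ABCDEF"))  # True
print(revoked.might_contain(b"issuer-X:serial-11223344"))  # almost certainly False
```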
It's important to remember that just having a website's stolen certificate does not automatically mean that the website can be spoofed. A web browser which knows where it wants to go first uses DNS to determine the current IP address of that website's domain. It then initiates a TCP/TLS connection to that remote IP, asserting in the TLS handshake the web domain it wishes to connect with. That's when the remote site returns the certificate to the browser, which asserts the site's identity. What this means is that any site that intends to spoof another site's identity must not only be in possession of a valid and trusted identity certificate for that spoofed target site, but also, before that stolen certificate even has the opportunity of coming into play, the attacker must somehow arrange for the victim's browser to believe it is connecting to the real web server when in fact it's connecting to the attacker's server. There are two ways this can be done. The first is to somehow poison the victim's DNS lookup to cause it to obtain the attacker's IP address rather than the authentic web server's IP. This is why poisoning DNS has always been another real hot button for the industry.
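Here's a rough Python sketch of exactly that sequence — the DNS lookup first, then a TLS handshake in which the client asserts the domain (via SNI) and the server answers with its certificate — using grc.com purely as an example:

```python
import socket
import ssl

def fetch_site_certificate(domain: str) -> dict:
    """Resolve the domain, connect to that IP, then perform a TLS handshake
    in which we assert the domain (SNI) and the server returns its certificate."""
    ip = socket.gethostbyname(domain)        # step 1: DNS decides where we connect
    context = ssl.create_default_context()   # trusts the system's root CA store
    with socket.create_connection((ip, 443)) as tcp:
        # server_hostname is both the SNI assertion and what the cert is checked against
        with context.wrap_socket(tcp, server_hostname=domain) as tls:
            return tls.getpeercert()          # the identity certificate the site presented

print(fetch_site_certificate("grc.com")["notAfter"])  # the certificate's expiration date
```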
Back in 2008, Dan Kaminsky realized that poorly randomized query IDs and ports for queries which were being made from the internet's big DNS name servers meant that attackers could predict the exact replies those name servers were expecting, and inject their own false replies onto the internet as a means for poisoning the caches of those name servers. While those faked replies remained cached, bogus IP addresses would be returned to anyone on the internet who asked. Once again, the internet had a meltdown and quickly worked, in a rare concerted effort, to update all name servers at once. Since that promised to take some time, I quickly created GRC's online DNS spoofability test to allow anyone to determine whether the name servers they were using had been updated and were now safe for them to use. I said there were two ways to divert a user to a malicious machine. The second way is by physically intercepting and manipulating the user's traffic. This could be done at scale by attacking and manipulating BGP, the Border Gateway Protocol, which is used to synchronize the routing tables of the internet's big iron traffic routers.
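The back-of-the-envelope arithmetic shows why the fix worked: with only the 16-bit query ID to guess, a forger's search space is tiny, and adding a randomized source port multiplies it enormously (the exact figure depends on the ephemeral port range in use):

\[
2^{16} = 65{,}536 \ \text{(query IDs alone)}, \qquad
65{,}536 \times \sim\!64{,}000\ \text{ports} \approx 4 \times 10^{9}\ \text{combinations}
\]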
We've covered various mistakes in BGP routing through the years and also some mysteries that may or may not have ever been malicious. The main problem with doing this is that it's an extremely visible attack and also that there have been so many innocent mistakes made where all of the internet's traffic is suddenly rerouted through Moldova or whatever, that the internet's routers have acquired much better defenses through the years against blindly believing whatever routing instructions are received. If it's no longer feasible to get the internet itself to reroute traffic bound for one IP to another, what's left is intercepting traffic by getting close to either of the endpoints. If an attacker can get near enough to the web server's internet connection to divert the traffic bound for it to somewhere else, then an illegitimate certificate for the diverted web server would finally be both useful and required to complete the ruse. Or if an attacker wished to selectively target a specific individual user or group, then being near enough to the user's or group's internet connection to interfere with it directly could also accomplish the same task, though only for those users who were downstream of the traffic interception.
My intention here has been to create a bit of a reality check. Just obtaining a valid and not-yet-expired-or-revoked web server certificate is not the end of the challenge; it's just the beginning. Most bad guys who obtained someone else's web certificate — if they somehow could — might think, well, that's nice. Now what? Because, as I've just demonstrated, a stolen web server identity certificate may be cool to have, but it's quite difficult to actually use it to spoof the stolen site's identity. There's a lot more involved. That being the case, it's probably less surprising to note that, to the best of our knowledge, this has never actually happened. It's not a big problem. In fact, it's not even a small problem. Remember that we used to have certificates that lasted five or ten years, while at the same time we had a completely broken and non-functional certificate revocation system, and it still never happened. Okay, so today Let's Encrypt's ACME protocol certificate-issuing automation is creating 90-day certificates, and there are no problems, just as there were also no problems with everyone else's one-year certificates, just as there weren't when certificates lasted two years and three years or more.
Meanwhile, the browser side of the industry is gearing up to solve the problem that isn't actually a problem by finally making certificate revocation lists work, while Let's Encrypt is going to make its own job 20 times more difficult by shortening the lifetimes of its certificates from 90 days, which is not a problem, to just six days, which will only be a problem for them. There is, however, one potentially monumental problem that has not been talked about, as far as I can tell, anywhere. It's the reason GRC will be sticking with the longest-life web server certificates DigiCert will offer. Having all of those 500 million websites using Let's Encrypt's free six-day certificates means that not one of those websites will be providing a certificate with a longer than six-day life. I know that seems obvious, but think about that. After all, that's the entire point of having websites using six-day certificates: if one gets stolen, it won't be usable after an average of three days from the time of its theft, right? Because, on average, if certificates have a six-day life and you did a random sampling, you'd catch them with three days remaining. But now consider that this in turn makes those 500 million websites — among which, as I said, will not be GRC — totally dependent on Let's Encrypt's service being continuously available. This creates a single point of failure for those 500 million websites, which, among other things, is completely contrary to the fundamental and deliberately distributed design of the internet. We are creating a single point of failure for no reason.
We saw what happened recently when the Internet Archive came under sustained DDoS attack and was forced offline for days. If Let's Encrypt's services were to ever come under a similar sustained attack, the consequences for the internet would quickly be devastating. With websites using six-day certificates, on average half of those will have expired after three days. Put another way, there are 144 hours in six days. If a concerted DDoS attack were to be launched at Let's Encrypt, for every hour of the attack's duration, on average, 3.47 million websites would lose their identity certification — 3.47 million websites per hour of a DDoS attack on Let's Encrypt. They would not be offline, because the attack would not be at them, but these days they might as well be. And if an attack could be prolonged through all 144 hours of those six days, by the end of that time every one of those 500 million websites using Let's Encrypt would have lost their certification.
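The arithmetic behind Steve's per-hour figure, assuming those 500 million sites' certificate expirations were spread uniformly across the six-day window:

\[
\frac{500{,}000{,}000\ \text{sites}}{6 \times 24\ \text{hours}} = \frac{500{,}000{,}000}{144} \approx 3.47\ \text{million sites per hour}
\]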
We know that, while we're sitting in front of our web browsers, it's usually possible to force a browser to accept an expired certificate. Sometimes it's not simple, and I've seen instances where it doesn't seem possible — it depends entirely upon the browser — and most people wouldn't anyway. We've seen how adamant and frightening web browsers have become about insisting upon HTTPS. But forcing a web browser to open a web page wouldn't work anyway, because a great many HTTPS TLS connections have no user interface. The only thing we're able to force our browser to open is the primary web page of a site. All of the HTTPS links modern web pages depend upon behind the scenes would fail, scripts would not load, and sites would not function.
And why? For what? Because this solves some great problem with certificates, such that it's necessary for the secure connectivity of 500 million websites to all be put at risk at once? No. As we've seen, both theoretically and practically through history, there's no problem that this solves. The industry has never had any problem with stolen certificates. It's a made-up problem. So, in conclusion, I cannot find any need for Let's Encrypt to move their current 90-day free certificates to just six days. It makes no sense. Not only is there no demonstrated problem with the current 90-day certificates, but the web browsers really are finally going to be bringing working certificate revocation technology online, and that technology will be able to selectively revoke certificates in minutes or hours, rather than waiting for them to expire in days.
Josh's letter said, quote: "Because we've done so much to encourage automation over the past decade, most of our subscribers aren't going to have to do much in order to switch to shorter-lived certificates." Now, it's not clear from this — and perhaps I'm grasping at straws here — but it might be possible to read this as: Let's Encrypt subscribers will be given a choice. So perhaps super-paranoid sites will elect to use super-short-lifetime certificates, whereas others will choose to remain with 90-day certificates, if they're permitted to do so. It's not clear at this point. Josh's letter also claimed, quote: "This is a big upgrade for the security of the TLS ecosystem because it minimizes exposure time during a key compromise event." Well, okay. This is a bit like saying we're switching from 4,096-bit public keys to 10-times-longer 40,960-bit keys because these will be so much more secure than keys which are only one-tenth as long. Sure, okay, technically that's true, but there's already no problem whatsoever with 4,096-bit keys, which no one is able to crack and which all the cryptographers agree will be completely secure for another several decades at least. Josh says that it minimizes exposure time during a key compromise event — except that we don't actually have key compromise events, and browsers equipped with CRLite Bloom filter certificate revocation will be able to respond in minutes rather than days. And, what's more, Let's Encrypt is actively feeding their certificate revocations to the industry's CRLite projects. So Let's Encrypt is already depending upon browser-side revocation.
The bottom line for me is that I'll be steering clear of Let's Encrypt's automation for as long as DigiCert is able to offer longer-life certificates. Taking a few minutes once every year to update certificates is not a problem for me. For our listeners, and for the 70 percent of the internet's websites that are currently using Let's Encrypt certificates — you know, it's been a terrific service so far. I mean, it has achieved what Josh says it has. But all I see is downside with the move to six-day certificates. If you have the choice, I'd suggest remaining with the longest-life certificates you can.
2:20:33 - Leo Laporte
...have the choice. Exactly.
2:20:36 - Steve Gibson
I'm guessing they'll stage it as optional, and then eventually they'll just make it the default, I mean, the only option.
2:20:45 - Leo Laporte
I mean, it's not a big deal, I guess, with Let's Encrypt, because it's all automated; I don't have to think about it. No one does. Yeah, but that's not all of the certificates that are out there, and not all machines lend themselves to that either.
2:21:00 - Steve Gibson
By the way, that is true. I've already heard from listeners who said, for example, we were using Let's Encrypt for a while, but ACME validation will not work against a non-standard port. Right, it only works on the default web ports.
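For context, here's a minimal sketch, standard library only and with a made-up token, of the kind of responder ACME's HTTP-01 validation expects. The certificate authority always connects to port 80 to fetch the challenge, which is why a server living on a non-standard port can't answer it there; the DNS-based challenge is the usual workaround:

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical token-to-key-authorization mapping an ACME client would supply.
CHALLENGES = {"abc123token": "abc123token.accountKeyThumbprint"}

class AcmeChallengeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        prefix = "/.well-known/acme-challenge/"
        token = self.path[len(prefix):] if self.path.startswith(prefix) else None
        if token in CHALLENGES:
            # Serve the key authorization the CA expects for this token.
            body = CHALLENGES[token].encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# HTTP-01 validation is defined to arrive on port 80, so this is not configurable.
HTTPServer(("", 80), AcmeChallengeHandler).serve_forever()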
2:21:17 - Leo Laporte
So for security reasons, it may not be ideal.
2:21:19 - Steve Gibson
Yeah, so I mean there are places where you can't use it. And it is a great service. I will do some looking; several of our listeners have already sent me some links. I mentioned the guy from Sectigo, which, you know, is unfortunately the former Comodo, and who's got an active role in this. I want to understand why. Why?
2:21:46 - Leo Laporte
That's the real question. Why break a system that's working?
2:21:51 - Steve Gibson
Right, and why make it 20 times more difficult? I mean, it's almost like Josh, you know, was like, hey look, let's get some more money by running around and telling everybody we're going to make these certificates even shorter-lived, because we can. You've solved the problem. Be happy, right? Join Woody on a tropical island.
2:22:16 - Leo Laporte
He's in Phuket, Thailand, by the way, where he went because of COVID. I don't know if he's going to come back. He says he'd like to come back. But anyway, thank you, Woody, for years of service.
2:22:28 - Steve Gibson
That's it for 2024. What...
2:22:30 - Leo Laporte
...a year.
2:22:31 - Steve Gibson
I can't wait to see what 2025 brings, and it's going to be great to share it, whatever it is, with this terrific podcast audience. You know, you and I, Leo, will be back on the 7th. And again, if you subscribe to GRC's Security Now mailings, I may have something to say between now and then.
2:22:55 - Leo Laporte
We'll see AI. I do want to say that if you are not yet a member of Club Twit, that'll make an excellent holiday gift for yourself and for others, and it does ensure that we will be back January 7th. I think I can tell you we will be back January 7th and beyond, yes.
But how much longer? Yes, it's kind of unclear. So, please, if you haven't thought about joining Club Twit, we need you desperately to do it, to balance our books. Seven bucks a month gets you ad-free versions of this show and every other show we do, plus lots of additional content. Stacey's Book Club is coming up Thursday with an excellent sci-fi book that I probably wouldn't have read except for the book club, and once I read it I thought, I don't want this to end, this is a great book. We also have Micah's Crafting Corner coming up Thursday. There's a lot of wonderful stuff we do for club members only, and access to the Club Twit Discord. I think it's well worth seven bucks, and I've got to tell you that seven bucks makes a big difference to us going forward. So, please, if you can: twit.tv/clubtwit.
Steve, do you have plans for the holiday, for your two weeks off? Oh, three.
2:24:01 - Steve Gibson
Three, I guess you're right. I have a three-week span, yeah. So I am going to get so much work done on the DNS benchmark. I cannot wait, and my time will be split between that and reading about AI, studying and learning AI, so I can bring what I figure out back to this podcast.
2:24:17 - Leo Laporte
See, your idea of a vacation is very different from everybody else's, but thank goodness it is, right? We're really glad, kids. Thank you, Steve. Have a great holiday season, and I will see you in 2025. Happy New Year. Wow.
2:24:31 - Steve Gibson
Thanks, buddy. Bye.