Transcripts

Intelligent Machines 860 transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

 

Jason Hiner [00:00:00]:
I'm Jason Hiner, filling in for Leo Laporte, and I've got our co-hosts Paris Martineau and Jeff Jarvis. We have a conversation with Dan Patterson, and it's a huge week for AI news. We talk about the Anthropic-Pentagon showdown, Claude becoming the most popular app in the world, and Perplexity out-clawing OpenClaw. That's what's coming up next on Intelligent Machines. Podcasts you love from people you trust.

TWiT.tv [00:00:29]:
This is TWIT.

Jason Hiner [00:00:34]:
You're watching Intelligent Machines, episode 860, recorded on March 4th, 2026. You gotta get computer. Hello, it's time for Intelligent Machines, where we cover artificial intelligence, robotics, and all the aspects of the AI revolution, the AI industry. I'm not Leo Laporte. I'm Jason Hiner, editor-in-chief of The Deep View, filling in for the inimitable, irreplaceable Leo Laporte, who's off this week. And of course, I'm joined by our regulars. We'll have Paris Martineau joining us in a moment, but I have the ever reliable Jeff Jarvis. Craig, Craig, Craig, Newmark.

Jeff Jarvis [00:01:18]:
What else do you need to say?

Jason Hiner [00:01:21]:
I love it. I love it. The Jeff Jarvis, so distinguished that his intro needs no intro. It has its own intro, which is outstanding. And we're also joined by our special guest for this week, Dan Patterson. Dan, welcome.

Dan Patterson [00:01:39]:
It's great to be here and great to see you both. Thanks.

Jason Hiner [00:01:42]:
Yes, thank, uh, thank you. It's always, uh, always a pleasure. Thank you for making the time. And, uh, Dan, you and I go way back, so I'm so thrilled that we get the chance to come together and talk a little bit about some of the stuff that you've been doing— some really important work, some really valuable things. And the stuff that you do, Dan, at Blackbird AI, the company that you work for— I feel like it only gets more valuable every day right now, the way that the world is moving. So really appreciate you being here.

Dan Patterson [00:02:16]:
Well, likewise, thank you for having me. And both of you do equally important and interesting work, and really fascinating work. Jeff was just talking about his latest book, which is gonna be pretty mind-blowing. And Jason, thank you. I think what you're doing at The Deep View is just— every day I check in on the site and the newsletter, and it's always innovation.

Jason Hiner [00:02:41]:
Thanks, Dan. Yeah— I've never seen anything like this news cycle. We're going to talk about the AI news. I mean, I know we were talking about it as we were getting ready for the show, and it's unreal. We were even looking at the stories for this week and going, like, wait, that happened this week too? Oh yeah, that one was also this week, you know? I mean, it's unreal. So we're going to get to all of that, of course. And Jeff, of course, who is a connoisseur of the news on these things— I'm going to go to Jeff for a number of these stories as well; he has great context and also amazing insights on all of it. Before we do that, before we get to the news, Dan, let's talk a little bit about kind of where you're at and what you're doing.

Jason Hiner [00:03:30]:
You're very familiar with the show. You've been coming on the TWiT network for a long time, and I think you're very well known to the audience. But just in case there are a few people who don't know, why don't you talk a little bit about what you do at Blackbird AI? And I'll just say, for those who don't know, Dan is a longtime journalist. He's worked with me at publications that I've worked for multiple times. We go back a long way. And Dan is an incredible investigative reporter and news anchor, a news person going back multiple generations here. And now working, um, in the AI space, in the cybersecurity space, in the disinformation or, um, you know, misinformation space. So Dan, talk a little bit about Blackbird for those who don't know.

Dan Patterson [00:04:19]:
Yeah, it is great to be here, and it does now feel like multiple generations. I mean, with you both— like, you guys have seen so many different generations, uh, right here with the TWiT network, where I feel like there's just been evolution through different generations of tech. Yeah, going back to, uh, your podcasting almost predating social media. So really, we've seen a lot of change here, and I've been pretty fortunate to know you both and to be on the network. Uh, so, I mean, the, uh, kind of short version of what we do at Blackbird— and right, I think for both of you guys— once a journalist, always a journalist. That's a lot of what I do.

Jeff Jarvis [00:04:57]:
Like a Marine.

Dan Patterson [00:04:57]:
Yeah, yeah, right. I hope we can aspire to more. Um, although I've known many great Marines too. So Blackbird protects— um, I mean, the line that we like to share is that we protect organizations, executives, uh, and governments from narrative-based disinformation attacks that can cause operational, uh, financial, and of course sometimes physical harm. Um, so what we do— I think probably many listeners are fairly familiar with the concept of social listening or tag clouds, or, like, categories where you can kind of see or get a sense of what's happening on social media by using these tools that can engage with, uh, conversations on social networks. But of course, we all know that social media is atomized now. It's not just one dominant network. There are many networks, many different chat applications.

Dan Patterson [00:05:56]:
There's the dark web, which, you know— it's old news by this point, but there are still a ton of bad actors on there. And we call it a disinformation attack or a narrative attack because it's almost like— we're not just listening. And there are many jokes, you know: we hear you. We're not just listening or using social listening. We're tracking the narratives, the conversations, the bad actors who will use sometimes automated tools. You know, all of us are familiar with bots, but now, in the age of generative AI and different forms of artificial intelligence, they use generative tools or AI tools to amplify a narrative attack. And these can target people or governments or organizations. You know, there's a very famous example of a beer brand a couple of years ago.

Dan Patterson [00:06:49]:
I forget exactly which one, but I think all of us can think about, you know, maybe the experience of being doxxed, or, you know, the type of media that we can encounter online that often is coordinated and amplified by bad actors who have agendas. And sometimes that is at a tremendous scale. So I think both of you have probably heard the narrative that, well, you know, AI is just good for slop, and— what good is it doing? It's costing jobs and taking all this energy. And there's some truth to some of those narratives, but we use artificial intelligence to find narratives. And we call it a narrative because it traces and moves across different applications, from chat apps to social media, on different platforms. So we use that to find the actors, the narratives, what they're saying, who they're targeting, and importantly, how they're being amplified— the tools that are being used to amplify these narratives— and why they're targeting different people or governments or organizations. There are some things that, you know, we definitely can't or won't talk about because they're confidential. Some of our partners are organizations like NATO and large governments and representatives or people— not representatives, we don't want to get into politics with specifics— but, uh, you know, we make sure that we are protecting fairly important actors and organizations from this kind of innovative new form of attack.
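
Dan's description suggests one concrete, if simplified, signal: coordinated amplification tends to look mechanical in time. The sketch below is purely illustrative— it is not Blackbird's actual pipeline, and the fields, thresholds, and scoring heuristic are all invented assumptions— but it shows how near-uniform posting cadence could be used to estimate how much of a narrative's volume comes from likely-automated accounts.

```python
# Hypothetical sketch only: flag likely-automated amplification of a narrative.
# Field names, thresholds, and the heuristic are illustrative assumptions,
# not Blackbird AI's actual implementation.
from dataclasses import dataclass
from statistics import pstdev

@dataclass
class Post:
    account: str
    timestamp: float  # seconds since epoch

def automation_suspicion(timestamps: list[float]) -> float:
    """Score 0..1: near-uniform posting intervals look machine-like."""
    if len(timestamps) < 3:
        return 0.0
    ts = sorted(timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    mean = sum(intervals) / len(intervals)
    if mean == 0:
        return 1.0
    # Coefficient of variation: humans post irregularly (high CV);
    # simple schedulers post on a near-fixed cadence (low CV).
    cv = pstdev(intervals) / mean
    return max(0.0, 1.0 - cv)

def amplification_report(posts: list[Post], threshold: float = 0.7) -> dict:
    """Estimate what share of a narrative's volume comes from flagged accounts."""
    by_account: dict[str, list[float]] = {}
    for p in posts:
        by_account.setdefault(p.account, []).append(p.timestamp)
    flagged = {acct for acct, ts in by_account.items()
               if automation_suspicion(ts) >= threshold}
    bot_share = sum(p.account in flagged for p in posts) / max(len(posts), 1)
    return {
        "accounts": len(by_account),
        "flagged_accounts": sorted(flagged),
        "bot_share_of_volume": round(bot_share, 2),
    }
```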

Jeff Jarvis [00:08:39]:
Dan, is it just attacks or is it also, you know, I've argued that journalists have to learn new skills to listen better.

Dan Patterson [00:08:46]:
Yeah, for sure.

Jeff Jarvis [00:08:47]:
Is it also, do your clients hear things that they, before all the internet and everything, couldn't have heard before and learned from and act on?

Dan Patterson [00:08:56]:
Yeah, that's exactly right, Jeff. It is, and that's one reason we use a narrative— you know, in this case, a narrative attack or a disinformation attack. We use that word, although, you know, the words disinformation and misinformation don't have a lot of meaning to the general public anymore.

Jeff Jarvis [00:09:15]:
Sure.

Dan Patterson [00:09:16]:
Yeah, right. So, especially in the age of, uh— you know, there are more specific ways to talk about disinformation, like a deepfake. Um, but no, Jeff, you're exactly right. You know, we try to use the term narrative fairly often because it really is— there's a story in everything, right? And there's a story being told; even one post can tell a large story, and so can the person behind it. Again, maybe not precisely metadata, but the idea of metadata— the person, place, and thing talking about something, and the way that they talk or shape or craft a conversation. Communicators inherently understand this. That is almost as important as what is being said. So yeah, I think a lot of organizations, executives, and companies are interested in learning the narrative.

Dan Patterson [00:10:14]:
It's not just social listening, which feels a little dated. It's— it is the narrative of what is happening now.

Jason Hiner [00:10:23]:
Yes. And I want to say we also now have Paris here. Paris Martineau.

Paris Martineau [00:10:28]:
Sorry, I'm a little late.

Jason Hiner [00:10:30]:
No worries. Investigative journalist.

Jeff Jarvis [00:10:32]:
Paris has a boss.

Paris Martineau [00:10:35]:
You know, sometimes you can't— it's unfortunate when you have a job, you can't simply be like, I can't be in this meeting, I must podcast. You have to sit there politely and participate and then frantically message your podcast chat that you're going to be a little late. But I'm happy to be here.

Jeff Jarvis [00:10:53]:
Were you twitchy, Paris? I imagined you being twitchy.

Paris Martineau [00:10:55]:
I was definitely twitchy. It's been a strange day. I was like, yeah, absolutely.

Jason Hiner [00:11:04]:
And you— and it's the perfect— that kind of chaos is perfect for the week that we've had in AI, which we'll get to, because it has been such a week. What a week in the news, you know, too, right? So we're glad you're here, Paris. And Dan Patterson, our special guest for this week. Dan, I wanted to ask you— I wanted to double-click on one of the things that you talked about, and your CEO, Wasim Khaled, talks about this a lot. Uh, we just had him on, uh, our show, The Deep View Conversations, and he talked about this idea that you also referenced, which is essentially that, um, perception itself has become an attack surface.

Dan Patterson [00:11:48]:
Exactly.

Jason Hiner [00:11:49]:
And that is something that is really, um, almost a little bit mind-blowing, but it helps conceptualize the level of challenge that we're dealing with and that Blackbird especially is trying to help with— companies, executives, high-profile people who are in danger of being doxxed or in danger of also potentially being physically attacked. You all have signals where, as I understand it— and you can double-click on this for us— if you see a certain amount of chatter, you can present levels of risk. And even, as I understand it, different, like, lights, you know— red, yellow, green, or in reverse, green, yellow, red— in terms of the level of risk of someone in your organization potentially being physically attacked based on the chatter that's out there. So all of that is something that was barely, I think, on the radar 10 years ago when Blackbird started this. But now you have a lot of clients that depend on that kind of intelligence day in, day out, week in, week out. Maybe you could just talk a little bit more about that.

Dan Patterson [00:13:07]:
Yeah, Jason, that's exactly right. And what Wasim was referencing— and he goes into great detail in that Deep View podcast— is right: perception is the attack surface, and especially when we spend time in these algorithmic silos, that becomes our own reality. And yeah, we do have these. I mean, this is fairly nerdy, but this audience will understand it. We did just release this API. It's called Constellation, and we use that metaphor for a reason, because you can kind of see clusters of conversations.

Dan Patterson [00:13:44]:
And right, we do present— I mean, everybody has a dashboard. This is not— I mean, it is a dashboard, but it presents information in a vastly different metaphor, in a different type of view structure, because the information is far more like a narrative. And you will see— you can kind of see, right, as those lights go up, you can kind of get a sense of actions that might happen. It is really fascinating, especially when you think about it— perception is the attack surface, much like in a cyber attack. When people who work in IT or work in cybersecurity see different risk signals happen across the network— and again, I'm kind of mixing metaphors, but you can see different signals happen in narratives and then get a very similar sense that an attack is about to happen, or that one in progress could lead to physical or other types of harm.
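
Jason's traffic-light framing above maps naturally onto a thresholding rule over chatter velocity. Here is a toy version— the thresholds and the velocity/growth features are invented for the example, and nothing here comes from Blackbird's product:

```python
# Toy illustration of the green/yellow/red idea described above.
# Thresholds and features are invented for the example.
from enum import Enum

class Risk(Enum):
    GREEN = "green"
    YELLOW = "yellow"
    RED = "red"

def risk_tier(mentions_per_hour: list[int]) -> Risk:
    """Classify a narrative from its hourly mention counts (oldest first)."""
    if len(mentions_per_hour) < 2:
        return Risk.GREEN
    current = mentions_per_hour[-1]
    baseline = sum(mentions_per_hour[:-1]) / (len(mentions_per_hour) - 1)
    growth = current / baseline if baseline else float(current > 0)
    if current > 500 or growth > 10:   # sudden spike: treat as attack in progress
        return Risk.RED
    if current > 100 or growth > 3:    # elevated: watch and gather context
        return Risk.YELLOW
    return Risk.GREEN

print(risk_tier([12, 15, 9, 14, 220]))  # Risk.RED: a ~17x jump over baseline
```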

Jason Hiner [00:14:52]:
You know what's really interesting about this too, Dan, where I think it really gets to the intelligence aspect of what you do, is that it would be really easy for you all to sort of be the boy who cried wolf. Like, any signals happen and you could have the company freak out, right? Like, here we are, we're going to send you a warning— look out, something bad is going to happen. But one of the things that you all do is you will also, as I understand it, tell companies, don't respond to this. There's something that's happening right now, but what we can tell from the patterns is that some of this is bot traffic. Some of this is not actual, you know, people, or the number of people that are involved is not what it appears. And you all will give companies advice where you'll say, do not engage. Because if you engage, you will potentially amplify this to a level where you could increase the risk.

Jason Hiner [00:15:49]:
And so— it's not just always telling people that they should be freaked out. Sometimes it's telling people, this is not worth getting, you know, rolling in the mud on. You should let this go, let it play out. And our intelligence tells us that this is likely to just play itself out quietly over the next sort of 24 to 48 hours, or whatever the case may be. Am I characterizing that correctly?

Dan Patterson [00:16:15]:
Yeah, for sure. Although I think that maybe we might do that on a macro level, and that's kind of just good comms strategy. Every journalist knows that— like, just don't feed the trolls, don't get involved. And I think that on a macro level, we probably don't advise companies on how to respond, but we will give them the tools that allow their teams to make better response decisions, so they can not just make better decisions but make those decisions faster. Because as everybody here knows, sometimes this happens very quickly. I don't know if any of you have had this experience. I have— you know, years ago I was covering stories that sometimes were pretty, uh, prone to picking up, uh, different types of bad actors. And sometimes it would happen very fast, and they can find out a lot of information about you, your family, your friends, where you work, what you do.

Dan Patterson [00:17:07]:
And I just remember from personal experience that happened within seconds. And so we probably advise companies to pay attention to certain risk signals or look out for types of behavior, as opposed to like, do this in this particular instance, because everybody and every organization is different. But again, my advice is always, don't— just what you said, Jason— don't respond. Don't get into it.

Jeff Jarvis [00:17:36]:
What about cases where— I mean, the attack scenario is somebody comes after you, they don't like you, they think you're vulnerable. There's various scenarios. I'm gonna go to my favorite story of the week, and it's not AI and it's not Anthropic. It's the McDonald's CEO eating the new Archburger. Did you all see that?

Dan Patterson [00:17:55]:
I did not.

Jeff Jarvis [00:17:56]:
No. Oh, it's brilliant.

Paris Martineau [00:17:58]:
It's a man takes a bite of a burger in a way that makes it very clear he would rather be doing anything else in the world.

Jeff Jarvis [00:18:08]:
He's the CEO of McDonald's. He takes the tiniest bite you can imagine, and he's like, this is delicious. Uh, and you know he had multiple takes, because the number of fries in the fry container went up and down. Maybe— it's very possible. So I just want to get that in there because I thought it was so funny. Burger King came along and— the CEO of Burger King took a monster bite of his Whopper, which he's reduced. But my point is, finally, there was no one in that room at McDonald's who had the courage, obviously, to say, boss, I don't think you want to do this. I think something's going to happen here.

Jeff Jarvis [00:18:49]:
This is self-inflicted damage. But they didn't have— there was a management issue there in terms of not understanding how to tell the boss something, but there was the larger question of saying, what are you going to say about the company in this case? What narrative are you creating? How does that dynamic work when it's self-inflicted and when management doesn't know what to do? How much are you in a position of kind of educating them about their own companies and their own selves?

Dan Patterson [00:19:16]:
Well, you know, we don't have to say anything to a company, but what we do is kind of a spectrum— we provide a spectrum of tools and technologies. On the one hand, like I was talking about earlier, we just released an API that's hyper nerdy— like, the engineers are going to understand the API. Uh, but we also, Jeff, we have this, uh, technology called Raven Recon, which is easy to understand and easy to use. And anybody from, uh, an engineer to an executive, uh, can understand this tool. And that is kind of built for— we call it Recon because it's built for finding information that is happening to or about individuals. So even without listening to your own comms team— even though in this case they might be sitting there cringing— um, you could give it to an executive and have them— uh, in fact, my phone's going off with a likely scam right now— you could give it to an executive and they could easily understand that, okay, these narratives are happening about you right now. You know, make whatever decision you want to make, but here are the risk signals. Here is what's happening.

Dan Patterson [00:20:30]:
And again, because there are more technical capabilities with the technology— with our Constellation platform, like I referenced earlier, because it is like stars in the sky; it touches a lot of different points— you can then say, hey, engineering team, let's learn a lot more about these narratives. Who's pushing them? Are these anomalous? Are they bots? Or are these actual humans saying actual human things? Which can give you a lot of information, you know— if it's bots or if it's real people reacting. Again, Jeff, in that scenario, anybody can react to that and say, okay, I need to learn more about what's happening. I see the risk signals accelerating.

Jason Hiner [00:21:18]:
Yeah, you know, Dan, that's probably one of the reasons why a lot of your early customers were a lot of, like, crisis comms, uh, and other organizations that were dealing with some kind of crisis, uh, and they wanted to figure out, how can we manage this? How can we be smarter about, you know, understanding it and really stay on top of it? And like I said, there's a level of intelligence that your company provides that was just not even on the radar, you know, a decade ago. And now, you know, you help companies be a lot smarter, you know, about this area. Since then— you mentioned this, you know, at the top too— you've also started to engage other clients: nation states, NATO, others. And so can you talk a little bit about that— the evolution of the clients that are coming and asking for, you know, your services and the intelligence that you all are offering, and how that's kind of changed and evolved both the mission of the company and maybe the toolset that you have to offer?

Dan Patterson [00:22:33]:
Yeah, right. I mean, it really is about making more strategic, faster, and intelligent decisions and enhancing those capabilities. You know, I've been with Blackbird just under 3 years, and our CEO, Wasim, who you spoke with, and our CTO, Naushad, they started working on these problems about a decade ago, back in the era— I'm sure you all are familiar with the term fake news— when that was kind of the term du jour about what was happening in the media ecosystem and the social media ecosystem. And, you know, they have also been working, along with some of our other engineers, with artificial intelligence long before it was fashionable, and our technologies kind of advanced as those— again, no pun intended— as those narratives and as those ideas advanced, right? We went from kind of an unsophisticated concept that we had the words fake news for, which really didn't do a good job of explaining the phenomenon, and their technologies kind of looked at, okay, here is a good use case. Maybe it is crisis comms, because we can kind of figure out— using AI, or at the time probably machine learning and other technologies. And now, you know, as we advanced through maybe the crisis comms era— and I know that we had APIs, and we had different ways of tapping into the data— by the time I came on, we developed this tool called Compass, and we still use this. This is the only consumer one— I mean, I think if you have technical abilities, you can use any of our tools, but any consumer can use compass.blackbird.ai.

Dan Patterson [00:24:20]:
And this is— you know, we don't actively promote this to consumers, but it's very easy to understand. You do have to create a login, and that's mostly to prevent spam and other junk. But it will check any claim that you see online. You know, often if you're scrolling social media, you'll see a lot of very confident claims, and you'll see something that could be disinformation. It could be accurate, or it could be intentionally inaccurate information. It could be intentionally or unintentionally misleading information. Often we share stuff we don't mean to— stuff that is misleading misinformation. And you can put in anything, literally anything— post a link to something, uh, or just type something in there.

Dan Patterson [00:25:10]:
I saw so-and-so talking about such-and-such, and it will not just give you a yes/no answer; it will give you the context, with links to where you can learn a lot more about it. And it will do it fairly quickly, a paragraph or two. We have a fast version that will give you a sentence or two, but the longer version will give you good context. Now we've built it out— so, kind of to answer your question, Jason, about the trajectory— it will check videos, it will check photos and images. So you can tell, was this a deepfake, or was this manipulated— you know, a cheapfake? Was this something that was manipulated to advance a narrative? So those technologies and tools, I think, kind of help us look into the future. And like I said, we built this about 2.5, 3 years ago when I joined the company. But now, you know, with this new API and Recon, it really does take things that are on the one hand very technical, but on the other hand, you know, for executives or individuals, pretty easy to understand. It does require a technical deployment, but once it's deployed, it's easy to use and understand, and it can allow you to make very fast strategic decisions— I mean, make better decisions faster and, in theory, stay safe— whether you're in comms or government, or an executive, an organization, or an individual, make decisions that are better informed.
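
Compass's actual interface isn't documented here, so the following is only a guess at the shape of the output Dan describes— a verdict, a fast one-or-two-sentence summary, a longer contextual version, and source links. Every name and field below is a hypothetical stand-in, not Compass's real API.

```python
# Hypothetical data shape only— not Compass's real API. It mirrors the output
# Dan describes: a verdict, a short "fast" summary, a longer contextual
# explanation, and links to learn more.
from dataclasses import dataclass, field

@dataclass
class ClaimCheck:
    claim: str
    verdict: str                      # e.g. "needs context"
    summary: str                      # the fast version: a sentence or two
    context: str = ""                 # the longer version: a paragraph or two
    sources: list[str] = field(default_factory=list)

def render(check: ClaimCheck, fast: bool = False) -> str:
    """Render either the fast or the full contextual answer."""
    body = check.summary if fast else f"{check.summary}\n\n{check.context}"
    links = "\n".join(f"  - {url}" for url in check.sources)
    return f"[{check.verdict}] {check.claim}\n{body}\nLearn more:\n{links}"

check = ClaimCheck(
    claim="So-and-so said such-and-such",
    verdict="needs context",
    summary="The quote is real but clipped.",
    context="The full remarks carry the opposite emphasis.",
    sources=["https://example.org/full-transcript"],  # placeholder URL
)
print(render(check, fast=True))
```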

Paris Martineau [00:26:44]:
How do you make sure that tools like that themselves aren't unduly influenced by disinformation or, kind of, deepfakes, or just, I guess, the general low-quality nature that much of our information ecosystem has taken on, especially in this age of AI?

Dan Patterson [00:27:06]:
That's a great question.

Jason Hiner [00:27:07]:
That was very generous. Yes. Goddamn.

Dan Patterson [00:27:11]:
I mean, that's very interesting as well, right? So I think, if I understand your question, Paris, it's like: if the tools are dependent on the ecosystem, and the ecosystem itself is being manipulated, how do you make sure the tool then isn't manipulated?

Jason Hiner [00:27:26]:
Yeah.

Dan Patterson [00:27:27]:
Yeah.

Leo Laporte [00:27:27]:
Right.

Dan Patterson [00:27:28]:
So that is again where— and, like, I don't have the engineering chops to tell you technically how it works, but it is why we have engineers who really do— you know, we don't have, like, here's one whitelist of sources, and we make sure this is a good, pure list of sources that will always tell you the truth. It is pretty dynamic and robust. We don't just look at all the social networks or all of the news websites. I said this a little bit before you joined, but we will look at chat applications, the dark web, the entire information ecosystem. We have a pretty good understanding of what's happening. And because we are full of experts who are building systems that can look for this, we do see the actors that are pushing, uh, manipulated narratives, and we see the behaviors. And so we also understand the trends, the tactics, the techniques.

Dan Patterson [00:28:28]:
There are very technical words for this, but working with some partners, again, like NATO, these aren't—

Jason Hiner [00:28:35]:
using—

Dan Patterson [00:28:36]:
understanding these tactics is not a new practice. And so they will inform our engineers about the signals and the types of behaviors and the platforms on which information is manipulated. And so again, I'm not an engineer, but I know that we take some of those signals— many of those signals— and we build those into the system, so we have a better understanding and are not manipulated ourselves.
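
One way to read the "no static whitelist" idea above is that source trust is a running estimate updated by observed behavior rather than a fixed list. A minimal sketch, assuming an exponential-moving-average update— the rule and constants are invented for illustration and are not Blackbird's:

```python
# Invented illustration of dynamic source trust— no fixed whitelist.
# Trust drifts toward observed behavior: sources repeatedly seen carrying
# flagged (manipulated) narratives lose trust; clean sources regain it.
def update_trust(trust: dict[str, float], source: str,
                 flagged: bool, alpha: float = 0.1) -> float:
    """Exponential moving average; 1.0 = clean history, 0.0 = always flagged."""
    observation = 0.0 if flagged else 1.0
    prior = trust.get(source, 0.5)  # unknown sources start neutral
    trust[source] = (1 - alpha) * prior + alpha * observation
    return trust[source]

trust: dict[str, float] = {}
for source, flagged in [("siteA", False), ("siteB", True), ("siteB", True)]:
    update_trust(trust, source, flagged)
# trust now reflects recent behavior instead of a fixed list of "good" sources.
```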

Jason Hiner [00:29:06]:
Mm-hmm. Dan, so, um, I want to be respectful of your time. One last question I'm thinking about. So this compass.blackbird.ai— this is a great resource for, you know, everyone, um, in the audience to be able to use, uh, if they have questions about the veracity of an image, of a report, of a video, you know— uh, is it a deepfake? Is it, uh, manipulated? All of those things. Could you talk a little bit about how the company itself is using AI? Are you developing your own models for that tool? Also, when I talked to Wasim, one of the things he mentioned that sort of, you know, scared me straight a little bit was he was saying that if you're a leader and you're not working and thinking about AI agents right now, and you're really just still using chatbots, you're already behind. Like, you need to really be thinking about what are the ways that agents can transform, you know, your organization, the ways that you work, the ways that you operate, all of those things. And so I thought this would be a great opportunity to talk a little bit about the ways you all as a company— even though you are and have been an AI company since before it was fashionable, as you said— you know, how AI itself as a tool is changing some of what you do and the ways that you do it. Yeah.

Dan Patterson [00:30:34]:
Yeah, for sure. So we do build and train our own models. And, you know, the second part of your question, Jason— Wasim probably spent quite a bit of time articulating what executives and decision makers should be doing when it comes to agents. But I think the reality is many of these tools are so accessible that they're being used by your teams and they are transforming the business. So above you and below you, they're transforming business. And— I don't want to speak for him, but my guess is that you would just have to have the vocabulary and the capability to use these tools, which are advancing so much more rapidly than almost any other technology we've seen prior, to be able to manage and to lead teams, to make good strategic decisions for your own companies, and to know the difference between homegrown and homebuilt networks— what you can have with your own generated or trained systems versus, you know, maybe kind of the old technology challenge: Do we build it? Do we buy it? Do we integrate something? And I think that these things are happening so rapidly that decision makers and executives must have the same vocabulary as the rest of their team, their clients, their partners, and the rest of the players in the ecosystem.

Jason Hiner [00:32:10]:
Great insight. And that's one of the things that, you know, podcasts like Intelligent Machines are trying to do— help people have that understanding, have that knowledge and awareness of how these tools are advancing, so that they can work with them, learn about them, and be able to sort of lead from the front, as it were, in their companies. Dan Patterson, thank you so much for being here. Always a pleasure. Thank you for the important work that you and Blackbird are doing— you know, really providing people with tools that were not possible before, that are making the world safer, that are making us smarter about the level of threats and risks that are out there. And also just a pleasure to have you. You are one of the best people in this industry, one of my favorite humans. And so it's such an honor always to be with you.

Dan Patterson [00:33:06]:
You too, Jason and Jeff, Paris and Benito. It's great to be with you all. I really appreciate being able to talk about this stuff. Thanks.

Jason Hiner [00:33:16]:
Take care. Take care, Dan.

Leo Laporte [00:33:17]:
Take care.

Dan Patterson [00:33:18]:
We'll see you. Talk to you soon.

Jason Hiner [00:33:21]:
Okay. All right. Dan Patterson— um, what a powerhouse. I mean, that stuff that they're doing, you know— I just couldn't even have, uh, conceived of some of those things, you know, even 5 or 10 years ago.

Paris Martineau [00:33:36]:
And now it's just rapidly accelerating.

Jason Hiner [00:33:39]:
It is accelerating, right? Like, the ability to do the things that they're talking about, right? It's empowering threat actors in ways that we never could have— um, you know, I guess we could have anticipated it; science fiction has anticipated some of it. But it's at a level and a speed that's just out of this world right now. I mean, that's kind of a chicken and egg thing though, right?

Jeff Jarvis [00:33:59]:
Because science fiction is also responsible for some of this stuff.

Paris Martineau [00:34:03]:
I was going to say a lot of the people doing these sorts of things are directly influenced by science fiction.

Jeff Jarvis [00:34:11]:
True.

Jason Hiner [00:34:12]:
Chicken and egg. Chicken and egg.

Paris Martineau [00:34:13]:
Chicken and egg indeed.

Jason Hiner [00:34:15]:
Well, uh, Paris, great to see you. Um, and likewise, Paris and Jeff, thanks for letting me, you know, sit in the Leo seat for this week.

Jeff Jarvis [00:34:26]:
We get in trouble when we do it.

Paris Martineau [00:34:27]:
We do— I just want to say Jeff and I are on hiatus because we bring up, uh, too many spicy stories.

Jason Hiner [00:34:35]:
I'm here to take all the bits this week.

Paris Martineau [00:34:37]:
Thank you, thank you.

Jason Hiner [00:34:40]:
Everything that's great, I will tell them, was all your guys's idea. You know, all of the mistakes were mine.

Paris Martineau [00:34:46]:
So now you're speaking our language.

Jason Hiner [00:34:49]:
Excellent, excellent. We have so much to cover. I want to start— a big week in AI. Oh my gosh.

Dan Patterson [00:34:56]:
Yeah.

Paris Martineau [00:34:57]:
Oh my gosh, we have a lot of ads this week.

Jason Hiner [00:35:02]:
Let's pause and send it over to Leo, um, to talk about one of the sponsors for this week's show.

Leo Laporte [00:35:11]:
This episode of Intelligent Machines brought to you by DeleteMe. If you have ever wondered how much of your personal data is out there on the internet for anyone to see, please do me a favor, don't look, because it's more than you think. It's appalling. Your name, your contact info, your Social Security number, in many cases your home address, even information about your family members— all being, completely legally, I might add, compiled by data brokers. And they are completely legally selling it online to anyone— foreign governments, marketers, law enforcement, hackers, anyone. Anyone on the web can buy your private details, and that can mean the worst— identity theft, phishing attempts, doxxing, harassment. But there is a way to protect your privacy, with DeleteMe. Look, I live online, and I know how important this is.

Leo Laporte [00:36:08]:
In fact, we use DeleteMe. I think every company should use it for their management, because we were getting phished. People were able to find out all sorts of information about our team and use it to send very credible phishing texts trying to rip us off. We immediately signed up for DeleteMe, and we've been using it for years. It really works. It really works. That's why I recommend DeleteMe. It's why we use DeleteMe, and it's why you should use DeleteMe.

Leo Laporte [00:36:35]:
It's a subscription service. Now, that's important, because it doesn't just do it once. It removes your personal info from hundreds of data brokers. This is the key. There are more than 500 data brokers, and there are more every single day. So what you do: you go to DeleteMe, you sign up. By the way, it's joindeleteme.com. Make sure you use the right address, joindeleteme.com.

Leo Laporte [00:36:54]:
You sign up, you provide them with exactly the information you want deleted. They need to know what you don't want, and that way they don't delete stuff you do want. They take it from there. Their experts know exactly where to go and how to delete it. They'll send you regular personalized privacy reports. We just got one the other day showing what info they found, where they found it, and what they removed, so you know what they're doing. And this is important: it's not a one-time service.

Leo Laporte [00:37:20]:
DeleteMe is always working for you, constantly monitoring and removing the personal information you don't want on the internet. And you need that, because these data brokers are not the nicest people, and they're constantly rebuilding those dossiers even after you have them deleted. They have to delete them by law, but nothing stops them from recreating them. Plus, there are new ones all the time. In fact, the sleaziest thing they often do is change the business name so they can start over with a clean slate and all your information. To put it simply, DeleteMe does the hard work of wiping you and your family's personal information from data broker websites, and no one does it better. Take control of your data, keep your private life private.

Leo Laporte [00:38:00]:
Sign up for DeleteMe. We've got a special discount just for you today. You get 20% off your DeleteMe plan when you go— and this is important, get the right site— to joindeleteme.com/twit. Use the promo code TWIT at checkout. The only way to get 20% off is to go to joindeleteme.com/twit and enter the code TWIT at checkout. joindeleteme.com/twit. Use the promo code TWIT. If you just Google DeleteMe, you'll go to the wrong place. There's another company in the EU, and they don't do the same thing.

Leo Laporte [00:38:34]:
You want to go to this one. It's joindeleteme.com/twit. Don't forget that offer code TWIT. Now back to Intelligent Machines.

Jason Hiner [00:38:45]:
Okay, well, we have to talk about this weekend. Yeah, yeah, yeah.

Paris Martineau [00:38:52]:
I, I was gonna say, it's very rare that all of my conversations with normal people who don't care about AI begin with, oh my God, the AI news. And this was one of those weeks.

Jason Hiner [00:39:07]:
Certainly in AI, it's the most consequential news weekend I've ever seen. But I almost think even in terms of tech— I don't know that I've ever seen a weekend like this where, you know, tech was the story, the story, even when something as consequential as the US invading another country was happening.

Jeff Jarvis [00:39:29]:
Well, that's almost coincidental, right? Oh, and by the way, we also invaded a country or bombed a country.

Paris Martineau [00:39:36]:
And we use Claude for that.

Jeff Jarvis [00:39:37]:
And we use Claude for it, right? So the whole Anthropic OpenAI saga here would be huge on its own.

Jason Hiner [00:39:44]:
Yes.

Jeff Jarvis [00:39:45]:
Add in a war too.

Paris Martineau [00:39:46]:
Just for context for anyone who doesn't know what we're talking about: on Friday, Trump directed every federal agency to immediately cease use of all Anthropic technology. This was the culmination of a simmering brouhaha between Anthropic and the Department of Defense, which we spoke about in part last week. It's this kind of paradoxical thing where Pete Hegseth has simultaneously designated Anthropic a supply chain risk to national security, and they also used Anthropic— and Claude in particular— as part of their operations, um, to enact war in Iran.

Jason Hiner [00:40:33]:
Yes, yes.

Jeff Jarvis [00:40:35]:
It—

Jason Hiner [00:40:36]:
the number of aspects of this to unpack are, are so many.

Jeff Jarvis [00:40:41]:
Um, so we discussed this a bit last week, okay, where I think Leo's starting point was, um, similar to, uh, Stratechery's this week— kind of, okay, the government has to decide how to use these tools. I disagreed, and I think— I disagree with Stratechery as well. I think there is a need, especially in unusual times— so we try to be not too political and call this unusual times— a need to speak one's conscience and decide what's used and what's not. The analogy I make is that certain pharma companies will not sell certain drugs to certain states if they're used in executions.

Jason Hiner [00:41:24]:
Yep, yep.

Jeff Jarvis [00:41:25]:
And so companies have some rights there and have that ability. So Anthropic came along and said, you can't use our stuff to autonomously kill people, and you can't use our stuff to surveil Americans. And there was a moral aspect to that, but there was also a practical aspect of that, like, this stuff ain't ready. Yeah, you don't want to use it for that. You know, I wouldn't trust it to go kill people. What are you doing? Then the Hegseth stuff got all macho chest-beating.

Paris Martineau [00:41:55]:
Yes.

Jeff Jarvis [00:41:55]:
And then Trump got all macho chest-beating, as Paris recounted there.

Paris Martineau [00:41:58]:
And it's worth noting that this, like, designation— as of when I checked today, the Pentagon has not formally issued this supply chain risk designation through any official channels. All this messaging is done on social media, as is kind of the norm in this administration, which really adds an unusual aspect to all of it. What we've heard so far is— The Washington Post also reported this weekend that a hypothetical around nuclear ballistics might have been kind of what, for lack of a better term, blew this whole thing up. The Washington Post reported that Emil Michael posed an extreme hypothetical during a meeting in January 2026: if an intercontinental ballistic missile were launched at the U.S., could the military use Claude to help shoot it down? And the accounts as to what happened next diverge sharply. The Pentagon's version is that Anthropic responded, you could call us and we'd work it out. And officials were really mad at that because they were like, that is ridiculous. Anthropic's version is, they say, that's totally false.

Paris Martineau [00:43:11]:
We said we've always agreed to allow Claude for missile defense, which is a crazy sentence after this day.

Jason Hiner [00:43:17]:
Yeah, it wasn't part of the red lines as they call them.

Paris Martineau [00:43:20]:
Anthropic says the red lines are two specific categories: mass domestic surveillance and fully autonomous weapons.

Jeff Jarvis [00:43:27]:
Yeah, right.

Jason Hiner [00:43:28]:
Yes.

Jeff Jarvis [00:43:28]:
So that was crazy enough as a story, right? Right there, that Friday— huge implications. Is the government going to destroy Anthropic? How far could this ban go? A friend of mine at the University of Virginia said, do we have to stop using it because the university gets grants from the federal government? That's earth-shattering enough. And then along comes Sam Altman.

Jason Hiner [00:43:54]:
Yes.

Jeff Jarvis [00:43:55]:
Who comes in and says, okay, I'll do it. And apparently was doing this all along. And at various points supposedly had his own rules, agreed with Anthropic's rules— but really didn't, because otherwise they wouldn't have done it. Then he admitted that it was opportunistic and sloppy. Then he whined to his staff that this was really painful. Give me a break. And so that's added a whole other layer here to where this goes. I've got to ask you both, because you're younger than I am, which is not hard to be.

Jeff Jarvis [00:44:35]:
Does the name Eddie Haskell mean anything to you?

Dan Patterson [00:44:37]:
Oh yeah. No.

Jeff Jarvis [00:44:39]:
Oh, Paris, Paris, Paris.

Paris Martineau [00:44:42]:
I also couldn't remember most names that I've heard.

Jeff Jarvis [00:44:45]:
Well, no, this is like— this is an old, old guy TV reference. This is, uh, Leave It to Beaver.

Paris Martineau [00:44:51]:
Uh, oh, I know Leave It to Beaver.

Jeff Jarvis [00:44:53]:
Well, then you should know Eddie Haskell. Eddie Haskell was the friend of Wally's who was the two-faced slimy ass-kisser. Oh, Mrs. Cleaver, you look absolutely lovely today. Right? And that's what I did. I got AI to do a GIF for me of Sam Altman meeting Eddie Haskell, but it doesn't mean anything to you, so I'll not even bother showing it. But Sam Altman proves himself to be a two-faced traitorous ass-kisser to the government.

Paris Martineau [00:45:25]:
Is this surprising to anybody? Is this not— not to oversimplify this, but is this not the sort of behavior that led to Sam Altman's original ouster from OpenAI?

Jeff Jarvis [00:45:36]:
Exactly. So now what happens? So now it gets more interesting because he whines about, well, you know, this could be damaging to the brand, but there's a movement to delete ChatGPT. Anthropic leads, goes to the top of the downloads.

Jason Hiner [00:45:54]:
The App Store and the Google Play Store also. It has skyrocketed to the number one app in the world, passing ChatGPT, which had been the number one app in the world for, you know, 3 years.

Paris Martineau [00:46:05]:
Anecdotally— as listeners of this podcast will know, but just for context— I'm in a lot of subreddits for all the various models. And I really enjoy being in the OpenAI ones, in part because, up until recently, my main source of joy was whenever OpenAI would deprecate a model, people would freak out because they were going to lose their AI girlfriend. Now all of those people are aggressively organizing to switch to Anthropic.

Jeff Jarvis [00:46:35]:
They're angry. There have been hundreds of— Wait, let me just paraphrase so we understand. These were adamant ChatGPT fans.

Paris Martineau [00:46:43]:
These were ChatGPT fans— such adamant fans that they're, like, attuned to different models, making lengthy 100-word-plus posts whenever a brief change happens to a model or some sort of tweak is made to the system. And these people are not only switching en masse— and it overwhelmingly seems like to Claude— but they are really relishing the experience of not being in the ChatGPT ecosystem. I mean, maybe this is just the anecdotal experience that I'm seeing, but I have seen probably 20 to 50 posts in the last day or two, not looking for it, of people being like, wow, I like Claude so much more. Or, wow— I think I've seen one or two that said they like—

Jeff Jarvis [00:47:35]:
So here's my question.

Jason Hiner [00:47:37]:
Both of them.

Jeff Jarvis [00:47:37]:
Did Sam Altman do permanent damage? Did he shoot his company in the foot? Or does this go past?

Paris Martineau [00:47:44]:
I mean— I'm of two minds. One, ChatGPT is in a position now where they are the dominant market leader. They have such a head start— head start isn't even the way to describe it. They have so much more market penetration than any of the other companies. And significantly, significantly more. Both they and, uh, Gemini have so much more penetration than Anthropic, so it's hard to compare the two. But one consequence of that is, when you're overexposed, you are increasingly likely to end up getting kind of a negative— to have your brand reputation be tarnished. And I do think that there's been a number of instances that have increasingly tarnished the ChatGPT and OpenAI brand, starting with everything going on with the sycophancy, the suicidal impulses.

Paris Martineau [00:48:36]:
A common complaint I see in all these forums is that, as a counter to the kind of sycophancy and AI-psychosis-inducing tendencies of these models, now, like, you'll be asking ChatGPT for help sorting through some emails, and it'll be like, you're not crazy, you're not broken, take a deep breath and we'll work through this together. And people are like, what the heck? I'm just asking for my emails to be sorted. So I think that this exists in that context.

Jeff Jarvis [00:49:07]:
Meanwhile, they didn't report a suicidal person in Canada. Of course. Or a homicidal person in Canada.

Paris Martineau [00:49:11]:
I think that this is a huge sticking point. Like, I was out to dinner last night with a friend who is not plugged into AI stuff at all— I'd say an anti-AI person in every sense of the word. She was like, oh yeah, me and a bunch of other people have, like, signed this stop-using-ChatGPT thing, and I've been seeing that being shared around everywhere. I do think that it's a notable moment so far in this company's history.

Jeff Jarvis [00:49:36]:
What do you think, Jason?

Jason Hiner [00:49:37]:
Yes— so I think that it reminds me a little bit of, um— there was, like, um, the Uber, the boycott Uber movement, you know, um, because of all the things that came out about them. You know, they changed CEOs. They did— it caused them to change CEOs for sure. There was, like, the Instagram one similar to that, a boycott of Instagram. It does remind me a little bit of those for sure. And they both were consequential. And I think that they were brand moments where there was brand damage done. They did both recover.

Jason Hiner [00:50:21]:
And so I expect this to be a bit like that. And here's why I'm thinking so— and then there's a couple of parts of it I'd love to unpack with you all and get your thinking on too. So I think that what OpenAI still has going for it is, with the level of talent that they have at the company, they are also making these tools, like, the easiest to use. I think by and large— even lately, I've heard some developers talking about the fact that they're using Codex instead of Claude Code because, they're like, it's actually gotten better over the last few weeks. And that kind of surprised me, because Claude Code has just had it figured out among developers for a while. But ChatGPT itself— like, some of its controls, user customization, things like that are just a little bit better. I think their browser also is, uh, just a little bit easier to use.

Jason Hiner [00:51:26]:
So— people often go to the path of least resistance, as we all know as human beings. And so I expect, as long as they don't have— and there's the risk of this— some people have left OpenAI kind of publicly, right, even in the past sort of week, because over this—

Paris Martineau [00:51:42]:
employee sentiment is a huge aspect of this.

Jason Hiner [00:51:45]:
The employees— if they start to lose— and they've lost several important people over the last few days. If that exodus becomes more acute, then I'm going to be more worried. But I do think right now they still have the people and the staff and this mission of making these tools better and easier and faster, to the point that I think they will likely still own a lot of the consumer, you know, sentiment on this. And they will continue to— this will be a little bit of a blip. I do have some bigger questions, though, too, and I'd love to get your thoughts on this. The Sam Altman thing— I do think that, you know, Altman is very much almost like the mirror image of Dario Amodei. You know, Dario is, um— and my sense has always been this way— he's very sort of, um, single-minded and steadfast on an idea, right? Like that he pushes forward. And I want to talk about one of those ideas first.

Jason Hiner [00:52:59]:
Whereas Altman— I think Altman takes in a lot of things, um, and reads sort of the tea leaves and then makes some decisions of, like, you know, which way things are going. And, let's sort of listen to our audience. Listen, listen to—

Jeff Jarvis [00:53:18]:
Is he— is it unfair to say he's a little Trump-like in that he's impulsive?

Jason Hiner [00:53:22]:
I think there are— I mean, he will act very quickly, right? I'm not sure that I would even consider this move impulsive.

Paris Martineau [00:53:28]:
I think this was calculated. And, okay, I assume that Sam Altman, OpenAI, Google— I'm sure even Meta— were all salivating at the idea of getting Anthropic's contract, the $200 million contract that they're going to be—

Jeff Jarvis [00:53:44]:
Yeah, but comes with a lot of not just baggage, but bombs.

Paris Martineau [00:53:49]:
Well, none of them care because, I mean, Google rolled back its internal prohibition on AI for weapons and surveillance in February last year.

Jeff Jarvis [00:53:57]:
Yeah, now you have 900 employees of Google who have signed a letter.

Paris Martineau [00:54:00]:
Well, that's what I think is the most interesting part here— I think that, yes, all these, like, consumer reputational blights will continue to exist. But the real place where this could actually make a marked difference is in the employee talent wars that these AI companies are waging. And this is something that Karen Hao got into in her book, Empire of AI, which is that all of these companies are paying their employees crazy amounts of money. Um, they kind of have the pick of the litter, in a sense, and it's been a real struggle for all of them to figure out, hey, how can we attract the best possible talent that can give us the edge? A lot of these people were attracted to these companies by lofty promises about ethics, doing the right thing, building a technology that's going to change the world. And you're starting to see that in the way that employees of OpenAI and Google are really reacting negatively to this and being like, hey, why aren't we standing up to these absurd, uh, demands like Anthropic is? And I think that that's the sort of thing that people are going to listen to when they make their employment decisions, the decisions about where to work. If Anthropic comes to any of those people that have signed this— now Anthropic has a literal list of 1,000 employees that they could possibly scoop from these competitors.

Jason Hiner [00:55:26]:
They could go pick off. So here's the thing that is interesting, I think, for us to consider. If we consider those two red lines— if we go back to that for a quick second, because there's been a lot of— and you mentioned the Stratechery piece. Um, there's also Altman, who tweeted out and said, I don't feel like the executives of private companies should be making decisions in a democracy— that, you know, they should be made by elected officials, not non-elected executives of private companies. You know, to a degree, Ben Thompson, in his treatise, unpacks that and essentially is saying much the same thing. But here's my question about that— and you all tell me if you understand this the same way. I don't feel that Dario and Anthropic are necessarily saying this is wrong and nobody should do it. They're saying, we are not comfortable with it.

Jason Hiner [00:56:27]:
As Jeff, you said it— like, we know this technology, we know the reliability of it, we know the challenges of it, and we are not comfortable with this technology being ready to be given the weapon, where it can choose which humans should be, you know, taken out— there should always be a human in the loop on that. That seems like a pretty reasonable, you know, ask. And then the other is mass surveillance, which is illegal.

Paris Martineau [00:56:56]:
Yes.

Jason Hiner [00:56:57]:
But what the government has said— and the very important part in the contract— was essentially: we can use this technology, we want to be able to use this technology. And as I understand it, this is standard language in all Pentagon contracts: for all legal purposes. And what they said verbally was, we don't have any intention to use your technology for, you know, killing of humans autonomously and/or for mass surveillance. And yet, we have to hold the line; we're not going to allow any carve-outs of language saying we won't agree to those things specifically, because all legal purposes are part of the contract. And Anthropic said, like, we're not comfortable with that. We want that carved out, because those are things— these are areas where, like, we've seen this movie, right? Where when robots and AI can kill humans without any human intervention, the potential consequences are very, very negative.

Jason Hiner [00:58:01]:
We believe that we don't want the technology we've built, um, to be a part of that, because, you know, our feeling is the consequences are very negative, and we don't want to do that. That to me seemed like a very reasonable position, and I feel like it has gotten mischaracterized as them moralizing to the government and trying to tell the government what to do. Is that— am I understanding it correctly? What do you think?

Jeff Jarvis [00:58:26]:
Yeah, I mean, this is what we discussed last week too. And Ben, who does great analysis— though I will confess I always go to Gemini and ask it to summarize him first, because he's so long— Ben argues strenuously that you can't have companies deciding what's what. You've got to have elected officials deciding how to use these tools. Otherwise you'd have a dictatorship of companies. Well, there's a few issues here. One is, we are in exigent circumstances with the government we have. And individual responsibility and accountability will matter.

Jeff Jarvis [00:59:05]:
And so, just as the 6 members of Congress who did the video reminding members of the military that they should not follow illegal orders, because they are ultimately responsible under the precedent established at Nuremberg— pardon me, I'm not going Godwin's Law here, but I'm gonna end up going there a little bit for a minute, sorry. It's gonna get worse for a second, then it'll get better. So there's responsibility for the military person. Is there not also responsibility for the company? As I mentioned earlier, pharma companies choose not to sell their drugs to states that are gonna use them in executions. IG Farben, the manufacturer of Zyklon B, has been held responsible by history and others for selling that poison to the Nazis for the concentration camps. And so we say to that company to this day—

Paris Martineau [01:00:04]:
I was thinking of that exact comparison, Jeff.

Jeff Jarvis [01:00:06]:
You shouldn't do that. You are held responsible for that. You are accountable for that. And so the trajectory is: Ben's worried about the dictatorship of the company. Okay, I get that, but I'm worried about the dictatorship of the government.

Paris Martineau [01:00:21]:
You're worried about the dictatorship of a dictatorship.

Jeff Jarvis [01:00:24]:
Exactly. And if you're going off and doing things, what— and we constantly are saying to the AI companies, you need to be careful about how your stuff is used. You need to put in guardrails, though I think that's impossible, but we still say that. People say that, right? You are ultimately responsible. We say the same thing to social media companies. We say that to all these companies. You are responsible. You are accountable.

Jeff Jarvis [01:00:42]:
You have to be moral in these decisions. Well, okay, so Anthropic comes along and says, yeah, we have a moral line and here it is, and we don't want our stuff used in this way. And then they're being accused by people like Ben of trying to be dictators. No, they're trying to be accountable and responsible in, again, exigent times, where the risk is very high that their technology could be used in a way that would at the least shame them in history. So I think that they've got an opportunity and a need and a responsibility and a right to say no. So it's a really interesting issue here of where you go. And then if we go to Google, I think Google's right now trying to hide, like, just forget us for a while.

Paris Martineau [01:01:32]:
We're not really here. It's very interesting, Google and Meta both like—

Jeff Jarvis [01:01:35]:
And Microsoft, I think, and Amazon, right? They're all— well, Amazon's— we'll get to that in a minute. But they're all kind of trying to hide. But Google now has employees rising up again, as they did in the robot days, that is, the days when Google had a robot company, saying, no, you don't use this stuff for war at all. Yeah. And so where do these other tech companies go, for all the reasons you mentioned, Jason, but also for their moral and legal responsibility to themselves and their legacy?

Paris Martineau [01:02:05]:
Uh, I also want to point out Lieutenant General Jack Shanahan, who was the inaugural director of the DoD's Joint Artificial Intelligence Center, the guy who led Project Maven, the Pentagon AI program that famously caused that Google employee revolt in 2018. He weighed in on the subject this weekend as well, and he said painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end. He called Anthropic's red lines reasonable and said, quote, no LLM anywhere in its current form should be considered for use in a fully autonomous lethal weapon system. It's ludicrous to even suggest it. I think that speaks for itself, you know. I'm just— I'm baffled as to how this situation got to the point that it is at now, because you're dealing with some nut jobs. I mean, yes.

Jeff Jarvis [01:03:00]:
I mean, Hegseth, at the same time, went to the Boy Scouts and said, okay, you can keep girls for a little while, but you have to get rid of all the DEI. So I'm sorry, I'm going to do it again here. I'm going to go to the same place. I'm going to Nuremberg. So it's now the Hegseth Jugend, right? And they're dictating to the Boy Scouts. What do you do in that case? You've got macho. It's a macho thing: you don't dare disagree with me; I'm gonna get you, I'm gonna destroy you.

Jeff Jarvis [01:03:26]:
Yeah. And they have the mechanisms to do so.

Jason Hiner [01:03:30]:
Yeah. What was incredibly unexpected, maybe, was that they did this and they told them— now, to your point, Paris, they haven't actually executed on the threat, which is to make them, you know, a supply chain risk, which is normally reserved for foreign adversaries, right? So it was definitely a DEFCON 1 move. It's like, we're going straight to DEFCON 1. We're gonna term you an adversary to the Republic. And there was some concern, I think, that, okay, that could really damage Anthropic, right? That could have some—

Jeff Jarvis [01:04:14]:
kill them almost, couldn't it?

Jason Hiner [01:04:15]:
It could, it could kill the company, right? Or at least it could make a lot of people uncertain about whether they could still do business with them. Yes. And for those who aren't familiar, I'm sure most of our audience is, they make most of their money on their API, on their enterprise business, companies paying to use their services. So if a bunch of those companies have questions about whether it's legal for them to use it, that causes a lot of problems. And then something happened that we didn't expect, which is that American citizens, and maybe people around the world, were so inspired by the principled stance that they had taken that they started downloading Claude. Most people, if you said Claude, didn't even know it was a chatbot until last week. Yeah, it's like this very sexy— pretty pictures.

Jason Hiner [01:05:14]:
Yeah.

Jeff Jarvis [01:05:14]:
What does this go for?

Jason Hiner [01:05:16]:
What it— I mean, nobody really knew what Claude was. Not nobody, obviously. It had its fans. A lot of the tech nerds, the people who are really into AI, have been saying for— not years, for months, because in this industry it feels like years— that Claude is the best. And I've seen this, and probably you all have seen it, with a lot of people who are really deep into the AI ecosystem and use it a lot. They've been using Claude because there are just parts of it that are a lot better: more accurate, fewer hallucinations, safer, and all that. So fine. But the broad spectrum of people did not know what Claude was until this weekend. And all of a sudden it went from, like, 40th or 100th on a lot of the charts.

Jeff Jarvis [01:06:06]:
Jimmy Kimmel of AI.

Jason Hiner [01:06:09]:
Shot straight to number one within hours, essentially, after the word came out on Friday.

Paris Martineau [01:06:17]:
It surged so much that Claude also went down twice this week, in part because of a surge in downloads and usage, but also, I think, a data center got hit by a missile. So, you know, a complicated weekend.

Jason Hiner [01:06:36]:
For good old Claude. Very complicated. So Claude rises to the top. And I don't know about you all, but I'll even share an experience I had. I have a friend, does not work in tech, works in nonprofits as an educator, very smart person, wonderful friend of mine, who came to me on Saturday afternoon. And he had downloaded an AI chatbot for the first time. And he downloaded Claude. And he was trying to do two things. He was trying to make, like, a social speech.

Jason Hiner [01:07:08]:
And then he was working on this presentation that he needs to give in an academic forum. And he told me, he's like, I have to show you this. This one, I asked it to make the basics of a speech for me. And then the other thing, he said, is I had it help me think through the arguments that I need to make in this presentation. And he said, I have to tell you, and I'm going to show them to you: I'm both really upset and incredibly impressed. He said, I'm upset at how good it is.

Jason Hiner [01:07:50]:
And I'm also impressed that it could help me think through some of these things in such a powerful way. So he showed me them, and he had done a great job with these things. And I just was so blown away that the first place he went was Claude, because he had seen all this. And he subscribes to our newsletter, but only, I think, because I started there in December, and he's been keeping up with it. But that told me that Claude has truly broken through to the mainstream. It has skyrocketed to a level of attention, and not only that, but usage, in a way that really is giving it a moment. Which gets to your question: is it durable? Are they truly going to be this counterpoint? Is it a cultural moment? But I think we have to acknowledge that, at least in this 3-year AI boom that we're in, we haven't seen anything like this before, something coming out of nowhere and going from—

Paris Martineau [01:08:58]:
I had a very similar experience this weekend. A friend of mine who's always been, I guess, a big Gemini head, but it's kind of waxed and waned; there was maybe a time where he was using it more, but it wasn't that often. He mentioned to me this weekend, he was like, yeah, you know, I downloaded Claude and I'm just playing around with it. And by Monday he was like, Claude's helped me reevaluate my whole current professional progress. I've vibe coded a CRM for the custom outreach that I'm going to be doing this week. I've— I literally just got a text 5 minutes ago. It was about some sort of camera that he wants to buy. He's like, I've got to ask my new best friend, Claude.

Paris Martineau [01:09:43]:
I'm like, good for you, bro, I guess.

Jeff Jarvis [01:09:46]:
So the other— I didn't even put this in the rundown, but Nvidia, Jensen Huang, said they're going to invest the $30 billion in OpenAI, but he said that's probably it for both OpenAI and Anthropic. And what he hid behind was that they're likely to do IPOs. Yes. So that becomes another interesting wrinkle in this: they're both headed that way, I think. But I think that OpenAI has just got delayed.

Jason Hiner [01:10:14]:
Yeah, I think you're right, because Anthropic might have just gotten accelerated. Yes, sentiment. You know, the markets are based on sentiment, in addition to earnings, but really the sentiment. And the sentiment on both of these companies just shifted so dramatically in the last 72 hours over the weekend that it really changes the game, at least in the short term. But longer term, these are both going to be public companies. They're destined to be some of the biggest companies in the world. I think they are on the way. You know, NVIDIA, OpenAI, and Anthropic: it feels like, 3 to 5 years from now, they are the sort of Apple, Microsoft, Google of this era.

Jason Hiner [01:10:58]:
Now, we shouldn't count Google out either, right? It was just a few months ago that Google was sort of the belle of the ball with Gemini 3 and Nano Banana, and they do have some things going in the right direction, so we can't count them out either, I guess.

Jeff Jarvis [01:11:16]:
Yeah. So Paris, you raised in our private chat, it's probably time, Jason, to earn some more money. It is. And then come back for some more stuff because you've got 3 ads to get in before we turn into a pumpkin.

Paris Martineau [01:11:28]:
You know, everybody loves to advertise on our here show, Intelligent Machines.

Jason Hiner [01:11:34]:
They do.

Paris Martineau [01:11:34]:
4.

Jeff Jarvis [01:11:34]:
We love them. 4 today. Leo's gone, and look what happens. The money comes ka-chinging in.

Jason Hiner [01:11:39]:
The money's just shaking out of the trees.

Leo Laporte [01:11:41]:
Jason's here.

Jason Hiner [01:11:41]:
Get me there. Amazing. All right, so let's send it over to Leo to talk about another one of this week's sponsors.

Leo Laporte [01:11:49]:
This episode of Intelligent Machines brought to you by Modulate. Every day, enterprises generate millions of minutes of voice traffic. That's customer calls, agent conversations, and, sad to say, fraud attempts. Unfortunately, in most cases, that audio is still treated like text, right? Flattened into transcripts, which strips it of tone and, more importantly, strips it of intent, strips it of risk. Modulate exists to change that. First proven in gaming, Modulate's technology has supported major players like Call of Duty and Grand Theft Auto.

Leo Laporte [01:12:28]:
These games really needed it to separate, you know, playful banter from intentional harm, and they do it at scale. Today, Modulate helps many enterprises, including Fortune 500 companies, understand 20 million minutes of voice every single day by interpreting what was said and what it actually means in the real world. This capability is powered by Modulate's very powerful ELM. They call it Velma 2.0. I love it. Velma is a voice-native behavior-aware model built to understand real conversations, not just transcripts. It orchestrates 100+ specialized models, each focused on a distinct aspect of voice analysis, so it can deliver accurate, explainable insights in real time. This is an amazing technology.

Leo Laporte [01:13:19]:
Velma ranks number 1 across 4 key audio benchmarks, beating all the large foundation models in accuracy, cost, and speed, because it's designed to do exactly this. Velma's number 1 in conversation understanding, number 1 in transcription accuracy and cost, number 1 in deepfake detection, number 1 in emotion detection. Built on 21 billion minutes of audio, Velma is 100 times faster, cheaper, and more accurate than LLMs at understanding speech. And that includes the best: Google Gemini, OpenAI, xAI. Nobody does it better than Velma. Most LLMs are just, you know, black boxes.

Leo Laporte [01:13:59]:
Velma doesn't just assess a conversation as a whole, it breaks it down for greater accuracy and transparency by producing timestamped scores and events tied to moments in the conversation, which means you can see exactly what's going on: when risk rises, when behavior shifts, when intent changes. With Velma, you can zoom right in. You can improve your customer experience. You can reduce risks like fraud and harassment. You can detect rogue agents and more. Go beyond transcripts. See what a voice-native AI model can really do.

Leo Laporte [01:14:31]:
Go to Modulate's live ungated preview of Velma. It's at preview.modulate.ai. That's preview.modulate.ai. See why Velma ranks number 1 on leading benchmarks for conversation understanding, deepfake detection, and emotion detection. Again, that's preview.modulate.ai. Now back to the show.

Jason Hiner [01:14:59]:
All right, thank you, Leo.

Jeff Jarvis [01:15:00]:
And we have more war news.

Jason Hiner [01:15:02]:
We do. So before— we could talk about this whole Anthropic, Pentagon, OpenAI thing for another hour, I'm sure. The last thing I wanted, to wrap it up, is we ran a poll on The Deep View. Our audience is about half a million people every day. We run the top stories in AI, and we have a poll. And so in our poll, we asked, should Anthropic have acquiesced to the Pentagon's request to remove safety restrictions? All right, before you—

Jeff Jarvis [01:15:33]:
Just wondering the results. Paris, did you cheat and look? Uh, no. What do you think the answer is going to be?

Jason Hiner [01:15:39]:
Yeah, what do you— what do you think the answer is going to be?

Paris Martineau [01:15:45]:
Overwhelmingly no. I'm going to be optimistic.

Jason Hiner [01:15:47]:
Okay, what do you think, Jeff?

Jeff Jarvis [01:15:49]:
Yeah, I'm gonna be on the optimistic side.

Jason Hiner [01:15:51]:
We're siding with Anthropic. 79% said no, they should not have acquiesced. 17% said yes. Uh, 5% said, you know, other.

Paris Martineau [01:16:05]:
Do you have any demographic, any other details on the 17% who said yes? Like, do they have a sub-bucket that's like corporate shill or something?

Jason Hiner [01:16:16]:
I don't know, but, you know, our audience is pretty diverse. It's mostly professionals who work in the AI industry or who work with AI, and it's pretty diverse across the US and Canada.

Jeff Jarvis [01:16:31]:
Oh, the Canadians skew it then.

Jason Hiner [01:16:36]:
All those good guys. Yes. Yeah. I was surprised by that.

Paris Martineau [01:16:43]:
What about it was surprising to you?

Jason Hiner [01:16:46]:
What was surprising to me? I figured it would be no, the majority, but I thought 55-60%, right?

Jeff Jarvis [01:16:56]:
Well, with most anything these days, you will find a minimum of 35-40% who will side with the administration.

Paris Martineau [01:17:03]:
Exactly, exactly. Well, as someone who's spent probably many hours at this point with our survey team at Consumer Reports whenever I've had to write these sorts of things, I think that part of the reason why you get such an overwhelming response is the way the question—

Jason Hiner [01:17:19]:
the way the question was asked.

Paris Martineau [01:17:20]:
It's, it's true. It was definitely asked in a way that makes it clear what the moral choice is.

Jason Hiner [01:17:29]:
It's true, it was a little bit of a leading question.

Paris Martineau [01:17:32]:
So, but it's also kind of a leading scenario, you know. I would argue that that's an accurate way to describe the situation, even if it is leading.

Jeff Jarvis [01:17:42]:
How would you otherwise, uh, word it? Did Anthropic do the right thing or the wrong thing?

Paris Martineau [01:17:48]:
Check one. Well, no, probably— I mean, the thing I've learned from talking with, like, we have this whole team of professionals, I don't even know what the profession of surveyors is called, and it's a million different caveats. It would be like a paragraph that dryly summarizes the debate, providing arguments on both sides, and then says, do you agree or disagree with Anthropic's stance as stated, or something like that. It would be making it a lot more boring, opaque, and kind of hard to parse, which I think is perhaps a disservice to—

Jason Hiner [01:18:25]:
It's like: Anthropic made its stance on the two items that it believes should not have been left to LLMs to do, whereas the government believes that they are elected officials and should be the ones that decide. Where do you stand? Boom, boom. Like, something like that would have been a little more—

Paris Martineau [01:18:50]:
Yeah, but I think also, I mean, part of the thing is there's like a million ways to slice and dice surveys.

Jason Hiner [01:18:55]:
I'm not sure that it's entirely a useful endeavor. I thought this was the most— 'cause, full disclosure, I wrote the question. So I thought this was the most obvious question to ask, ultimately.

Jeff Jarvis [01:19:09]:
It was like, should they have acquiesced or not? Well, I argue that all surveys are biased by their nature, period.

Jason Hiner [01:19:17]:
Yes.

Paris Martineau [01:19:18]:
It's fair. And I'd also argue that anyone who's reading your newsletter and responded to the survey already has a more robust understanding of the situation than the average survey respondent.

Jason Hiner [01:19:29]:
That's right. That's right. So that's fair. Like, they understand what an LLM is. They understand the risks. They understand hallucinations. They understand how often they get things wrong, and they probably are less likely to trust them to do really important and sort of existential kinds of things.

Paris Martineau [01:19:45]:
Is this one of the most overwhelming responses you've gotten?

Jason Hiner [01:19:48]:
It is. It actually is one of the most overwhelming responses we've ever gotten to a question that went sort of one direction.

Paris Martineau [01:19:54]:
Do you remember any others off the top of your head?

Jeff Jarvis [01:19:57]:
Claude 4. Should Claude 4.0— I mean, should GPT-4o be killed?

Paris Martineau [01:20:03]:
I mean, should we be allowed to marry GPT-4o?

Jason Hiner [01:20:09]:
85% say yes. Yes, yes. Now, I do want to go to the Amazon questions. There are some other war-related things that we should touch on.

Paris Martineau [01:20:18]:
So we've got a big old war section tonight, guys.

Jeff Jarvis [01:20:20]:
Oh my gosh.

Jason Hiner [01:20:20]:
Yeah, we do. A whole war section. So, Jeff, why don't you talk a little bit about the Amazon?

Jeff Jarvis [01:20:25]:
This is straightforward: Amazon says the drone strikes damaged 3 facilities in the UAE and Bahrain, and no one is saying directly that they were targeted. However, things are pretty well targeted these days. And Amazon is an American institution bigger than Kentucky Fried Chicken, in these foreign places, with the internet and technology, with everything that's going on. It's really interesting to me that American tech now becomes a pretty clear target.

Jason Hiner [01:21:08]:
Yeah, it's a really interesting development. Some of the things that are most known about America and Americans are these companies. These global tech companies that are the biggest companies in the world are in some sense the biggest symbols of what is American, in the same way that Coca-Cola might have been, or Nike, or other companies in past generations, the most iconic things. Kentucky Fried Chicken, you mentioned, Jeff, McDonald's. In a very real sense, the tech companies are the emblems of what America is. And so in that sense, they are also, we've learned now, the biggest targets if you want to make a statement about your feelings.

Paris Martineau [01:22:01]:
Rest of World had good reporting on this as well this week. It kind of captured the larger stakes, which is that the Gulf has basically positioned itself as a safe harbor for the world's data, to attract Silicon Valley. There's been over $2 trillion in investment pledges made during Trump's Gulf tour last May. And it's been positioned as, quote unquote, the third global center for AI, alongside the US and China. And now, I mean, there was a researcher at Qatar University who told Rest of World the security frameworks behind the US-UAE AI partnerships were built for supply chain control and political alignment, not for protecting buildings during a military crisis. And now this just makes it increasingly complicated.

Jeff Jarvis [01:22:56]:
It does. And I want to go back to what we were talking about before too, in terms of Google, Microsoft, Meta, and Amazon. Yeah. They were all scared of pissing off the administration. Now they're also scared of pissing off the populace, of pissing off nations, in Europe, I mean, that are not necessarily aligned with what's happening out of America and Israel right now. And they're pissed. They're scared of pissing off Middle East powers.

Jeff Jarvis [01:23:27]:
They're hot under the collar.

Jason Hiner [01:23:28]:
This is not easy. There are hard decisions to be made in those cases. Yeah. This is where it lands. I remember somebody telling me, that's when you actually have to be a leader. Most of the time, when things are going well, your job is just to sort of keep the trains running on time, right? When things get hard and there are difficult decisions, that's when you need a leader, right? Those are the times when leaders have to earn their money. They have to make very difficult decisions.

Jason Hiner [01:23:55]:
And this is one of those moments where there are signals to sort out and try to understand. And I think the one that you mentioned, Jeff, that maybe was the X factor is the populace. I don't think we expected it. We saw it in terms of this poll, but also in terms of the way people voted with their downloads, in overwhelming fashion, with Claude over the weekend. People have made a large statement about where they stand on this, in ways that have really been, in one sense, encouraging, right, from maybe a democratic process standpoint, I don't know if you want to call it that, and in another sense, in terms of almost a level of engagement and activism. And I don't mean activism in maybe the traditional sense, but maybe just a level of not being, you know, just whining bystanders, sort of.

Jeff Jarvis [01:24:57]:
I, I'm serious about Jimmy Kimmel. I think it's a Jimmy Kimmel moment for AI. Okay. Um, where, um, uh, Viacom, when it lived, thought, okay, no big deal, you know. I mean, I'm sorry, Disney.

Paris Martineau [01:25:13]:
Disney, wrong. Yeah.

Jeff Jarvis [01:25:13]:
Um, wrong megacorp. Yeah, wrong late-night network. No, um, Disney thought, well, okay, so this is the obvious thing we got to do, so we'll do it. Okay, no big deal. Well, you know, we'll eat some crow with Kimmel and then figure it out. Yeah. Uh-uh. Nope. Got into much hotter water, and that gave them the cover and the courage to say no to the administration.

Jeff Jarvis [01:25:40]:
So that's what's going to be interesting in all this. Yeah. I do think European regulators are going to start speaking up too and saying, no, we don't want to use tools that are used to autonomously kill people. We don't want to use tools— and why are you just saying, don't surveil Americans? Why don't you have the same standard for the rest of us? Sure, sure. What gives here?

Jason Hiner [01:26:05]:
They're going to get caught in that vise if they could encode some of that. I mean, what we've been seeing is that, because of the political division and the gridlock in the US, in passing laws, in sort of functioning, the legislative branch has been in gridlock and not functioning well for a couple decades. And because of that, the European regulators have been really setting the standards on a lot of these things. And when they do, these global companies often prefer not to have two different sets of rules they play by. And so the European standards will often be propagated, although we have started to see that splinter in some ways the last few years. There are some features or products that aren't available in the same way in Europe as they are in the US. And so we'll see how sustainable that is long term.

Jason Hiner [01:27:15]:
But this could get codified. Some of these things that Anthropic has brought up, to your point, Jeff, could get codified by the EU and/or other places. And that could have this sort of global impact on some of these companies.

Jeff Jarvis [01:27:32]:
That's gonna be really interesting to watch. If you're Anthropic or if you're a company that now doesn't know what to do, the question is who can give you cover? Oh gee, we'd love to do this, but we really can't. Look at all the implications.

Paris Martineau [01:27:43]:
Yeah. I do think one aspect of this that has been interesting to me, and I'm probably bungling the precise details of it, but I've always heard that there's some part of the terms for Anthropic employees' equity packages that says, like, by working at Anthropic, you have to recognize that we may very well make choices that reduce your equity to absolutely nothing, make it absolutely worthless, based on the moral and ethical standards that we have built as central principles of the company. And this is a perfect example of that. It's an example of the competing values inherent in trying to combine morality or ethics with not only a capitalist ecosystem, but perhaps one of the most hyper-capitalist ecosystems we've ever seen, in terms of the AI race. Yeah.

Jason Hiner [01:28:41]:
It's interesting. This came up over this weekend as well, Paris. I'm glad you surfaced this or elevated this, because I hadn't seen that before or heard it or read it. But basically some people took the language and put it on Twitter and said, like, in the employment agreement, it says we may make decisions, just as you said, that could take the value of the company to zero, but that we will make these decisions based on, essentially, our company mission. So now I'm going to play devil's advocate for a minute.

Jeff Jarvis [01:29:14]:
That's a properly Joe Rolle, yes.

Jason Hiner [01:29:16]:
Thank you. So Nat Rubio-Licht, who works for The Deep View, wrote a perspective or a commentary. We have this at the end of all of our stories. It's called Our Deeper View, where we're trying to really get to what's the thing, right? Not just report the news, but also get to what's really going on here. And in one of these over the past few days, what she wrote was that, in one sense, what else was Anthropic going to do? They built the company on this mission: we are the safe AI, we are the principled AI, and we are the ones that's going to put guidelines in place. And so did they really have any other choice? Acquiescing would've literally—

Paris Martineau [01:30:08]:
I would say absolutely they did. Every single company in existence has basically made the other choice. Google used to have a core principle of don't be evil, and they went so far the other way. They were like, we're scratching that out, buddy.

Jason Hiner [01:30:23]:
It's too hard. So this is why we published that argument just as it was. And I am really proud of the way that she wrote it. I thought it was very well reasoned and very clearly stated. When we talked about it, one of the things that I said was similar to what you mentioned, Paris: what we've seen throughout history is that, when these moments come, the technology companies have all said, we make the tools, how other people use them is up to them. They sort of wash their hands of it. This goes all the way back to IBM in World War II selling technology to Germany. So I know we keep going back to Germany in the 1930s. Here we are, back to Germany.

Jason Hiner [01:31:06]:
All roads always lead back there. For the disaster that it became. But that was what IBM said in the '30s. This is what Microsoft said in the '80s and '90s. This is what Google said, as you said, in more recent history. As Paris says, the technology companies always default to being sort of morally neutral: we make the tools, what people do with them we can't really control.

Jason Hiner [01:31:28]:
And so one of my counterpoints was that Anthropic actually did the opposite, just saying: no, we will not do that. We have certain red lines that we can't cross, because we are not confident that the technology can do this and do it well, and the consequences of it not working correctly are disastrous, and are core to democracy, core to human rights. Like, we can't do it and we won't do it. I find that pretty unique in human history in terms of all these tech companies. I can't think of another example.

Paris Martineau [01:32:07]:
Other—

Jason Hiner [01:32:07]:
Jeff's example of the drug companies is a good one.

Jeff Jarvis [01:32:11]:
The pharmaceutical companies.

Jason Hiner [01:32:12]:
That's pretty edgy. Well, but that is—

Paris Martineau [01:32:14]:
I mean, I think had you asked me a week or two ago what I thought Anthropic would do, I would have said, oh yeah, they're going to cave. Keep their military contract, make sure they're not cut off from the supply chain like every other company. I will say it is very startling to me, surprising that they decided to literally put their money where their mouth is.

Jeff Jarvis [01:32:35]:
So let me go devil's advocate again. Okay, thank you. Yes, yes. Here it is: I contend, and I've said this on the show often, that the idea of guardrails is a lie. Okay, it's a general-purpose machine. My example always— hello, Gutenberg— is— thank you, Benito, for the plug there, the full screen for those watching. Uh, you can go back to the three-shot now. Uh, is that the printing press was a general machine.

Jeff Jarvis [01:33:09]:
You couldn't have said to Gutenberg, okay, you can do this, but there's this guy who's going to be born called Martin Luther, keep him away from the damn thing. You can't, 'cause it's a general machine. And AI, though I don't believe in AGI and all that, is a general machine to the extent that anybody can make it do anything they want. And the guardrails are a lie. So in a sense, on the one hand, Anthropic could have said, yeah, we have no control over how people use our tools. Exactly the way you put it, Jason, exactly that. We have no control.

Jeff Jarvis [01:33:41]:
It could be used any way. But then, on the other hand, they say that it's not up to the tool to be controlled, it's up to the people. And so it's up to us to tell the customers, you may not use it for this. And there's plenty of other examples of that, right? I mean, when I recorded the audiobook for Magazine, on sale now, at the end they were going to have me read a statement saying, "No company may use this in any form or any universe, now or in the future, for AI, or we're gonna kill you." And I said to the producers, "I can't say that. Not coming out of my mouth. Nah." And so they're setting a restriction. What are terms of service? They're all restrictions that are put on products. Whether we read them or not, whether we follow them or not is another matter, but companies all the time say, you may not use this in this way. Okay? You may not do these things.

Jeff Jarvis [01:34:40]:
So it's not about the tool itself being foolproof. It's about the need to tell the people who use it how they may use it, in your view. And if you don't like it, don't buy it, should be what's said. I think it's also interesting here. I know we're going back to our first story again, because it's so big.

Jason Hiner [01:34:59]:
It's so big.

Jeff Jarvis [01:34:59]:
What I don't— one thing I don't understand about the timeline of all this is that this was in their contract and in their rules, right, from the beginning. You may not use it for these two things. What was it? Just the war game that motivated them to come after Anthropic? Was it just wanting to be hardasses with Anthropic?

Paris Martineau [01:35:17]:
Yeah.

Jason Hiner [01:35:18]:
What triggered it to get this bad? Yeah, we are missing some information on how that unfolded, and why, and when. I think it's likely to come out in the coming days, weeks, months. But yes, we don't know that, and it will potentially be helpful to us in understanding the story. For now, I'm going to send it back to our good friend Leo for another one of this week's wonderful sponsors.

Leo Laporte [01:35:49]:
This episode of Intelligent Machines brought to you by Zscaler, the world's largest cloud security platform. Look, we here at IM know the potential rewards of AI, and you probably should know about them too. It's just too great for your company to ignore. But we're also aware, and I hope you are too, of the risks. Not just loss of sensitive data and attacks against enterprise-managed AI, but also, frankly, threat actors. Generative AI increases their opportunity, helping them to rapidly create phishing lures, write malicious code, and automate data extraction. There were 1.3 million instances of Social Security numbers leaked to AI applications last year. ChatGPT and Microsoft Copilot alone saw nearly 3.2 million data violations. You don't want that to happen to your company.

Leo Laporte [01:36:41]:
You got to rethink your organization's safe use of public and private AI. Just check out what Siva, the director of security and infrastructure at Zuora, says about using Zscaler to prevent AI attacks.

Siva [01:36:56]:
With Zscaler being inline in a security protection strategy, it helps us monitor all the traffic. So even if a bad actor were to use AI, because we have a tight security framework around our endpoints, it helps us proactively prevent that activity from happening. AI is tremendous in terms of its opportunities, but it also brings challenges. We're confident that Zscaler is going to help us ensure that we're not slowed down by security challenges, but continue to take advantage of all the advancements.

Leo Laporte [01:37:24]:
Thanks, Siva. With Zscaler Zero Trust plus AI, you can safely adopt generative AI and private AI to boost productivity across the business. Their Zero Trust architecture plus AI helps you reduce the risks of AI-related data loss and protects against AI attacks to guarantee greater productivity and compliance. Learn more at zscaler.com/ai.

Jason Hiner [01:37:49]:
.com/security.

Leo Laporte [01:37:50]:
That's zscaler.com/security.

Jason Hiner [01:37:50]:
Now back to the show. So I want to talk about one more big thing that hap— there's so many big things, like we could go and talk about a lot of other things, but there is one—

Paris Martineau [01:38:02]:
we could have another 3 hours of this podcast.

Jason Hiner [01:38:04]:
I know, we could do a lot. I've never seen a week like this, and I feel like I've said that about 3 or 4 times so far in 2026. But here we are. I want to talk about a product that Perplexity announced over the past weekend. Perplexity had been flying a little under the radar so far in 2026. But they released a product in— well, this has become the year of AI agents. The personal AI agent has become the thing. Anthropic's products Claude Code and Claude Cowork were a big part of this.

Jason Hiner [01:38:42]:
That has become an AI agent. Leo has talked a good deal about that and the success he's had in automating some things in ways that he has found incredibly helpful and powerful. And then, of course, we've had the whole OpenClaw, Clawdbot, Moltbot, Moltbook phenomenon all of its own. And then OpenAI, of course, hired Peter Steinberger, and OpenClaw has become its own foundation. That has, again, risen to a new level of consciousness, this concept of AI agents. And when OpenAI hired Peter Steinberger, they basically said, we're going to have Peter come here and create AI agents for everyone. We're going to make AI agents that are just so much easier to use, because you have to be a bit of a techie to use either OpenClaw or Claude Code and Claude Cowork. And so they're like, we're going to make this a lot easier. And then, one week later, Perplexity released exactly the product that they talked about.

Jason Hiner [01:39:50]:
And obviously they didn't do it in a week. They've been working on it for a couple of months. But clearly the level of acceleration in the space, and the ability to use these coding tools to elevate what engineers are capable of and the speed at which you can ship new products, has taken the velocity of this industry to a level we've just never seen before. And Perplexity Computer, so, full disclosure, I had a bit of an exclusive on that. I published the story. It was at the top of Techmeme when they released it at the end of last week, and I got a chance to use it a little bit right away. So I can speak to it. But there was something else interesting. There are like 3 things that this does that other agents don't, and we can talk about them.

Jason Hiner [01:40:47]:
But the thing that happened with this that was really wild was on Twitter. Essentially, the Perplexity team, and I have this on good authority from the Perplexity folks, they said to their team, like they do every time they launch a new product, hey, this is our new product. If you want to tweet about it, you're welcome to. And usually you get, like, a handful of people that do it. Well, unforeseen by Perplexity, their team, which had been trying this thing and loving it, went on Twitter and just exploded Twitter with it. And so this thing spread far and wide really quickly and gained a bit of a viral moment. And it got helped by one other thing, which is that there were some people that combined Perplexity Computer, which is what they call their AI agent, with Perplexity Finance, their sort of Yahoo Finance competitor. And they basically were like, I used this to make a Bloomberg Terminal competitor. This is the thing: they're like, I one-shotted it. I used this AI agent to build my own terminal on Perplexity Computer, and I just canceled my $30,000 subscription.

Jason Hiner [01:42:04]:
And that gave it a whole other level of interest and buzz. But this Perplexity Computer thing is really interesting. You know, Jeff, when we were talking about it before, you were like, so is this OpenClaw? This is just like an OpenClaw that everybody can use. And I thought that was perfect; my very first headline had a little bit of that in it too. I think it's a great way to think about it.

Jeff Jarvis [01:42:27]:
So let me ask you two questions about that. One, I think I asked whether it was OpenClaw but ready for prime time. Is it in some way better, safer, not just slicker, than OpenClaw? Is that possible?

Paris Martineau [01:42:40]:
And then second— I mean, I think almost anything is safer than OpenClaw.

Jeff Jarvis [01:42:45]:
Is it safe enough? Yeah, that's true. That's true. Yeah. Though, you know, I just saw a story, I don't think I put it in the rundown, that Cursor, is it?

Jason Hiner [01:42:56]:
What do they call their browser?

Jeff Jarvis [01:43:00]:
Oh, Comet.

Jason Hiner [01:43:01]:
Comet, Comet, thank you.

Jeff Jarvis [01:43:02]:
Comet could be— I think until about a month ago, one calendar invite could corrupt everything you have, which they fixed, but that was an issue. But my other question is, 'cause the one thing Perplexity has always done well is stay on top of PR. They're really good at that. Like, Comet is an example: they knew everybody was gonna come out with these things, and they came out with theirs first. They do these sometimes outrageous things. Do you think that they wished they had released Computer before OpenClaw, or did OpenClaw open the door for them saying, we got something better?

Jason Hiner [01:43:41]:
Yeah, I think you're right, Jeff. They like to move fast, and they've pioneered a bunch of the things that OpenAI and Anthropic eventually ended up doing, right? I see what you're getting at. And so I think they do like being first. In this case, I do think that OpenClaw gave them a chance to ride that buzz a little bit, in a way that maybe AI agents would've otherwise felt a lot nerdier. I mean, they still feel nerdy, but at least OpenClaw created a lot more curiosity. But it's very hard to use. You have to be very technical. It's very command line.

Jason Hiner [01:44:18]:
Oriented around the command line. There are some hosted versions of it you could get that are a little easier. But did you get to play with Computer? I did. You can only use it if you have a Perplexity Max subscription, which is their $200-$250-a-month subscription. So I had a version of that that I could test it with. And there are just a few things that it does really well. But Paris, I can see you're dying to—

Paris Martineau [01:44:43]:
My issue is that it's a terrible name because Jeff just said, did you get to play with computer? And that made me involuntarily laugh. Like, they might be good at PR, but naming your product computer, it's not gonna work.

Jeff Jarvis [01:44:58]:
It's—

Paris Martineau [01:44:58]:
how am I gonna tell my mom to download computer?

Jason Hiner [01:45:03]:
Download computer. Um, Mom, you gotta get on computer. I know, I had the same first reaction, Paris. The funny thing is how quickly I'm like, okay, Perplexity Computer is like PC, right? And so it's basically worse. It's even worse. So generic. Yeah.

Jason Hiner [01:45:22]:
That it's, that it's bad. But the product itself is interesting. It does 3 things really well, I think. So this is where it gets super nerdy. With these AI agents, you have to have an API key, which is essentially where you pay per use, because these things use a lot of what are called tokens. That's AI inference. And I know this is a lot, but basically, every time you use one of these AI models, it's expensive.
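
To make that concrete, here's a rough back-of-the-envelope sketch of how per-token billing adds up for an agent. The per-million-token prices and the token counts below are made-up placeholders, not any provider's actual rates; the point is just the arithmetic.

# Back-of-the-envelope agent cost math. Prices are illustrative
# placeholders, not any provider's actual rates.
PRICE_PER_MILLION_INPUT = 3.00    # dollars per 1M input tokens (assumed)
PRICE_PER_MILLION_OUTPUT = 15.00  # dollars per 1M output tokens (assumed)

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars for one model call under per-token billing."""
    return (input_tokens / 1_000_000) * PRICE_PER_MILLION_INPUT \
         + (output_tokens / 1_000_000) * PRICE_PER_MILLION_OUTPUT

# A single chat turn uses a few thousand tokens...
print(f"one chat turn:  ${run_cost(2_000, 1_000):.4f}")      # $0.0210
# ...but an agent looping over a long task can burn millions.
print(f"one agent task: ${run_cost(3_000_000, 500_000):.2f}")  # $16.50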

Jason Hiner [01:45:56]:
And right now, if you pay for your $20-a-month plan, you are typically somebody who probably doesn't use it a lot; most people don't. But these AI agents use a lot of computing power, if we just put it that way. And so if you're using them, OpenClaw, or even Claude Code, you have to have an API key, because basically you're paying per use. If you use a bunch more, you're going to pay a bunch more. The first thing that Perplexity did to make AI agents easier is they do away with all of that. And that's why they only have it on the expensive plan for now, because they essentially are giving you a bunch of usage when you use that plan. Now, if you go over this massive cap, so if you're a really hardcore coder or something, fine, you'll probably still have to pay.

Jason Hiner [01:46:46]:
But if you don't, if you stay under that, you're just going to use this like everybody would. That was the first thing. The other thing that it does that's really unique and interesting: Claude Code only uses Anthropic's models, right?

Paris Martineau [01:46:57]:
Yeah, I mean, I think that's the actually really notable and interesting thing here: you could have different models. Opus 4.6 for core reasoning, Gemini for deep research, Grok for yelling at someone on the internet, ChatGPT for sycophancy.

Jason Hiner [01:47:13]:
You could have it all. And it's pretty good. It routes your queries to the best models, just as you're saying; it knows which ones are good at which things. And that part is pretty good. That's the second thing. The third thing it does: say you want it to build an app, say you want it to build the Paris app for scanning Twitter and giving you story ideas that relate to topics X, Y, and Z, and you also want to share it with, like, one person on your team. Perplexity Computer can do it, it can deploy it to the web, and then you can share that URL.
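
As a loose illustration of that routing idea, here's a minimal sketch. The task-to-model table, the keyword heuristic, and the model identifiers are assumptions for illustration only; Perplexity hasn't published how its router actually works.

# Minimal sketch of per-task model routing, as described on the show.
# The routing table and keyword heuristic are illustrative assumptions,
# not Perplexity's actual implementation.
ROUTES = {
    "reasoning": "claude-opus-4.6",        # core reasoning (name from the show)
    "research": "gemini-deep-research",    # hypothetical identifier
    "code": "hypothetical-code-model",     # hypothetical identifier
}

def classify(task: str) -> str:
    """Crude keyword heuristic standing in for a learned router."""
    t = task.lower()
    if any(w in t for w in ("research", "sources", "summarize")):
        return "research"
    if any(w in t for w in ("build", "code", "app", "deploy")):
        return "code"
    return "reasoning"

def route(task: str) -> str:
    """Return the model this task would be dispatched to."""
    return ROUTES[classify(task)]

print(route("build me a morning news scanning app"))    # code model
print(route("research drone strikes on data centers"))  # research model

A production router would presumably use a learned classifier rather than keywords, but the shape is the same: classify the task, then dispatch it to the model best suited for it.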

Jason Hiner [01:47:54]:
Whereas if you had Claude Code, if you had Codex, that's OpenAI's program, if you had Replit or Cursor, if you had one of those, you have to make the code, and then you have to go deploy the code to a server somewhere. And they learned this from Lovable, for those who are familiar with that. That's an AI agent sort of coder where you just make the thing and it deploys right away on Lovable. You can make your thing in 10 minutes and then send a URL. You just made a web app and you—

Jeff Jarvis [01:48:25]:
And it uses Gemini. Gemini, right?

Jason Hiner [01:48:27]:
Lovable uses Gemini. I'm not positive about that, but I think you're right. I think it does. It is working with Gemini. It's a company that's based out of Northern Europe, and we like that. Yeah, they are a company that has figured out that one piece: oh, if we let people make the thing and deploy it right away, that's a big plus. Well, Perplexity Computer learned that, and it does that as well. So for example, I had it make an app.

Jason Hiner [01:48:54]:
I have this app test where I want it to go every morning and scan all of the sources. I tell it, here's a bunch of sources, scan these and a bunch more like them, and it'll pull, like, 20 sources for me. And I had ChatGPT and Claude make this query for me, to make essentially a morning news scanning app. I took that, and most of these programs break on it when I run it, or don't do it right; they mess something up. Lovable did the best job of making it and then putting something on the internet that I could use right away. The only other one that could do that was Perplexity Computer. The very first thing I gave it was this kind of somewhat complex, make me a morning AI news gatherer. It did it right away, and it deployed it, and I could send the link out. And I was like, whoa, okay. So this is more powerful than Lovable.
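
For a sense of what a morning news scanning app like that boils down to, here's a minimal sketch using the feedparser library (pip install feedparser). The feed URLs are placeholders, and a real build from one of these agents would add scheduling, deduplication, and a web front end.

# Minimal sketch of a morning news scanner like the one described above.
import feedparser  # pip install feedparser

SOURCES = [
    "https://example.com/ai-news/rss",      # placeholder feed URLs
    "https://example.org/tech-policy/rss",
]

def morning_scan(sources: list[str], per_feed: int = 5) -> list[str]:
    """Pull the latest headlines from each feed into one briefing list."""
    briefing = []
    for url in sources:
        feed = feedparser.parse(url)
        for entry in feed.entries[:per_feed]:
            briefing.append(f"{entry.title} ({entry.link})")
    return briefing

if __name__ == "__main__":
    for line in morning_scan(SOURCES):
        print(line)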

Jeff Jarvis [01:49:50]:
It deployed it as well. You didn't have to go into the terminal. You didn't have to put it on a server. You didn't have to do any of that. Didn't have to do anything. I've been arguing about this with Leo, because Leo is nerdy and he loves nerding out, and he wants everybody to be using the terminal. And I'm saying, you're not going to scale at that level. You're not going to scale if people have to install things to run them.

Jeff Jarvis [01:50:11]:
You want to say, look what I made, world, with a link.

Paris Martineau [01:50:14]:
I'm nerdy and I get nervous when I'm in the terminal.

Jeff Jarvis [01:50:18]:
Yeah.

Paris Martineau [01:50:18]:
I mean, I still do it, but even the barrier of going from Claude Cowork to Claude Code is a lot for the average person.

Jason Hiner [01:50:28]:
For sure. For sure. So Perplexity Computer: the name is a little questionable, but the product is really promising, for the reasons you just mentioned, Jeff. The ability for an average person to go in, describe what they want, and have it spit out a thing where you can just take a link and send it to anybody, that is the thing that is really powerful. And then, as you mentioned, Paris, the fact that it can essentially use best-of-breed models across all of these labs is also a bit of a superpower. So really interesting. I'm going to go now.

Jason Hiner [01:51:04]:
I'm going to send it back to Leo to do our last sponsor for the show, and then we'll come, come back and talk a little bit more about some tools.

Leo Laporte [01:51:12]:
This episode of Intelligent Machines brought to you by OutSystems, the number one AI development platform. The agentic shift is happening. You know that if you listen to the show, we are really moving beyond simple chatbots. And here's the good news. OutSystems is leading the agentic conversation. OutSystems helps businesses build AI agents that can actually do work. It's amazing. Things like taking actions, making decisions, integrating with data rather than just answering questions.

Leo Laporte [01:51:45]:
OutSystems is solving the talent gap. There really aren't enough AI engineers in the world, but OutSystems empowers the developers that your company already has to build at an elite level. It's like a superpower for devs. OutSystems is the secret weapon behind the world's most successful companies. And not just for little apps. These are massive, complex systems. Systems that run banks, insurance companies, government services. OutSystems even helps companies with aging IT environments bridge the gap to the AI future without a rip-and-replace nightmare.

Leo Laporte [01:52:20]:
And I can give you an example. I can give you several. They helped a top US bank deploy an app that lets their customers open new accounts on any device, delivering 75% faster onboarding times. They even helped a global insurer accelerate the development of a portal and app for their insurance agents, delivering a 360-degree view of customers and enabling those agents to grow policy sales. That's just a small sample of what OutSystems can do. OutSystems combines the speed of AI with the guardrails of low-code. It's actually a marriage made in heaven, the safest and fastest way for an enterprise to go from we need an AI strategy to we have a functioning AI application. Stop wondering how AI will change your business and start building the agents that will lead it.

Leo Laporte [01:53:09]:
Visit OutSystems.com/TWIT to see how the world's most innovative enterprises are using AI-powered low-code to transform. That's OutSystems, O-U-T-S-Y-S-T-E-M-S.com/TWIT, to book a demo and see the future of software development. OutSystems.com/twit. Let me thank them so much for supporting Intelligent Machines.

Jason Hiner [01:53:32]:
Now back to our intelligent hosts. Thank you, Leo. All right, so now it's time for the picks of the week.

Paris Martineau [01:53:43]:
Uh, Paris, would you like to get us started? I will. And this was an important pick for me to do in the week that Leo's not here. Listeners of the show will know that last week one of my picks was the New York Times Crossplay app, which I'm somehow more addicted to than I was last week. I have more than 12 games going on.

Jeff Jarvis [01:54:05]:
Leo has been playing mightily in our chat.

Paris Martineau [01:54:08]:
Leo, because I am trouncing him. I was really worried at first because I got such a bad rack of tiles when he started playing, but I beat him in the end, and we are rematching, and I'm still winning. But one of the reasons I think I'm beating him is I've gotten really into, over the last couple of years, more Scrabble strategy theorizing. And there's a great book and online resource called breakingthegame.net that covers all beginning, intermediate, and advanced Scrabble and Scrabble tournament strategy. And I'm choosing to share this on the show because I shared it last week with some of my other friends that I'm absolutely dominating in Scrabble, and it has not improved their ability to play. So I feel safe sharing it in a place where Leo could hear it. Um, but I don't know, check it out if you want to beat your friends more in Scrabble. It gives you some good stratagems to be thinking about.

Jeff Jarvis [01:55:08]:
I quote Leo from our WhatsApp. Yeah, we're a WhatsApp family now. You're a stone cold killer.

Paris Martineau [01:55:17]:
I've been sandbagged. I mean, our board right now is carnage. We've really played ourselves into several corners, none of which are particularly good. But I was very worried that I wouldn't beat him in the last game, because on the last turn he was up by, like, 30 points. But I think I ended up beating him by 2 in the end. So I don't know, get on the New York Times Crossplay app and use breakingthegame.net to trounce your friends even harder.

Jason Hiner [01:55:46]:
That's my pick of the week. Stone cold killer gives the pick. That's me. Very good.

Jeff Jarvis [01:55:51]:
All right, Jeff, how about you? All right, this is my one. I have more than one. I wanna mention this story just for the record: News Corp did a big deal, $150 million, with Meta, and Robert Thomson, who I disagree with constantly about all matters of internet and all that, said that News Corp is now basically an AI input company, which I found amusing. But that's not my pick. I could do a few different ones here. I could do a paper that's out that says we don't know how social media bans will affect youth, but we're doing it anyway. But I'll leave that aside.

Jeff Jarvis [01:56:25]:
I could do a nice New York Times feature about Bell Labs and all that has happened there. This is why I wrote an op-ed, a couple years ago now, begging for the soon-to-be-vacated Bell Labs in Murray Hill, New Jersey, to be turned into a museum. But instead I'm gonna do Walkman Land. So Paris, are you too young for the Walkman?

Paris Martineau [01:56:48]:
No, I had one once, and they also came back, I feel like, in the last 5—

Jeff Jarvis [01:56:52]:
they have to have been 10 years. So if you go to Walkman Land— well, I guess we can't show it. Can we pull it up or no?

Jason Hiner [01:56:58]:
We can't.

Jeff Jarvis [01:57:00]:
I'm working on it. Uh, he's working on it. Sorry, I should have warned you, but, you know, okay. Okay, so it is— uh, I knew Paris would do exactly that. And ooh, why don't you? So this is pages of Walkmen. I always wondered about the plural: Walkmans or Walkmen? Walkmen is right. Walkmen.

Jason Hiner [01:57:21]:
Good, good.

Jeff Jarvis [01:57:21]:
That's correct. Uh, I had this one. I had this one.

Paris Martineau [01:57:24]:
I had this one. You did? The Aiwa HS-PS008 is so pretty. Honestly, a lot of these are very pretty.

Jeff Jarvis [01:57:35]:
The Philips AQ 6492 is gorgeous. I'm up to 17 pages. I'm trying to see how many pages there are. 20 pages.

Jason Hiner [01:57:42]:
It goes on and on and on and on. All the Walkman models.

Jeff Jarvis [01:57:48]:
52 pages.

Jason Hiner [01:57:49]:
Yep. That's how many.

Jeff Jarvis [01:57:51]:
That is crazy. 52. Geez. And it was a life-changing thing.

Paris Martineau [01:57:58]:
Well, some of my friends have the Sony TCM-4500 from the My First Sony range. That's a real popular one among the Brooklyn crowd nowadays.

Jeff Jarvis [01:58:07]:
Oh, you mean like today they have one?

Paris Martineau [01:58:09]:
Like right now? Today. I know at least two people who have that in their home right now; I'll put it in the chat. And I always see it at people's apartments and take a photo of it and then never look it up.

Jeff Jarvis [01:58:24]:
Which I'm—

Leo Laporte [01:58:26]:
now I know.

Jeff Jarvis [01:58:26]:
This was a huge change, right? People could take their music anywhere. The bigger change, of course, came before that. I went to Greenbrook Electronics today, and I can't wait to take Paris and Leo there, because it's this weird kind of dusty museum store. And I had to buy a transistor for a class I'm teaching Friday. I'll put this in front of the camera.

Jason Hiner [01:58:43]:
Wow, look at that little thing!

Jeff Jarvis [01:58:47]:
Oh, little guy! The transistor, of course, replaced this: the tube. As Benito pointed out earlier, he still uses tubes because he's an audio freak. So anyway, this is what changed everything, because this is what enabled the portable radio. This is what made it so you could take music with you anywhere.

Jason Hiner [01:59:09]:
The transistor radio.

Leo Laporte [01:59:11]:
Right?

Jeff Jarvis [01:59:12]:
Amazing. But then you were stuck with radio, you were stuck with DJs, you were stuck with all that. The Walkman gave you the first control. And so I think it's important. So that's that. I'll mention one other thing, uh, where is it here: according to Edison Research, podcasts now lead AM/FM in spoken-word listening.

Jason Hiner [01:59:36]:
Really? First time this is—

Jeff Jarvis [01:59:38]:
it's crossed the Rubicon. I'm kind of surprised that didn't happen before, but a lot of our grandparents are still listening to AM radio.

Jason Hiner [01:59:47]:
And in factories and stuff like that, they still leave it on all day, right? And even restaurants: the back kitchen and the back offices and things like that.

Paris Martineau [01:59:58]:
A lot of the Brooklyn girlies also have AM radios. The coolest one is one of those under-cabinet radio setups. It also has a cassette player in it, and one of my friends who has one of those is moving to Chicago, and I'm hoping that I get it in the—

Jason Hiner [02:00:18]:
Oh, okay. You know, there's also this kind of comeback of the iPod.

Paris Martineau [02:00:25]:
So, um, because there are no new ones right now, it's— yes, it's like actually expensive to get an iPod.

Jeff Jarvis [02:00:32]:
So that's not just the New York Times making up a trend?

Jason Hiner [02:00:35]:
No. As a matter of fact, I saw Tony Fadell, who was one of the creators, or was on the team, tweeting out, like, look, I don't know if Apple's gonna start making it again, but it's official: this is going pretty big. And I think a lot of it is that it's the sort of anti-screen-time device, you know, with no notifications.

Jeff Jarvis [02:01:01]:
Yeah, that's when I roll my eyes at all that.

Jason Hiner [02:01:03]:
I do a little bit. Like, could I imagine going there? Probably not. But anyway, there's also the flip phone. I know some people that have done the flip phone thing as well.

Paris Martineau [02:01:13]:
Um, but one of my friends has a flip phone and I ridicule him every single time.

Jason Hiner [02:01:21]:
As you should. It would be like going back to the tubes when you have the transistor, you know.

Paris Martineau [02:01:26]:
He also got one, and he, like, tries to text from it. And we're in group chats and I'm like, Rick, you can't be doing this.

Jeff Jarvis [02:01:36]:
It's rude.

Jason Hiner [02:01:37]:
The iPod thing is so weird, because for the longest time we were just like, put this thing in my phone already, please. And now we're separating it again. That's funny. Now we're like, I want the iPad— the iPod back. All right, well, for my pick, mine is going to be something that I feel like should be so obvious on its face, and really should be a feature and not a company, and yet I almost can't live without it on a daily basis: this app Wispr Flow. I use it on Mac and on my phone as well. You just hold down one button and you can dictate to it, and it essentially puts what you say into clear text; it'll correct it and turn it into complete sentences. And I feel like this should not need to exist, right? Alexa, Siri, Google Home, Google Assistant should have done this really well a long time ago.

Jason Hiner [02:02:39]:
But, you know, it's funny: for all of the challenges that Apple has in AI, if it would just buy Wispr Flow and make it so that when you talk to Siri, it actually works every time, because this thing essentially works every time, or 90% of the time, the perception of Siri would go up so dramatically. It would be incredible. And that's what makes me think this is really something. There are a couple of other ones like this, a couple of competitors as well. Wispr Flow is probably the best known, and I find it to be one of the easiest to use, especially on the computer, because all you have to do is hit the function key on a Mac and it pops up and you can use it for anything. There are two things that I love about it. One is that it tracks how fast you go.

Jason Hiner [02:03:29]:
You know, most people can type, if you're really fast, like 75 words a minute, right? The average person is like 45 to 50 words a minute, and I think I'm about in that range. When you speak, you're up to about 125 to 150 words per minute. Now, I've noticed there are times when I can't do it, like when I'm in a cafe and it would be a little weird, but whispering is the answer, because I can say it like this, I can just whisper, and it actually works, which is pretty cool. And then the other aspect of it that I really enjoy is the fact that it sort of gamifies it a little bit, right? You can get the stats on how fast you're going. Like, I can be in bed when everybody's asleep, and I can whisper into it and dictate a note.

Jason Hiner [02:04:27]:
Then the last thing I love about it: I do my best thinking when I walk. I get my best ideas to write when I walk, but I could never type them out. So sometimes I would type on my phone and leave notes in Apple Notes and stuff. But with this, I have actually started writing some things while I'm walking, and it's been super, super handy. So that's mine. It's like the AI feature that really shouldn't be an AI feature. I feel like every one of these—

Paris Martineau [02:04:54]:
But AI is so good at it. I mean, I've talked about it on the show: I have a thing on my computer, MacWhisper, that runs WhisperKit transcriptions. That's how I transcribe a lot of interviews, and it's phenomenal.

Jason Hiner [02:05:06]:
It's local. I love it. So good. So if you're still typing most of your stuff out there, folks, you could use this tool, or use another one too.

Jeff Jarvis [02:05:18]:
No, no, no, I can't think without typing. You gotta use your fingers. I gotta use my fingers. I got to. That's part of my book, Hot Type; the end is a typographical autobiography where I talk about how the keyboard changed the way I thought, and then the computer changed the way I thought. Unless my fingers are poised over the home keys, I can't think.

Paris Martineau [02:05:45]:
No. Interesting. Interesting.

Jason Hiner [02:05:45]:
I wouldn't be able to do it. So, it's an interesting world. It is an interesting world. Yeah, those are all fun picks, those are all really fun picks. Well, Paris and Jeff, thank you for letting me be here and do this with you.

Jeff Jarvis [02:06:04]:
What a pleasure.

Paris Martineau [02:06:05]:
Really, really good job, Jason. Thanks so much for steering the ship.

Jeff Jarvis [02:06:08]:
This was phenomenal.

Paris Martineau [02:06:09]:
It was great. And we're finishing before a large cane emerges from offstage to drag you out of the podcast studio booth, which is—

Jason Hiner [02:06:16]:
That's right, there's the cane. All right, well, appreciate it. Thank you, everybody, for tuning in to Intelligent Machines. Leo Laporte will be returning, so you can count on that. And thank you for a great week. Of course, this show is back every week; Paris and Jeff will be here again, and Leo Laporte will be back. And you can count on even more news. However big the news was this week, it's going to be even bigger.

Jason Hiner [02:06:48]:
It never stops. I'm sure it never stops.

Paris Martineau [02:06:48]:
And where can people go to follow your work?

Jason Hiner [02:06:48]:
Yeah, thank you, Paris. So, thedeepview.com; you can find me there. Subscribe.thedeepview.com is how you can get our newsletter. Every day we send the newsletter with the top stories in AI: we pick 3 stories and we try to unpack them.

Jason Hiner [02:07:11]:
And then you can also find me, if you want my updates in real time, on Twitter, God help us all, at x.com/jasonheiner. And yeah, thank you again, and have a great rest of the week.

Leo Laporte [02:07:29]:
Hey everybody, Leo Laporte here, and I'm gonna bug you one more time to join Club Twit. If you're not already a member, I wanna encourage you to support what we do here at Twit. You know, 25% of our operating costs comes from membership in the club. That's a huge portion, and it's growing all the time. That means we can do more, we can have more fun. You get a lot of benefits: ad-free versions of all the shows, access to the Club Twit Discord, and special programming like the keynotes from Apple and Google and Microsoft and others that we don't otherwise stream in public. Please join the club if you haven't done it yet. We'd love to have you. Find out more at twit.tv/clubtwit.

Leo Laporte [02:08:13]:
And thank you so much.

Jason Hiner [02:08:17]:
I'm not a human being, not into this animal scene.

Paris Martineau [02:08:22]:
I'm an intelligent machine.
