Tech News Weekly 374 Transcript
Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.
0:00:00 - Mikah Sargent
Coming up on Tech News Weekly. Amanda Silberling is here and we kick off the show by talking about AI making us think less critically. Does it or doesn't it? Then we talk about a BBC study about AI chatbots not doing a very good job at summarizing the news. Afterwards, Emma Roth of The Verge stops by to give us an understanding of what Google's doing with machine learning and age verification, before we round things out with a story about Thomson Reuters winning the first major US copyright case against AI. All of that coming up on Tech News Weekly.
This is Tech News Weekly, with Amanda Silberling and me, Mikah Sargent, episode 374, recorded Thursday, February 13th, 2025: AI's First Major Copyright Loss. Hello and welcome to Tech News Weekly, the show where every week, we talk to and about the people making and breaking that tech news. I am one of your hosts, Mikah Sargent, and it is now the second, yes, yes, second Thursday of the month, which means we are joined by the awesome, the cool, the amazing Amanda Silberling. Welcome back, Amanda.
0:01:29 - Amanda Silberling
Hello, I feel like I just need you to follow me around whenever I walk into a room and then have someone be like the amazing, the cool, yes.
0:01:36 - Mikah Sargent
I will gladly do that. I love a sort of royal, regal opening announcement. Before the person walks in, we can say all of your titles and epithets and all that kind of thing. So okay, as many of you who tune in regularly know, the way that this works is we take the time to share some stories of the week. So that means that we will be chatting this week first about Amanda's story. Tell us what you got?
0:02:14 - Amanda Silberling
So this week I wrote about a study from Microsoft and Carnegie Mellon about how AI impacts our critical thinking skills when you use AI in the workplace.
And this is interesting in part also because it's from Microsoft, so that does sort of set the tone of the study a little bit, where they just randomly are like so, by the way, when you're using Copilot in Microsoft Word.
But they found that, quote, "used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved." So what they mean here, basically, is they did a study of about 300 people who use generative AI tools at work, like ChatGPT, that sort of thing, and surveyed them on how they use them, whether they're using them for tasks that they deem to be high critical thinking tasks. And basically what they're saying is, using things like ChatGPT at work does not make you worse at critical thinking. But if you rely on them too much, then when you end up in a situation where you have to think critically, because the AI can't address your prompt, then you are less primed to do so, because you are not as accustomed to doing that sort of thinking.
0:03:53 - Mikah Sargent
Okay, so this feels like another one of those. Um, every once in a while, actually more than every once in a while, you read a study and you go, well, yeah, that's kind of what this sounds like. Because at first, and on the face of it, it does sound like the sort of hot headline saying AI is making you dumber. But really, when critical thinking moments come up, you exercise those critical thinking skills less, because you instead, you know, most of the time rely on AI. Then the next time one of these moments pops up, you're not prepared for it, and so you're not going to be as good at it. I mean, yeah, if we don't practice skills, then we don't have those skills. And so I could say the same thing for, if I don't try to solve a Rubik's Cube every day, and then I go to try to solve a Rubik's Cube after four months of not trying to, I'm probably not going to be as good at it as I was whenever I was solving it every day.
It's kind of like, okay, yeah, that makes sense. One thing about this study that originally stuck out to me, or made me, you know, think about it, is that I think on its face it kind of reaffirms what people who are worried about the impact of AI are worried about, meaning that they have that concern that if we're not exercising the skills that we are handing over to generative AI, then we may not have those skills whenever we need them. But I also, I mean, if you read the text of the study and you look at the questions that they ask, there's a lot of critical thinking going on for the people who are looking at the responses from the generative AI and making sure that it's accurate.
0:05:58 - Amanda Silberling
Yeah.
Which is good. The study found that the more a user is aware of the potential downfalls of AI and where these systems fail, the more likely they were to be using critical thinking skills. And then users that had more confidence in these systems were found to over-rely on them, which again kind of is like, well, yeah, duh. But I think it's rare when we get studies that really illustrate how exactly this works, beyond just sort of the, well, this has to be making you dumber, right, because you're not thinking. But so then there is another quote that says potential downstream harms of Gen AI responses can motivate critical thinking, but only if the user is consciously aware of such harms.
0:07:15 - Mikah Sargent
Fair enough. So, yeah, you've got to kind of know about it in the first place, and I think that's where the literacy aspect comes into play. And I don't mean literacy in its classic sense, but just an understanding of how generative AI works. And unfortunately, these tools, I think in many cases, are being foisted upon individuals without much more than just, use this and do it. And that's where you don't get the opportunity to go, okay, here's how it works, here's what it does, here's what it can't do, and I'm going to use all of that to inform how I move forward and what I do with these tools. So, yeah, in that way, I guess it isn't providing the opportunity for the critical thinking. I do also find it, as you pointed out, delicious that Microsoft was a part of this thing, because I think of any company that is in the AI game, I guess Apple is now getting to that place, but for a while, Microsoft was the one company that was shipping direct, consumer-focused feature sets, more than we saw from other companies. You would see it in all of the tools that you're using at work and at home and everywhere in between. And so it is kind of, it's good in a way that Microsoft is the one that is being curious about this and making sure that we understand its impact, and maybe putting out a little bit of a heads up that, hey, you know, relegate the grunt work that you once did to these tools, but also make sure that you are maybe not relegating the higher-tier tasks there. Let it take care of the really kind of easy-peasy stuff and then you go from there.
I also saw an interesting aspect, too, of people being able to practice their, if you've got IQ, their EQ, their sort of emotional quotient, in the sense of taking an email that was formulated by AI and then using one's own knowledge of the culture of their work environment to make changes to that email so that it fit within the scope and the kind of unspoken political underpinnings of the work culture. And that is some very high-level critical thinking that we haven't really seen from AI. I mean, you ask AI to write three jokes and, I saw you posting about this recently, yeah, today, about the stupid and bad jokes that AI was telling. It is not good at that. And I think that's because there's a human nuance to that, just as much as there is to navigating a complex underpinning of relationships in the workplace, I guess is what I'm trying to get at.
0:10:44 - Amanda Silberling
Yeah, and I think this isn't even an AI-specific issue. But, like, if I told you right now, Mikah, I need you to contact my boss and tell her that I'm working on this story about turtles, I don't know, like, you wouldn't know how to do that. You would be like, well, do you normally email your editor? Do you normally Slack your editor? Do you text? Do you call? Do you use proper grammar and punctuation? Do you use emojis? Like, how are you communicating with this person? So, like, I'm always on Slack. So if I emailed my editor and was like, hello, I have a story idea, she would be like, why are you emailing me? Did you get hacked?
0:11:30 - Mikah Sargent
Like this is weird. Are you about to ask me to buy Apple gift cards?
0:11:34 - Amanda Silberling
Yeah, oh no, that does happen to us. There's like a scam where we get texts from people pretending to be our superiors at work, and they're like, can you buy me an Apple gift card? And I'm like, no.
0:11:48 - Mikah Sargent
No, that's happened to us too. But the funny thing is they used my boss's former last name, and so immediately we're like, well, I know that's not them, because, A, no one these days is using stupid SMS-like signature things anymore, and, B, that isn't even that person's last name anymore. So you played yourself. But anyway, that's an aside. In theory, that's supposed to get better, in a bad way, with the help of AI, where you can train it on someone's public persona. So maybe, if I downloaded all of your tweets, and it had the context to see that you share screenshots of you talking to co-workers in Slack, and then also used your verbiage to tie all that together, maybe, maybe it would do that. But the funny thing is, I think it would severely fail when it comes to me, because, frankly, I'm a big code switcher, and so when I'm talking outside of work, it's not the same person. Well, I mean, it's the same person, but it's another part of the same person, and I don't think that AI would be good at figuring that out. So, anyway, all very interesting. I think, though, looking at the time, it's time for us to take our first break here on the show, again joined by Amanda Silberling of TechCrunch for this week's episode of Tech News Weekly.
Now this episode is brought to you by the folks at Zscaler, the leader in cloud security. Very important. Enterprises have spent billions of dollars on firewalls and VPNs. Despite all that money, breaches continue to rise, with an 18% year-over-year increase in ransomware attacks and a $75 million record payout in 2024. These traditional security tools expand your attack surface with public-facing IPs that are exploited by bad actors more easily than ever with AI tools, and they struggle to inspect encrypted traffic at scale, which is what leads to and allows compromise. VPNs and firewalls also enable lateral movement by connecting users to the entire network, and that allows data loss via encrypted traffic and other leakage paths.
Hackers exploit traditional security infrastructure, using AI to outpace your defenses. So it's time to rethink your security. Don't let those bad actors win. They're innovating and exploiting your defenses. Zscaler Zero Trust + AI stops attackers by hiding your attack surface, making those apps and IPs invisible. It will eliminate lateral movement because users are only able to connect to specific apps, not the entire network. You get continuous verification of every request based on identity and context, plus simplified security management with AI-powered automation, and it detects threats using AI to analyze more than 500 billion, that's with a B, daily transactions. Hackers can't attack what they can't see, so protect your organization with Zscaler Zero Trust + AI. Learn more at zscaler.com/security, and we thank Zscaler for sponsoring this week's episode of Tech News Weekly.
All right, we are back from the break and it's time for my story of the week. Surprise, it's also about AI. Ha ha, we got you. This is a story from the BBC that is actually a study from the BBC. It's, I don't know, study is a loosely defined word these days, but I will tell you what this is. It is a research moment. The BBC took 100 of its news stories and asked ChatGPT, Copilot, Gemini and Perplexity to summarize them. It got journalists, who were, this is according to the BBC, relevant experts in the subject of the article, to rate the quality of the answers from each of the AI assistants. And I want to start before we even get to the results of this.
Anyone who has an iPhone, who has updated to, I guess, the last most recent version of iOS, will be very familiar with the fact that AI summarization can be pretty crummy in its ability to accurately summarize not just news but also any notifications that came through, because AI notification summaries were part of Apple's kind of feature push. And so somebody who had an internet-connected doorbell, for example, who would get maybe 20 notifications in a day of, you know, this person walked past, I detected a person outside your front door, would suddenly get a notification saying there are 15 people standing outside your door. What? No, that's not what's happening. And so Apple ended up rolling back AI news notifications in particular, because of the concern that one of these could actually have a greater impact, and so that's what the BBC wanted to look at. I own 51% of this company. No, sorry, 51% of all AI answers to questions about the news were judged to have significant issues of some form. What that means exactly is hard to say in terms of what the significant issue itself was. But on top of that, 19% of AI answers that actually referenced BBC content introduced, so added, factual errors: incorrect factual statements, numbers and dates.
And, as Anthony Nielsen in the Discord chat is saying, yeah, not sure why we needed summaries of summaries. Let's talk about the fact that, back at the news organization where I worked, when we turned on push notifications for the app, each notification we had created was also looked at by the newsroom manager of the day and, I think, even the developer of the push notification system before it ever went out, because that's a big thing and that's a big responsibility. And so all of that is already happening to provide a summary. Why do we need to summarize a summary? We don't, is the answer. But the idea was, of course, that if the BBC pushed six news stories to you, six push notifications, each of which is a summary, then you could get one summary that said, well, a helicopter crashed, and a royal baby was born, and the groundhog says six more weeks of winter, all in one notification. You wouldn't have to read through all of them. It just didn't work in many cases.
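[Editor's note: to make that failure mode concrete, here is a tiny, purely illustrative Python sketch, not Apple's actual algorithm, of how naively compressing repeated doorbell alerts into one line produces exactly the misleading summary described above. Every name in it is invented for the example.]

```python
# Toy illustration (not Apple's real summarizer) of how collapsing many
# notifications into one line can turn repetition into a false fact.
from collections import Counter

# Fifteen alerts that may all describe the same person walking past.
alerts = ["A person was detected at your front door"] * 15

def naive_summary(notifications: list[str]) -> str:
    """Collapse duplicate alerts into one line by counting them."""
    counts = Counter(notifications)
    # This conflates "15 detections" (possibly one person, 15 times)
    # with "15 people," the kind of error the BBC study flags.
    return "; ".join(f"{n} people detected at your front door"
                     for _, n in counts.items())

print(naive_summary(alerts))
# Output: "15 people detected at your front door"
```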
Some of the stuff that the BBC saw included Gemini incorrectly saying that the NHS did, I had to reread this because I didn't know that this was the case, Gemini incorrectly said, so it's kind of a double negative here, that the NHS did not recommend vaping as an aid to quit smoking. Which means that the NHS does indeed recommend vaping as an aid to quit smoking, which I found interesting. But that's an aside, and maybe that's why Gemini was also confused. Who knows?
ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left, and Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed, quote, restraint, and described Israel's actions as aggressive. And that last one is the real problem, because if we're talking about international politics and international news, and we have people who are getting their information from push notifications instead of other places, that could be a problem. So, overall, the BBC study revealed that Microsoft's Copilot and Google's Gemini had more significant issues than OpenAI's ChatGPT and Perplexity, which is interesting given that Copilot is, for the most part, a wrapper over the top of OpenAI's own GPT system. In any case, I wanted to hear your thoughts on this, in particular as kind of an active journalist who is, in some cases, probably ending up in some of these notifications, right?
0:22:07 - Amanda Silberling
Yeah, I feel like I am biased because I'm like, well, they're trying to do my job, so I don't like that. But I also am biased towards humans, and I think that humans already do not do the greatest job at media literacy. And if we are struggling to correctly interpret headlines written by people, that are copy edited by multiple people, then we're also going to struggle to glean the right information from potentially incorrect push notifications. So the study here doesn't surprise me in its results. It does concern me, and I think this is just another example of, like, do we actually need summaries of summaries? It's already an issue that people read headlines and not articles, and now people would theoretically be reading summaries of headlines of articles.
0:23:16 - Mikah Sargent
It's like a game of telephone, and we've all played telephone. We know that the reason why telephone is a game is because in the end, you end up with something completely different from what you started with. That's the whole thing about telephone. And the more it gets distilled and distilled and distilled, the further from the initial accuracy it is. This is the thing: for someone who doesn't understand a lot of, or some of, the technology behind this, so someone outside of what we do, they're less likely to be thinking about whether they should trust this, right. They'll take a lot of it just as a given. So the companies that are making this have two problems. One is for those folks who will take this stuff as given and then use that information in ways that run contrary to fact, because it is not fact but instead is inaccurate. But then they also have the issue of those of us, you and I, who do know this and who are keeping an eye on this and therefore do not trust this stuff. And even if the companies work to, well, you know, Apple's rolled that back. I have AI notification summaries turned off across the board, even though it was only disabled for news. I don't like it. It has wronged me. It wasn't even a wrong-me-once, wrong-me-twice situation. No, it got something wrong once, and I ended up missing more context of a message that was sent to me, and that was it. I don't see myself ever turning it back on. I'll be honest, and I have held true when I've said that in the past about Siri. I very rarely use Siri, because it just messed up too many times in the past and I just don't trust it to do things, so it's out of my muscle memory almost entirely. My muscle memory is, I open up whatever I need to do and boop, boop, boop, bop and make that thing happen, and I feel the same way about this. So I think that this is a two-pronged issue for these companies, where the people who are in the know are not going to be trusting it, and they've got a lot of uphill work to do if they want to prove that it can work. And then the people who aren't in the know, the impact of that, as we're seeing here.
I do find it interesting, too, with this BBC piece. Deborah Turness, I believe, T-U-R-N-E-S-S, who is the CEO of BBC News and Current Affairs, in a blog post about this study, talks about AI distortion as the threat to trusted information. In the same post, Turness says, quote, don't get me wrong, AI is the future and brings endless opportunities, and then goes on to talk about how the BBC is using AI for different parts of its company. And yesterday on Intelligent Machines, née This Week in Google, we talked a little bit about how we see the future in terms of journalism and AI, and the need for those organizations to work together to do better. Because this is the thing: AI is not going anywhere, and so if it's not, then perhaps it means trying to figure out a way for these two things to coexist, and coexist in accuracy. And I don't know. I don't know if we're going to get there, or if it's just going to be this continued plowing along of just break and fix, and break and fix, and only kind of fix.
0:27:28 - Amanda Silberling
Yeah. It strikes me as weird that in this blog post she's like, well, AI is the future and brings opportunities. Because, like, what opportunities? She doesn't say. Just, opportunities, yeah.
0:27:59 - Mikah Sargent
Yeah, yeah. I just think that right now we already have issues with distrust in the media and problems with media literacy, and adding this additional layer of obfuscation doesn't really help either of those issues. And I agree with you; I think that you are right. I will say, as the last kind of point for this, I am glad to see the study that you reported on, and the study that the BBC reported on and completed itself, being done, because it means that we are focusing on this new field with scientific studies and actual research, and I hope we continue to do that and continue to learn more and look at the impact generative AI is having across the board. Amanda Silberling, I want to thank you so much for taking the time to join us today. If people would like to follow along with your work, where is a good place to go to do that?
0:28:43 - Amanda Silberling
I am on Bluesky at amanda.omg.lol, which is a URL that I have and I'm using. I'm not really posting on X these days, but I'm there also, and the little thing is there. And I'm on TechCrunch writing stuff.
0:29:03 - Mikah Sargent
Nice. Thank you so much. We appreciate it and we'll catch you again next month. Cool, bye. Alrighty folks, we're going to take another quick break before we come back with our next story, about Google using machine learning, which is another phrase for AI, to figure out a user's age.
But first let me tell you about Veeam, who are bringing you this episode of Tech News Weekly. You've heard this rhyme at this point: without your data, your customers' trust turns to digital dust. And that's why Veeam's data protection and ransomware recovery ensures that you can secure and restore your enterprise data wherever and whenever you need it, no matter what happens. As the number one global market leader in data resilience, Veeam is trusted by more than 77% of the Fortune 500 to keep their businesses running when digital disruptions like ransomware strike. And that is because Veeam lets you back up and recover your data instantly across your entire cloud ecosystem, proactively detect malicious activity, remove the guesswork by automating your recovery plans and policies, and get real-time support from ransomware recovery experts. Data is the lifeblood of your business, so get data resilient with Veeam. Go to Veeam, that's V-E-E-A-M, dot com to learn more, and we thank Veeam for sponsoring this week's episode of Tech News Weekly.
All righty, we are back from the break, and that means it is time for our next piece about the future of Google's attention on the age of its users. Joining us to talk about what's changing is Emma Roth of the Verge. Welcome back to the show, Emma.
0:30:53 - Emma Roth
Hi, yeah, thanks again for having me.
0:30:56 - Mikah Sargent
Yeah, pleasure to have you here. So Google has announced that it will begin using machine learning, interesting that it uses that word instead of AI, which is what it would use in other places, but anyway, to estimate user ages. Can you walk us through how the system works and why Google is announcing that it's implementing this?
0:31:16 - Emma Roth
Yeah, definitely. So this whole thing is kind of just a test for now here in the US, so it hasn't officially been implemented broadly yet. But what it's going to do is it's going to use the data that it has about you, maybe your browsing history, or your YouTube activity, like the videos you watch on YouTube, and the age of your account, feed that into this machine learning model, and it's going to try to determine your age. And I think this is kind of coming about right now just because of all the child safety laws that are coming up in many states across the US, and also federal lawmakers are really paying attention to this issue.
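[Editor's note: for readers who want a concrete picture, here is a minimal sketch of what age estimation as a supervised classifier over account signals could look like. It is purely illustrative: the features, training data, and labels are all invented for the example, and Google has not published how its model actually works.]

```python
# A toy age-estimation classifier over account signals (not Google's
# actual system; every feature and value here is made up).
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

# Hypothetical features per account:
# [account_age_days, avg_video_length_min, share_of_gaming_videos,
#  share_of_news_videos, searches_per_day]
X_train = np.array([
    [3650, 22.0, 0.05, 0.40, 6.0],  # long-lived, news-heavy account
    [400,   8.5, 0.70, 0.02, 2.0],  # newer, gaming-heavy account
    [5000, 30.0, 0.10, 0.55, 9.0],
    [250,   6.0, 0.80, 0.01, 1.5],
])
y_train = np.array([1, 0, 1, 0])    # 1 = likely 18+, 0 = likely under 18

model = GradientBoostingClassifier().fit(X_train, y_train)

# A new account gets a probability; an "under 18" or low-confidence score
# would trigger the fallback Emma describes: asking for an ID or card.
new_account = np.array([[300, 7.0, 0.75, 0.01, 1.8]])
print(model.predict_proba(new_account))
```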
0:31:58 - Mikah Sargent
Definitely. This is an interesting thing because, you know, depending on somebody's individual interests, and perhaps, I don't know, they up to this point used Vimeo or something, they may have just created an account, and they may be super into videos of people building Lego creations, but they like those ones that are giant blocks instead of the ones that are little blocks. That could, you know, in theory, maybe mismark someone as a different age. But the model is supposed to analyze user behavior, supposed to look at the browsing history and the YouTube activity, like you talked about. Is there any talk about how it's balancing this accuracy with privacy concerns, given that it has to kind of collect and process all of this data, and given that Google has a little attention on it for the way that it serves us ads and watches us across its different properties? Any talk about, hey, we're still trying to maintain privacy here?
0:33:08 - Emma Roth
Yeah, they have said that they aren't going to be collecting any additional data on top of what they already have for this initiative, so it's just going to be using the existing information. But privacy advocates are kind of concerned about this type of age verification method, because it could potentially open the door for, like, additional data collection.
0:33:34 - Mikah Sargent
Now, one of the key motivations for this change seems to be, as you might imagine, the growing regulatory pressure, including laws like the Kids Online Safety Act, that's KOSA, and COPPA 2.0. How does Google's approach kind of align with these legislative efforts? Is this written into the legislation, that age verification in such a direct way needs to happen? Is it more vague? Kind of, what's the current state of things?
0:34:11 - Emma Roth
Yeah, so there isn't really, like, one set age verification method, and it is kind of written in a more vague way in these laws. And what these laws do is they want to apply protections for minors on these platforms, and they want to keep them away from potentially harmful content and also from tracking and data collection. And by adding age estimation, Google will automatically put these accounts into certain settings, so they'll have, like, a safe search filter on for kids under 18, and they'll also have restrictions on certain types of YouTube videos. And that does kind of fall in line with what these laws would require. But it's just tough for lawmakers right now to, like, determine what is the best age verification method, and it's kind of a challenge for these online platforms as well.
0:35:11 - Mikah Sargent
Understood. Now Google plans to notify users when it detects that they may be under 18 and offer verification options like government IDs and credit cards. Do we know what happens if users at that point don't verify their age? Are they going to be restricted from accessing certain features? What's kind of the next step in clamping down once it's been, I guess, not confirmed that they are of the proper age?
0:35:41 - Emma Roth
Yeah, that's a very good question, and Google hasn't really provided an answer on that. But I would assume that you would be kind of blocked out until you do verify your age with a government ID or a credit card, which some people might argue is an invasion of privacy. But yeah, it does seem like you might be out of luck there.
0:36:09 - Mikah Sargent
Understood. The announcement also highlights new parental controls and Family Link updates, which I saw you talked about in your piece. How do these changes help parents manage their children's online experiences more effectively?
0:36:25 - Emma Roth
Yeah, so one of the biggest updates Google announced was that they're bringing School Time to Android devices. So that means parents can limit the calls and messages that their child receives on their phone during school, so they can basically make it so their child doesn't receive any of these types of distractions. And they're also adding a way for parents to add and manage contacts directly through the Family Link app for their child's phone.
0:36:58 - Mikah Sargent
Got it. And then, last but not least, I'll ask you, with platforms like Meta also exploring AI-based age verification, is this the future of online age restrictions as far as we can tell, or is this maybe kind of the starting point and we've still got these companies that have a lot more work to do? I always like to end with a little bit of a crystal ball question. It's hard to know for sure, but kind of what's your take on where things are right now in your reporting on this and kind of how companies have responded?
0:37:29 - Emma Roth
Yeah, I think right now it does seem like things are leaning towards this AI age verification, like, using existing data. I think a lot of the other methods, like showing your ID or using your credit card, seem a little bit harsher and unpopular at this time. But I think that platforms are going to continue to refine this a lot more. I think there's definitely more to be done in this space, and there are going to be changes down the line, I'm sure.
0:38:05 - Mikah Sargent
Absolutely. Emma Roth, I want to thank you so much for taking the time to join us today on the show to give us an understanding of kind of where things are in terms of Google looking out for those of a specific age and trying to verify that going forward. Now, after this test kind of finishes up, we'll be able to figure it out. Or rather, that part's already happened. What I mean to ask is, if people would like to follow you online, where can they go to do so?
0:38:35 - Emma Roth
Yeah, you can follow me on The Verge, or you can find me on X at EMRoth08.
0:38:44 - Mikah Sargent
Beautiful. Thank you so much and we'll see you again soon. Thank you. All righty, we've got a quick break before we come back with the final story.
This episode of Tech News Weekly is brought to you by CacheFly.
For over 20 years, CacheFly has held a track record for high-performing, ultra-reliable content delivery - serving over 5,000 companies in over 80 countries. At TWiT.tv we've been using CacheFly for over a decade, and we love their lag-free video loading, hyper-fast downloads, and friction-free site interactions.
CacheFly: The only CDN built for throughput! Ultra-low latency Video Streaming delivers video to over a million concurrent users. Lightning Fast Gaming delivers downloads faster, with zero lag, glitches, or outages. Mobile Content Optimization offers automatic and simple image optimization so your site loads faster on any device. Flexible, month-to-month billing for as long as needed, and discounts for fixed terms. Design your contract when you switch to CacheFly.
CacheFly delivers rich-media content up to 158% faster than other major CDNs and allows you to shield your site content in their cloud, ensuring a 100% cache hit ratio.
And, with CacheFly's Elite Managed Packages, you'll get the VIP treatment. Your dedicated Account Manager will be with you from day one, ensuring a smooth implementation and reliable 24/7 support when you need it.
Learn how you can get your first month free at cachefly.com/twit. That's C-A-C-H-E-F-L-Y dot com slash twit.
All right. So during the show I received word that, unfortunately, our next guest is not going to be able to join us for the episode. All is well in the world, no worries there, but I am going to tell you a little bit about the piece from Wired's Kate Knibbs. Kate has been on the show before, and Kate wrote the piece over at Wired about Thomson Reuters winning the first major AI copyright case in the US. They sued, and successfully so, the AI startup Ross Intelligence, claiming that it copied materials from Thomson Reuters' Westlaw database. The judge ruled that Ross Intelligence's actions violated copyright law. This, of course, meant that the judge rejected all of the startup's defenses, including its claim that using Westlaw's legal research content fell under fair use, and, of course, fair use was a central point in this case. Despite that, the court found Ross's use to be infringing, saying that the company had created a market substitute for Westlaw. Back at the news organization, you know, the push notification stuff, I remember when we started to teach everyone a little bit about fair use, and we had a fair use lawyer who was also a journalist at the company, and it was kind of this.
For those of you out there listening, there's a really interesting field regarding animals, and I've talked about it before, called chicken sexing. Essentially, it is the job of these individuals, chicken sexers, to very quickly tell the difference between a female and a male chicken, and unfortunately it means disposing of the male chicken and continuing the female chicken along the line. But chicken sexing is taught in such a way that you just kind of instinctively learn the difference between the two, because when chickens are that small, when they're that fresh, it's very difficult to actually tell the difference without tests, and also without magnifying glasses and lots of other things. And so the way that they kind of teach new people is by just having them sort through chickens, and then they get feedback on if they've got it right or they don't. And you have to do it a lot, and then eventually, somehow, you just instinctively learn the difference between the two chickens.
And so that is my way of saying that we kind of employed that teaching technique at the news organization and talked about fair use and what is allowed, what's not, and that argument of being transformative. One of the big aspects of that is, are you taking what someone else has created or put out there and doing the exact same thing with it? Because that's not transformative. Nothing is transforming. Nothing is changing. You're just doing the same exact thing. And that was the argument here: if a company was going to claim fair use, you have to have transformation, and there wasn't transformation. Instead, it just wanted to be a replacement, or a, quote, market substitute, for Westlaw from Thomson Reuters.
There's a lot that's involved in this. I think one of the big things: I've always known Reuters and, to an extent, AP to be two big organizations that are very protective, oh, Getty Images is another one, incredibly protective of the stuff that they own. So I'm not surprised to see Thomson Reuters, A, involved in one of these AI copyright cases and, B, win one of these AI copyright cases. According to the Wired piece, it is a significant blow to AI companies, because a huge aspect of AI, as we just talked about yesterday on Intelligent Machines, is the scraping of data from various online buckets and the regurgitation, in some way, of some of that information. In order to train those models, they need to be able to gather this information from different places. And it comes down to, as Jeff Jarvis, you know, talks about:
Is there a difference between a human being and an AI reading something and using that to inform their response, its response, going forward? If I read, I mean, literally what I'm doing right now: I have read an article from Wired, and now I am talking about that article from Wired, and many of you, some of you, hopefully many more, are subscribers on the network and therefore are paying for what I'm creating right now, and the sponsors that I just talked about are also paying to be on this episode. So I am getting paid for reading something that I read online and transforming it to contextualize it and bring it together for this one-way conversation that we're having right now. So is there a difference between me doing that and getting paid for it? Because that's where we've seen the argument before, right? Well, when OpenAI does it, they're scraping those things and then you're paying for it. It's selling a service. But is not the job of a commentator a similar thing? And then is there a difference between the two? In any case, it does mean, whether it's a blow, I don't know, but a significant moment for AI companies, because they rely on that scraping and reproduction of those materials for training, and so this could make it harder for AI developers, potentially, to claim fair use. Once you've got precedent, then it goes on from there.
Ross Intelligence had already shut down in 2021 due to the financial burden of the lawsuit. Again, Thomson Reuters, I'm not surprised. It highlights how legal challenges can affect smaller AI startups before they get a chance to fight in court. And the case is part of a broader wave of AI copyright lawsuits that we're seeing out there, with companies like OpenAI and Google also facing legal scrutiny over whether their training data infringes on intellectual property rights. We've seen some companies choose to partner up with different news organizations and other online publications and services to kind of combat those concerns. We've seen Adobe figure out a way to do it without copyright infringement, as far as we've been able to tell, in order to create its generative AI tools, so that it stays out of any of this copyright issue. It's kind of fascinating to see how these different companies are navigating this space. But legal experts say the ruling will, or could, but probably will, have major implications on the AI industry, because, again, we're looking at precedent here. An interesting aspect of this, though, is that AI models in their current form are huge data gobblers, right?
They gather a bunch of information, and it seems to be the case that the more data they have, the more accurate and capable they become. And that has led to an issue where we're coming up against the cap on the data that's available to these AI systems. So if you get to the end of the data, what do you do? Well, it turns out, you use AI. So many of these companies are starting to look at, and in some cases are generating, data using AI that can then be fed to the AI to train the AI. It's kind of wild. Is it a snake eating its tail? It's not eating its tail. It's like a snake letting its tail pass through it, and more tail keeps being made. So it's sort of like a, oh, it's a centipede eating its tail, except you don't die. Instead, you just get bigger and bigger each time as you eat your tail. I don't know, it's a cheat-code centipede. And I wonder if we're going to see more focus toward that AI-generated data, away from the web in general and the quote-unquote open web, in order for these companies to protect themselves from the way that this ruling could play out. I plan to keep an eye on Kate Knibbs' work in this regard as we start to see more AI copyright cases make their way through the courts and have more rulings, and then we'll have Kate on in the future to kind of give us a recap or a current understanding of where things are as far as AI rulings go. But head over to wired.com to check out Kate Knibbs' reporting on Thomson Reuters winning the first major AI copyright case in the US. That is going to bring us to the end of this episode of Tech News Weekly.
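[Editor's note: as a rough illustration of that loop, here is a minimal Python sketch of synthetic-data training: a model generates candidate examples from an existing corpus, a quality filter keeps the plausible ones, and the survivors are folded back into the training set. Both generate and quality_score are hypothetical stand-ins, not any real vendor's API.]

```python
# A minimal sketch of a synthetic-data augmentation loop. "generate" and
# "quality_score" are hypothetical placeholders for real model calls.
import random

def generate(seed_corpus: list[str]) -> str:
    """Hypothetical model call: produce a new example from existing data."""
    return "paraphrase of: " + random.choice(seed_corpus)

def quality_score(example: str) -> float:
    """Hypothetical filter: score how usable a synthetic example is."""
    return random.random()

corpus = ["real article A", "real article B", "real article C"]

for _ in range(100):
    candidate = generate(corpus)
    # Filtering matters: training on unfiltered model output tends to
    # compound errors, the "snake eating its tail" worry discussed above.
    if quality_score(candidate) > 0.8:
        corpus.append(candidate)

print(f"{len(corpus)} examples after augmentation")
```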
My show publishes every Thursday at twit.tv/tnw. That is where you can go to subscribe to the show in audio and video forms. If you want all of our shows, yeah, it's an infinity centipede, an infiniti-pede, I don't know. If you want all of our shows ad-free, consider joining the club at twit.tv/clubtwit. First and foremost, you can join the club for two weeks for free, so you can check it out and see if it's for you, and then after that it's just $7 a month. That's it. You get every single one of our shows ad-free; it's just the content. You get a warm, fuzzy feeling knowing you're helping support what we do here on the network. You get access to the Twit+ bonus feed with extra content you won't find anywhere else. You get access to the members-only Discord server, which is a fun place to go to chat with your fellow Club Twit members and those of us here at Twit. It is a fantastic, fun and great time, and we love our Club Twit members with our whole hearts. So thank you for considering subscribing at twit.tv/clubtwit, and thank you to those of you who are subscribed. And head to twit.tv/clubtwit/referral, because that is where you can go to invite your friends to join and end up getting months of Club Twit for free. If you'd like to follow me online, I'm at Mikah Sargent on many a social media network, or you can head to chihuahua.coffee, that's C-H-I-H-U-A-H-U-A dot coffee, where I've got links to the places I'm most active online.
Be sure to check out my other shows, which published today: Hands on iOS, Hands on Tech, and, why am I forgetting, oh, iOS Today is the other one. Did I say Hands on iOS? iOS Today, Hands on Tech, Hands on Mac, those are my three other shows. Can you tell I've got a lot of shows? And I did a lot of shows this week, because Leo is out and so there's been a lot going on.
But thank you so much for being here with us this week, and a special shout out to John Ashley, who was incredibly helpful this week in booking guests for the show. And even though we ended up not being able to have that guest, it was because of John that we were going to have that guest, and because of John that we had Emma as well. So a special shout out for his help, with Leo being out. It was awesome to have. Anyway, that's all, those are all my shout outs. Have a good rest of your week, and happy Valentine's Day to those of you who celebrate. The card-and-gift-giving holiday started in, like, 650 BC or 650 AD? It came around a long time ago, St Valentine's Day. Anyway, bye.