Transcripts

Tech News Weekly 337 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

0:00:00 - Mikah Sargent
Coming up on Tech News Weekly. Jennifer Pattison Tuohy is here, and Jennifer talks about Matter 1.3, the latest spec of the framework for communicating with the smart home, and some of the new device types and features that you can expect in this version. Then I talk about some new accessibility features making their way into Apple devices, including one I'm super excited about, because it means I may not have to worry about getting sick in the car when I'm reading on my iPhone or my new iPad. Then the one and only Jason Howell joins us to talk about what all was announced at Google I/O. Boy, was it a lot. Before we round things out with Pranav Dixit of Engadget, who joins us to explain what the heck OpenAI announced on Monday and what some of the features of its new GPT-4o model are. All of that coming up on Tech News Weekly.

0:01:00 - VO
Podcasts you love. From people you trust. This. Is TWiT.

0:01:09 - Mikah Sargent
This is Tech News Weekly, with Jennifer Pattison Tuohy and me, Mikah Sargent, episode 337, recorded Thursday, May 16th, 2024: AI News From Google I/O and OpenAI. Hello and welcome to Tech News Weekly, the show where, every week, we talk to and about the people making and breaking the tech news. I am one of your hosts, Mikah Sargent, and, as this is the third week of May, we are joined by the wonderful, the awesome Jennifer Pattison Tuohy of the Verge. Not the Burge, what's the Burge? Welcome back to the show, Jennifer.

0:01:46 - Jennifer Pattison Tuohy
Hi Mikah, always a pleasure to be here. The third Wednesday, er, Thursday of the month is my favorite day of the month.

0:01:55 - Mikah Sargent
You know, just last week it was Amanda Silberling, and Amanda Silberling was celebrating her birthday on that Thursday. And I said, that's the second host who was celebrating a birthday on Tech News Weekly, so now I'm kind of curious if the rest of them will be as well. I haven't looked yet at mine. I don't know if my birthday falls on a Thursday this year, but it'd be pretty cool if it did.

0:02:23 - Jennifer Pattison Tuohy
In any case, Thursday is a good day of the week to have a birthday, because then you get like a whole extra-long weekend to celebrate. That's a good point, that's a good point.

0:02:32 - Mikah Sargent
Yeah, I have to, I don't know, call the fates and say, can we do Thursdays please? Anyway, this is all about stories of the week, the stories that we think are of the most interest that have come out since we last spoke, and so, as is always the case, Jennifer, you will kick off with your story of the week.

0:02:56 - Jennifer Pattison Tuohy
Well, I'm glad you said since we last talked, because this actually came out last week, but it's still a big story, especially in my space, which is that Matter 1.3 launched. So, for those not familiar, Matter is an interoperability standard, a protocol, for the Internet of Things, so for devices that work in your smart home, and it's been around for a couple of years. It was founded by Apple, Amazon, Google and Samsung. Basically, it's supposed to make the smart home work better, and it comes out with a new release every six months: the Connectivity Standards Alliance, which organizes and develops Matter, releases a new spec every six months. So now we're in the spring, or almost summer, and they've come out with 1.3. This is the third update to the spec since it launched in 2022.

And it was a doozy. It was an exciting one. It brought a lot more appliances to the standard. So this basically means that more devices, more categories of devices in your home, will now work with the standard. And the real benefit of this for most people is that, if you have a smart home and you're fed up with having to use one voice assistant to control one thing, your iPhone to control another, and an Android phone to control this one, this makes everything work with everything. What it's added now is microwaves, ovens, electric cooktops, extractor hoods and dryers, which join washing machines, fridges and a whole slew of appliances. So basically, now you can make your kitchen smart, you can make your laundry room smart, and it all works locally. Today there are smart appliances, but most of them are cloud based. So this is kind of exciting from a smart home perspective. Having new appliances and new devices that work with Matter is a great step forward for this standard. It basically is going to make it so much easier for us to use smart devices and to get them to communicate with each other, and we won't have to rely on figuring out whether it works with Zigbee or Wi-Fi or Bluetooth. If it works with Matter, it will communicate with everything in your home.

But what I got most excited about with this new release is that they've added energy management. Now, energy management doesn't sound as exciting as gadgets, but it is a huge step forward for the smart home. In my mind, it takes the smart home from sort of niche to necessary. I mean, you and I, I know we both love to be able to turn our lights on and off and change the color with our voice and our phone. But not everyone wants to do that or really cares about doing that. But everyone wants to save money and most people want to save energy.

Energy management, in this time of climate crisis and energy crisis, is a huge, huge thing that the smart home can help us with, and with the Matter 1.3 spec they've built in support for energy management. What that basically means is that any smart home platform your Matter devices are connected to can read the energy that your devices are using, can process that data, and can help provide you with information about how your home is consuming energy. Then the hope is, and this is already the case in some platforms, that it will actually be able to intelligently manage your energy for you, so sort of decide when to run your dishwasher or when to run your dryer, when the energy is the cleanest or the least expensive. And the key thing that came out with this spec is electric vehicle management. This is now part of Matter as well: EVSEs, electric vehicle supply equipment. So when you're charging your EV, and hopefully this will happen when platforms adopt Matter 1.3, you could program in, say, I need my car to have 80 miles of range by 3 pm this afternoon, and I want you to use the cheapest available energy between now and then to make sure it happens.

I mean, that is the kind of useful thing the smart home really was designed for, and to date, while it's possible, it's not easy. So, yeah, this is all kind of exciting. There's lots of stuff that Matter 1.3 brought on top of all of that, but those are the things that I've been most excited about. I love the smart kitchen stuff, and the smart energy, I think, is just something that most people will eventually use. I mean, even if you're not interested in the smart home today, I think appliance manufacturers and energy companies are really going to start pushing consumers towards using these types of tools, because they are going to be pretty much vital to helping reduce our reliance on fossil fuels and to encouraging better use. Just on that last part, yeah, as people's Whirlpool from 20 years ago finally goes out and they buy the new thing, whether they intended to or not, they're more likely to get these smart features and then start to use them and understand them.
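To make the cheapest-energy charging idea concrete, here is a minimal sketch in Python. It is purely illustrative and uses made-up assumptions (roughly 20 kWh for 80 miles of range, a 7 kW charger, invented prices); it is not a real Matter, EVSE, or platform API, it just shows the kind of decision a platform adopting Matter 1.3 could make against a price forecast.

```python
# Purely illustrative sketch, not a real Matter, EVSE, or platform API.
# It models the "80 miles by 3 pm, cheapest energy" idea: given forecast
# prices per hour and the energy still needed, pick the cheapest hours
# before the deadline to enable charging.
from dataclasses import dataclass

@dataclass
class HourSlot:
    hour: int             # hour of the day, 0-23
    price_per_kwh: float  # tariff or forecast price for that hour

def plan_charging(slots, kwh_needed, kwh_per_hour, deadline_hour):
    """Return the hours to charge in, choosing the cheapest hours before the deadline."""
    usable = [s for s in slots if s.hour < deadline_hour]
    usable.sort(key=lambda s: s.price_per_kwh)  # cheapest energy first
    plan, charged = [], 0.0
    for slot in usable:
        if charged >= kwh_needed:
            break
        plan.append(slot.hour)
        charged += kwh_per_hour
    return sorted(plan)

# Assumed numbers: ~80 miles of range is roughly 20 kWh for many EVs,
# the charger delivers about 7 kW, and the car must be ready by 15:00.
hourly_prices = [HourSlot(h, p) for h, p in enumerate(
    [0.10, 0.09, 0.08, 0.08, 0.09, 0.12, 0.18, 0.22, 0.25, 0.20,
     0.15, 0.14, 0.13, 0.16, 0.21, 0.24, 0.26, 0.28, 0.27, 0.22,
     0.18, 0.15, 0.12, 0.11])]
print(plan_charging(hourly_prices, kwh_needed=20, kwh_per_hour=7, deadline_hour=15))
# -> [1, 2, 3]: the three cheapest early-morning hours cover the ~20 kWh needed
```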

0:08:19 - Mikah Sargent
This is interesting to me. You talked about energy management being part of this spec update. When you say that energy management is part of this, are they building on the Matter spec such that Matter itself is able to look at these devices and do the energy management, or are they just making Matter able to read the energy management that these devices are already doing? Because this is something that, and correct me if I'm wrong, seems to run contrary to what the initial introduction of Matter was, where it was simply supposed to be kind of a universal language for all of these devices to connect. Now it's like they're building on little worker bots or something. Is that the case?

0:09:13 - Jennifer Pattison Tuohy
I see where you're going. So Matter itself, as you're rightly saying, isn't a platform, it isn't a feature. You're not going to use Matter as a user. You're going to use Google Home or Apple Home or Samsung SmartThings, but what Matter is doing, from the communication side, is allowing these devices to communicate that type of data.

0:09:43 - Mikah Sargent
Got it. Okay, good.

0:09:44 - Jennifer Pattison Tuohy
So when Matter first came out, it supported smart plugs, but all you could do if you used Matter in your smart home platform with a smart plug was turn it on or off. And there are lots of smart plugs out there that have energy capabilities, I should say not energy management but energy reporting capabilities. The management is obviously the tier above, so, yeah, using the word management might be a little confusing. It's the reporting. The devices are reporting at the level of Matter. They can report their actual and estimated measurements, including how much power they're using, the voltage, the current, real-time use and sort of over-time energy consumption or generation. So, for example, at some point solar panels may be part of Matter, so you could actually calculate how much energy you're generating versus what you're using.

And so, yeah, what Matter is enabling is that device-to-device communication. So now take a smart plug, say, for example, Eve Energy. Their Eve smart plugs have always monitored energy consumption for you, but you haven't been able to do much with it outside of the Eve app. Now, with energy reporting capabilities as part of the Matter communication protocol, this data will be available to platforms, and then the platform can add a management layer on top. And actually, right now, SmartThings already has a very robust energy management platform that you can use. That's not related to Matter, but this could help feed more data into it. I'd love to see Apple Home do more with energy management. They've already started to do some things. Have you used the little clean-or-

dirty energy widget that you can get, that tells you what your local energy is like and whether this is a good time to go run your dishwasher or your tumble dryer, because you'll be using clean energy? That's baby steps right now, but you could sort of extrapolate how, with this type of reporting coming in from all the devices that consume energy or generate energy, a smart home platform like Apple Home could help you manage that. You may still want to use, say, your Tesla charging app if you have an EV, but you could also feed that data into your smart home platform so you can manage what you're using across your home. So everything isn't siloed like it is today. It'll help bring all the data into one source so that you can then hopefully, and this is where things like generative AI may come into play, intelligently manage that balance and those loads, which I think is really interesting.
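As a rough picture of the split Jennifer describes, device-level reporting versus the platform's management layer, here is a small illustrative Python sketch. The class and attribute names are invented for the example; they are not the Matter SDK or any platform's API, they just mirror the kinds of measurements (power, voltage, current, cumulative energy) that Matter 1.3 lets devices report and that a platform can aggregate.

```python
# Illustrative sketch only; these classes are invented for the example and are
# not the Matter SDK or any platform's API. A Matter 1.3 device can report
# measurements like active power, voltage, current and cumulative energy, and
# the platform builds its own reporting or automation layer on top.
from dataclasses import dataclass, field

@dataclass
class EnergyReport:
    device_name: str
    active_power_w: float         # instantaneous draw in watts
    voltage_v: float              # RMS voltage
    current_a: float              # RMS current
    cumulative_energy_kwh: float  # energy used since the counter was reset

@dataclass
class PlatformEnergyView:
    """The kind of management layer a platform such as SmartThings or Apple Home could add."""
    reports: list = field(default_factory=list)

    def ingest(self, report: EnergyReport) -> None:
        self.reports.append(report)

    def total_power_now_w(self) -> float:
        return sum(r.active_power_w for r in self.reports)

    def biggest_consumer(self) -> str:
        return max(self.reports, key=lambda r: r.cumulative_energy_kwh).device_name

view = PlatformEnergyView()
view.ingest(EnergyReport("Eve smart plug (heater)", 1500.0, 120.1, 12.5, 42.0))
view.ingest(EnergyReport("Washing machine", 500.0, 119.8, 4.2, 120.5))
print(view.total_power_now_w())  # 2000.0 watts across the home right now
print(view.biggest_consumer())   # "Washing machine"
```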

0:12:38 - Mikah Sargent
Absolutely, and that also makes me feel a lot better about it in general, because it doesn't feel like Matter is getting too big for its britches, as we say. I was kind of worried, you know, that if it starts trying to add features on as the platform itself, that muddies the waters and makes it arguably more difficult and less likely that everybody's going to continue to be on board, jump on board, and keep trying to work towards this more centralized approach.

0:13:06 - Jennifer Pattison Tuohy
It's very true, and that's a real balancing act. When I talk to people in the CSA, they're really trying to achieve that balancing act of just providing the pipes versus what they need to provide slightly above that in order to help make things work better. That's why they didn't start with energy reporting, because it was more complicated. It's the same thing with robot vacuums. They don't have mapping capabilities in robot vacuums because, again, that's slightly a layer above. And then when cameras come, which has been promised, it's unlikely that there'll be any type of recording or storage component to Matter. It will, again, just be communicating that data to and from whichever platform you want to use.

But it is a balancing act, because you don't want it to be so basic that it's not useful, but you also want those features. And companies and manufacturers want to build on top of Matter to offer better experiences, like energy management, so that you will choose them. Because that's now the sort of beauty of Matter: platforms and device makers are competing on features, not interoperability. So you're not just choosing a smart plug because it works with your iPhone or with Apple Home, you'd be choosing a smart plug because it has these good energy reporting features that you can then use with your smart home platform of choice.

0:14:36 - Mikah Sargent
So it's, yeah, it's a balancing act for sure. Now, when it comes to Matter Casting, that is truly the future. I dream of this idea, because I don't remember that I put in a wash and that the washing machine is done. The way I tried to set it up was I put a vibration sensor on the washer, which is a mistake, people, because if you know anything about washing machines, it's that they have cycles, and those cycles have periods of time where the washer is running and where it's not, and where it's draining and doing this and that, and so the vibration is different, and so it never lets me know when the wash is done. I should have put it on the dryer. I didn't think to put it on the dryer.

I guess I could put it on the dryer. But I hate the little buzzing sound that the dryer makes when it's done; I just want the little notification on my phone. I would love it if my washer and dryer both could tell the Apple TV to pop up a little notification that says, Mikah, your wash is ready to be moved to the dryer. Actually, what it mostly would tell me with the washer that I currently have is that your spin cycle didn't complete, because it's an old washer and it couldn't spin the load around. But anyway, I digress. Tell us a little bit more about Matter Casting and kind of what you imagine is the future there.

0:16:00 - Jennifer Pattison Tuohy
Yeah, so Matter Casting is really interesting. It's actually been part of Matter since the beginning, but we haven't seen much uptake of it, because the original kind of idea behind Matter Casting, and still part of what it does, is casting as in what Google Cast and Apple AirPlay do. So actually Amazon is the only one that's taken up Matter Casting to start with, because they were one of the only platforms that didn't have a way to cast content from your phone to your TV, a proprietary way at least. That's what it launched as, but the basic idea is casting any type of data from one device that works with Matter to another. So, you know, at a basic level you can cast content from your phone to your TV. It's actually app-to-app communication.

So Amazon recently launched the capability to do this with Fire TVs, so you can cast Prime Video from your phone to your Fire TV. It only works with Prime Video at the moment, so it's really limited, but you could see a world, if there was more adoption, where this would be quite useful. It's basically Apple AirPlay, but for everyone else, or Google Cast, which I always have trouble with in terms of trying to adjust the device I'm watching while I'm using my phone. I've tested this; I've actually got a blog post that I wrote on the Verge about my experience using Matter Casting, if you want to know more about that side of it. But what Mikah is talking about, what's new with 1.3 for Matter Casting, is they've taken this ability so that you can actually have your fridge or your dryer or your doorbell, well, not your doorbell, eventually your doorbell, I guess your doorbell now if it isn't a video doorbell.

But any device that is Matter enabled can talk to, or cast its information to, a screen in your home, such as your TV. And so, yes, say your washing machine's done. It could pop up a notification on your TV saying the washing is done. If you have a smart washing machine today, you could probably get a notification on your phone, but it's just going to be you getting that notification. If you live in a household with lots of people, you might want someone else to take the laundry out. So the nice thing about the TV is it's a communal screen, so everyone in the house knows that the laundry is done and it's time to put it in the dryer.

But it could also be cast to any type of screen that's Matter enabled. It could be cast to a smart display. It's sort of got a huge scope of potential in the ways it could communicate with you. And one of the things I'd heard early on about this, and this hasn't been implemented yet but could be, is that if you have presence sensing in your house and you're using Matter Casting, the washing machine, to carry that example along, could know which room you're in and cast to the TV in that room, whichever room you're in, to say, oh, your washing's done, so you don't miss it and you don't end up with moldy laundry. So you can see the potential. There's some exciting stuff with this.
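Here is a tiny, hypothetical Python sketch of that presence-aware routing idea. Every name in it is invented for the example; it is not Amazon's or the CSA's actual Matter Casting API, it just shows a presence sensor picking which screen receives the appliance's notification, with a communal screen as the fallback.

```python
# Hypothetical sketch of presence-aware Matter Casting routing; every name here
# is invented for the example and is not the actual Matter Casting API.

def pick_target_screen(occupied_room, screens_by_room, fallback="living room TV"):
    """Choose the display in the occupied room, or a communal screen as a fallback."""
    return screens_by_room.get(occupied_room, fallback)

def cast_notification(screen, message):
    # A real implementation would hand this to the casting stack; here we just print.
    print(f"[{screen}] {message}")

screens_by_room = {
    "office": "office smart display",
    "bedroom": "bedroom TV",
    "living room": "living room TV",
}

occupied_room = "office"  # would come from a presence sensor in a real setup
cast_notification(
    pick_target_screen(occupied_room, screens_by_room),
    "Washing machine: cycle finished, ready for the dryer",
)
```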

0:19:06 - Mikah Sargent
Yeah, that's really exciting, and honestly, I'm hoping that WWDC is going to give us more in terms of Matter and the integrations there. Apple has been slowly seeding all of its products with ultra-wideband chips, and given that, I want to see more being done with that presence-sensing technology, especially in the smart home. So the idea that, yeah, it doesn't have to bug anybody else, it can just let me know in the room that I'm in, hey, you've got something that's ready to be done, would be very, very cool. So fingers crossed on all of that.

0:19:53 - Jennifer Pattison Tuohy
You could have your oven send you a notification, I mean any device in your home, and you could set it up. Obviously, if you don't like getting notified about your wash, you don't have to be; I do. I mean, Samsung already has this with its smart appliances. I have a smart Samsung washing machine and a smart Samsung TV, and I get an alert saying my washing's done, and it can sometimes be kind of irritating, but you can turn it off. It's obviously all going to be your choice how you want to be notified, but the idea here is that you're going to have the choice, and whichever platform you want to use, you'll be able to have this functionality. You don't just have to have Samsung appliances. You'll be able to have a Whirlpool washer and an Apple TV and it'll all work together.

But you did bring up Apple, so I do want to just end on a downer here, because, as excited as I am about Matter 1.3, I really feel like, after a couple of kind of boring rollouts, this has helped bring us to a much more interesting state for this specification, this standard. The problem is, the idea is you can use this with any smart home platform you choose. However, none of the smart home platforms have even adopted 1.2 yet, let alone 1.3. And you mentioned WWDC. I have my fingers crossed we're going to get a great Matter update, that in iOS 18 we're going to have all the support for the 1.2 devices, which is like robot vacuums and refrigerators and washing machines, and all the support for 1.3. I'm not going to hold my breath, but the problem is, if we don't, we're waiting until next year, the way that Apple does it.

0:21:32 - Mikah Sargent
Yeah, that's true, it needs to come.

0:21:35 - Jennifer Pattison Tuohy
Unless they hopefully come up with a different way of doing this. Does it have to be every year? Can they not add something in between? Amazon's been really slow on this too, but it's sort of taking that slow and steady approach. It's like, we just want to make sure we don't break everyone's homes, so I can understand that. Samsung's done a little bit more with its SmartThings platform, and Google Home had been very slow as well.

But at Google I/O this week they actually made a lot of big announcements that, whilst they weren't specifically about Matter, well, they were about Matter to some extent. They're basically now allowing all their Google TVs, so Chromecast with Google TV, TVs with Google TV built in, and the LG TVs that they announced at CES were going to support Google Home, to all be Google hubs for Matter. So you suddenly can use Matter in your Google Home much more easily than you could before, because before it was more limited; you had to have a Nest Hub or a Google Wifi. And they're also opening up the Google Home APIs, so any developer can now develop apps and services using Matter and Google Home devices in their own app. A little bit like how HomeKit works with, say, an Eve device. If you've ever used the Eve or Nanoleaf app, you pull all your HomeKit devices into the Eve or the Nanoleaf app.

I think that's what we might see the potential of with the Google Home APIs: you don't have to use Google Home if you don't want to, but you can still have the infrastructure of Matter through the Google Home APIs. And they're letting developers put Google Home hubs into their own devices. So if you don't want any Google devices in your home, you may not have to have them, but you could still use the infrastructure, the local infrastructure, of the Google Home platform, which I think is a really interesting change. So there's some movement in Matter for the platforms, just not enough for me yet.

0:23:41 - Mikah Sargent
I want more. Yeah, come on, keep going, keep going.

0:23:44 - Jennifer Pattison Tuohy
Yes, I mean they've put all this effort into this and we still can't really use it yet. Yes, and that's kind of frustrating.

0:23:51 - Mikah Sargent
Especially as they keep, you know, making these new announcements and things, and you're going, I want to be able to see it in action. I want to be able to see it work. And, yeah, you're right, we're waiting for the companies to get more of it into action, because it'll be easier to talk about too, seeing it in practice.

All right, we've got to take a quick break. Before we come back with my story of the week, I do want to tell you about ACI Learning, who are bringing you this episode of Tech News Weekly. That's the provider of ITPro, binge-worthy video on-demand IT and cybersecurity training. With ITPro you will get certification ready with access to their full video library of more than 7,250 hours of training. Premium training plans also include practice tests to ensure you're ready before you pay for exams, and virtual labs to facilitate hands-on learning. ITPro from ACI Learning makes training fun because all training videos are produced in an engaging talk show format that, honestly, we've watched. It's truly edutaining. So take your IT or cyber career to the next level, be bold and train smart with ACI Learning. Visit go.acilearning.com/twit. Use code TWIT30 at checkout to save 30% on your first month or first year of ITPro training. That's go.acilearning.com/twit, and use code TWIT30. Thank you so much to ACI Learning and ITPro for sponsoring this week's episode of Tech News Weekly.

All right, we are back from the break and it is time for my story of the week. Every year, around about the time of Global Accessibility Awareness Day, Apple makes an announcement of some of the new accessibility features that the company is working on. And I know that sometimes there's criticism that there's a lot of kind of oversharing of the stuff that Apple is working on, or too much celebration of the things that the company makes. But when it comes to this area, I really appreciate how much attention Apple pays to accessibility. And another thing: even if celebrating the inclusive nature of this technology isn't enough for you, you can also be aware of the fact that this stuff that's being made is, in some ways, going to be great for everybody.

I remember a friend of the show, Rene Ritchie, talking about how he didn't make use of many accessibility features, maybe a couple, but there was a time when he had to go in and get his eyes dilated for an eye exam, and that day he used so many different accessibility features, ones particularly for people with low or no vision, that were very, very helpful to him. For him it was a moment of celebration and a realization that all of these tools can be used by everybody. So even if it's not something that you necessarily need, there could come a time when you do need it, and it's worth being aware that a lot of this stuff is stuff that people need to use every single day. So with that out of the way, I want to talk about some of the features that are coming, and the first one I'm very excited about, because we saw it first in Apple Vision Pro, and that's eye tracking. All of the interactions that you do with the Apple Vision Pro, when it comes to kind of browsing, start with the eyes. You look around at different things in your line of sight and then you use your finger, and you can go back and watch our Vision Pro review to learn more about all of that, but you use your finger to select different things and to interact. Your eyes are the first kind of step. Apple is bringing that to many iPads and iPhones, so that people with physical disabilities are able to use the front-facing camera on these devices to actually browse around their devices. According to Apple, it requires a little bit of calibration, although it says it calibrates in seconds, and then it uses on-device machine learning, so everything is done securely on the device. Your glances are not being sent away to a server to be processed or anything like that. It all happens right there, and then you can navigate through different elements of an app just using your eyes. Super cool, and something that I'm looking forward to trying when that comes around. I'll mention a couple more, and then we'll kind of talk about these a little bit.

Another feature that I think is going to be really interesting, again, eye tracking we saw first in Apple Vision Pro, is Music Haptics. The Taptic Engine has existed on these platforms for a long time. Right, it is Apple's name for the little vibration motor that exists in many of its devices, including the Apple Pencil Pro at this point. So Music Haptics is a feature that allows people to experience music physically. I know I've seen a number of videos growing up where they talked about people who are either deaf or hard of hearing who experience music through vibrations. You know, they put their feet on the floor, they can feel the beat. If they're listening in front of a speaker, they can kind of feel the displacement of the air. Well, now, when you turn on this feature, the Taptic Engine, or the haptic vibrations in the iPhone, will actually play what Apple calls taps, textures and refined vibrations that all kind of mimic the music that you are listening to. This is already going to work with many of the songs in the Apple Music catalog, and Apple says they're making it available as an API, so developers who have their own music that plays in their own apps will be able to do this in their apps as well. So just the idea that somebody could feel the music is really cool.

And again, I think this is something that I'm just going to want to try in general, because it just adds an extra layer. And then there's one more I'll mention before we break into conversation, and this is one that I cannot wait to turn on. The science behind it is fascinating. Oh, and by the way, I'm trying to mention this on every show just so that it's not a huge distraction: I have Invisalign and I'm getting used to them. I literally just got them. So if it sounds like I'm talking through some stuff, that's because I am, so sorry about that, but pretty soon I'll have it all figured out and then it will not be as noticeable anyway. I didn't notice it at all.

Oh good, I can't tell.

0:31:19 - Jennifer Pattison Tuohy
If you're being serious... I am being serious. Sorry. I feel so good, I...

0:31:24 - Mikah Sargent
I couldn't tell if that was sarcasm. Oh, that's good. Thank you, that makes me feel better. It means it's starting to work at least. All right.

So back to this. I get so sick riding in a car while looking at things on my phone. I cannot read in the car. Let's say there was a ladybug that landed in my car, I'm a passenger in this case, and I'm watching the ladybug kind of climb up the chair; I would start to get sick even with that. So what this is just boggles my mind.

Basically, and this part I understand, and there's research to back this up, the reason why we get sick whenever we're in a vehicle that is moving is because there is what's called a quote-unquote sensory conflict between the fact that we're seeing stuff moving and the way that we are feeling that movement. And this all goes back to our brains and our bodies trying to protect us from things that could potentially make us sick. It's essentially our brains thinking that we have eaten something that is dangerous to us, because we are not visually experiencing what we should expect to be experiencing, and so it makes us nauseated, with the hope of purging whatever is inside of us that's making us sick, that's making our vision go weird. Because back in the day we would eat things like mushrooms and other things that we shouldn't eat, and it'd make us sick, and so we try to avoid that. Very cool, right? But unfortunately, cars have only existed for a tiny slice of the huge, vast amount of time that we've been around as modern humans, meaning that our brains are like, what is happening? You're in the car, you're doing this. It doesn't make sense.

So that was the whole explanation to get to Vehicle Motion Cues. What this does is it uses the sensors in the iPhone to feel how the car is moving, and then it puts these little kind of dot animations on the screen that move in relation to how the car is moving, and the thought is that by seeing this and matching it to the car's movement, our brain will be less likely to notice a difference between the two, which would make us less likely to be nauseated. I can't wait for this feature. So those are just some of them. I'm curious if you had a chance to read about any of these, if you're excited about any of these and, more importantly, of the accessibility features that have existed even up to this point, have you made use of any of them?
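For a rough sense of how a feature like Vehicle Motion Cues could map sensed motion to what you see, here is an illustrative Python sketch. The gain and clamp values are invented for the example, and this is not Apple's implementation; it only shows the idea of turning accelerometer readings into small on-screen offsets for the dots.

```python
# Illustrative sketch of the idea behind Vehicle Motion Cues; not Apple's code.
# The phone's motion sensors report how the car is accelerating, and the dots
# on screen are shifted so that what your eyes see agrees with what your
# inner ear feels.

def dot_offsets(accel_x_g, accel_y_g, gain=40.0, max_px=60.0):
    """Map lateral/longitudinal acceleration (in g) to a pixel offset for the dots."""
    def clamp(value):
        return max(-max_px, min(max_px, value))
    # Dots drift opposite to the acceleration, like loose objects sliding in the car.
    return clamp(-accel_x_g * gain), clamp(-accel_y_g * gain)

# Example: the car brakes while drifting slightly to the right.
print(dot_offsets(accel_x_g=0.15, accel_y_g=-0.4))  # -> (-6.0, 16.0)
```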

0:34:34 - Jennifer Pattison Tuohy
Yeah, well, I am really excited about the car sickness one, because I too suffer terribly and have done my entire life, although, oddly, I don't suffer so much with a phone. If I read a book in a car, I am just toast; a phone somehow has been a little less bad, but it still gets me after a long period of time, and I'm really interested to see how this works. I mean, I was always told, you know, look out the window, straight ahead, that's the way to cure it.

0:34:56 - Mikah Sargent
Yeah, because drivers are less likely to get sick.

0:34:59 - Jennifer Pattison Tuohy
I never get car sick when I'm driving. Never. I mean, yeah, it's fascinating, and a really interesting use of the sensors already in the device. I think this is really fascinating, like all of the accessibility features actually, and, you know, obviously they're kind of packaged in a way to appeal to people that really need to use them, which I think is great. But, as you say, many of these features actually are useful for everyone, and the eye tracking in particular I'm very interested in. When my kids were babies, you know, having eye tracking when my hands were full and I was reading something would have been helpful. Or, you know, when you're cooking and your hands are full of stuff and you want to move to the next part of the recipe. I can see so many uses for eye tracking. I've not tried the Vision Pro. How efficient is it, though? Does it really work?

0:35:56 - Mikah Sargent
Yeah, the eye tracking stuff...

For me it was spot on. There have been some people who've had issues with it, and I hesitate to do this in most cases, but I will say that some of it would have been user error, because there was a part at the very beginning where you have to calibrate it, and if you go past that too quickly then it doesn't calibrate properly. So I'm thinking that if you properly calibrate it, it's going to be pretty good. I don't think it's going to be as good as the Vision Pro, because that's got so many different cameras, and they're all... yeah. So I would imagine that when you're talking about things like recipe stuff, that's probably going to be a great place for it. In fact, there's already a developer who has made an app where, I can't remember if it's you raise your eyebrows or you open your mouth, but there's something that will actually make the recipe page turn. And that's in the current version of the app. So I can't wait to see what they do with this eye tracking stuff, what people come up with.

0:36:55 - Jennifer Pattison Tuohy
Yes, I think that's going to be great. I was also particularly interested in the ability now to assign custom utterances to Siri to launch shortcuts. I mean, you can do that to some extent today, but they're also going to add a feature here that can sort of recognize atypical speech. So you're basically going to be able to train your phone or iPad to understand you, and for people that have speech difficulties or speech impediments, it's going to open up a huge sort of new world of accessibility.

The way the Siri version works is you could set up shortcuts so that just one sound could trigger your phone to launch your messages, or launch any app that you want, or maybe even complete a more complex task that might otherwise take lots of steps or lots of utterances, like we would be doing. Now you could set it up so you could just say, do it, or whatever kind of sound you make, and it will understand what you want it to do. I think that could be life-changing for a lot of people with severe speech issues, or speech inhibitions, if that's the right word.

So all of these things, like you said, though, it's great for the people that definitely are going to need to use them every day, but if you're not one of those people, I'd still recommend going into the accessibility section and just going through and seeing all of the different features that you can use, like the Apple Watch double tap that came out with the most recent Apple Watch. That was actually a feature in accessibility before, with a slightly different implementation, but you know it was there. Actually, it's almost like a sort of little lab area for some of Apple's engineers to kind of come out with things that then maybe will become part of the main functions of the phone or the operating system. There are some fascinating things in that section of the iPhone. I've sadly had to use the one where you make the screen zoom in a little bit bigger now, because I'm getting old. So I've gotten familiar with it. My son picked up my phone the other day and was like, why is everything so big? Oh man, yeah.

0:39:17 - Mikah Sargent
And honestly, you never know; it can be handy when it's needed. I am realizing we've been having so much fun for...

0:39:28 - Jennifer Pattison Tuohy
Very long. We've gone well over. We have gone well over.

0:39:30 - Mikah Sargent
So I am going to say goodbye to you. Of course, folks can head to theverge.com to check out your work. Where do they go to follow you online and keep up with what you're doing?

0:39:40 - Jennifer Pattison Tuohy
Yep, theverge.com. And then I'm on Threads at smarthomemama, and on X, at JP2 to E. And yeah, I'm here every third Thursday. Beautiful, thanks so much. Thanks.

0:39:54 - Mikah Sargent
All righty folks, let's take another quick break before we come back with the first of my two interviews. I want to tell you about Yahoo Finance, who are bringing you this episode of Tech News Weekly. When it comes to your financial future, you probably think you've done everything: you've saved, you've researched, you've invested all you can. But there's more that you can do. You can take those investments to the next level by using what every financial great has used for more than 25 years. It's Yahoo Finance. Whether you're a seasoned investor or you're looking for extra guidance, Yahoo Finance provides all the tools and data you need in one place. Yahoo Finance provides a holistic view of the financial news cycle, including breaking news, and together with your other investments, you get a comprehensive perspective that distinguishes great investors, and Yahoo Finance ensures you have the insight to examine your wealth in its entirety.

I really like to use Yahoo Finance around the time that the earnings reports come out for big tech, because they all kind of do them around the same time, and what's great about it is it's all right there. While you're looking at the different news pieces, which they do a good job of collecting the ones that matter, you can also see how those stocks are independently impacted. It's a really great way to kind of see it all in one place. With a community of more than 90 million users each month, their real strength is helping you on your way to financial success. So for comprehensive financial news and analysis, visit the brand behind every great investor, yahoofinance.com, the number one financial destination. That's yahoofinance.com. All righty folks, we are back from the break and I am excited to say we are joined by someone who knows a thing or two about Google and about AI and about beautifully putting guitars on the wall. It's Jason Howell.

0:42:08 - Jason Howell
Yes, take a look at the guitars on the wall. That's all I want you to see when you look at my video.

0:42:14 - Mikah Sargent
Jason, your shot looks fantastic.

0:42:16 - Jason Howell
Thank you. The guitars on the wall will make you think that I actually play them. Yeah, I thought you played them once. I don't have any time to play music.

0:42:24 - Mikah Sargent
Mikah, help me! You're doing too much, that's what it is. AI will play the music for you, don't you worry?

0:42:30 - Jason Howell
There we go.

0:42:30 - Mikah Sargent
Yeah, we'll talk about that. Oh, there were boos. So, Google I/O: should I say it's underway, or has it fully wrapped up now? Because I know there's post-event stuff that happens, right?

0:42:42 - Jason Howell
Yeah, literally I just looked at my inbox and I got the email from Google that said all right, it's a wrap. So I think it's pretty much wrapped at this point.

0:42:50 - Mikah Sargent
Good, good. So, given that Google I/O has wrapped, it's likely we won't see new announcements from the company for a couple of days. So, while everything is fresh and has been announced, I actually want to kind of start in a backward way. I would like to know if Google announced anything big at Google I/O that wasn't about AI.

0:43:19 - Jason Howell
Well, you put the big modifier in there and that kind of shut me down a little bit. I mean, really, at the end of the day, this was an AI conference for the most part, when we're talking about the major breaking news that everybody, the general user, let's say, and the news organizations would pay attention to. You know, this moment of artificial intelligence everywhere, and so that's almost all you saw when it came to the keynote and all of these really big attention-grabbing events. But big news that wasn't AI: I mean, it is also a developer conference. So if you're a developer, there's plenty for you to feast on there. You know, changes to Firebase, changes to how you can test devices using Google's cloud, kind of more devices there so you can test your app against that.

Wear OS 5 got a developer preview. So, you know, as an Android user for a very long time, Wear OS 4 is probably as close to, all right, Google, you're getting there with wearables, as I've ever seen, and so I'm curious about Wear OS 5. There were changes to the Play Store. There's also this thing about Chrome OS possibly running in a virtual machine on Android, which, if you want to know more about that, check out Mishaal Rahman's reporting on that on Twitter or on Android Faithful. We did an interview actually with Sameer Samat and Dave Burke from the Android team while we were at Google I/O. And I'm throwing all these out there just to try and make it sound like there's news that's not AI driven, but you put big in there and that just kind of shut me down. No, there wasn't a whole lot of news that really made the headlines from Google I/O that didn't have something in some way to do with artificial intelligence.

0:45:03 - Mikah Sargent
Understood. So, given that, we've got to go back to the beginning, and with Google, in the beginning that means search. So let's start with search. I mean, what AI flavor has Google added to search?

0:45:23 - Jason Howell
Well, I think the real big news here is, you know, Google's been testing the generative search experience for those who have opted in. I opted in as early as I could, because I wanted to get a sense of what it means to have AI in my search. You know, that chocolate and peanut butter moment: do they combine together well? And so I've got opinions on that, but I won't get into that quite yet. Essentially, the news from I/O is that, after testing this, they are now rolling this out, I think in the coming week or two, to everyone, at least here in the US. And so what does that mean? You're going to get something they call AI Overviews up at the top of your search results. So if you do a search query, Google's AI system is going to take a look at a lot of those results and kind of do what AI is reasonably good at, which is summarize: take in all those points of information, try and figure out what are the things you really want to know, and give you a summary up at the top that they call an overview. Interestingly, there is no opt-out for this, at least for now. That's what I'm reading, which I think is very telling. This is basically Google saying, whether you want to or not, we're giving you AI in your search and you're going to try it. You know, at some point your curiosity is going to get piqued and you'll look into it for yourself and see how you feel.

But it is interesting. I mean, they showed that off, but then they also went a little deeper to say that eventually, you know, Google Search is going to have multimodal capabilities and multi-step reasoning, and the example that they showed here was using your phone. They had a turntable, a record player, on the set, and it had an issue with its tone arm. They're like, I don't know how to ask Google how to fix this. So I take a video and I ask it a question: hey Google, how do I... sorry, if I just set off some glasses there. How do I?

I probably did. How do I, you know, what is this called and how do I fix this? And that would then tie into the AI search mechanism and be able to pull in all the points of information and reference and research, and collect it and organize it, and then give you a step-by-step kind of way to fix the thing that's called a tone arm. You didn't know that before; now you do. So that isn't happening immediately, it is happening eventually. But what you will probably start to see very soon now here in the US is that generative search experience up at the top of your search results. Got it.

0:47:56 - Mikah Sargent
Now can you explain the Ask Photos feature? I know that also got some attention at the event.

0:48:03 - Jason Howell
Yeah, yeah, and this is interesting to me because Google Photos is one of the Google suite of apps that I probably use the most. You know, there's Photos, there's Maps, and obviously Gmail and Calendar and those kinds of things, but Photos has been doing interesting things with artificial intelligence for years now. A big reason why it's been such a huge success is because it's been a product that people have the need to use. You load an insane data set, which is all of your pictures, into it, and then Google rolls out these features that allow you to interact with what feels like such a deep chasm, a cavern of images that you want to pull from. Like, I can't remember where this image is, but I know enough about it to try and find it.

They have had search embedded in this, and in fact, earlier today I opened up Google Photos and plugged in a search that said computers, and it pulled back a bunch of photos from my long archive of images that had computers in them.

So you could do this already. But what they're doing is increasing the complexity of the queries, so understanding more than just what is in this image, but almost how it relates to you, based on other signals. So, for example, I don't know if this would work once it rolls out, but you could maybe say something like, what computer was I using in 2016? Or, whose wedding did I attend in 2014? And that's just a couple of guesses as far as how you might think about using this. One example that they gave is, what are the themes of the birthday parties I've held over the years for my daughter? And it would be able to go in and identify, these are the photos from the birthday parties that you've held for your daughter, these are all the themes that we've identified, and then be able to present that to you in a way that's organized. This feature set is coming sometime this summer. Nice.

0:49:57 - Mikah Sargent
Now, I'll be honest that I have not really used Gemini much in Google Workspace, because I kind of feel like it's hidden away. In Google Docs it's the most present, but even then it's kind of, I don't know. But they did detail kind of an expansion in Workspace, including virtual teammates, which you seemed excited about, I watched your recap afterward, and then also automation. So I was hoping you could tell us a bit about those new features and if there were any that I missed for Google Workspace.

0:50:28 - Jason Howell
Yeah, it's so interesting that you mention that, as far as Gemini being hidden and hard to get to, because I think Google's big MO right now is put Gemini into everything so that you don't have to try and find it, and yet, I totally agree with you, sometimes it's not totally obvious, even though it's there; you kind of have to go looking for it. Gems is basically AI agents, and all the AI companies that have LLMs like this are doing this now. It's essentially a way for you to create some sort of specific task to assign an AI system to. When you go to Perplexity or you go to Claude or you go to ChatGPT, these things encapsulate a wide, vast kind of space of information, and something like Gems, something like AI agents, are really designed to be very specialized to a specific thing that you set up. And so one of the things that they showed that off with was what you're talking about, the virtual teammates, which I don't believe is live yet, but it's essentially creating a spot within your Google Workspace team that is a virtual person that you give a task or a set of duties that they are responsible for.

And hopefully you're more organized than I anticipate I will be, because although I'm very interested in this, I just have this feeling that I'm going to open it up and be super confused and give up on it.

But I'll try, I'll do my best. But if you know what you're doing, you can create teammates on your team, as if I was in the office with you again, Mikah, and we had some sort of collaboration going on, and we decided that you were really good at this particular task and so you would do this going forward. I could then do that in a virtual sense using AI, within the confines and the controls that these systems give you. And I think the big power here, potentially, if it all works out well, is that Google has so many different properties that, if you're a Workspace user, are all interconnected, and I think through this, Google is really trying to put the pieces together to make it so that these things connect even more deeply together, and potentially in an automated way, to make it just easier for you to work.

0:52:48 - Mikah Sargent
Got it, okay. And then I think this is kind of the big one, especially given the OpenAI announcement, which we'll talk about next as we're nearing the end: Project Astra. Tell us about Project Astra.

0:53:06 - Jason Howell
Being there at Google I/O, this was one of the few moments during the main keynote that drew probably the biggest reaction. Astra is full real-time multimodal AI on device. It's conversational. They showed a video clip of a woman walking through an office and pointing the camera at certain things and asking questions, you know, it taking a couple of seconds to understand the question and then interpreting what it's seeing through the camera, interpreting what she's asking for. She's not necessarily always asking for something specific; sometimes she's alluding to something and it figures it out. It's really inferring a lot based on the context that's shared. It's also a longer context window, and one thing that they showed in that demo that was really neat is that about halfway through the video she's like, do you remember where my glasses were? And it's like, oh yeah, I remember where that is, it's over by the red apple. And then she walks over to the red apple, because it had scanned that with the camera 30 or 40 seconds before, and it's understanding, it's picking up all these signals to hopefully give you what you're looking for. And I think the real capper there was the moment of the glasses, right, because then she picks up the glasses, puts them on her head, and then you see what she sees through the glasses, and she continues to interact with the world in the same way. And really, I think that's what's really telling. By the way, I happen to have my Google Glass right here. It's been a very useful set prop this week.

But I think, you know, Google has been doing this thing, trying to make this work, for a very long time. They keep kind of throwing these hints about glasses and what they can do in the future, and I think we're starting to get to the point where these separate things start to really converge, where all of these AI advancements do end up in a glasses format, in something that is miniaturized, that doesn't look weird on your face but offers all of this context. I would love to be able to just look at something and be like, oh, what is that? And to know in a beat exactly what it is. And I think we're getting there. The Ray-Bans, the Meta Ray-Bans, have shown us that it's not impossible, and so I'm excited about that.

0:55:38 - Mikah Sargent
Me too. All right, now I'm going to kind of challenge you, since we are very much running out of time. I was hoping you could tell us both about the generative AI stuff and then Android. Kind of a highlight recap of those two areas.

0:55:56 - Jason Howell
Sure, I can do that. So, generative AI, primarily three things. Veo, which is, I think, Google's first time showing off their generative video rival to Sora: 1080p video, you can extend clips beyond one minute, Donald Glover was in one of the promo videos, so it must be good, right? Invite-only for now. Then there's Imagen, or Imagine, however you want to say that, 3, which is just an update to their generative image product: fewer artifacts, better at text, private preview happening there.

Music AI Sandbox. I'm super thrilled about this, because, you know, I like to pretend I'm a musician and I'm really curious about how generative AI can work into the music creation process. Similar to what I was talking about earlier with a virtual teammate, but like a virtual musician: I have a need for someone to play a Rhodes piano over the top of this, so make one for me, and it does that. I think what we're seeing from Music AI Sandbox is kind of leading in that direction. So I'm super curious about that.

You asked about Android. I'd say, if I had to just pick a couple of things really quickly: scam call detection, using on-device AI to recognize when you're talking to someone at an unknown number and they start telling you, oh, this is your bank, you need to move your money over here. It almost got my mom a handful of months ago. Thankfully it didn't happen, but that one's really close to my heart. And then, just in general, Gemini Nano, which is the smaller on-device version of Gemini that you find on the Pixel devices and on Android, is getting a much wider understanding, much broader context, integrating into the core of the OS, and multimodality is coming soon to Pixels and later this year to other Android devices. So, you know, AI. It's all about AI for Google right now.

0:57:47 - Mikah Sargent
Well, Jason, I want to thank you for taking the time to go through all of that stuff. There was a lot, I know, and so I appreciate you joining us to talk about it. Normally this is where I'd say, of course, folks can go to... but why don't you just tell us where folks should go to make sure that they can see everything that you're doing?

0:58:07 - Jason Howell
Sometimes I don't even know, Mikah. Just go to youtube.com/TechSploder; that's my YouTube channel where I do a lot of stuff. I actually posted a thing today with expanded thoughts on my time at Google I/O. And then my socials are at Raygun.fun, which I got from you when I was working with you. So beautiful.

0:58:27 - Mikah Sargent
Thank you, Mikah. Thanks, and we'll see you again soon. Good to see you all. Bye. All righty folks, one more interview:

This episode of Tech News Weekly is brought to you by CacheFly.

For over 20 years, CacheFly has held a track record for high-performing, ultra-reliable content delivery - serving over 5,000 companies in over 80 countries. At TWiT.tv we've been using CacheFly for over a decade, and we love their lag-free video loading, hyper-fast downloads, and friction-free site interactions.

CacheFly: The only CDN built for throughput! Ultra-low latency Video Streaming delivers video to over a million concurrent users. Lightning Fast Gaming delivers downloads faster, with zero lag, glitches, or outages. Mobile Content Optimization offers automatic and simple image optimization so your site loads faster on any device. Flexible, month-to-month billing for as long as needed, and discounts for fixed terms. Design your contract when you switch to CacheFly.

CacheFly delivers rich-media content up to 158% faster than other major CDNs and allows you to shield your site content in their cloud, ensuring a 100% cache hit ratio.

And, with CacheFly's Elite Managed Packages, you'll get the VIP treatment. Your dedicated Account Manager will be with you from day one, ensuring a smooth implementation and reliable 24/7 support when you need it.

Learn how you can get your first month free at cachefly.com/twit. That's C-A-C-H-E-F-L-Y dot com slash twit.

Thank you so much to CacheFly for sponsoring this week's episode of Tech News Weekly. All right, as I promised, it wasn't just Google who had a lot of AI to talk about; OpenAI also announced a new model. Joining us from Engadget is Pranav Dixit, who is here to give us the rundown. Welcome to the show. Hi, thanks for having me. Yeah, it's so great to get you here, and I think we should just get right into it and get the kind of big one out of the way. You know, we've heard a lot from OpenAI's leaders about the goal of getting to an AGI that is going to benefit humanity. So at this event, did OpenAI announce artificial general intelligence?

1:00:57 - Pranav Dixit
I would love for someone to tell me what artificial general intelligence actually means at this point. There are a lot of definitions flying around, and certainly OpenAI has its own definition; their stated mission is to develop AGI that benefits all of humanity. And they define AGI as, you know, AI systems that are generally smarter than humans, which is a really broad and nebulous definition. What does smarter actually mean? You could argue that in some ways AI is already smarter than humans. It can ingest, like, 50 books at the same time and analyze them for me, which no human can do, but it's also just really dumb at other things. So I think there's still no consensus on what AGI is. So no, they didn't announce AGI. What they did announce was a brand new AI model called GPT-4o, which is essentially an upgraded version of GPT-4, the current flagship model that powers ChatGPT.

1:02:02 - Mikah Sargent
Understood. So yeah, let's talk about GPT-4o. Just tell us a little bit about how it differs from the models that came before, what gives it that O, and maybe even what that O means.

1:02:23 - Pranav Dixit
The O stands for Omni, which is to say that GPT-4o is truly what they call multimodal. It is capable of taking in prompts not just in text format; it can take in prompts as audio, it can take in images, and it can also output whatever answer it gives you in one of those formats. So we are not just dealing with text anymore. There are a few big differences between GPT-4o and the models that previously came from OpenAI. OpenAI says that GPT-4o is one step closer to much more natural human-computer interaction.

That's certainly something that we saw in the demos that they showed off. You can have a conversation with it that feels almost natural; it's almost like speaking to another human being. It can, for instance, recognize emotions. It can recognize the tone of your voice. If you show it a selfie, it can tell if you're feeling stressed or if you're happy. And you can also interrupt it, which was something that was not possible with previous versions of ChatGPT. Lots and lots of people were comparing the interactions that they saw in the demos with the AI from the movie Her; I know my Twitter was full of Her comparisons. So that's the main difference. It's much more human-like, and it's significantly faster than anything that came before it.

1:03:47 - Mikah Sargent
So, speaking of the ability to interrupt it and the ability to communicate with it in just voice: for folks who didn't quite understand the way that it worked before, essentially what was happening is you had kind of several models all working together. When you were having an audio conversation with it, for example, it was taking that audio and using a model to turn it into text. Then GPT-4, at the time, was looking at that text, understanding it, and responding to it, and another model turned that text response back into voice, which then got sent back to you.

Obviously, that's going to result in a lot of lag and make interruptions difficult. So all of that leads me to ask: if they've made all of these changes and trained one model to do all of this, the one ring to rule them all, so to speak, gulp, then what can it do? What did they show it being able to do that seems compelling? Because maybe some of it didn't seem compelling. But yeah, what were the use cases that the company thought were compelling?
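To make that older flow concrete, here is a minimal sketch of the pipelined approach described above, assuming the OpenAI Python SDK and its separate transcription, chat, and text-to-speech endpoints; the model and method names are illustrative and may differ by SDK version.

# A rough sketch of the older, pipelined voice flow, assuming the
# OpenAI Python SDK (v1.x). Model and method names are illustrative
# and may differ by SDK version.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech-to-text: a separate model (Whisper) transcribes the audio.
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Text reasoning: GPT-4 only ever sees the transcribed text,
#    so tone, emotion, and speaker changes are already lost.
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = completion.choices[0].message.content

# 3. Text-to-speech: a third model turns the reply back into audio.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply_text,
)
speech.write_to_file("reply.mp3")

Each hop adds latency, and the middle step only ever sees plain text, which is why the old flow could not pick up tone or handle interruptions; GPT-4o is pitched as collapsing those three calls into a single model.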

1:05:02 - Pranav Dixit
Yeah, I think the way you broke it down is exactly right. If you use ChatGPT right now, what it's doing is basically using a combination of different models to convert your speech into text in the background and then basing its responses off of that text. And when you convert something into text, it loses tone, it loses emotion, and it can't figure out if there's more than one speaker. So GPT-4o is certainly capable of a lot more than that: it's not really transcribing your speech in the background, it's actually understanding your voice, or a picture that you show it, and this was apparent in all the demos that they showed off. One of the demos was a live translation demo where OpenAI CTO Mira Murati and two researchers had a conversation switching back and forth between English and Italian. It was similar to Google Translate, but it was also just incredibly fast and natural sounding.

The more impressive demos were some of the ones that came after it. One of the researchers told ChatGPT that he was feeling nervous because he was on stage presenting, and he asked the app to listen to his breathing while he breathed in an exaggerated fashion. ChatGPT listened to his breathing and told him to calm down because he was going too fast. That's not something that was possible before. There was another demo where another researcher wrote down a linear equation on a piece of paper and asked the app to just watch it with his iPhone's camera and walk him through the steps to solve it, and it was able to do that.

If you go to OpenAI.com, there are tons of other demos on the website: one where the app tells somebody how to spruce up their appearance ahead of a job interview, another video of the app walking a student through the steps of a math problem on their iPad, and there's also a really fun one where two ChatGPT voices harmonize together and sing a song about San Francisco. So all these demos really show that it's not just transcribing text; it's going a step beyond that.

1:07:31 - Mikah Sargent
Absolutely, yeah. I saw the one where the two were talking together, and as much as it was silly, I was impressed by the fact that it was happening at all. I think that speaks to my next question, which is on the performance side of things. You know, OpenAI says that this model is faster, but they also say the model is cheaper, and I was hoping you could explain: how is that possible?

1:08:00 - Pranav Dixit
Yeah, when they say it's cheaper, what they're really talking about is the cost of working with the model for any developers who want to use it to build AI apps and services. Because GPT-4o is a lot more efficient than GPT-4, it'll cost developers half of what it costs to build something with GPT-4. So that's what they mean when they say it's cheaper. Really, it's more for people who want to use it to build AI apps and services, not regular people like you and me.
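As a rough sketch of what "cheaper" means on the developer side, assuming the OpenAI Python SDK's chat completions endpoint: switching models is essentially a one-line change, and the savings come from GPT-4o's lower per-token API pricing rather than from any code differences.

# Minimal sketch: for text use, moving from GPT-4 to GPT-4o is mostly
# a one-line change for developers; the savings come from GPT-4o's
# lower per-token API pricing, not from any code changes.
from openai import OpenAI

client = OpenAI()

def summarize(text: str, model: str = "gpt-4o") -> str:
    """Ask the given model for a one-paragraph summary."""
    response = client.chat.completions.create(
        model=model,  # "gpt-4" before; "gpt-4o" now
        messages=[
            {"role": "system", "content": "Summarize the user's text in one paragraph."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

# Same call, same response shape; only the bill per token drops.
print(summarize("Paste a long article here..."))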

1:08:34 - Mikah Sargent
Got it. And now, in its blog post, the company did detail some of the limitations of the new model, and that is something that I've appreciated about OpenAI from the get-go: as far as we've seen, they don't try to hide any of these limitations. Can you talk to us about some of the limitations that they foresee with GPT-4o?

1:09:00 - Pranav Dixit
Yeah, in fact, OpenAI had an entire video up on their website just showing GPT-4o kind of messing up, and they sort of put it under the heading of limitations. I guess they were trying to be transparent: look, it's not perfect. It still stutters, it still keeps blabbering on unless you interrupt it aggressively, it kind of doesn't know when to stop, and sometimes it can suck at translating. But, more importantly, it still has all the limitations that are endemic to AI in general. It still hallucinates, which means that it still makes up stuff; just because it's a lot more natural sounding doesn't mean that you should trust it. It's a more polished form of what came before, but it still has all the problems that currently exist with AI, which I think a lot of critics are pointing out. Let's not get carried away by how flashy it is or how lifelike it is, when the fact is that it's still limited by the problems that are inherent to AI.

1:10:12 - Mikah Sargent
Absolutely. And then, last but not least, something that the company seemed to really tout across the entire announcement was that GPT-4o is available to everyone. I've been using GPT-4 because I subscribe to the service, and it was only available to paid users, enterprise users, that kind of thing. Do you have any insight on why the company has made its more powerful model available to everyone at this point and moved it out from behind the paywall?

1:10:46 - Pranav Dixit
Yeah, so this was definitely a big surprise when they announced it. When they had released GPT-4, which was the previous model, it was restricted to paid users. You had to pay $20 a month and sign up for ChatGPT Plus to be able to access GPT-4. If you didn't pay up, you were still stuck with GPT-3.5, the earlier model, which wasn't bad, but it wasn't GPT-4. With GPT-4o, yes, they are going to make it available for free to everyone. I believe it's still rolling out to people who pay first, and once that's done, they'll roll it out to everyone.

Look, OpenAI pretty much kicked off the AI arms race at the end of 2022 when they released ChatGPT, and I think they're really starting to feel the pressure from the competition right now. They're far from being the only kid on the AI block, even though they are the coolest. You have Google with its Gemini, which also does similar stuff and is also really smart; Google showed off a lot of AI stuff at Google IO just a couple of days ago. There are other startups like Perplexity, and there's Claude, which is one more AI chatbot from a startup called Anthropic. So, all in all, I think OpenAI is really feeling the pressure and really feeling the heat from the competition, and I think that's a big reason why they made GPT-4o free. It helps them stay ahead of the curve, it helps them keep the buzz around this stuff alive and not get eclipsed by the other big tech companies or even some of the startups.

1:12:35 - Mikah Sargent
Gotcha. Any last things that you want to share from that OpenAI event? I feel like we've covered most of it, but I just want to make sure I didn't miss anything that stood out to you. Or were there even any hints at what's to come, what's next for OpenAI? Because there was lots of speculation about what the company might announce at the event, and I don't think that anyone quite pegged this exact thing, GPT-4o, as what was coming next for the company, as everyone kind of looks for GPT-5.

1:13:12 - Pranav Dixit
Yeah, there are a couple of things that I was thinking. One is, if the biggest thing that you can announce is GPT-4o, what does that mean for GPT-5? How far away is it? They still haven't announced GPT-5, and we still don't know how much better it's going to be. GPT-4o looks pretty awesome, so what is GPT-5 going to be? I think that picture is still hazy.

The other thing everybody was talking about before the event, and I believe there was a Reuters story about this, was that OpenAI was going to announce a search engine to compete directly with Google. Then, I think one day before the event, Sam Altman tweeted and said there's no search engine, and sort of poured cold water on that story. But it makes me think that maybe they are working on a search engine. That would sort of be the obvious next move, right? A lot of people are talking about AI chatbots essentially replacing Google for a lot of people, and Google just changed up its own search engine to respond to this sort of chatter. So I would be very interested to see what a search engine powered by OpenAI's tech is going to look like.

1:14:42 - Mikah Sargent
Absolutely. Yeah, I'm keeping my eyes peeled for that, for sure. I want to thank you so much for taking the time to join us today. I know you did some moving around and shuffling, so I really appreciate you joining us. Of course, folks can head over to engadget.com to check out your work, but is there another place, or are there more places, that folks can go to keep up with what you're doing online?

1:15:06 - Pranav Dixit
Yeah, you can follow me on, I still call it Twitter, it's still so weird to say X, but you can follow me on Twitter at twitter.com slash Pranav Dixit, which is just my first name and last name. That's probably the best way to find me. I'm still hooked on Twitter.

1:15:23 - Mikah Sargent
Understood. Thank you so much for your time today. We appreciate it. Thanks for having me. Alrighty folks, that brings us to the end of this episode of Tech News Weekly.

Our show publishes every Thursday at twit.tv/tnw. That is where you can go to subscribe to the show in audio and video formats. If you'd like to get all of our shows ad-free, well, we've got a way for you to do that. It's Club TWiT at twit.tv/clubtwit. When you join the club for $7 a month, you will get access to every single TWiT show ad-free; it's just the content. You will also gain access to the TWiT+ bonus feed, which has extra content you won't find anywhere else: behind the scenes, before the show, after the show. Special Club TWiT events get published there, including our escape room in a box and our recent silent movie watch, where Leo and I spent the whole time talking through a silent film. We had a lot of fun, actually. You can also check out the Club TWiT Discord, which is a fun place to go to chat with your fellow Club TWiT members and also those of us here at TWiT.

I'm also launching something soon, I think about once a month, and it is Mikah's Crafting Corner, where we'll all just kind of gather and work on the crafts that we individually are working on, maybe share with other people what we're working on, and talk about the crafts we like to do: crochet, knitting, I don't know, maybe you like to cut apart magazines and make creepy signs out of them. Fine by me, as long as you're being nice. It's called Mikah's Crafting Corner, and I'm going to be launching that soon, so I'm really excited about it. You're only going to be able to get that if you join the club. On top of that, again, all of this is just $7 a month. You're also going to gain access to the video versions of our Club TWiT shows.

So that is the Untitled Linux Show, Hands-On Mac, Hands-On Windows, Home Theater Geeks, and iOS Today. If you want to see us do those shows, you will need to join the club. Thank you to those of you who are members and are watching live, otherwise you're not going to hear this, and thank you to those of you who are considering joining, especially those of you, again, who are cutting apart magazines and will join so they can be part of Mikah's Crafting Corner. Thank you for tuning in this week. If you want to follow me online, I am at Mikah Sargent on many a social media network, or you can head to chihuahua.coffee, that's C-H-I-H-U-A-H-U-A dot coffee, where I've got links to the places I'm most active online. Check out my other shows: again, Hands-On Mac, which publishes today; iOS Today, which publishes today; and Ask the Tech Guys, which publishes later on Sunday. I do that show with Leo Laporte, where we take your tech questions live on air and do our best to answer them. And I will see you again next week for another episode of Tech News Weekly. Bye-bye.
