Feed aggregator

Windows 10 users who don’t want to upgrade to Windows 11 get new lifeline from Microsoft

Techradar - Tue, 06/24/2025 - 12:00
  • Microsoft has launched a wizard to help Windows 10 devices stay secure
  • It’s only intended as a temporary solution, though
  • Windows 10 support ends later this year

Windows 10 has been around for almost a decade now, but official support is due to end on October 14 this year. Yet that doesn’t have to be the end of the road, as Microsoft has just announced a new process for anyone who needs a little more time to switch to Windows 11.

The updates are part of Microsoft’s Extended Security Updates (ESU) program, which brings monthly critical and important security patches to Windows 10 users for one year after official support ends. Microsoft says this is only meant to be a short-term solution, as it doesn’t include non-security updates or new features.

With today’s change, there are now a few new ways to get started. For individuals, there’s a new enrollment wizard that will give you three options: use Windows Backup to sync all your settings to the cloud; redeem 1,000 Microsoft Rewards points to get started; or pay a one-off fee of $30.

After you’ve picked an option and followed the instructions, your Windows 10 PC will be enrolled. ESU coverage for personal computers lasts from October 15, 2025 until October 13, 2026. The enrollment wizard is currently available through the Windows Insider Program; it will reach regular Windows 10 users in July and roll out on a wider basis in mid-August.

Time to upgrade

Back view of a man using a laptop with Windows 11's Microsoft Store app open

(Image credit: Foxy burrow / Shutterstock / Microsoft)

The ESU changes aren’t just coming to individual Windows 10 users. Commercial organizations can pay $61 per device to subscribe to the ESU program for a year. This can be renewed annually for up to three years, although Microsoft warns that the cost will increase each year. Businesses can sign up today via the Microsoft Volume Licensing Program, while Cloud Service Providers will begin offering enrollment starting September 1.

As for Windows 10 devices that are accessing Windows 11 Cloud PCs via Windows 365 and virtual machines, these will be granted access to ESU free of charge and will receive security updates automatically, with no extra actions required.

In a way, Microsoft’s announcement highlights the struggles the company has had with getting people to upgrade to Windows 11. Microsoft first announced that it would kill off Windows 10 way back in June 2021, and yet there are still people and organizations that have not made the switch, despite many years of prompts and warnings.

For some people – especially those with mission-critical devices or large fleets of computers – upgrading to Windows 11 might be a herculean task. But if you’re able to make the switch, you really should do so to ensure you keep getting all the latest updates. We’ve even got a guide on upgrading to Windows 11 to help you through the process.

You might also like

Now it’s dogs doing Olympic diving in the next AI video craze to sweep the Internet from the Veo 3 rival Hailuo 02

Techradar - Tue, 06/24/2025 - 07:32
  • New diving dogs AI-generated video follows the cat Olympics craze
  • You can try Hailuo 02 yourself for free
  • Are dogs better at 'diving' than cats? You decide

Hot on the heels (or should that be paws?) of the AI-generated ‘cats doing Olympic diving’ video that broke the Internet a few days ago comes the natural follow-up.

Yes, it’s dogs doing Olympic diving, which opens up the possibility of a debate on who does it better - cats or dogs?

[Embedded TikTok video from @stanislav_laurier]

Created by TikTok and Hailuo 02 user Stanislav Laurier, the video features the same impressive physics and realistic depictions of dogs that made the cat video so successful in the first place. The way the dogs bounce on the diving board before launching themselves into a spin makes this a truly impressive piece of AI work.

And of course, the dogs look just as realistic as the cats as they walk along the diving board. It’s only when you see them doing impossible spins that you realize that this must be AI.

Like the cat video, this was created in a new Veo 3 and Sora rival called Hailuo 02, and effortlessly demonstrates how far AI video has come.

On the podium

After a few impressive dives, the video ends with a winners' podium showing off which dogs got third, second, and first place. Here, AI lets itself down slightly, as it says "1nd" and "2st" on the podium. It's amazing that it can get all the complicated physics of spinning dogs correct, but can't get some simple text right.

The video was posted on TikTok and received quite a few comments, including one from Laura Smith, who perhaps hadn’t quite caught on that the video was made with AI: “Wowww!!!! This is so amazing that these clever dogs can do this!”

Other users seem to have worked it out, though, like Kaia : 3, who said, “I’m crying, I thought this was real until the Pomeranian started spinning.”

Try it yourself

Hailuo 02 was created by Chinese AI video developer MiniMax and debuted earlier this summer.

You’ll need to create an account to use Hailuo 02 (it let me log in with my Google account), but after that, you can give Hailuo a go yourself for free. I asked it to create “A cat throwing a shot put in the Kitty Olympics 2026”.

As a “non-member” (subscriptions are available, starting at $95.99 – about £70/AU$147 – a year), I got 500 free points valid for the next three days. I had to wait in a four-minute queue, which was more than acceptable, before it started generating the video. After a couple of minutes, my video was ready, having used up only 25 of my points.

I’ll admit that it doesn’t look great, but that was my first attempt. More time invested in refining the very simple prompt I used would produce much better results.

So, who do you think takes the prize for best Olympic diving? Dogs or cats? Comment with your opinion below, and let’s not pretend that this isn’t exactly what AI was created for.

You might also like

Microsoft's 'if you can't beat them, join them' approach to the threat of Steam in the new Xbox PC app is a great idea

Techradar - Tue, 06/24/2025 - 07:00
  • Microsoft's improved Windows 11 Xbox PC app will be available for Xbox Insiders
  • Its Aggregated Game Library will allow users to access games on multiple storefronts in one app
  • It's going up against SteamOS and its game library setup

Microsoft's ROG Xbox Ally handheld gaming PCs are set for release later this summer, alongside a significant Xbox app upgrade – and it appears that our first taste of the handheld-friendly app is closer than ever.

Announced on Xbox Wire, Microsoft's new Aggregated Game Library will be available for Xbox Insiders to preview, leading up to its full launch alongside the ROG Xbox Ally handhelds. It will let users launch games from Steam, Battle.net, Epic Games, and other storefronts, all within the Xbox app, essentially emulating Valve's SteamOS.

It's set to act as a direct competitor to Valve's efforts at creating a handheld-friendly gaming experience, first with the Steam Deck, and now with the Legion Go S and other handhelds without an official SteamOS license. Fans (myself included) have long pleaded with Microsoft for a portable Windows 11 mode, and I couldn't be happier to see it doing just that.

However, it's evident that Microsoft has a lot of work ahead in improving Windows 11 to compete with SteamOS. We already know that gaming performance on SteamOS is better than on Windows 11 – and while we still need to see the new Xbox app in action, it may have some catching up to do.

While Windows 11 has the advantage of running most multiplayer games that use anti-cheat, there's a strong chance of this compatibility improving on Linux, because SteamOS is making its way to handhelds beyond the Steam Deck. Not to mention, Splitgate 2's developers tweaked the game's anti-cheat to make it playable on SteamOS, so others may follow suit.

Analysis: I may not turn my back on SteamOS, but Microsoft's move is a welcome one

ROG Xbox Ally & Legion Go S

(Image credit: Future)

Let's get one thing straight: I'm absolutely all-in for the new Xbox app, and I'll more than likely be using it on my dual-booted Asus ROG Ally. However, I'm keeping my expectations low, and I don't think the new upgrade will convince me to move away from SteamOS completely.

Now, you could say it's an unfair judgment as the upgrades aren't available yet – but fans have been asking Microsoft to consider a portable handheld mode for a long while now, so the onus isn't on the fans, but rather Microsoft itself.

Valve's SteamOS has multiple years of work under its belt, with optimizations pushing for a smoother and more customizable handheld experience. Tools like Decky Loader (which isn't affiliated with Valve) are a massive part of that – and I hope that Microsoft can replicate a smooth and customizable experience within the Xbox app.

The preview should arrive later this week, and you can be certain that I'll be testing it on my Asus ROG Ally...

You might also like...

Google Earth is now an even better time-travel machine thanks to this Street View upgrade – and I might get hooked

Techradar - Tue, 06/24/2025 - 07:00
  • Google Earth is celebrating its 20th birthday this month
  • It's just added a new historical Street View feature for time-traveling
  • Pro users will also get AI-powered upgrades to help with urban planning

Google Earth has just turned 20 years old and the digital globe has picked up a feature that could prove to be an addictive time-sink – historical Street View.

Yes, we've been able to time-travel around our cities and previous homes for years now on Google Maps, but Google Earth feels like a natural home for the feature, given its more immersive 3D views and satellite imagery. And from today, Google Earth now offers Street View with that historical menu bar.

That means you can visit famous buildings and landmarks (like the Vessel building in New York City above) and effectively watch their construction unfold. To do that, find a location in Google Earth, drag the pegman icon (bottom right) onto the street, click 'see more dates', and use the film strip menu to choose the year.

Around major cities and landmarks, Street View images are updated so regularly now that their snapshots are often only months apart, but in most areas they're renewed every one to two years. That opens up some major nostalgia potential, particularly if the shots happen to have frozen someone you know in time.

Bringing history to life

A timelapse of the Notre Dame cathedral through the years from Google Earth

The Notre-Dame de Paris cathedral (above) is a particularly interesting subject for a Google Earth aerial timelapse (Image credit: Google)

To celebrate Earth's birthday, Google has also made timelapses of its favorite historical aerial views, which stitch together satellite photos over several decades. This feature became available in the web and mobile versions of Earth last year – to find it, go to the layers icon and turn on the 'historical imagery' toggle.

One fascinating example is the aerial view of the Notre-Dame de Paris cathedral (above), which Google made exclusively for us. It shows the gothic icon from 1943 through to its unfortunate fire in 2019, followed by its recent reconstruction.

But other examples that Google has picked out include a view of Berlin, from its post-war devastation to the Berlin Wall and its modern incarnation, plus the stunning growth of Las Vegas and San Francisco over the decades.

There's a high chance that Google Earth will, once again, send me down an hours-long rabbit hole with these Street View and historical imagery tricks. But it's also giving Pro users some new AI-driven features in "the coming weeks", with features like 'tree canopy coverage' and heatmaps showing land surface temperatures underlining Earth's potential for urban planning.

That perhaps hints at the Gemini-powered treats to come for us non-professional users in the future. But for now, I have more than enough Earth-related treasure hunts to keep me occupied.

You might also like

On Broadway, A.I. and High-Tech Storytelling Is Having a Moment

NYT Technology - Tue, 06/24/2025 - 04:00
Videos and projections depicting an A.I.-generated actor, the digital memories of robots, a redwood forest and more: High-tech storytelling is having a moment.

Mattel's going to make AI-powered toys, kids’ rights advocates are worried

MWP Page - Mon, 06/23/2025 - 22:21
https://www.malwarebytes.com/blog/news/2025/06/mattels-going-to-make-ai-powered-toys-kids-rights-advocates-are-worried


Toy company Mattel has announced a deal with OpenAI to create AI-powered toys, but digital rights advocates have urged caution.


Ford Will Keep Battery Factory Even if Republicans Ax Tax Break

NYT Technology - Mon, 06/23/2025 - 19:37
Ford Motor said it would open a new plant in Michigan that could become ineligible for federal incentives under a policy bill championed by President Trump and passed by the House.

Memphis streets host Nuro's autonomous vehicle trials as part of 40-city tour

Memphis Business Journal - Mon, 06/23/2025 - 16:50
Nuro, a California-based autonomous driving company, has chosen Memphis as one of its testing grounds.

Media Matters Sues F.T.C. Over Advertising Investigation

NYT Technology - Mon, 06/23/2025 - 14:48
The liberal advocacy organization said in a lawsuit that the Federal Trade Commission’s inquiry into boycotts with other advertising groups was “retribution.”

Tesla Begins Limited Robotaxi Service in Austin

NYT Technology - Sun, 06/22/2025 - 14:39
The vehicles will have safety monitors and may not operate in bad weather, making them more restricted than the fully autonomous vehicles promised by Elon Musk.

Forget about SEO - Adobe already has an LLM Optimizer to help businesses rank on ChatGPT, Gemini, and Claude

Techradar - Sun, 06/22/2025 - 10:27
  • Adobe wants to help decide how your brand shows up inside ChatGPT and other AI bots
  • LLM Optimizer promises SEO-like results in an internet where search engines no longer rule
  • Your FAQ page could now influence what AI chatbots say about your brand to customers

Popular AI tools such as ChatGPT, Gemini, and Claude are increasingly replacing traditional search engines in how people discover content and make purchasing decisions.

Adobe is attempting to stay ahead of the curve by launching LLM Optimizer, which it claims can help businesses improve visibility across generative AI interfaces by monitoring how brand content is used and providing actionable recommendations.

The tool even claims to assign a monetary value to potential traffic gains, allowing users to prioritize optimizations.

Shift from search engines to AI interfaces

Adobe LLM Optimizer

(Image credit: Adobe)

With a reported 3,500% increase in generative AI-driven traffic to U.S. retail sites and a 3,200% spike to travel sites between July 2024 and May 2025, Adobe argues that conversational interfaces are no longer a trend but a transformation in consumer behavior.

“Generative AI interfaces are becoming go-to tools for how customers discover, engage and make purchase decisions, across every stage of their journey,” said Loni Stark, vice president of strategy and product at Adobe Experience Cloud.

The core of Adobe LLM Optimizer lies in its monitoring and benchmarking capabilities, as it claims to give businesses a “real-time pulse on how their brand is showing up across browsers and chat services.”

The tool can help teams identify the most relevant queries for their sector, understand how their offerings are presented, and compare against competitors on high-value keywords, using this data to refine content strategies.

A recommendation engine detects gaps in brand visibility across websites, FAQs, and even external platforms like Wikipedia.

It suggests both technical fixes and content improvements based on attributes that LLMs prioritize, such as accuracy, authority, and informativeness.

These changes can be implemented “with a single click,” including code or content updates, which suggests an effort to reduce dependency on lengthy development cycles.
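Adobe hasn't published how the monitoring works, but the basic idea of measuring how often a brand surfaces in chatbot answers can be sketched. Everything below is illustrative: `ask_model` is a hypothetical stand-in for a real chat-model API call, and the canned responses are made up for demonstration.

```python
# Minimal sketch (not Adobe's code) of brand-visibility monitoring:
# run a set of sector-relevant queries through a model and measure
# the share of responses that mention the brand at all.

def brand_visibility(queries, brand, ask_model):
    """Return the fraction of query responses that mention the brand."""
    hits = sum(1 for q in queries if brand.lower() in ask_model(q).lower())
    return hits / len(queries)

# Canned stand-in responses; a real tool would call a live LLM API here.
canned = {
    "best photo editing software": "Many users recommend Photoshop by Adobe.",
    "free image editors": "GIMP and Krita are popular free options.",
}

score = brand_visibility(list(canned), "Adobe", canned.get)
print(f"Brand mentioned in {score:.0%} of responses")  # 50% here
```

A production system would obviously need far more: sampling each query many times, tracking how the brand is characterized rather than just counted, and comparing the same numbers for competitors.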

It is clear the best SEO tool tactics may need to adapt, especially as AI chat interfaces do not operate with the same crawling and ranking logic as traditional search engines.

For users who already rely on the best browser for private browsing or privacy tools to avoid data profiling, the idea that businesses are now optimizing to appear inside chatbots could raise questions about how content is sourced and attributed.

Adobe insists that the tool supports “enterprise-ready frameworks” and has integration pathways for agencies and third-party systems, though the wider implications for transparency and digital content ethics remain to be seen.

You might also like

How Vera Rubin Telescope Scientists Will Deal With 60 Million Billion Bytes of Imagery

NYT Technology - Fri, 06/20/2025 - 20:53
The Vera C. Rubin Observatory will make the study of stars and galaxies more like the big data-sorting exercises of contemporary genetics and particle physics.

I tried Google’s new Search Live feature and ended up debating an AI about books

Techradar - Fri, 06/20/2025 - 20:30
  • Google’s new Search Live feature lets users hold real-time voice conversations with an AI-powered version of Search
  • The Gemini-powered AI attempts to simulate a friendly and knowledgeable human
  • Google is keen to have all roads lead to Gemini, and Search Live could help entice people to try the AI companion without realizing it

Google's quest to incorporate its Gemini into everything has a new outlet linked to its most central product. The new Google Search Live essentially gives Google Search's AI Mode a Gemini-powered voice.

It’s currently available to users in the U.S. via the Google app on iOS and Android, and it invites you to literally talk to your search bar. You speak, and it speaks back; unlike the half-hearted AI assistants of yesteryear, this one doesn’t stop listening after just one question. It’s a full dialogue partner, unlike the non-verbal AI Mode.

It also works in the background, which means I could leave the app during the chat to do something else on my phone, and the audio didn’t pause or glitch. It just kept going, as if I were on the phone with someone.

Google refers to this system as “query fan-out,” which means that instead of just answering your question, it also quietly considers related queries, drawing in more diverse sources and perspectives. You feel it, too. The answers don’t feel boxed into a single form of response, even on relatively straightforward queries like the one about linen dresses in Google's demo.
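Google hasn't published implementation details, but the fan-out idea itself is simple to illustrate: expand one question into related sub-queries, retrieve results for each, and merge the de-duplicated pool of sources. The toy keyword index and sub-query templates below are assumptions for illustration, not Google's actual system.

```python
# Illustrative sketch of "query fan-out": one user question becomes
# several related sub-queries, and results are merged across them.

def fan_out(query):
    """Generate related sub-queries; a real system would use an LLM."""
    return [
        query,
        f"{query} pros and cons",
        f"{query} alternatives",
    ]

def retrieve(sub_query, index):
    """Toy retrieval: return documents whose keywords overlap the query."""
    words = set(sub_query.lower().split())
    return [doc for doc, keywords in index.items() if words & keywords]

def search_live(query, index):
    seen, merged = set(), []
    for sub in fan_out(query):
        for doc in retrieve(sub, index):
            if doc not in seen:  # de-duplicate across sub-queries
                seen.add(doc)
                merged.append(doc)
    return merged

index = {
    "linen-buying-guide": {"linen", "dresses", "guide"},
    "cotton-vs-linen": {"linen", "alternatives", "cotton"},
    "dress-reviews": {"dresses", "pros", "cons"},
}
print(search_live("linen dresses", index))
```

The payoff is visible even in this toy: the "alternatives" and "pros and cons" sub-queries pull in documents the literal query alone would miss, which is why answers feel less boxed into a single form of response.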

AI Search Live

To test Search Live out, I tapped the “Live” icon and asked for speculative fiction books I should read this summer. The genial voice offered a few classic options and a few more recent ones. I then opened Pandora's box by asking it about its own favorites. Surprisingly, it had a few. I then decided to push it a bit and tell it it was wrong about the best fantasy books, and listed a few of my own. Suddenly, I found myself in a debate not only about the best examples of the genre, but also about how to define it.

We segued from there to philosophical and historical opinions about elvish empathy and whether AI should be compared to genies or the mythical brownies that do housework in exchange for cream. Were it not for the smooth, synthetic voice and its relentless good cheer, I might have thought I was actually having an idle argument with an acquaintance over nothing important.

It's obviously very different from the classic Google Search and its wall of links. If you look at the screen, you still see the links, but the focus is on the talk. Google isn't unique with a vocal version of its AI, as ChatGPT and others proffer similar features. Google Search Live does come off as smoother, and I didn't have to rephrase my questions or repeat myself once in 10 minutes. Being integrated with Google’s actual search systems might help keep things grounded. It’s like talking to someone who always has a stack of citations in their back pocket.

I don't think Search Live is what people will use to replace their usual online search methods, but there’s a real accessibility benefit to it. For people who can’t comfortably type or see, voice-first tools like this open new doors. The same goes for kids asking homework questions, or for someone cooking dinner who has a random question but doesn't want to pause to wipe flour off their screen.

There’s a tradeoff, of course, in terms of how people browse the web. If this kind of conversational AI becomes the dominant interface for search on Google, what happens to web traffic? Publishers already feel like they’re shouting into the void when their content is skimmed by AI, and some are hiring lawyers to fight it. What will the AI search if its sources shrink or vanish? It's a complicated question, worthy of debate. I'll have to see how Search Live lays out the arguments.

You might also like

Forget virtual pets – the next AI video craze is cats doing Olympic diving, and it’s all thanks to this new Google Veo 3 rival

Techradar - Fri, 06/20/2025 - 18:30
  • MiniMax’s new Hailuo 02 AI video model has sparked a viral trend of cats performing Olympic dives
  • The videos blend advanced physics-based animation with internet absurdity
  • Though not the quality of Google Veo 3, Hailuo 02 is rapidly gaining in popularity among casual AI users

Watching the cat walk onto the diving board, I could imagine calls to the fire department or a huge crowd rushing to save it, causing a catastrophe, while the feline simply blinked at the tragedy. Instead, the cat executed an Olympic-caliber triple somersault into the pool. If it weren't for the impossible feat and my awareness that it was an AI-generated video, I'd be checking to see if there was a Freaky Friday situation with the U.S. swim team.

Instead, it's a hugely viral video produced using Chinese AI video developer MiniMax's Hailuo 02 model. The diving cats may not be real, but the millions of people watching them are, and that's enough for Hailuo 02 to elbow its way into the competition for AI video dominance, alongside Google Veo 3 and OpenAI's Sora, among many others.

MiniMax debuted Hailuo 02 earlier this summer, but the virality of the faux Olympics video suggests it's going to become a very popular tool for turning still images or text prompts into videos. The model only makes five- to ten-second clips for now, but its motion customization, camera effects, and impressive imitation of real-world physics, like the movement of fur or splashing of water, make it more intriguing.

Testing Hailuo 02 on cats diving came about seemingly organically when X user R.B Keeper (presumably not their real name) tried a prompt they'd seen tested on Veo 3. The idea spread from there to a version that garnered millions of views in a matter of hours and appeared on TikTok, Reddit, and Instagram, with numerous variations.

[Embedded Instagram post shared by Pablo Prompt (@pabloprompt)]

AI video battles

Hailuo 02 uses frame-by-frame physics simulation, attention-mapped motion prompts, and multimodal input parsing. In other words, if you type a strange idea, the model will do its best to make it look and behave like it would in an approximation of the real world.

Notably, Hailuo 02 is reportedly far cheaper and faster than Veo 3, though perhaps without quite the high-end gloss. Still, it's more accessible, not being limited to enterprise services and beta programs like Veo 3.

The cat diving videos are the apex of a very specific Venn diagram of internet trends, accessible tools, and low-stakes fun. You don’t need to be a professional editor or own a supercomputer to try it. And more upgrades are on the horizon. MiniMax has outlined plans to integrate synchronized audio, lighting, and texture control, as well as longer clips.

As for Google Veo 3 and other major players, they have their professional niche for now. But if they want to widen their appeal to the masses, they might look to what MiniMax and smaller developers like Midjourney, with its V1 video model, are doing. Hailuo 02 is the kind of tool that will get people, like the cats, to dive in.

You might also like

Europe’s Growing Fear: How Trump Might Use U.S. Tech Dominance Against It

NYT Technology - Fri, 06/20/2025 - 13:32
To comply with a Trump executive order, Microsoft recently helped suspend the email account of an International Criminal Court prosecutor in the Netherlands who was investigating Israel for war crimes.

I adore my Meta Ray-Bans, but these new Oakley smart glasses are making me jealous

Techradar - Fri, 06/20/2025 - 08:00
  • Meta and Oakley are officially making smart glasses
  • They're based on Oakley's HSTN glasses design
  • Launching later this summer, they'll start at $399 / £399

It’s official. Following a teaser earlier this week, Oakley and Meta have made smart glasses, and as an owner of the almost two-year-old Ray-Ban Meta smart specs, I’m green with envy.

Later this summer, six pairs of Oakley smart specs will be available in the US, UK, Australia, Canada, Ireland, and several other European countries, starting at $399 / £399 (we’re still waiting for Australian pricing details).

Limited-Edition Oakley Meta HSTN (featuring gold accents and 24K PRIZM polarized lenses) will be available for preorder sooner – from July 11 – for $499 / £499 (again, we’re waiting for Australian pricing).

The Oakley Meta HSTN smart glasses, shown in four images: being used by athletes (the limited edition model), in their case, from the side, and in a close-up of the camera (Image credit: Oakley / Meta)

Why am I jealous? Well, for a start, these smart glasses are set to boast a few important hardware and software upgrades over my Ray-Bans.

First is an upgrade to the camera. The Ray-Bans have a built-in 12MP snapper which can capture 1440 x 1920 video at 30fps. Meta is promising these Oakley specs will record Ultra HD (3K) video, perhaps making them possible alternatives to the best action cameras for people who want to record their sporting stunts and look good doing it.

Secondly, they’ll be able to record for longer thanks to a boosted battery life. My Meta Ray-Bans boast a four-hour battery life for ‘standard use’ – playing music and having Meta AI answer the odd question – and they should last about that long; as soon as you start capturing videos, though, their battery drains much faster.

With the case recharging them, the Ray-Bans can get up to 36 hours of total use.

Meta is doubling the glasses’ built-in battery with its Oakleys, promising they’ll last for eight hours with standard use, and 19 hours on standby. Meta adds that you can recharge them to 50% in just 20 minutes with their case, and says the charging case holds up to 48 hours of charge.

The Oakley Meta HSTN smart glasses being used by an athlete on a hike

(Image credit: Oakley / Meta)

Finally, Meta’s AI will still be able to answer various questions for you and use the camera for context to your queries, as we’ve seen from the Ray-Ban Meta smart glasses, but it will also get some new sporting-related knowledge.

Golfers can ask about wind speed, surfers can check the surf conditions, and you can also ask the glasses for ways to improve your sporting technique.

As with all these promises, we’ll want to test the Oakley Meta HSTNs for ourselves to see if they live up to the hype, but one way we can already see they’re excelling is on the design side.

Damn, are these things gorgeous.

The Oakley Meta HSTN smart glasses being used by a skateboarder

(Image credit: Oakley / Meta)

Interestingly, the design is one major detail the leaks got wrong. Instead of Oakley’s Sphaera visor-style shades, Meta has opted for Oakley’s HSTN glasses (I’m told it’s pronounced how-stuhn).

These glasses look like more angular Ray-Ban Wayfarers – you know, one of Meta’s existing smart glasses designs – but they do boast a serious design upgrade for athletes that you won’t find on Meta’s non-Oakley specs: Oakley’s PRIZM lenses.

Without getting too technical, PRIZM lenses are designed to provide increased contrast to what you can see. There are different models for snow sports, cycling, and other sports (as well as everyday usage), but each is designed to highlight key details that might be relevant to the wearer, such as the contours in different snow terrains, or transitions in trail types and possible road hazards.

If PRIZM lenses sound like overkill, you can also get a pair with transition lenses or with completely clear lenses.

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

The Ray-Ban specs still look great too (Image credit: Meta)

I swapped my always-shaded Ray-Bans for a pair with transition lenses, and the difference is stark. Because they’re clear in darker environments and shaded in brighter weather, I’ve found it so much easier to use the transition lens pair as everyday smart glasses. Previously, I could only use my shaded pair in the sun, and that doesn’t come out all too often here in the UK.

The complete list of six Oakley smart glasses options is:

  • Oakley Meta HSTN Warm Grey with PRIZM Ruby Lenses
  • Oakley Meta HSTN Black with PRIZM Polar Black Lenses
  • Oakley Meta HSTN Brown Smoke with PRIZM Polar Deep Water Lenses
  • Oakley Meta HSTN Black with Transitions Amethyst Lenses
  • Oakley Meta HSTN Clear with Transitions Grey Lenses
  • Oakley Meta HSTN Black with Clear Lenses

The Oakley Meta HSTN smart glasses different designs all together

(Image credit: Oakley / Meta)

Beyond the style and lenses, one striking factor is that despite some serious battery upgrades, the frames don’t seem massively chunky.

Like their Ray-Ban predecessors, they’re clearly thicker than normal specs, but they don’t look too different from normal shades.

All in all, these Oakley glasses look and sound really impressive. I’m chomping at the bit to try a pair, and if you’ve been on the fence about picking up the Ray-Ban Meta glasses, these enhanced options could be what convinces you to finally get some AI-powered eyewear.

The Oakley Meta HSTN smart glasses being used by a skateboarder

(Image credit: Oakley / Meta)

You might also like

Trump Is Selling a Phone + The Start-Up Trying to Automate Every Job + Allison Williams Talks ‘M3GAN 2.0’

NYT Technology - Fri, 06/20/2025 - 06:00
“They’re calling it the T1 Phone 8002 Gold Version, which sounds kind of like a Taylor Swift album.”

New research says using AI reduces brain activity – but does that mean it's making us dumber?

Techradar - Fri, 06/20/2025 - 05:52

Amid all the debates about how AI affects jobs, science, the environment, and everything else, there's a question of how large language models impact the people using them directly.

A new study from the MIT Media Lab implies that using AI tools reduces brain activity in some ways, which is understandably alarming. But I think that's only part of the story. How we use AI, like any other piece of technology, is what really matters.

Here's what the researchers did to test AI's effect on the brain: They asked 54 students to write essays using one of three methods: their own brains, a search engine, or an AI assistant, specifically ChatGPT.

Over three sessions, the students stuck with their assigned tools. Then they swapped, with the AI users going tool-free, and the non-tool users employing AI.

EEG headsets measured their brain activity throughout, and a group of humans, plus a specially trained AI, scored the resulting essays. Researchers also interviewed each student about their experience.

As you might expect, the group relying on their brains showed the most engagement, the best memory, and the strongest sense of ownership over their work, as evidenced by how much of their essays they could quote.

The ones using AI at first had less impressive recall and brain connectivity, and often couldn’t even quote their own essays after a few minutes. When writing manually in the final test, they still underperformed.

The authors are careful to point out that the study has not yet been peer-reviewed. It was limited in scope, focusing on essay writing rather than any other cognitive activity. And the EEG, while fascinating, is better at measuring overall trends than pinpointing exact brain functions. Despite all these caveats, the message most people would take away is that using AI might make you dumber.

But I would reframe that: maybe AI isn’t dumbing us down so much as letting us opt out of thinking. Perhaps the issue isn’t the tool, but how we’re using it.

AI brains

If you use AI, think about how you used it. Did you get it to write a letter, or maybe brainstorm some ideas? Did it replace your thinking, or support it? There’s a huge difference between outsourcing an essay and using an AI to help organize a messy idea.

Part of the issue is that "AI" as we refer to it is not literally intelligent, just a very sophisticated parrot with an enormous library in its memory. But this study didn’t ask participants to reflect on that distinction.

The LLM-using group was encouraged to use the AI as they saw fit, which probably meant less thoughtful, judicious use and more copying without reading – and that’s why context matters.

Because the "cognitive cost" of AI may be tied less to its presence and more to its purpose. If I use AI to rewrite a boilerplate email, I’m not diminishing my intelligence. Instead, I’m freeing up bandwidth for things that actually require my thinking and creativity, such as coming up with this idea for an article or planning my weekend.

Sure, if I use AI to generate ideas I never bother to understand or engage with, then my brain probably takes a nap, but if I use it to streamline tedious chores, I have more brainpower for when it matters.

Think about it like this. When I was growing up, I had dozens of phone numbers, addresses, birthdays, and other details of my friends and family memorized. I had most of it written down somewhere, but I rarely needed to consult it for those I was closest to. But I haven't memorized a number in almost a decade.

I don't even know my own landline number by heart. Is that a sign I’m getting dumber, or just evidence I've had a cell phone for a long time and stopped needing to remember them?

We’ve offloaded certain kinds of recall to our devices, which lets us focus on different types of thinking. The skill isn’t memorizing, it’s knowing how to find, filter, and apply information when we need it. It's sometimes referred to as "extelligence," but really it's just applying brain power to where it's needed.

That’s not to say memory doesn’t matter anymore. But the emphasis has changed. Just like we don’t make students practice long division by hand once they understand the concept, we may one day decide that it’s more important to know what a good form letter looks like and how to prompt an AI to write one than to draft it line by line from scratch.

Humans are always redefining intelligence. There are a lot of ways to be smart, and knowing how to use tools and technology is one important measure of smarts. At one point, being smart meant knowing how to knap flint, make Latin declensions, or work a slide rule.

Today, it might mean being able to collaborate with machines without letting them do all the thinking for you. Different tools prioritize different cognitive skills. And every time a new tool comes along, some people panic that it will ruin us or replace us.

The printing press. The calculator. The internet. All were accused of making people lazy thinkers. All turned out to be a great boon to civilization (well, the jury is still out on the internet).

With AI in the mix, we’re probably leaning harder into synthesis, discernment, and emotional intelligence – the human parts of being human. We don't need the kind of scribes who are only good at writing down what people say; we need people who know how to ask better questions.

It means knowing when to trust a model and when to double-check, and turning a tool that’s capable of doing the work into an asset that helps you do it better.

But none of it works if you treat the AI like a vending machine for intelligence. Punch in a prompt, wait for brilliance to fall out? No, that's not how it works. And if that's all you do with it, you aren't getting dumber, you just never learned how to stay in touch with your own thoughts.

In the study, the LLM group’s lower essay ownership wasn’t just about memory. It was about engagement. They didn’t feel connected to what they wrote because they weren’t the ones doing the writing. That’s not about AI. That’s about using a tool to skip the hard part, which means skipping the learning.

The study is important, though. It reminds us that tools shape thinking. It nudges us to ask whether we are using AI tools to expand our brains or to avoid using them. But to claim AI use makes people less intelligent is like saying calculators made us bad at math. If we want to keep our brains sharp, maybe the answer isn’t to avoid AI but to be thoughtful about using it.

The future isn't human brains versus AI. It’s about humans who know how to think with AI and any other tool, and avoiding becoming someone who doesn't bother thinking at all. And that’s a test I’d still like to pass.

You might also like

Midjourney just dropped its first AI video model and Sora and Veo 3 should be worried

Techradar - Fri, 06/20/2025 - 04:41
  • Midjourney has launched its first AI video model, V1.
  • The model lets users animate images into five-second motion clips.
  • The tool is relatively affordable and a possible rival for Google Veo or OpenAI’s Sora.

Midjourney has long been a popular AI image wizard, but now the company is making moves and movies with its first-ever video model, simply named V1.

This image-to-video tool is now available to Midjourney's 20 million-strong community, letting them turn their images into five-second clips, which can be extended in five-second increments up to 20 seconds.

Despite being a brand new venture for Midjourney, the V1 model has enough going on to at least draw comparisons to rival models like OpenAI’s Sora and Google’s Veo 3, especially when you consider the price.

For now, Midjourney V1 is in web beta, where you can spend credits to animate any image you create on the platform or upload yourself.

To make a video, you simply generate an image in Midjourney like usual, hit “Animate,” choose your motion settings, and let the AI go to work.

The same goes with uploading an image; you just have to mark it as the start frame and type in a custom motion prompt.

You can let the AI decide how to move it, or you can take the reins and describe how you want the motion to play out. You can pick between low motion or high motion depending on whether you want a calm movement or a more frenetic scene, respectively.

The results I've seen certainly fit into the current moment in AI video production, both good and bad. The uncanny valley is always waiting to ensnare users, but there are some surprisingly good examples from both Midjourney and initial users.

AI video battles

Midjourney isn’t trying to compete head-on with Sora or Veo in terms of technical horsepower. Those models are rendering cinematic-quality 4K footage with photorealistic lighting and long-form narratives based solely on text. They’re trained on terabytes of data and emphasize frame consistency and temporal stability that Midjourney is not claiming to offer.

Midjourney’s video tool isn’t pretending to be Hollywood’s next CGI pipeline. The pitch is more about being easy and fun to use for independent artists or tinkerers in AI media.

And it really does come out cheap. According to Midjourney, one video job costs about the same as an upscale – roughly one image’s worth of cost per second of video.

That’s 25 times cheaper than most AI video services on the market, according to Midjourney and a cursory examination of other alternatives.
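Midjourney’s stated rule of thumb – one image’s worth of cost per second of video – makes the arithmetic easy to sketch. The credit figure below is an illustrative assumption, not a published price:

```python
# Sketch of Midjourney's stated pricing rule: a video job costs roughly
# one image generation per second of footage. The credit value here is an
# assumed placeholder, not Midjourney's actual price.
IMAGE_COST_CREDITS = 1.0  # hypothetical credits per image generation

def video_job_cost(seconds: float, image_cost: float = IMAGE_COST_CREDITS) -> float:
    """Estimated credits for a clip, at one image-equivalent per second."""
    return seconds * image_cost

# A default five-second clip costs about five images' worth of credits;
# extending it to the 20-second maximum costs about twenty.
print(video_job_cost(5))   # 5.0
print(video_job_cost(20))  # 20.0
```

Under that rule, even a maxed-out 20-second clip costs only as much as twenty still images, which is where the claimed savings over rival services come from.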

That's probably for the best, since a lot of Hollywood is going after Midjourney in court. The company is currently facing a high-stakes lawsuit from Disney, Universal, and other studios over claims it trained its models on copyrighted content.

For now, Midjourney's AI generators for images and video remain active, and the company has plans to expand its video production capabilities. Midjourney is teasing long-term plans for full 3D rendering, scene control, and even immersive world exploration. This first version is just a stepping stone.

Advocates for Sora and Veo probably don't have to panic just yet, but maybe they should be keeping an eye on Midjourney's plans, because while they’re busy building the AI version of a studio camera crew, Midjourney just handed a magic flipbook to anyone with a little cash for its credits.

You might also like

Hybrid Cars, Once Derided and Dismissed, Have Become Popular

NYT Technology - Fri, 06/20/2025 - 04:01
Automakers and car buyers are taking a second, harder look at hybrids after leaving them behind for electric vehicles.

Pages