Techradar
New judge’s ruling makes OpenAI keeping a record of all your ChatGPT chats one step closer to reality
- A federal judge rejected a ChatGPT user's petition against her order that OpenAI preserve all ChatGPT chats
- The order followed a request by The New York Times as part of its lawsuit against OpenAI and Microsoft
- OpenAI plans to continue arguing against the ruling
OpenAI will be holding onto all of your conversations with ChatGPT and possibly sharing them with a lot of lawyers, even the ones you thought you deleted. That's the upshot of an order from the federal judge overseeing a lawsuit brought against OpenAI by The New York Times over copyright infringement. Judge Ona Wang upheld her earlier order to preserve all ChatGPT conversations for evidence after rejecting a motion by ChatGPT user Aidan Hunt, one of several motions from users asking her to rescind the order over privacy and other concerns.
Judge Wang told OpenAI to “indefinitely” preserve ChatGPT’s outputs since the Times pointed out that would be a way to tell if the chatbot has illegally recreated articles without paying the original publishers. But finding those examples means hanging onto every intimate, awkward, or just private communication anyone's had with the chatbot. Though what users write isn't part of the order, it's not hard to imagine working out who was conversing with ChatGPT about what personal topic based on what the AI wrote. In fact, the more personal the discussion, the easier it would probably be to identify the user.
Hunt pointed out that he had no warning that this might happen until he saw a report about the order in an online forum, and he is now concerned that his conversations with ChatGPT might be disseminated, including “highly sensitive personal and commercial information.” He asked the judge to vacate the order or modify it to exclude especially private content, such as conversations conducted in private mode or those discussing medical or legal matters.
According to Hunt, the judge was overstepping her bounds with the order because “this case involves important, novel constitutional questions about the privacy rights incident to artificial intelligence usage – a rapidly developing area of law – and the ability of a magistrate [judge] to institute a nationwide mass surveillance program by means of a discovery order in a civil case.”
Judge Wang rejected his request because his concerns aren't related to the copyright issue at hand. She emphasized that the order is about preservation, not disclosure, and that it's hardly unique or uncommon for the courts to tell a private company to hold onto certain records for litigation. That’s technically correct, but, understandably, an everyday person using ChatGPT might not feel that way.
She also seemed to particularly dislike the mass surveillance accusation, quoting that section of Hunt's petition and slamming it with the legal language equivalent of a diss track. Judge Wang added a "[sic]" to the quote from Hunt's filing and a footnote pointing out that the petition "does not explain how a court’s document retention order that directs the preservation, segregation, and retention of certain privately held data by a private company for the limited purposes of litigation is, or could be, a “nationwide mass surveillance program.” It is not. The judiciary is not a law enforcement agency."
That 'sic burn' aside, there's still a chance the order will be rescinded or modified after OpenAI goes to court this week to push back against it as part of the larger paperwork battle around the lawsuit.
Deleted but not gone
Hunt's other concern is that, regardless of how this case goes, OpenAI will now have the ability to retain chats that users believed were deleted and could use them in the future. There are concerns over whether OpenAI will prioritize user privacy over legal expedience. OpenAI has so far argued in favor of that privacy and has asked the court for oral arguments, set to take place this week, to challenge the retention order. The company has said it wants to push back hard on behalf of its users. But in the meantime, your chat logs are in limbo.
Many may have felt that writing into ChatGPT is like talking to a friend who can keep a secret. Perhaps more will now understand that it still acts like a computer program, and the equivalent of your browser history and Google search terms are still in there. At the very least, hopefully, there will be more transparency. Even if it's the courts demanding that AI companies retain sensitive data, users should be notified by the companies. We shouldn't discover it by chance on a web forum.
And if OpenAI really wants to protect its users, it could start offering more granular controls: clear toggles for anonymous mode, stronger deletion guarantees, and alerts when conversations are being preserved for legal reasons. Until then, it might be wise to treat ChatGPT a bit less like a therapist and a bit more like a coworker who might be wearing a wire.
Windows 10 users who don’t want to upgrade to Windows 11 get new lifeline from Microsoft
- Microsoft has launched a wizard to help Windows 10 devices stay secure
- It’s only intended as a temporary solution, though
- Windows 10 support ends later this year
Windows 10 has been around for almost a decade now, but official support is due to end on October 14 this year. Yet that doesn’t have to be the end of the road, as Microsoft has just announced a new process for anyone who needs a little more time to switch to Windows 11.
The updates are part of Microsoft’s Extended Security Updates (ESU) program, which brings monthly critical and important security patches to Windows 10 users for one year after official support ends. Microsoft says this is only meant to be a short-term solution, as it doesn’t include non-security updates or new features.
With today’s change, there are now a few new ways to get started. For individuals, there’s a new enrollment wizard that will give you three options: use Windows Backup to sync all your settings to the cloud; redeem 1,000 Microsoft Rewards points to get started; or pay a one-off fee of $30.
After you’ve picked an option and followed the instructions, your Windows 10 PC will be enrolled. ESU coverage for personal computers lasts from October 15, 2025 until October 13, 2026. The enrollment wizard is currently available in the Windows Insider Program, will be made available to regular Windows 10 users in July, and will roll out on a wider basis in mid-August.
Time to upgrade
The ESU changes aren’t just coming to individual Windows 10 users. Commercial organizations can pay $61 per device to subscribe to the ESU program for a year. This can be renewed annually for up to three years, although Microsoft warns that the cost will increase each year. Businesses can sign up today via the Microsoft Volume Licensing Program, while Cloud Service Providers will begin offering enrollment starting September 1.
As for Windows 10 devices that are accessing Windows 11 Cloud PCs via Windows 365 and virtual machines, these will be granted access to ESU free of charge and will receive security updates automatically, with no extra actions required.
In a way, Microsoft’s announcement highlights the struggles the company has had with getting people to upgrade to Windows 11. Microsoft first announced that it would kill off Windows 10 way back in June 2021, and yet there are still people and organizations that have not made the switch, despite many years of prompts and warnings.
For some people – especially those with mission-critical devices or large fleets of computers – upgrading to Windows 11 might be a herculean task. But if you’re able to make the switch, you really should do so to ensure you keep getting all the latest updates. We’ve even got a guide on upgrading to Windows 11 to help you through the process.
Now it’s dogs doing Olympic diving in the next AI video craze to sweep the Internet from the Veo 3 rival Hailuo 02
- New diving dogs AI-generated video follows the cat Olympics craze
- You can try Hailuo 02 yourself for free
- Are dogs better at 'diving' than cats? You decide
Hot on the heels (or should that be paws?) of the AI-generated ‘cats doing Olympic diving’ video that broke the Internet a few days ago comes the natural follow-up.
Yes, it’s dogs doing Olympic diving, which opens up the possibility of a debate on who does it better - cats or dogs?
Created by TikTok and Hailuo 02 user Stanislav Laurier, the video features the same impressive physics and realistic depictions of dogs that made the cat video so successful in the first place. The way the dogs bounce on the diving board before launching themselves into a spin makes this a truly impressive piece of AI work.
And of course, the dogs look just as realistic as the cats as they walk along the diving board. It’s only when you see them doing impossible spins that you realize that this must be AI.
Like the cat video, this was created in a new Veo 3 and Sora rival called Hailuo 02, and effortlessly demonstrates how far AI video has come.
On the podium
After a few impressive dives, the video ends with a winners' podium showing off which dogs got third, second, and first place. Here, AI lets itself down slightly, as it says "1nd" and "2st" on the podium. It's amazing that it can get all the complicated physics of spinning dogs correct, but can't get some simple text right.
The video was posted on TikTok and received quite a few comments, especially from Laura Smith, who perhaps hadn’t quite caught on that the video was made with AI: “Wowww!!!! This is so amazing that these clever dogs can do this!”
Other users seem to have worked it out, though, like Kaia : 3, who said, “I’m crying, I thought this was real until the Pomeranian started spinning.”
Try it yourself
Hailuo 02 was created by Chinese AI video developer MiniMax and debuted earlier this summer.
You’ll need to create an account to use Hailuo 02 (it let me log in with my Google account), but after that, you can give Hailuo a go yourself for free. I asked it to create “A cat throwing a shot put in the Kitty Olympics 2026”.
As a “non-member” (subscriptions are available, starting at $95.99 – about £70/AU$147 – a year) I got 500 free points, valid for the next three days. I had to wait in a four-minute queue, which was more than acceptable, before it started to generate the video. After a couple of minutes, my video was ready, and it had used up only 25 of my points.
I’ll admit that it doesn’t look great, but that was my first attempt. More time invested in refining the very simple prompt I used would produce much better results.
So, who do you think takes the prize for best Olympic diving? Dogs or cats? Comment with your opinion below, and let’s not pretend that this isn’t exactly what AI was created for.
Microsoft's 'if you can't beat them, join them' approach to the threat of Steam in the new Xbox PC app is a great idea
- Microsoft's improved Windows 11 Xbox PC app will be available for Xbox Insiders
- Its Aggregated Game Library will allow users to access games on multiple storefronts in one app
- It's going up against SteamOS and its game library setup
Microsoft's ROG Xbox Ally handheld gaming PCs are set for release later this summer, alongside a significant Xbox app upgrade – and it appears that our first taste of the handheld-friendly app is closer than ever.
Announced on Xbox Wire, Microsoft's new Aggregated Game Library will be available for Xbox Insiders to preview, leading up to its full launch alongside the ROG Xbox Ally handhelds. It will let users launch games from Steam, Battle.net, and other storefronts such as Epic Games, all in the Xbox app, essentially emulating Valve's SteamOS.
It's set to act as a direct competitor to Valve's efforts at creating a handheld-friendly gaming experience, first with the Steam Deck and now with the Legion Go S and other handhelds that lack an official SteamOS license. Fans and I have long pleaded with Microsoft for a portable Windows 11 mode, and I couldn't be happier to see it doing just that.
However, I'd say it's evident that Microsoft has a lot of work ahead, attempting to improve Windows 11 and going up against SteamOS. We already know that gaming performance on SteamOS is better than Windows 11's – and yes, while we still need to see the Xbox app first, it may have some catching up to do.
While Windows 11 has the advantage of running most multiplayer games with anti-cheat, there's a strong chance of this compatibility improving on Linux – and that's because SteamOS is making its way to handhelds beyond the Steam Deck. Not to mention, Splitgate 2's developers tweaked its anti-cheat to make the title playable on SteamOS, so others may follow suit.
Analysis: I may not turn my back on SteamOS, but Microsoft's move is a welcome one
Let's get one thing straight: I'm absolutely all-in for the new Xbox app, and I'll more than likely be using it on my dual-booted Asus ROG Ally. However, I'm keeping my expectations low, and I don't think the new upgrade will convince me to move away from SteamOS completely.
Now, you could say it's an unfair judgment as the upgrades aren't available yet – but fans have been asking Microsoft to consider a portable handheld mode for a long while now, so the onus isn't on the fans, but rather Microsoft itself.
Valve's SteamOS has multiple years of work under its belt, with optimizations pushing for a smoother and more customizable handheld experience. Tools like Decky Loader (which isn't affiliated with Valve) are a massive part of that – and I hope that Microsoft can replicate a smooth and customizable experience within the Xbox app.
The preview should arrive later this week, and you can be certain that I'll be testing it on my Asus ROG Ally...
Google Earth is now an even better time-travel machine thanks to this Street View upgrade – and I might get hooked
- Google Earth is celebrating its 20th birthday this month
- It's just added a new historical Street View feature for time-traveling
- Pro users will also get AI-powered upgrades to help with urban planning
Google Earth has just turned 20 years old and the digital globe has picked up a feature that could prove to be an addictive time-sink – historical Street View.
Yes, we've been able to time-travel around our cities and previous homes for years now on Google Maps, but Google Earth feels like a natural home for the feature, given its more immersive 3D views and satellite imagery. And from today, Google Earth now offers Street View with that historical menu bar.
That means you can visit famous buildings and landmarks (like the Vessel building in New York City) and effectively watch their construction unfold. To do that, find a location in Google Earth, drag the pegman icon (bottom right) onto the street, click 'see more dates', and use the film strip menu to choose the year.
Around major cities and landmarks, Street View images are updated so regularly now that their snapshots are often only months apart, but in most areas they're renewed every one to two years. That opens up some major nostalgia potential, particularly if the shots happen to have frozen someone you know in time.
Bringing history to life
To celebrate Earth's birthday, Google has also made timelapses of its favorite historical aerial views, which stitch together satellite photos over several decades. This feature became available in the web and mobile versions of Earth last year – to find it, go to the layers icon and turn on the 'historical imagery' toggle.
One fascinating example is the aerial view of the Notre-Dame de Paris cathedral, which Google made exclusively for us. It shows the gothic icon from 1943 through to its unfortunate fire in 2019, followed by its recent reconstruction.
But other examples that Google has picked out include a view of Berlin, from its post-war devastation to the Berlin Wall and its modern incarnation, plus the stunning growth of Las Vegas and San Francisco over the decades.
There's a high chance that Google Earth will, once again, send me down an hours-long rabbit hole with these Street View and historical imagery tricks. But it's also giving Pro users some new AI-driven features in "the coming weeks", with tools like 'tree canopy coverage' and heatmaps showing land surface temperatures underlining Earth's potential for urban planning.
That perhaps hints at the Gemini-powered treats to come for us non-professional users in the future. But for now, I have more than enough Earth-related treasure hunts to keep me occupied.
Forget about SEO - Adobe already has an LLM Optimizer to help businesses rank on ChatGPT, Gemini, and Claude
- Adobe wants to help decide how your brand shows up inside ChatGPT and other AI bots
- LLM Optimizer promises SEO-like results in an internet where search engines no longer rule
- Your FAQ page could now influence what AI chatbots say about your brand to customers
Popular AI tools such as ChatGPT, Gemini, and Claude are increasingly replacing traditional search engines in how people discover content and make purchasing decisions.
Adobe is attempting to stay ahead of the curve by launching LLM Optimizer, which it claims can help businesses improve visibility across generative AI interfaces by monitoring how brand content is used and providing actionable recommendations.
The tool even claims to assign a monetary value to potential traffic gains, allowing users to prioritize optimizations.
Shift from search engines to AI interfaces
With a reported 3,500% increase in generative AI-driven traffic to U.S. retail sites and a 3,200% spike to travel sites between July 2024 and May 2025, Adobe argues that conversational interfaces are no longer a trend but a transformation in consumer behavior.
“Generative AI interfaces are becoming go-to tools for how customers discover, engage and make purchase decisions, across every stage of their journey,” said Loni Stark, vice president of strategy and product at Adobe Experience Cloud.
The core of Adobe LLM Optimizer lies in its monitoring and benchmarking capabilities, as it claims to give businesses a “real-time pulse on how their brand is showing up across browsers and chat services.”
The tool can help teams identify the most relevant queries for their sector and understand how their offerings are presented, as well as enabling comparison with competitors on high-value keywords; it uses this data to refine content strategies.
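To make the monitoring idea concrete, here is a minimal sketch of how brand visibility across chatbot answers could be scored, assuming you have already collected answers for queries relevant to your sector. The sample answers and the `visibility` helper are purely illustrative, not part of Adobe's product:

```python
# Made-up chatbot answers keyed by the tracked query; a real pipeline
# would collect these from the AI interfaces being monitored.
answers = {
    "best photo editor": "Many users recommend Photoshop or Affinity Photo.",
    "free photo editor": "GIMP is a popular free option.",
}

def visibility(brand: str, answers: dict[str, str]) -> float:
    # Fraction of tracked queries whose answer mentions the brand.
    hits = sum(brand.lower() in a.lower() for a in answers.values())
    return hits / len(answers)

print(visibility("Photoshop", answers))  # 0.5
```

A real system would layer sentiment, positioning, and competitor comparisons on top of a simple mention rate like this.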
A recommendation engine detects gaps in brand visibility across websites, FAQs, and even external platforms like Wikipedia.
It suggests both technical fixes and content improvements based on attributes that LLMs prioritize, such as accuracy, authority, and informativeness.
These changes can be implemented “with a single click,” including code or content updates, which suggests an effort to reduce dependency on lengthy development cycles.
It is clear the best SEO tactics may need to adapt, especially as AI chat interfaces do not operate with the same crawling and ranking logic as standard search engines.
For users who already rely on the best browser for private browsing or privacy tools to avoid data profiling, the idea that businesses are now optimizing to appear inside chatbots could raise questions about how content is sourced and attributed.
Adobe insists that the tool supports “enterprise-ready frameworks” and has integration pathways for agencies and third-party systems, though the wider implications for transparency and digital content ethics remain to be seen.
I tried Google’s new Search Live feature and ended up debating an AI about books
- Google’s new Search Live feature lets users hold real-time voice conversations with an AI-powered version of Search
- The Gemini-powered AI attempts to simulate a friendly and knowledgeable human.
- Google is keen to have all roads lead to Gemini, and Search Live could help entice people to try the AI companion without realizing it
Google's quest to incorporate Gemini into everything has a new outlet linked to its most central product. The new Google Search Live essentially gives Google Search's AI Mode a Gemini-powered voice.
It’s currently available to users in the U.S. via the Google app on iOS and Android, and it invites you to literally talk to your search bar. You speak, and it speaks back; unlike the half-hearted AI assistants of yesteryear, this one doesn’t stop listening after just one question. It’s a full dialogue partner, unlike the non-verbal AI Mode.
It also works in the background, which means I could leave the app during the chat to do something else on my phone, and the audio didn’t pause or glitch. It just kept going, as if I were on the phone with someone.
Google refers to this system as “query fan-out,” which means that instead of just answering your question, it also quietly considers related queries, drawing in more diverse sources and perspectives. You feel it, too. The answers don’t feel boxed into a single form of response, even on relatively straightforward queries like the one about linen dresses in Google's demo.
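As a loose illustration of that fan-out pattern – not Google's actual implementation, and with the sub-query generator and search stub as hypothetical stand-ins – the idea can be sketched in a few lines of Python:

```python
# Sketch of "query fan-out": answer a query by also issuing related
# sub-queries in parallel, then merging the results. Both helpers below
# are placeholders for what would be model- and index-backed systems.
from concurrent.futures import ThreadPoolExecutor

def related_queries(query: str) -> list[str]:
    # A real system would generate these with a language model.
    return [query, f"{query} reviews", f"{query} alternatives"]

def search(query: str) -> list[str]:
    # Stub: pretend each query returns a couple of result titles.
    return [f"result about '{query}' #{i}" for i in range(2)]

def fan_out(query: str) -> list[str]:
    subqueries = related_queries(query)
    with ThreadPoolExecutor() as pool:
        batches = pool.map(search, subqueries)
    # Merge batches in order, dropping any duplicate hits.
    seen, merged = set(), []
    for batch in batches:
        for hit in batch:
            if hit not in seen:
                seen.add(hit)
                merged.append(hit)
    return merged

print(len(fan_out("linen dresses")))  # 3 sub-queries x 2 results = 6
```

The payoff is the breadth: even a narrow question pulls in sources a literal single-query search would miss.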
AI Search Live
To test Search Live out, I tapped the “Live” icon and asked for speculative fiction books I should read this summer. The genial voice offered a few classics and a few more recent options. I then opened Pandora's box by asking it about its own favorites. Surprisingly, it had a few. I then decided to push it a bit and tell it that it was wrong about the best fantasy books, listing a few of my own. Suddenly, I found myself in a debate not only about the best examples of the genre, but also about how to define it.
We segued from there to philosophical and historical opinions about elvish empathy and whether AI should be compared to genies or the mythical brownies that do housework in exchange for cream. Were it not for the smooth, synthetic voice and its relentless good cheer, I might have thought I was actually having an idle argument with an acquaintance over nothing important.
It's obviously very different from the classic Google Search and its wall of links. If you look at the screen, you still see the links, but the focus is on the talk. Google isn't unique with a vocal version of its AI, as ChatGPT and others proffer similar features. Google Search Live does come off as smoother, and I didn't have to rephrase my questions or repeat myself once in 10 minutes. Being integrated with Google’s actual search systems might help keep things grounded. It’s like talking to someone who always has a stack of citations in their back pocket.
I don't think Search Live is what people will use to replace their usual online search methods, but there's a real accessibility benefit to it. For people who can’t comfortably type or see, voice-first tools like this open new doors. Same goes for kids asking homework questions, or for someone cooking dinner who has a random question but doesn't want to pause to wipe flour off their screen.
There’s a tradeoff, of course, in terms of how people browse the web. If this kind of conversational AI becomes the dominant interface for search on Google, what happens to web traffic? Publishers already feel like they’re shouting into the void when their content is skimmed by AI, and they're hiring lawyers to fight it. What will the AI search if its sources shrink or vanish? It's a complicated question, worthy of debate. I'll have to see how Search Live lays out the arguments.
Forget virtual pets – the next AI video craze is cats doing Olympic diving, and it’s all thanks to this new Google Veo 3 rival
- MiniMax’s new Hailuo 02 AI video model has sparked a viral trend of cats performing Olympic dives
- The videos blend advanced physics-based animation with internet absurdity
- Though not matching the quality of Google Veo 3, Hailuo 02 is rapidly gaining popularity among casual AI users
Watching the cat walk onto the diving board, I could imagine calls to the fire department or a huge crowd rushing to save it, causing a catastrophe, while the feline simply blinked at the tragedy. Instead, the cat executed an Olympic-caliber triple somersault into the pool. If it weren't for the impossible feat and my awareness that it was an AI-generated video, I'd be checking to see if there was a Freaky Friday situation with the U.S. swim team.
Instead, it's a hugely viral video produced using Chinese AI video developer MiniMax's Hailuo 02 model. The diving cats may not be real, but the millions of people watching them are, and that's enough for Hailuo 02 to elbow its way into the competition for AI video dominance, alongside Google Veo 3 and OpenAI's Sora, among many others.
MiniMax debuted Hailuo 02 earlier this summer, but the virality of the faux Olympics video suggests it's going to become a very popular tool for making still images or text prompts into videos. The model only makes five- to ten-second clips for now, but its motion customization, camera effects, and impressive imitation of real-world physics, like the movement of fur or splashing of water, make it more intriguing.
Testing Hailuo 02 on cats diving came about seemingly organically when X user R.B Keeper (presumably not their real name) tried a prompt they'd seen tested on Veo 3. The idea spread from there to a version that garnered millions of views in a matter of hours and appeared on TikTok, Reddit, and Instagram, with numerous variations.
A post shared by Pablo Prompt (@pabloprompt)
AI video battles
Hailuo 02 uses frame-by-frame physics simulation, attention-mapped motion prompts, and multimodal input parsing. In other words, if you type a strange idea, the model will do its best to make it look and behave as it would in an approximation of the real world.
Notably, Hailuo 02 is reportedly far cheaper and faster than Veo 3, though perhaps without quite the high-end gloss. Still, it's more accessible, not being limited to enterprise services and beta programs like Veo 3.
The cat diving videos are the apex of a very specific Venn diagram of internet trends, accessible tools, and low-stakes fun. You don’t need to be a professional editor or own a supercomputer to try it. And more upgrades are on the horizon. MiniMax has outlined plans to integrate synchronized audio, lighting, and texture control, as well as longer clips.
As for Google Veo 3 and other major players, they have their professional niche for now. But if they want to widen their appeal to the masses, they might look to what MiniMax and smaller developers like Midjourney, with its V1 video model, are doing. Hailuo 02 is the kind of tool that will get people, like the cats, to dive in.
I adore my Meta Ray-Bans, but these new Oakley smart glasses are making me jealous
- Meta and Oakley are officially making smart glasses
- They're based on Oakley's HSTN glasses design
- Launching later this summer, they'll start at $399 / £399
It’s official. Following a teaser earlier this week, Oakley and Meta have made smart glasses, and as an owner of the almost two-year-old Ray-Ban Meta smart specs, I’m green with envy.
Later this summer, six pairs of Oakley smart specs will be available in the US, UK, Australia, Canada, Ireland, and several other European countries, starting at $399 / £399 (we’re still waiting for Australian pricing details).
Limited-Edition Oakley Meta HSTN (featuring gold accents and 24K PRIZM polarized lenses) will be available for preorder sooner – from July 11 – for $499 / £499 (again, we’re waiting for Australian pricing).
Why am I jealous? Well, for a start, these smart glasses are set to boast a few important hardware and software upgrades over my Ray-Bans.
First is an upgrade to the camera. The Ray-Bans have a built-in 12MP snapper which can capture full-HD (1440x1920 resolution) video at 30fps. Meta is promising these Oakley specs will record Ultra HD (3K) video, perhaps making them possible alternatives to the best action cameras for people who want to record their sporting stunts and look good doing it.
Secondly, they’ll be able to record for longer with a boosted battery life. My Meta Ray-Bans boast a four-hour battery life for ‘standard use.’ They can play music, Meta AI can answer the odd question, and they should last about this long; as soon as you start capturing videos, their battery will drain much faster.
With the case recharging them, the Ray-Bans can get up to 36 hours of total use.
Meta is doubling the glasses’ built-in battery with its Oakleys, promising they’ll last for eight hours with standard use, and 19 hours if they’re on standby. Meta adds that you can recharge them to 50% in just 20 minutes with their case, and says the charging case holds up to 48 hours of charge.
Finally, Meta’s AI will still be able to answer various questions for you and use the camera for context to your queries, as we’ve seen from the Ray-Ban Meta smart glasses, but it will also get some new sporting-related knowledge.
Golfers can ask about wind speed, while surfers can check the surf conditions, and you can also ask the glasses for possible ways to improve your sporting technique.
As with all these promises, we’ll want to test the Oakley Meta HSTNs for ourselves to see if they live up to the hype, but one way we can already see they’re excelling is on the design side.
Damn, are these things gorgeous.
Interestingly, the Oakley specs' design is one major detail the leaks got wrong. Instead of Oakley's Sphaera visor-style shades, it’s the HSTN glasses (I’m told it’s pronounced how-stuhn).
These glasses look like more angular Ray-Ban Wayfarers – you know, one of Meta’s existing smart glasses designs – but they do boast a serious design upgrade for athletes that you won’t find on Meta’s non-Oakley specs: Oakley’s PRIZM lenses.
Without getting too technical, PRIZM lenses are designed to provide increased contrast to what you can see. There are different models for snow sports, cycling, and other sports (as well as everyday usage), but each is designed to highlight key details that might be relevant to the wearer, such as the contours in different snow terrains, or transitions in trail types and possible road hazards.
If PRIZM lenses sound like overkill, you can also get a pair with transition lenses or with completely clear lenses.
I swapped my always-shaded Ray-Bans for a pair with transition lenses, and the difference is stark. Because they’re clear in darker environments and shaded in brighter weather, I’ve found it so much easier to use the transition lens pair as everyday smart glasses. Previously, I could only use my shaded pair in the sun, and that doesn’t come out all too often here in the UK.
The complete list of six Oakley smart glasses options is:
- Oakley Meta HSTN Warm Grey with PRIZM Ruby Lenses
- Oakley Meta HSTN Black with PRIZM Polar Black Lenses
- Oakley Meta HSTN Brown Smoke with PRIZM Polar Deep Water Lenses
- Oakley Meta HSTN Black with Transitions Amethyst Lenses
- Oakley Meta HSTN Clear with Transitions Grey Lenses
- Oakley Meta HSTN Black with Clear Lenses
Beyond the style and lenses, one striking factor is that despite some serious battery upgrades, the frames don’t seem massively chunky.
Like their Ray-Ban predecessors, they’re clearly thicker than normal specs, but they don’t look too much unlike normal shades.
All in all, these Oakley glasses look and sound really impressive. I’m champing at the bit to try a pair, and if you’ve been on the fence about picking up the Ray-Ban Meta glasses, these enhanced options could be what convinces you to finally get some AI-powered eyewear.
New research says using AI reduces brain activity – but does that mean it's making us dumber?
Amid all the debates about how AI affects jobs, science, the environment, and everything else, there's a question of how large language models impact the people using them directly.
A new study from the MIT Media Lab implies that using AI tools reduces brain activity in some ways, which is understandably alarming. But I think that's only part of the story. How we use AI, like any other piece of technology, is what really matters.
Here's what the researchers did to test AI's effect on the brain: They asked 54 students to write essays using one of three methods: their own brains, a search engine, or an AI assistant, specifically ChatGPT.
Over three sessions, the students stuck with their assigned tools. Then they swapped, with the AI users going tool-free, and the non-tool users employing AI.
EEG headsets measured their brain activity throughout, and a group of humans, plus a specially trained AI, scored the resulting essays. Researchers also interviewed each student about their experience.
As you might expect, the group relying on their brains showed the most engagement, best memory, and the most sense of ownership over their work, as evidenced by how much of their essays they could quote.
The ones using AI at first had less impressive recall and brain connectivity, and often couldn’t even quote their own essays after a few minutes. When writing manually in the final test, they still underperformed.
The authors are careful to point out that the study has not yet been peer-reviewed. It was limited in scope, focusing on essay writing rather than other cognitive activities. And EEG, while fascinating, is better at measuring overall trends than pinpointing exact brain functions. Despite all these caveats, the message most people will take away is that using AI might make you dumber.
But I would reframe that to consider if maybe AI isn’t dumbing us down so much as letting us opt out of thinking. Perhaps the issue isn’t the tool, but how we’re using it.
AI brains
If you use AI, think about how you used it. Did you get it to write a letter, or maybe brainstorm some ideas? Did it replace your thinking, or support it? There’s a huge difference between outsourcing an essay and using an AI to help organize a messy idea.
Part of the issue is that "AI" as we refer to it is not literally intelligent, just a very sophisticated parrot with an enormous library in its memory. But this study didn’t ask participants to reflect on that distinction.
The LLM-using group was encouraged to use the AI as they saw fit, which probably didn't mean thoughtful and judicious use, just copying without reading, and that’s why context matters.
Because the "cognitive cost" of AI may be tied less to its presence and more to its purpose. If I use AI to rewrite a boilerplate email, I’m not diminishing my intelligence. Instead, I’m freeing up bandwidth for things that actually require my thinking and creativity, such as coming up with this idea for an article or planning my weekend.
Sure, if I use AI to generate ideas I never bother to understand or engage with, then my brain probably takes a nap, but if I use it to streamline tedious chores, I have more brainpower for when it matters.
Think about it like this. When I was growing up, I had dozens of phone numbers, addresses, birthdays, and other details of my friends and family memorized. I had most of it written down somewhere, but I rarely needed to consult it for those I was closest to. But I haven't memorized a number in almost a decade.
I don't even know my own landline number by heart. Is that a sign I’m getting dumber, or just evidence I've had a cell phone for a long time and stopped needing to remember them?
We’ve offloaded certain kinds of recall to our devices, which lets us focus on different types of thinking. The skill isn’t memorizing, it’s knowing how to find, filter, and apply information when we need it. It's sometimes referred to as "extelligence," but really it's just applying brain power to where it's needed.
That’s not to say memory doesn’t matter anymore. But the emphasis has changed. Just like we don’t make students practice long division by hand once they understand the concept, we may one day decide that it’s more important to know what a good form letter looks like and how to prompt an AI to write one than to draft it line by line from scratch.
Humans are always redefining intelligence. There are a lot of ways to be smart, and knowing how to use tools and technology is one important measure of smarts. At one point, being smart meant knowing how to knap flint, decline Latin nouns, or work a slide rule.
Today, it might mean being able to collaborate with machines without letting them do all the thinking for you. Different tools prioritize different cognitive skills. And every time a new tool comes along, some people panic that it will ruin us or replace us.
The printing press. The calculator. The internet. All were accused of making people lazy thinkers. All turned out to be a great boon to civilization (well, the jury is still out on the internet).
With AI in the mix, we’re probably leaning harder into synthesis, discernment, and emotional intelligence – the human parts of being human. We don't need the kind of scribes who are only good at writing down what people say; we need people who know how to ask better questions.
Knowing when to trust a model and when to double-check. It means turning a tool that’s capable of doing the work into an asset that helps you do it better.
But none of it works if you treat the AI like a vending machine for intelligence. Punch in a prompt, wait for brilliance to fall out? No, that's not how it works. And if that's all you do with it, you aren't getting dumber, you just never learned how to stay in touch with your own thoughts.
In the study, the LLM group’s lower essay ownership wasn’t just about memory. It was about engagement. They didn’t feel connected to what they wrote because they weren’t the ones doing the writing. That’s not about AI. That’s about using a tool to skip the hard part, which means skipping the learning.
The study is important, though. It reminds us that tools shape thinking. It nudges us to ask whether we are using AI tools to expand our brains or to avoid using them. But to claim AI use makes people less intelligent is like saying calculators made us bad at math. If we want to keep our brains sharp, maybe the answer isn’t to avoid AI but to be thoughtful about using it.
The future isn't human brains versus AI. It’s about humans who know how to think with AI and any other tool, and avoiding becoming someone who doesn't bother thinking at all. And that’s a test I’d still like to pass.
Midjourney just dropped its first AI video model and Sora and Veo 3 should be worried
- Midjourney has launched its first AI video model, V1.
- The model lets users animate images into five-second motion clips.
- The tool is relatively affordable and a possible rival for Google Veo or OpenAI’s Sora.
Midjourney has long been a popular AI image wizard, but now the company is making moves and movies with its first-ever video model, simply named V1.
This image-to-video tool is now available to Midjourney's 20 million-strong community, letting them turn their images into five-second clips that can be extended up to 20 seconds in five-second increments.
Despite being a brand new venture for Midjourney, the V1 model has enough going on to at least draw comparisons to rival models like OpenAI’s Sora and Google’s Veo 3, especially when you consider the price.
For now, Midjourney V1 is in web beta, where you can spend credits to animate any image you create on the platform or upload yourself.
To make a video, you simply generate an image in Midjourney like usual, hit “Animate,” choose your motion settings, and let the AI go to work.
The same goes with uploading an image; you just have to mark it as the start frame and type in a custom motion prompt.
You can let the AI decide how to move it, or you can take the reins and describe how you want the motion to play out. You can pick between low motion or high motion depending on whether you want a calm movement or a more frenetic scene, respectively.
The results I've seen certainly fit into the current moment in AI video production, both good and bad. The uncanny valley is always waiting to ensnare users, but there are some surprisingly good examples from both Midjourney and initial users.
AI video battles
Midjourney isn’t trying to compete head-on with Sora or Veo in terms of technical horsepower. Those models are rendering cinematic-quality 4K footage with photorealistic lighting and long-form narratives based solely on text. They’re trained on terabytes of data and emphasize frame consistency and temporal stability that Midjourney is not claiming to offer.
Midjourney’s video tool isn’t pretending to be Hollywood’s next CGI pipeline. The pitch is more about being easy and fun to use for independent artists or tinkerers in AI media.
And it really does come out as pretty cheap. According to Midjourney, one video job costs about the same as upscaling, or one image’s worth of cost per second of video.
That’s 25 times cheaper than most AI video services on the market, according to Midjourney and a cursory examination of other alternatives.
That's probably for the best since a lot of Hollywood is going after Midjourney in court. The company is currently facing a high-stakes lawsuit from Disney, Universal, and other studios over claims it trained its models on copyrighted content.
For now, Midjourney's AI generators for images and video remain active, and the company has plans to expand its video production capabilities. Midjourney is teasing long-term plans for full 3D rendering, scene control, and even immersive world exploration. This first version is just a stepping stone.
Advocates for Sora and Veo probably don't have to panic just yet, but maybe they should be keeping an eye on Midjourney's plans, because while they’re busy building the AI version of a studio camera crew, Midjourney just handed a magic flipbook to anyone with a little cash for its credits.
‘My kids will never be smarter than AI’: Sam Altman’s advice on how to use ChatGPT as a parent leaves me shaking my head
Sam Altman has appeared in the first episode of OpenAI’s brand new podcast, called simply the OpenAI Podcast, which is available to watch now on Spotify, Apple Podcasts, and YouTube.
The podcast is hosted by Andrew Mayne, and in the first episode OpenAI CEO Sam Altman joins him to talk about the future of AI: from GPT-5 and AGI to Project Stargate, new research workflows, and AI-powered parenting.
While Altman's thoughts on AGI are always worth paying attention to, it was his advice on AI-powered parenting that caught my ear this time.
You have to wonder if Altman’s PR advisors have taken the day off, because after being asked the softball question, “You’ve recently become a new parent, how is ChatGPT helping you with that?”, Altman somehow draws us into a nightmare scenario of a generation of AI-reared kids who have lost the ability to communicate with regular humans in favor of their parasocial relationships with ChatGPT.
“My kids will never be smarter than AI,” says Altman in a matter-of-fact way. “But also they will grow up vastly more capable than we were when we grew up. They will be able to do things that we cannot imagine and they’ll be really good at using AI. And obviously, I think about that a lot, but I think much more about what they will have that we didn’t… I don’t think my kids will ever be bothered by the fact that they’re not smarter than AI.”
That all sounds great, but then later in the conversation he says: “Again, I suspect this is not all going to be good, there will be problems and people will develop these problematic, or somewhat problematic, parasocial relationships.”
In case you’re wondering what "parasocial relationships" are, they develop when we start to consider media personalities or famous people as friends, despite having no real interactions with them; the way we all think we know George Clooney because he’s that friendly doctor from ER, or from his movies or the Nespresso advert, when, in fact, we have never met him, and most likely never will.
Mitigating the downsides
Altman is characterizing a child’s interactions with ChatGPT in the same way, but interestingly he doesn’t offer any solutions for a generation weaned on ChatGPT Advanced Voice mode rather than human interaction. Instead he sees it as a problem for society to figure out.
“The upsides will be tremendous and society in general is good at figuring out how to mitigate the downsides”, Altman assures the viewer.
Now I’ll admit to being of a more cynical bent, but this does seem awfully like he’s washing his hands of a problem that OpenAI is creating. Any potential problems that a generation of kids brought up interacting with ChatGPT are going to experience are, apparently, not OpenAI’s concern.
In fact, earlier in the podcast, when the host brought up the story of a parent using ChatGPT’s Advanced Voice Mode to talk to their child about Thomas the Tank Engine, instead of doing it themselves because they were bored of talking about it endlessly, Altman simply nods and says, “Kids love Voice Mode in ChatGPT”.
Indeed they do, Sam, but is it wise to let your child loose on ChatGPT’s Advanced Voice Mode without supervision? As a parent myself (although of much older children now), I’m uncomfortable hearing of young kids being given what sounds like unsupervised access to ChatGPT.
AI comes with all sorts of warnings for a reason. It can make mistakes, it can give bad advice, and it can hallucinate things that aren’t true. Not to mention that “ChatGPT is not meant for children under 13” according to OpenAI’s own guidelines, and I can’t imagine there are many kids older than 13 who are interested in talking about Thomas the Tank Engine!
I have no problem using ChatGPT with my kids, but by the time ChatGPT became available they were both older than 13. If I were using it with younger children, I’d always make sure they weren’t using it on their own.
I'm not suggesting that Altman is in any way a bad parent, and I appreciate his enthusiasm for AI, but I think he should leave the parenting advice to the experts for now.
Google Gemini’s super-fast Flash-Lite 2.5 model is out now - here’s why you should switch today
- Google’s new Gemini 2.5 Flash-Lite model is its fastest and most cost-efficient
- The model is for tasks that don't require much processing, like translation and data organization
- The new model is in preview, while Gemini 2.5 Flash and Pro are now generally available
AI chatbots can respond at a pretty rapid clip at this point, but Google has a new model aimed at speeding things up even more under the right circumstances. The tech giant has unveiled the Gemini 2.5 Flash-Lite model as a preview, joining the larger Gemini family as the smaller, yet faster and more agile sibling to the Gemini 2.5 Flash and Gemini 2.5 Pro.
Google is pitching Flash-Lite as ideal for tasks where milliseconds matter and budgets are limited. It's intended for tasks that may be large but relatively simple, such as bulk translation, data classification, and organizing any information.
Like the other Gemini models, it can still process requests and handle images and other media, but the principal value lies in its speed, which is faster than that of the other Gemini 2.5 models. It's an update of the Gemini 2.0 Flash-Lite model. The 2.5 iteration has performed better in tests than its predecessor, especially in math, science, logic, and coding tasks. Flash-Lite is about 1.5 times faster than older models.
The budgetary element also makes Flash-Lite unique. While other models may turn to more powerful, and thus more expensive, reasoning tools to answer questions, Flash-Lite doesn’t always default to that approach. You can actually flip that switch on or off depending on what you’re asking the model to do.
And just because it can be cheaper and faster doesn't mean Flash-Lite is limited in the scale of what it can do. Its context window of one million tokens means you could ask it to translate a fairly hefty book, and it would do it all in one go.
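As a rough sanity check on that book claim, a common rule of thumb (an assumption here - real token counts depend on the tokenizer and the language) is roughly 1.33 tokens per English word:

```python
# Back-of-the-envelope check: does a full-length book fit in a
# one-million-token context window? The tokens-per-word ratio below is a
# rough heuristic, not an exact tokenizer count.

TOKENS_PER_WORD = 1.33          # heuristic; varies by tokenizer and language
CONTEXT_WINDOW = 1_000_000      # Gemini 2.5 Flash-Lite's stated context size

def estimated_tokens(word_count: int) -> int:
    """Estimate how many tokens a text of the given word count uses."""
    return int(word_count * TOKENS_PER_WORD)

def fits_in_context(word_count: int) -> bool:
    """True if the estimated token count fits inside the context window."""
    return estimated_tokens(word_count) <= CONTEXT_WINDOW

# A hefty novel runs to roughly 150,000 words:
print(fits_in_context(150_000))  # prints True
```

By this estimate a 150,000-word book needs around 200,000 tokens, leaving plenty of headroom; only a text approaching three-quarters of a million words would start to brush the limit.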
Flash-Lite lit
The preview release of Flash-Lite isn't Google's only AI model news. The Gemini 2.5 Flash and Pro models, which have been in preview, are now generally available. The growing catalogue of Gemini models isn't just a random attempt by Google to see what people like. The variations are tuned for specific needs, making it so Google can pitch Gemini as a whole to a lot more people and organizations, with a model to match most needs.
Flash-Lite 2.5 isn’t about being the smartest model, but in many cases, its speed and price make it the most appealing. You don’t need tons of nuance to classify social media posts, summarize YouTube transcripts, or translate website content into a dozen languages.
That’s exactly where this model thrives. And while OpenAI, Anthropic, and others are releasing their own fast-and-cheap AI models, Google’s advantage in integration with its other products likely helps it pull ahead in the race against its AI rivals.
Windows 11 user has 30 years of 'irreplaceable photos and work' locked away in OneDrive - and Microsoft's silence is deafening
- A Redditor was moving a huge slab of data from old drives to a new one
- They used OneDrive as a midpoint in an ill-thought-out strategy that left all the data in Microsoft's cloud service temporarily
- When they came to download the data, they were locked out of OneDrive, and can't get Microsoft support to address this issue
A cautionary tale shared on Reddit tells the story of a Windows PC owner who used OneDrive to store 30 years' worth of their data and lost the lot when their Microsoft account was locked, with no apparent way to regain access.
This is a nasty-sounding predicament (highlighted by Neowin), to say the least, with the loss of what's described as three decades of "irreplaceable photos and work" that was transferred to OneDrive as a temporary storage facility.
The idea the Redditor had was that they needed to move that huge collection of files from multiple old drives where they were stored to a large new drive, and OneDrive was selected as the midpoint in that data migration journey.
So, they moved all the files off the old drives onto Microsoft's cloud storage service and prepared to transfer the data to the new drive, when they ran into a huge stumbling block. The Redditor was suddenly locked out of their Microsoft account (and therefore OneDrive, and all Microsoft services).
Now, this isn't a sensible way to manage this data transfer, of course (and I'll come back to outline why in a moment, in case you're not sure), but the point here is that the mistake happened, and the Redditor can't get any joy whatsoever from Microsoft in terms of trying to resolve the problem.
In their Reddit post, which is gaining a lot of attention, they say: "Microsoft suspended my account without warning, reason, or any legitimate recourse. I've submitted the compliance form 18 times - eighteen - and each time I get an automated response that leads nowhere. No human contact. No actual help. Just canned emails and radio silence."
They continue: "This feels not only unethical but potentially illegal, especially in light of consumer protection laws. You can't just hold someone's entire digital life hostage with no due process, no warning, and no accountability," adding that Microsoft is a "Kafkaesque black hole of corporate negligence."
Analysis: Microsoft needs to do better
Okay, so first up, very quickly - because I don't want to dwell on the mistakes made by the unfortunate Redditor - this is not a good way to proceed with a drive migration.
In transferring a large slab of data like this, you should never have a single point of failure in the process. By which I mean shoving all the data into the cloud, on OneDrive, and having that as the sole copy. That's obviously the crux of the problem here, because once the user was locked out of OneDrive, they no longer had access to their data at all.
When performing such an operation, or as a general rule for any data, you should always keep multiple copies. Typically, that would be the original data on your device, a backup on a separate external drive at home (preferably two drives, in fact), and an off-site copy in a cloud storage locker like OneDrive. The point is that if you lose the original data, you can resort to, say, the external drive, but if that's also gone to the great tech graveyard in the sky somehow, you can go to the second drive (or the cloud).
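The multiple-copies rule can be made concrete with a small script. This is a minimal sketch (paths and filenames are illustrative): copy the data to each backup location, then verify every copy against the original by checksum rather than trusting that the transfer succeeded.

```python
# Copy a file to several backup destinations and verify each copy by
# SHA-256 checksum before considering the backup complete.

import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(source: Path, destinations: list[Path]) -> bool:
    """Copy source to each destination and confirm every copy matches."""
    original = sha256_of(source)
    for dest in destinations:
        shutil.copy2(source, dest)        # copy2 preserves file metadata
        if sha256_of(dest) != original:
            return False                  # a copy is corrupt; keep the original
    return True
```

The key design point is that a verification failure leaves the original untouched: you never delete the source until every copy has been proven to match it.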
Anyway, you get the point, but the Redditor chanced this way of doing things - figuring, no doubt, that as a temporary measure, it was fine to rely solely on OneDrive - but clearly, that wasn't the case.
There are a number of issues with the scenario presented here where Microsoft has fallen short of the standards that a customer would rightly expect.
Why did this happen?
First, there's the fact that the Microsoft account was simply locked with no notification or message provided as to why. The OneDrive user can only guess at why this ban was enacted (and the obvious guess is that some copyrighted material, or other content that contravened Microsoft's policies, was flagged in the uploaded files, which would trigger the account to be automatically locked). It's worth making it clear that we (obviously) don't have any idea about the contents of this data.
Secondly, with this having happened, the most worrying part here is the Redditor's description of how they feel like they're banging their head against a brick wall in trying to talk to Microsoft's support staff about how to resolve this. After all, this is essentially their whole life's worth of data, and there should be some way to at least find out what the problem is - and give the person who's been locked out a chance to explain, and potentially regain access.
For all we know, it could be a bug that's caused this. But if nobody at Microsoft is listening, then nobody's investigating, probably. And if you do use OneDrive as a cloud backup, not having access to your data at a critical time is a frightening prospect indeed. (Which is why you must sort out those other local backups as an alternative - or indeed another cloud service, if you really want to push the 'data redundancy' boat out.)
Hopefully, the Redditor will eventually get to speak to a Microsoft support agent - an actual person - to iron this out. In theory, all that data could still be on Microsoft's servers somewhere.
This incident has occurred at a time when Microsoft is pushing its account services on Windows 11 users, as you can't install the OS without one (well, you can by using loopholes, although the company is busy eradicating some of those fudges). Not to mention pushing OneDrive, Microsoft 365, and other services with ads in Windows, of course.
That broad drive is an unfortunate backdrop here when you consider another misstep recently brought to light. That was the highlighting of a potential problem with deleted Microsoft accounts (deleted by the user, that is), which could result in the loss of the key for the default drive encryption applied with new installations of Windows 11 24H2.
Again, that nasty little (albeit niche) scenario could lead to all the data on your drive disappearing into a black hole, never to be seen again. It's another odd situation you could end up in with no recourse at all - and this, along with the Redditor's awful plight, is a predicament that Microsoft clearly should not be inflicting on consumers.
We've contacted Microsoft for comment about this specific case, and will update this story if we get a response from the company.
This island is getting the world’s first AI government, but I’ve read this story before – and it doesn’t end well
Sensay, a creator of AI-powered digital replicas of people, has established an AI-powered government on a real island it purchased off the coast of the Philippines. Previously known as Cheron Island, it's been renamed Sensay Island.
The Head of State (effectively, the President) of Sensay Island is Roman Emperor Marcus Aurelius, one of The Five Good Emperors of Rome, who was known for his love of Stoic philosophy and good judgement. Wartime British PM Winston Churchill is the Prime Minister, while Sun Tzu, author of the Chinese strategic classic, The Art of War, takes the reins at Defence. Alexander Hamilton is the new Treasury Secretary.
According to Sensay, “Each AI replica is designed to emulate the personality, values, and decision-making patterns of the historical figure it represents, providing a governance style infused with timeless wisdom and ethical principles.
“To truly emulate the character of these historical figures, each recreation is uniquely trained on the literature, teaching, philosophies, and speeches of the real-life counterparts they represent."
How easily AI replicas from such disparate periods and with such strong characters will be able to work together in government remains to be seen, since their contrasting values must surely clash at points, not to mention be at odds with modern-day values.
The full cabinet
Here’s the full list of Sensay Island cabinet members:
- Head of State (President): Marcus Aurelius
- Prime Minister: Winston Churchill
- Foreign Affairs Minister: Eleanor Roosevelt
- Defense Minister: Sun Tzu
- Treasury Secretary: Alexander Hamilton
- Justice Minister: Nelson Mandela
- Science & Technology Minister: Ada Lovelace
- Education Minister: Confucius
- Health Minister: Florence Nightingale
- Agriculture Minister: George Washington Carver
- Environment Minister: Wangari Maathai
- Culture Minister: Leonardo da Vinci
- Ethics Advisor: Mahatma Gandhi
- Innovation Advisor: Nikola Tesla
- Infrastructure Director: Queen Hatshepsut
- Chief Strategist: Zhuge Liang
- Intelligence Chief: T.E. Lawrence
Personally, I think da Vinci was a wise choice for Culture Minister, and it’s nice to see Nikola Tesla being recognized as Innovation Advisor, but I have to say I’m a little disappointed not to see Queen Cleopatra anywhere in the mix.
Confucius also presents some challenges as Education Minister, considering his unfamiliarity with modern technology, like AI.
Sensay Island is indeed a real island off the coast of the Philippines. You can find it on Google Maps. It has a surface area of around 3.4 km², comprising beaches, rainforest, and coral lagoons.
From what we can see, there doesn’t seem to be any infrastructure of any kind on the island, so if you’re thinking of a visit, be aware that there’s probably no Wi-Fi.
While an AI government feels like something of a publicity stunt, there are serious reasons why Sensay has created an AI island:
“Sensay is looking to demonstrate that AI can be deployed in national governance to aid policymaking free from political partisanship and bureaucratic delays, and with unprecedented transparency and participation”, it says.
A fly on the wall
According to Marisol Reyes, the (AI-powered) Tourism Manager for Sensay Island, who you can chat with at its website, you can visit the island whenever you like:
“Absolutely, you can visit our beautiful island! We're thrilled to welcome visitors to experience this unique blend of cutting-edge AI governance and traditional Filipino hospitality. Sensay Island is open to tourists who want to explore our pristine beaches, vibrant coral sanctuaries, and witness history in the making with our groundbreaking AI Council.”
For those without the means to visit, the good news is that you can still get involved. You will soon be able to register as an E-resident of Sensay Island, allowing you to propose new policies for its AI-powered administration via an open-access platform:
“This will combine direct democracy with AI-enhanced decision-making”, says Sensay.
Dan Thomson, CEO and founder of Sensay, added, “This project shows Sensay’s commitment to pushing the boundaries of AI in a responsible direction. I hope our approach will show the public and world leaders that AI is a feasible and efficient way to develop and implement policies."
Despite an AI-controlled civilization leading to (attempted) human extinction in just about every major Sci-Fi movie I’ve watched in the last 40 years, from Logan’s Run to The Terminator, it seems that humans are still determined to give it a go.
But could AI actually provide a more balanced and sane government than our elected officials can? There’s only one way to find out...
Windows 11’s new Start menu falls short in one key area – and it’s making people angry
- Microsoft has a Start menu redesign in testing
- This introduces new layouts for the list of all apps
- One of those layouts is a category view, and we’ve had confirmation from Microsoft that it won’t be possible to customize this to your liking
We’ve just learned more about how Microsoft’s revamped Start menu will work when it arrives in Windows 11, and not everyone is happy about the new info aired here.
Windows Latest reports on an element of customization that falls short of what some Windows 11 users were hoping for, and it pertains to one of the new layouts being introduced for the list of apps.
As you may recall, with the redesigned Start menu – which is in test builds of Windows 11 now – the long list of apps installed on the PC can be set to a couple of more compact alternative layouts, one of which is a grid and the other a category view.
It’s the latter we’re interested in here, whereby apps are grouped into different categories such as Games, Productivity, Creativity, Social, Utilities and so forth. Each of these categories has a box in which up to four icons for the most commonly-used apps appear, and the full roster of apps is found within if you open the category – all of which allows for an easier way to locate the app you’re looking for, rather than scrolling through a lengthy alphabetical list.
So, what’s the beef that’s been raised here? Windows Latest has received confirmation from Microsoft that it won’t be possible to create your own category types.
Windows 11 will, of course, make the decisions on how to categorize apps and where they belong, but there are some interesting, and less than ideal, nuances picked up by Windows Latest here.
Any app that Windows 11 isn’t sure about will go in the ‘Other’ category, for one thing. Also, if there aren’t three apps for any given category – because you don’t have enough creativity apps installed on your machine, say – then a stray creativity app (like Paint) will be dumped in Other.
Analysis: improved customization could still be offered with any luck
If Microsoft gave folks the ability to make their own category folders, they could have a few alternative dumping grounds to Other – categories named so that the user could better remember what apps they contain.
However, with Windows 11 overseeing category allocation, it seems like Microsoft wants to keep a tight rein on the groups that are present in this part of the interface. Sadly, it isn’t possible to move an app from one category to another, either (as Windows Latest has highlighted in the past), should you disagree with where it’s been placed – and this latter ability is a more telling shortcoming here.
The new Start menu remains in testing, so Microsoft may make changes before it arrives in the finished version of Windows 11. That’s entirely possible, especially seeing as Microsoft has (again) been stressing how it’s listening to user feedback in order to better inform Windows 11’s design, the Start menu overhaul included.
So, simply being able to drag and drop icons between these categories is something we can hope for, in order to reclassify any given app – it’s a pretty basic piece of functionality, after all. We may eventually get to define our own categories, too, but for now it appears that Microsoft is taking a rather rigid approach to customization with this part of the menu.
Expect this Start menu makeover to be one of the central pillars of Windows 11 25H2 when it pitches up later this year.
I don't like the idea of my conversations with Meta AI being public – here's how you can opt out
- Meta AI prompts you to choose to post publicly in the app's Discovery feed by default
- Meta has a new warning pop-up, but accidental sharing remains a possibility
- You can opt out of having your conversations go public entirely through the Meta AI app’s settings
The Meta AI app's distinctive contribution to the AI chatbot app space is the Discovery feed, which allows people to show off the interesting things they are doing with the AI assistant.
However, it turns out that many people were unaware that they weren't just posting those prompts and conversation snippets for themselves or their friends to see. When you tap "Share" and "Post to feed," you're sharing those chats with everyone, much like a public Facebook post.
The Discovery feed is an oddity in some ways, a graft of the AI chatbot experience on a more classic social media structure. You’ll find AI-generated images of surprisingly human robots, terribly designed inspirational quote images, and more than a few examples of the kind of prompts the average person does not want just anyone seeing.
I've scrolled past people asking Meta AI to explain their anxiety dreams, draft eulogies, and brainstorm wedding proposals. It's voyeuristic, and not in the performative way of most social media; it's real and personal.
It seems that many people assumed sharing those posts was more like saving them for later perusal, rather than offering the world a peek at whatever awkward experiments with the AI you are conducting. Meta has hastily added a new pop-up warning to the process, making it clear that anything you post is public, visible to everyone, and may even appear elsewhere on Meta platforms.
If that warning doesn't seem enough to ensure your AI privacy on the app, you can opt out of the Discovery feed completely. Here's how to ensure your chats aren’t one accidental tap away from public display.
- Open the Meta AI app.
- Tap your profile picture or initials, whichever represents your digital self.
- Tap on "Data and Privacy" and "Manage Your Information."
- Tap on "Make all public prompts visible to only you," and then "Apply to all" in the pop-up. This will ensure that when you share a prompt, only you will be able to see it.
- If that doesn't seem like enough, you can completely erase the record of any interaction you've had with Meta AI by tapping "Delete all prompts." That includes any prompt you've written, regardless of whether it's been posted, so be certain.
Of course, even with the opt-out enabled and your conversations with Meta AI no longer public, Meta still retains the right to use your chats to improve its models.
It's common among all the big AI providers. That's supposedly anonymized and doesn't involve essentially publishing your private messages, but theoretically, what you and Meta AI say to each other could appear in a chat with someone else entirely in some form.
It's a paradox in that the more data AI models have, the better they perform, but people are reluctant to share too much with an algorithm. There was a minor furor when, for a brief period, ChatGPT conversations became visible to other users under certain conditions. It's the other edge of the ubiquitous “we may use your data to improve our systems” statement in every terms of service.
Meta’s Discovery feed simply removes the mask, inviting you to post and making it easy for others to see. AI systems are evolving faster than our understanding of them, hence the constant drumbeat about transparency. The idea is that the average user, unaware of the hidden complexities of AI, should be informed of how their data is being saved and used.
However, given how most companies typically address these kinds of issues, Meta is likely to stick to its strategy of fine-tuning its privacy options in response to user outcry. And maybe remember that if you’re going to tell your deepest dreams to an AI chatbot, make sure it’s not going to share the details with the world.
You can now create ChatGPT AI images using WhatsApp and it's ridiculously easy to do – here's how
- You can now create ChatGPT images in WhatsApp
- Ask it to create any image you want
- Upload an image and ask it to modify it
You can now create and modify images using ChatGPT’s AI chops inside WhatsApp without having to use the ChatGPT app at all.
WhatsApp, the Meta-owned messaging app, caused more than a little controversy recently when it added a new Meta AI button to its interface that was impossible to remove.
The new button caused outrage from WhatsApp users, many of whom felt like they were being forced to use AI.
“Why do they have to slap that stuff on everything?” said Reddit user Special-Oil-7447. “I'm in the EU and it’s just been dumped on me. I am going to uninstall WhatsApp today after I have loaded Signal. Vote with your feet, people,” said user BrainCell 7.
But Meta has not backed down, and the unpopular Meta AI button remains.
Tapping it will initiate a conversation with the Meta AI chatbot; however, it's not the only AI chatbot you can use with WhatsApp.
Accessing ChatGPT
If you’re a fan of AI, then there’s nothing stopping you from chatting using ChatGPT in WhatsApp so long as you know how, and what’s more, you can now use ChatGPT to generate AI images right inside WhatsApp. You can even upload a picture and get the AI to edit it, all from within WhatsApp.
It’s easy. All you need to do is set up ChatGPT as one of your contacts in WhatsApp - as if it’s a person.
Just add ChatGPT as a contact with the number 1-800-CHATGPT (that’s 1-800-242-8478). If you’re outside of the US, then you’ll need to add them as a US contact, which I've written about before.
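If you're wondering how 1-800-CHATGPT becomes 1-800-242-8478, it's just the standard phone keypad letter-to-digit mapping. Here's a quick sketch of that conversion (a generic keypad lookup for illustration, nothing specific to WhatsApp or OpenAI):

```python
# Standard phone keypad mapping (ITU E.161): groups of letters share a digit
KEYPAD = {
    'ABC': '2', 'DEF': '3', 'GHI': '4', 'JKL': '5',
    'MNO': '6', 'PQRS': '7', 'TUV': '8', 'WXYZ': '9',
}
# Flatten into a per-letter lookup table
LETTER_TO_DIGIT = {ch: digit for letters, digit in KEYPAD.items() for ch in letters}

def vanity_to_digits(number: str) -> str:
    """Convert a vanity number like '1-800-CHATGPT' to plain digits."""
    out = []
    for ch in number.upper():
        if ch.isdigit():
            out.append(ch)                    # keep digits as-is
        elif ch in LETTER_TO_DIGIT:
            out.append(LETTER_TO_DIGIT[ch])   # map letters to keypad digits
        # punctuation such as '-' is simply dropped
    return ''.join(out)

print(vanity_to_digits('1-800-CHATGPT'))  # 18002428478
```

Running this confirms that CHATGPT maps to 242-8478, matching the number above.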
Now you can chat with ChatGPT as if it were one of your friends. When you start a chat with ChatGPT, you can simply say “Create an image of...” and add some details. Sit back and let ChatGPT do its AI magic.
To upload an image that you want ChatGPT to edit, tap the + button, then Photos, and upload the image.
ChatGPT will ask you what you would like to do with the image, and you can just use natural language to describe what you want to do.
If you reach the limit for a free ChatGPT account but you’ve got a Plus account, WhatsApp will throw up a link so you can connect your Plus account and get more images. It couldn’t be simpler.
Microsoft has made it harder to log in to Windows 11 using your face - and that’s good and bad news
- Windows Hello facial recognition no longer works in poorly lit rooms
- This is due to a move Microsoft made to shore up security with the feature
- The change to require a ‘color camera to see a visible face’ means logins now fail in dark rooms, where previously infrared allowed them to work
Windows Hello, the system that allows for secure login to your Windows 11 (or 10) PC, no longer works when using facial recognition in a dark environment.
Indeed, this has been the case for a couple of months, because as Windows Central reports, Microsoft made this change in the April update for Windows 11, but it flew under the radar.
When some Windows Hello users noticed that their face sometimes wasn’t recognized and they couldn’t log in, they may have just assumed it was a bug (or the feature being flaky, as it occasionally is). However, this is an intentional change by Microsoft, as the company made clear in the April patch release notes.
Microsoft said, “For enhanced security, Windows Hello facial recognition requires color cameras to see a visible face when signing in.”
This security improvement was necessary due to a vulnerability being discovered that could potentially allow an attacker with access to the Windows PC to spoof their way past Windows Hello protection.
That trick evidently involved messing with the infrared camera – leveraging “adversarial input perturbations,” as Microsoft puts it in fancy security-speak – so to avoid this exploit, the company added the requirement for a color camera.
Why has this scuppered logins in darkened environments? Before the April update, Windows Hello could rely purely on the infrared sensor to achieve a login in low light (infrared scanning works fine without light, of course). However, now that the feature needs your face to be visible to the color camera, logins in those conditions just won’t work anymore.
Analysis: There’s a workaround, but it isn’t helpful
There’s no way of getting around this as such, and if you’re in a poorly lit room, Windows Hello facial login may well fail (when before it wouldn’t).
Okay, so Windows Central does point out there is a workaround here, namely that you can disable your webcam in Windows 11 (the actual camera can be turned off in Device Manager). With that done, Windows Hello will authenticate with the infrared sensor – because it’s the only option – and so it’ll work in the semi-dark again.
Presumably, if you go this route, you may be vulnerable to the aforementioned exploit (unless that requires the camera to be active, a point that Microsoft doesn’t go into). At any rate, disabling the webcam is hardly a good solution, as it means you won’t be able to use it for video chatting or anything else, obviously.
It’s a shame Microsoft had to tighten security in this way, but the software giant can’t risk leaving the door open to an exploit that someone who has stolen a Windows 11 laptop might be able to leverage in order to gain access to the device.
Windows 11’s new update is reportedly proving a nightmare to install for some, but I’m hardly surprised given its messy rollout
- Windows 11’s June update is failing to install for some people
- It’s complicated because Microsoft released an initial update this month – which was paused – and then a revised patch that replaced it
- This revised patch is also causing unfortunate bugs according to some reports
Windows 11’s latest update is proving problematic for some folks who can’t even install it, and others are running into trouble with bugs in the patch – or the fact that it doesn’t resolve the issues that it’s supposed to.
We need to rewind a bit here for context, and remember that Microsoft got off to a bad start with Windows 11 24H2’s update for June. The initial patch (codenamed KB5060842) was paused after Microsoft discovered that it was clashing with an anti-cheat tool, meaning games using that system would crash.
To resolve this, Microsoft released a second update (patch KB5063060) that replaced the first patch in Windows Update, but as Windows Latest reports, people are running into installation failures with that upgrade.
Some users are encountering the usual nonsensical and unhelpful error messages (bearing meaningless error codes like ‘0x800f0922’), while others say that the revised update gets stuck downloading and never actually finishes.
This is based on complaints from Microsoft’s Feedback Hub, readers contacting Windows Latest directly, and posts on Reddit like this one, which describes a worrying boot loop (of three to four reboots) before the user got back into Windows 11 to discover the update installation hadn’t worked.
There are people also saying they’ve run into bugs with KB5063060. Those include reports of the taskbar freezing when the PC wakes up from sleep, and issues with external monitors going wrong and Bluetooth devices being forgotten (so you must rediscover them every time Windows 11 is restarted).
There are some more worrying reports of PCs ending up freezing full-stop, so they need to be rebooted. And there are a few complaints (again on Reddit) that even after installing this second patch – which is supposed to work fine with games that use Easy Anti-Cheat (EAC) – some games are still problematic.
“I still get the same problem playing Star Citizen,” wrote one gamer. “Game freezing randomly and the Windows event log viewer showing the EAC error.”
There are further reports of Fortnite and efootball25 (which used to be PES) still crashing, so it seems that not all the wrinkles have been ironed out.
Analysis: a disappointingly messy rollout for June
Installation failures are a long-running problem with Windows 11 (and Windows 10 for that matter). It’s therefore no surprise that, given the misfire with the initial update, more issues are now cropping up.
As Windows Latest points out, because there were two updates this time round, there may be issues on PCs that already grabbed KB5060842 – due to having an installed game affected by the first patch’s anti-cheat compatibility bug – and are now getting the second KB5063060 update.
In such scenarios, it’s possible that Windows 11 trying to overwrite the first patch with the second is causing Windows Update to fall over. Those in this situation should be limited in number, though, as Microsoft pulled the first patch quite swiftly (so, in theory at least, it didn’t reach many PCs with games that use EAC).
That’s just speculation, but whichever way you slice it, this has been a messy rollout of an update (well, a pair of updates technically).
What can you do if you’re stuck unable to install the revised June update? One approach is to download the update manually and install it directly, which you can do by grabbing the file from Microsoft's site (the x64 version, as the Arm-based one is for Snapdragon PCs).
That should install successfully, but I’d be rather wary of taking this approach if you’re not a reasonably confident computer user.
Alternatively, you can simply wait until Microsoft hopefully sorts out any issue(s) behind the scenes on its side, and the update might just succeed under its own steam later this week. There’s no guarantee of that, though, and you’re very much in a less-than-ideal situation.
Those who can install the revised update, but are still experiencing crashing with games (or elsewhere), can’t do much except wait and pray any issues are resolved. The only other possible route is to uninstall the patch, but that’s not recommended, as it leaves your PC without the latest round of security fixes provided with every cumulative update for Windows 11. (You also won’t get the newest features, some of which are nifty additions.)