Feed aggregator

How Vera Rubin Telescope Scientists Will Deal With 60 Million Billion Bytes of Imagery

NYT Technology - Fri, 06/20/2025 - 20:53
The Vera C. Rubin Observatory will make the study of stars and galaxies more like the big data-sorting exercises of contemporary genetics and particle physics.

I tried Google’s new Search Live feature and ended up debating an AI about books

Techradar - Fri, 06/20/2025 - 20:30
  • Google’s new Search Live feature lets users hold real-time voice conversations with an AI-powered version of Search
  • The Gemini-powered AI attempts to simulate a friendly and knowledgeable human.
  • Google is keen to have all roads lead to Gemini, and Search Live could help entice people to try the AI companion without realizing it

Google's quest to incorporate its Gemini into everything has a new outlet linked to its most central product. The new Google Search Live essentially gives Google Search's AI Mode a Gemini-powered voice.

It’s currently available to users in the U.S. via the Google app on iOS and Android, and it invites you to literally talk to your search bar. You speak, and it speaks back; unlike the half-hearted AI assistants of yesteryear, this one doesn’t stop listening after just one question. It’s a full dialogue partner, unlike the non-verbal AI Mode.

It also works in the background, which means I could leave the app during the chat to do something else on my phone, and the audio didn’t pause or glitch. It just kept going, as if I were on the phone with someone.

Google refers to this system as “query fan-out,” which means that instead of just answering your question, it also quietly considers related queries, drawing in more diverse sources and perspectives. You feel it, too. The answers don’t feel boxed into a single form of response, even on relatively straightforward queries like the one about linen dresses in Google's demo.
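
Google hasn't published the mechanics, but conceptually the pattern is easy to picture. Below is a minimal sketch of a generic fan-out, not Google's implementation: propose a few related queries, run them concurrently, and merge the results into one pool of sources. The `expand_query` and `search` helpers are hypothetical placeholders.

```python
# Toy illustration of a "query fan-out" pattern (not Google's actual system):
# expand a user question into related queries, search them in parallel,
# and merge the results into one de-duplicated pool of sources.
from concurrent.futures import ThreadPoolExecutor

def expand_query(question: str) -> list[str]:
    # Hypothetical: a real system would use a model to propose related queries.
    return [question, f"{question} reviews", f"{question} alternatives"]

def search(query: str) -> list[str]:
    # Hypothetical search backend; returns source snippets or URLs.
    return [f"result for: {query}"]

def fan_out(question: str) -> list[str]:
    queries = expand_query(question)
    with ThreadPoolExecutor() as pool:
        result_lists = pool.map(search, queries)
    merged, seen = [], set()
    for results in result_lists:
        for item in results:
            if item not in seen:  # de-duplicate across sub-queries
                seen.add(item)
                merged.append(item)
    return merged

print(fan_out("breathable linen dresses"))
```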

AI Search Live

To test Search Live out, I tapped the “Live” icon and asked for speculative fiction books I should read this summer. The genial voice offered a few classics and a few more recent options. I then opened Pandora's box by asking it about its own favorites. Surprisingly, it had a few. I then decided to push it a bit, telling it it was wrong about the best fantasy books and listing a few of my own. Suddenly, I found myself in a debate not only about the best examples of the genre, but also about how to define it.

We segued from there to philosophical and historical opinions about elvish empathy and whether AI should be compared to genies or the mythical brownies that do housework in exchange for cream. Were it not for the smooth, synthetic voice and its relentless good cheer, I might have thought I was actually having an idle argument with an acquaintance over nothing important.

It's obviously very different from the classic Google Search and its wall of links. If you look at the screen, you still see the links, but the focus is on the talk. Google isn't unique in giving its AI a voice, as ChatGPT and others proffer similar features. Google Search Live does come off as smoother, though, and I didn't have to rephrase my questions or repeat myself once in 10 minutes. Being integrated with Google’s actual search systems might help keep things grounded. It’s like talking to someone who always has a stack of citations in their back pocket.

I don't think Search Live is what people will use to replace their usual online search methods, but there’s a real accessibility benefit to it. For people who can’t comfortably type or see, voice-first tools like this open new doors. The same goes for kids asking homework questions, or for someone cooking dinner who has a random question but doesn't want to pause to wipe flour off their screen.

There’s a tradeoff, of course, in terms of how people browse the web. If this kind of conversational AI becomes the dominant interface for search on Google, what happens to web traffic? Publishers already feel like they’re shouting into the void when their content is skimmed by AI, and some are hiring lawyers to fight it. What will the AI search if its sources shrink or vanish? It's a complicated question, worthy of debate. I'll have to see how Search Live lays out the arguments.

Forget virtual pets – the next AI video craze is cats doing Olympic diving, and it’s all thanks to this new Google Veo 3 rival

Techradar - Fri, 06/20/2025 - 18:30
  • MiniMax’s new Hailuo 02 AI video model has sparked a viral trend of cats performing Olympic dives
  • The videos blend advanced physics-based animation with internet absurdity
  • Though not matching the quality of Google Veo 3, Hailuo 02 is rapidly gaining popularity among casual AI users

Watching the cat walk onto the diving board, I could imagine calls to the fire department or a huge crowd rushing to save it, causing a catastrophe, while the feline simply blinked at the tragedy. Instead, the cat executed an Olympic-caliber triple somersault into the pool. If it weren't for the impossible feat and my awareness that it was an AI-generated video, I'd be checking to see if there was a Freaky Friday situation with the U.S. swim team.

Instead, it's a hugely viral video produced using Chinese AI video developer MiniMax's Hailuo 02 model. The diving cats may not be real, but the millions of people watching them are, and that's enough to elbow Hailuo 02 into the competition for AI video dominance alongside Google Veo 3 and OpenAI's Sora, among many others.

MiniMax debuted Hailuo 02 earlier this summer, but the virality of the faux Olympics video suggests it's going to become a very popular tool for turning still images or text prompts into videos. The model only makes five- to ten-second clips for now, but its motion customization, camera effects, and impressive imitation of real-world physics, like the movement of fur or the splashing of water, make it all the more intriguing.

Testing Hailuo 02 on cats diving came about seemingly organically when X user R.B Keeper (presumably not their real name) tried a prompt they'd seen tested on Veo 3. The idea spread from there to a version that garnered millions of views in a matter of hours and appeared on TikTok, Reddit, and Instagram, with numerous variations.

(Embedded Instagram post shared by Pablo Prompt, @pabloprompt)

AI video battles

Hailuo 02 uses frame-by-frame physics simulation, attention-mapped motion prompts, and multimodal input parsing. In other words, if you type a strange idea, the model will do its best to make it look and behave like it would in an approximation of the real world.

Notably, Hailuo 02 is reportedly far cheaper and faster than Veo 3, though perhaps without quite the high-end gloss. Still, it's more accessible, not being limited to enterprise services and beta programs like Veo 3.

The cat diving videos are the apex of a very specific Venn diagram of internet trends, accessible tools, and low-stakes fun. You don’t need to be a professional editor or own a supercomputer to try it. And more upgrades are on the horizon. MiniMax has outlined plans to integrate synchronized audio, lighting, and texture control, as well as longer clips.

As for Google Veo 3 and other major players, they have their professional niche for now. But if they want to widen their appeal to the masses, they might look to what MiniMax and smaller developers like Midjourney, with its V1 video model, are doing. Hailuo 02 is the kind of tool that will get people, like the cats, to dive in.

Europe’s Growing Fear: How Trump Might Use U.S. Tech Dominance Against It

NYT Technology - Fri, 06/20/2025 - 13:32
To comply with a Trump executive order, Microsoft recently helped suspend the email account of an International Criminal Court prosecutor in the Netherlands who was investigating Israel for war crimes.

I adore my Meta Ray-Bans, but these new Oakley smart glasses are making me jealous

Techradar - Fri, 06/20/2025 - 08:00
  • Meta and Oakley are officially making smart glasses
  • They're based on Oakley's HSTN glasses design
  • Launching later this summer, they'll start at $399 / £399

It’s official. Following a teaser earlier this week, Oakley and Meta have made smart glasses, and as an owner of the almost two-year-old Ray-Ban Meta smart specs, I’m green with envy.

Later this summer, six pairs of Oakley smart specs will be available in the US, UK, Australia, Canada, Ireland, and several other European countries, starting at $399 / £399 (we’re still waiting for Australian pricing details).

Limited-Edition Oakley Meta HSTN (featuring gold accents and 24K PRIZM polarized lenses) will be available for preorder sooner – from July 11 – for $499 / £499 (again, we’re waiting for Australian pricing).

Image gallery: the limited-edition Oakley Meta HSTN smart glasses worn by athletes, in their case, from the side, and a close-up of the camera (Image credit: Oakley / Meta)

Why am I jealous? Well, for a start, these smart glasses are set to boast a few important hardware and software upgrades over my Ray-Bans.

First is an upgrade to the camera. The Ray-Bans have a built-in 12MP snapper which can capture full-HD (1440x1920 resolution) video at 30fps. Meta is promising these Oakley specs will record Ultra HD (3K) video, perhaps making them possible alternatives to the best action cameras for people who want to record their sporting stunts and look good doing it.

Secondly, they’ll be able to record for longer with a boosted battery life. My Meta Ray-Bans boast a four-hour battery life for ‘standard use.’ They can play music, Meta AI can answer the odd question, and they should last about this long; as soon as you start capturing videos, though, their battery will drain much faster.

With the case recharging them, the Ray-Bans can get up to 36 hours of total use.

Meta is doubling the glasses’ built-in battery with its Oakleys, promising they’ll last for eight hours with standard use, and 19 hours if they’re on standby. Meta adds that you can recharge them to 50% in just 20 minutes with their case, and says the charging case holds up to 48 hours of charge.

The Oakley Meta HSTN smart glasses being used by an athlete on a hike (Image credit: Oakley / Meta)

Finally, Meta’s AI will still be able to answer various questions for you and use the camera to add context to your queries, as we’ve seen from the Ray-Ban Meta smart glasses, but it will also get some new sports-related knowledge.

Golfers can ask about wind speed, surfers can check the surf conditions, and you can also ask the glasses for ways to improve your sporting technique.

As with all these promises, we’ll want to test the Oakley Meta HSTNs for ourselves to see if they live up to the hype, but one way we can already see they’re excelling is on the design side.

Damn, are these things gorgeous.

The Oakley Meta HSTN smart glasses being used by a skateboarder (Image credit: Oakley / Meta)

Interestingly, the design of the Oakley specs is one major detail the leaks got wrong. Instead of Oakley's Sphaera visor-style shades, the new smart specs are based on Oakley's HSTN glasses (I'm told it's pronounced how-stuhn).

These glasses look like more angular Ray-Ban Wayfarers – you know, one of Meta’s existing smart glasses designs – but they do boast a serious design upgrade for athletes that you won’t find on Meta’s non-Oakley specs: Oakley’s PRIZM lenses.

Without getting too technical, PRIZM lenses are designed to provide increased contrast in what you can see. There are different models for snow sports, cycling, and other sports (as well as everyday usage), but each is designed to highlight key details that might be relevant to the wearer, such as the contours in different snow terrains, or transitions in trail types and possible road hazards.

If PRIZM lenses sound like overkill, you can also get a pair with transition lenses or with completely clear lenses.

Orange Ray-Ban Meta smart glasses in front of a wall of colorful lenses. The Ray-Ban specs still look great too (Image credit: Meta)

I swapped my always-shaded Ray-Bans for a pair with transition lenses, and the difference is stark. Because they’re clear in darker environments and shaded in brighter weather, I’ve found it so much easier to use the transition lens pair as everyday smart glasses. Previously, I could only use my shaded pair in the sun, and that doesn’t come out all too often here in the UK.

The complete list of six Oakley smart glasses options is:

  • Oakley Meta HSTN Warm Grey with PRIZM Ruby Lenses
  • Oakley Meta HSTN Black with PRIZM Polar Black Lenses
  • Oakley Meta HSTN Brown Smoke with PRIZM Polar Deep Water Lenses
  • Oakley Meta HSTN Black with Transitions Amethyst Lenses
  • Oakley Meta HSTN Clear with Transitions Grey Lenses
  • Oakley Meta HSTN Black with Clear Lenses

The different Oakley Meta HSTN smart glasses designs together (Image credit: Oakley / Meta)

Beyond the style and lenses, one striking factor is that despite some serious battery upgrades, the frames don’t seem massively chunky.

Like their Ray-Ban predecessors, they’re clearly thicker than normal specs, but they don’t look too different from normal shades.

All in all, these Oakley glasses look and sound really impressive. I’m chomping at the bit to try a pair, and if you’ve been on the fence about picking up the Ray-Ban Meta glasses, these enhanced options could be what convinces you to finally get some AI-powered eyewear.

Trump Is Selling a Phone + The Start-Up Trying to Automate Every Job + Allison Williams Talks ‘M3GAN 2.0’

NYT Technology - Fri, 06/20/2025 - 06:00
“They’re calling it the T1 Phone 8002 Gold Version, which sounds kind of like a Taylor Swift album.”

New research says using AI reduces brain activity – but does that mean it's making us dumber?

Techradar - Fri, 06/20/2025 - 05:52

Amid all the debates about how AI affects jobs, science, the environment, and everything else, there's a question of how large language models impact the people using them directly.

A new study from the MIT Media Lab implies that using AI tools reduces brain activity in some ways, which is understandably alarming. But I think that's only part of the story. How we use AI, like any other piece of technology, is what really matters.

Here's what the researchers did to test AI's effect on the brain: They asked 54 students to write essays using one of three methods: their own brains, a search engine, or an AI assistant, specifically ChatGPT.

Over three sessions, the students stuck with their assigned tools. Then they swapped, with the AI users going tool-free, and the non-tool users employing AI.

EEG headsets measured their brain activity throughout, and a group of humans, plus a specially trained AI, scored the resulting essays. Researchers also interviewed each student about their experience.

As you might expect, the group relying on their brains showed the most engagement, the best memory, and the strongest sense of ownership over their work, as evidenced by how much of their essays they could quote.

The ones using AI at first had less impressive recall and brain connectivity, and often couldn’t even quote their own essays after a few minutes. When writing manually in the final test, they still underperformed.

The authors are careful to point out that the study has not yet been peer-reviewed. It was limited in scope, focused on essay writing, not any other cognitive activity. And the EEG, while fascinating, is better at measuring overall trends than pinpointing exact brain functions. Despite all these caveats, the message most people would take away is that using AI might make you dumber.

But I would reframe that: maybe AI isn’t dumbing us down so much as letting us opt out of thinking. Perhaps the issue isn’t the tool, but how we’re using it.

AI brains

If you use AI, think about how you used it. Did you get it to write a letter, or maybe brainstorm some ideas? Did it replace your thinking, or support it? There’s a huge difference between outsourcing an essay and using an AI to help organize a messy idea.

Part of the issue is that "AI" as we refer to it is not literally intelligent, just a very sophisticated parrot with an enormous library in its memory. But this study didn’t ask participants to reflect on that distinction.

The LLM-using group was encouraged to use the AI as they saw fit, which probably didn't mean thoughtful and judicious use, just copying without reading, and that’s why context matters.

Because the "cognitive cost" of AI may be tied less to its presence and more to its purpose. If I use AI to rewrite a boilerplate email, I’m not diminishing my intelligence. Instead, I’m freeing up bandwidth for things that actually require my thinking and creativity, such as coming up with this idea for an article or planning my weekend.

Sure, if I use AI to generate ideas I never bother to understand or engage with, then my brain probably takes a nap, but if I use it to streamline tedious chores, I have more brainpower for when it matters.

Think about it like this. When I was growing up, I had dozens of phone numbers, addresses, birthdays, and other details of my friends and family memorized. I had most of it written down somewhere, but I rarely needed to consult it for those I was closest to. But I haven't memorized a number in almost a decade.

I don't even know my own landline number by heart. Is that a sign I’m getting dumber, or just evidence I've had a cell phone for a long time and stopped needing to remember them?

We’ve offloaded certain kinds of recall to our devices, which lets us focus on different types of thinking. The skill isn’t memorizing, it’s knowing how to find, filter, and apply information when we need it. It's sometimes referred to as "extelligence," but really it's just applying brain power to where it's needed.

That’s not to say memory doesn’t matter anymore. But the emphasis has changed. Just like we don’t make students practice long division by hand once they understand the concept, we may one day decide that it’s more important to know what a good form letter looks like and how to prompt an AI to write one than to draft it line by line from scratch.

Humans are always redefining intelligence. There are a lot of ways to be smart, and knowing how to use tools and technology is one important measure of smarts. At one point, being smart meant knowing how to knap flint, recite Latin declensions, or work a slide rule.

Today, it might mean being able to collaborate with machines without letting them do all the thinking for you. Different tools prioritize different cognitive skills. And every time a new tool comes along, some people panic that it will ruin us or replace us.

The printing press. The calculator. The internet. All were accused of making people lazy thinkers. All turned out to be a great boon to civilization (well, the jury is still out on the internet).

With AI in the mix, we’re probably leaning harder into synthesis, discernment, and emotional intelligence – the human parts of being human. We don't need the kind of scribes who are only good at writing down what people say; we need people who know how to ask better questions.

That means knowing when to trust a model and when to double-check, and turning a tool that’s capable of doing the work into an asset that helps you do it better.

But none of it works if you treat the AI like a vending machine for intelligence. Punch in a prompt, wait for brilliance to fall out? No, that's not how it works. And if that's all you do with it, you aren't getting dumber, you just never learned how to stay in touch with your own thoughts.

In the study, the LLM group’s lower essay ownership wasn’t just about memory. It was about engagement. They didn’t feel connected to what they wrote because they weren’t the ones doing the writing. That’s not about AI. That’s about using a tool to skip the hard part, which means skipping the learning.

The study is important, though. It reminds us that tools shape thinking. It nudges us to ask whether we are using AI tools to expand our brains or to avoid using them. But to claim AI use makes people less intelligent is like saying calculators made us bad at math. If we want to keep our brains sharp, maybe the answer isn’t to avoid AI but to be thoughtful about using it.

The future isn't human brains versus AI. It’s about humans who know how to think with AI and any other tool, and avoiding becoming someone who doesn't bother thinking at all. And that’s a test I’d still like to pass.

Midjourney just dropped its first AI video model and Sora and Veo 3 should be worried

Techradar - Fri, 06/20/2025 - 04:41
  • Midjourney has launched its first AI video model, V1.
  • The model lets users animate images into five-second motion clips.
  • The tool is relatively affordable and a possible rival for Google Veo or OpenAI’s Sora.

Midjourney has long been a popular AI image wizard, but now the company is making moves and movies with its first-ever video model, simply named V1.

This image-to-video tool is now available to Midjourney's 20 million-strong community, who can turn their images into five-second clips and extend them up to 20 seconds in five-second increments.

Despite being a brand new venture for Midjourney, the V1 model has enough going on to at least draw comparisons to rival models like OpenAI’s Sora and Google’s Veo 3, especially when you consider the price.

For now, Midjourney V1 is in web beta, where you can spend credits to animate any image you create on the platform or upload yourself.

To make a video, you simply generate an image in Midjourney like usual, hit “Animate,” choose your motion settings, and let the AI go to work.

The same goes with uploading an image; you just have to mark it as the start frame and type in a custom motion prompt.

You can let the AI decide how to move it, or you can take the reins and describe how you want the motion to play out. You can pick between low motion or high motion depending on whether you want a calm movement or a more frenetic scene, respectively.

The results I've seen certainly fit into the current moment in AI video production, both good and bad. The uncanny valley is always waiting to ensnare users, but there are some surprisingly good examples from both Midjourney and initial users.

AI video battles

(Embedded Reddit video: “Midjourney video is really fun,” from r/midjourney)

Midjourney isn’t trying to compete head-on with Sora or Veo in terms of technical horsepower. Those models are rendering cinematic-quality 4K footage with photorealistic lighting and long-form narratives based solely on text. They’re trained on terabytes of data and emphasize frame consistency and temporal stability that Midjourney is not claiming to offer.

Midjourney’s video tool isn’t pretending to be Hollywood’s next CGI pipeline. The pitch is more about being easy and fun to use for independent artists or tinkerers in AI media.

And it really is pretty cheap. According to Midjourney, one video job costs about the same as an upscaling job, or one image's worth of cost per second of video.

That’s 25 times cheaper than most AI video services on the market, according to Midjourney and a cursory examination of other alternatives.

That's probably for the best, since a lot of Hollywood is going after Midjourney in court. The company is currently facing a high-stakes lawsuit from Disney, Universal, and other studios over claims it trained its models on copyrighted content.

For now, Midjourney's AI generators for images and video remain active, and the company has plans to expand its video production capabilities. Midjourney is teasing long-term plans for full 3D rendering, scene control, and even immersive world exploration. This first version is just a stepping stone.

Advocates for Sora and Veo probably don't have to panic just yet, but maybe they should be keeping an eye on Midjourney's plans, because while they’re busy building the AI version of a studio camera crew, Midjourney just handed a magic flipbook to anyone with a little cash for its credits.

Hybrid Cars, Once Derided and Dismissed, Have Become Popular

NYT Technology - Fri, 06/20/2025 - 04:01
Automakers and car buyers are taking a second, harder look at hybrids after leaving them behind for electric vehicles.

Chinese Companies Set Their Sights on Brazil

NYT Technology - Thu, 06/19/2025 - 23:00
Confronted with tariffs and scrutiny in the United States and Europe, Chinese consumer brands are betting that they can become household names in Latin America’s biggest economy.

TikTok Hits Cannes, Where a U.S. Ban Seems a Distant Dream

NYT Technology - Thu, 06/19/2025 - 12:57
TikTok executives hosted happy hours and played pickleball with influencers on the French Riviera this week, even as a U.S. ban loomed over the company.

‘My kids will never be smarter than AI’: Sam Altman’s advice on how to use ChatGPT as a parent leaves me shaking my head

Techradar - Thu, 06/19/2025 - 10:28

Sam Altman has appeared in the first episode of OpenAI’s brand new podcast, called simply the OpenAI Podcast, which is available to watch now on Spotify, Apple Podcasts, and YouTube.

The podcast is hosted by Andrew Mayne and in the first episode, OpenAI CEO Sam Altman joins the host to talk about the future of AI: from GPT-5 and AGI to Project Stargate, new research workflows, and AI-powered parenting.

While Altman's thoughts on AGI are always worth paying attention to, it was his advice on AI-powered parenting that caught my ear this time.

You have to wonder if Altman’s PR advisors have taken the day off, because after being asked the softball question, “You’ve recently become a new parent, how is ChatGPT helping you with that?”, Altman somehow draws us into a nightmare scenario of a generation of AI-reared kids who have lost the ability to communicate with regular humans in favor of their parasocial relationships with ChatGPT.

“My kids will never be smarter than AI,” says Altman in a matter-of-fact way. “But also they will grow up vastly more capable than we were when we grew up. They will be able to do things that we cannot imagine and they’ll be really good at using AI. And obviously, I think about that a lot, but I think much more about what they will have that we didn’t…. I don’t think my kids will ever be bothered by the fact that they’re not smarter than AI.”

That all sounds great, but then later in the conversation he says: “Again, I suspect this is not all going to be good, there will be problems and people will develop these problematic, or somewhat problematic, parasocial relationships.”

In case you’re wondering what "parasocial relationships" are, they develop when we start to consider media personalities or famous people as friends, despite having no real interactions with them; the way we all think we know George Clooney because he’s that friendly doctor from ER, or from his movies or the Nespresso advert, when, in fact, we have never met him, and most likely never will.

Mitigating the downsides

Altman is characterizing a child’s interactions with ChatGPT in the same way, but interestingly he doesn’t offer any solutions for a generation weaned on ChatGPT Advanced Voice mode rather than human interaction. Instead he sees it as a problem for society to figure out.

“The upsides will be tremendous and society in general is good at figuring out how to mitigate the downsides”, Altman assures the viewer.

Now I’ll admit to being of a more cynical bent, but this does seem awfully like he’s washing his hands of a problem that OpenAI is creating. Any potential problems that a generation of kids brought up interacting with ChatGPT are going to experience are, apparently, not OpenAI’s concern.

In fact, earlier when the podcast host brought up the example of a parent using ChatGPT’s Advanced Voice Mode to talk to their child about Thomas the Tank Engine, instead of doing it themselves because they are bored of talking about it endlessly, Altman simply nods and says, “Kids love Voice Mode in ChatGPT”.

Indeed they do, Sam, but is it wise to let your child loose on ChatGPT’s Advanced Voice Mode without supervision? As a parent myself (although of much older children now), I’m uncomfortable hearing about young kids being given what sounds like unsupervised access to ChatGPT.

AI comes with all sorts of warnings for a reason. It can make mistakes, it can give bad advice, and it can hallucinate things that aren’t true. Not to mention that “ChatGPT is not meant for children under 13” according to OpenAI’s own guidelines, and I can’t imagine there are many kids older than 13 who are interested in talking about Thomas the Tank Engine!

I have no problem using ChatGPT with my kids, but by the time ChatGPT was available they were both older than 13. If I were using it with younger children, I’d always make sure that they weren’t using it on their own.

I'm not suggesting that Altman is in any way a bad parent, and I appreciate his enthusiasm for AI, but I think he should leave the parenting advice to the experts for now.

Your A.I. Queries Come With a Climate Cost

NYT Technology - Thu, 06/19/2025 - 06:28
When it comes to artificial intelligence, more intensive computing uses more energy, producing more greenhouse gases.

Can A.I. Quicken the Pace of Math Discoveries?

NYT Technology - Thu, 06/19/2025 - 04:02
Breakthroughs in pure mathematics can take decades. A new Defense Department initiative aims to speed things up using artificial intelligence.

Google Gemini’s super-fast Flash-Lite 2.5 model is out now - here’s why you should switch today

Techradar - Wed, 06/18/2025 - 19:00
  • Google’s new Gemini 2.5 Flash-Lite model is its fastest and most cost-efficient
  • The model is for tasks that don't require much processing, like translation and data organization
  • The new model is in preview, while Gemini 2.5 Flash and Pro are now generally available

AI chatbots can respond at a pretty rapid clip at this point, but Google has a new model aimed at speeding things up even more under the right circumstances. The tech giant has unveiled the Gemini 2.5 Flash-Lite model as a preview, joining the larger Gemini family as the smaller, yet faster and more agile sibling to the Gemini 2.5 Flash and Gemini 2.5 Pro.

Google is pitching Flash-Lite as ideal for tasks where milliseconds matter and budgets are limited. It's intended for tasks that may be large but relatively simple, such as bulk translation, data classification, and organizing information.

Like the other Gemini models, it can still process requests and handle images and other media, but its principal value is speed: it's quicker than the other Gemini 2.5 models. It's an update of the Gemini 2.0 Flash-Lite model, and the 2.5 iteration has performed better in tests than its predecessor, especially in math, science, logic, and coding tasks. Flash-Lite is about 1.5 times faster than older models.

The budgetary element also makes Flash-Lite unique. While other models may turn to more powerful, and thus more expensive, reasoning tools to answer questions, Flash-Lite doesn’t always default to that approach. You can actually flip that switch on or off depending on what you’re asking the model to do.

And just because it can be cheaper and faster doesn't mean Flash-Lite is limited in the scale of what it can do. Its context window of one million tokens means you could ask it to translate a fairly hefty book, and it would do it all in one go.
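
To make the speed-versus-reasoning tradeoff concrete, here is a minimal sketch of how that toggle can look in code. It assumes the google-genai Python SDK and an API key in the environment, and the model id shown is an assumption; the exact preview name may differ, so check Google's current model list.

```python
# A minimal sketch (not an official example) of calling a Flash-Lite-class model
# with its extra "thinking" switched off for a simple bulk-style task.
# Assumptions: the google-genai Python SDK is installed, an API key is set in the
# environment, and the model id below matches the current preview name.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-lite",  # assumption: verify the current model id
    contents="Classify this support ticket as billing, technical, or other: "
             "'I was charged twice for my subscription this month.'",
    config=types.GenerateContentConfig(
        # A thinking budget of 0 skips the slower reasoning pass for speed and cost.
        thinking_config=types.ThinkingConfig(thinking_budget=0)
    ),
)
print(response.text)
```

For a query that genuinely benefits from reasoning, you would raise the thinking budget rather than leave it at zero.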

Flash-Lite lit

The preview release of Flash-Lite isn't Google's only AI model news. The Gemini 2.5 Flash and Pro models, which have been in preview, are now generally available. The growing catalogue of Gemini models isn't just a random attempt by Google to see what people like. The variations are tuned for specific needs, making it so Google can pitch Gemini as a whole to a lot more people and organizations, with a model to match most needs.

Flash-Lite 2.5 isn’t about being the smartest model, but in many cases, its speed and price make it the most appealing. You don’t need tons of nuance to classify social media posts, summarize YouTube transcripts, or translate website content into a dozen languages.

That’s exactly where this model thrives. And while OpenAI, Anthropic, and others are releasing their own fast-and-cheap AI models, Google’s advantage in integration with its other products likely helps it pull ahead in the race against its AI rivals.

BYD and Other Chinese Carmakers Expand Sales in Europe Despite Tariffs

NYT Technology - Wed, 06/18/2025 - 13:47
BYD and other companies doubled their share of the car market after the European Union imposed higher tariffs on electric vehicles from China.

Tesla’s Robotaxi, Long Promised by Elon Musk, Joins a Crowded Field

NYT Technology - Wed, 06/18/2025 - 13:14
Mr. Musk says the driverless taxis could begin ferrying passengers on Sunday in Austin, Texas, where other companies already have similar cars on the road.

Windows 11 user has 30 years of 'irreplaceable photos and work' locked away in OneDrive - and Microsoft's silence is deafening

Techradar - Wed, 06/18/2025 - 12:30
  • A Redditor was moving a huge slab of data from old drives to a new one
  • They used OneDrive as a midpoint in an ill-thought-out strategy that left all the data in Microsoft's cloud service temporarily
  • When they came to download the data, they were locked out of OneDrive, and can't get Microsoft support to address this issue

A cautionary tale shared on Reddit tells the story of a Windows PC owner who used OneDrive to store 30 years' worth of their data and lost the lot when their Microsoft account was locked, with no apparent way to regain access.

This is a nasty-sounding predicament (highlighted by Neowin), to say the least, involving the loss of what's described as three decades of "irreplaceable photos and work", which had been transferred to OneDrive as a temporary storage facility.

The idea the Redditor had was that they needed to move that huge collection of files from multiple old drives where they were stored to a large new drive, and OneDrive was selected as the midpoint in that data migration journey.

So, they moved all the files off the old drives onto Microsoft's cloud storage service and prepared to transfer the data to the new drive, when they ran into a huge stumbling block. The Redditor was suddenly locked out of their Microsoft account (and therefore OneDrive, and all Microsoft services).

Now, this isn't a sensible way to manage this data transfer, of course (and I'll come back to outline why in a moment, in case you're not sure), but the point here is that the mistake happened, and the Redditor can't get any joy whatsoever from Microsoft in terms of trying to resolve the problem.

In their Reddit post, which is gaining a lot of attention, they say: "Microsoft suspended my account without warning, reason, or any legitimate recourse. I've submitted the compliance form 18 times - eighteen - and each time I get an automated response that leads nowhere. No human contact. No actual help. Just canned emails and radio silence."

They continue: "This feels not only unethical but potentially illegal, especially in light of consumer protection laws. You can't just hold someone's entire digital life hostage with no due process, no warning, and no accountability," adding that Microsoft is a "Kafkaesque black hole of corporate negligence."

Analysis: Microsoft needs to do better

Okay, so first up, very quickly - because I don't want to dwell on the mistakes made by the unfortunate Redditor - this is not a good way to proceed with a drive migration.

In transferring a large slab of data like this, you should never have a single point of failure in the process. By which I mean shoving all the data into the cloud, on OneDrive, and having that as the sole copy. That's obviously the crux of the problem here, because once the user was locked out of OneDrive, they no longer had access to their data at all.

When performing such an operation, or as a general rule for any data, you should always keep multiple copies. Typically, that would be the original data on your device, a backup on a separate external drive at home (preferably two drives, in fact), and an off-site copy in a cloud storage locker like OneDrive. The point is that if you lose the original data, you can resort to, say, the external drive, but if that's also gone to the great tech graveyard in the sky somehow, you can go to the second drive (or the cloud).
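
For anyone attempting a similar migration, here is a small, generic sketch of the safer approach: copy to the new drive first, verify the copy against a checksum, and only then treat any single location as expendable. It uses only the Python standard library, is not tied to OneDrive or any particular tool, and the paths in the usage comment are hypothetical.

```python
# A minimal sketch of a verified copy step for a drive migration:
# copy each file, then confirm the destination matches the source byte-for-byte
# (via SHA-256) before you even think about deleting the original.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB chunks
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src: Path, dst_dir: Path) -> Path:
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / src.name
    shutil.copy2(src, dst)  # copy2 preserves timestamps where possible
    if sha256(src) != sha256(dst):
        raise IOError(f"Verification failed for {src}")
    return dst

# Example with hypothetical paths; keep the originals until every copy verifies:
# copy_and_verify(Path("D:/old-drive/photos/1995.zip"), Path("E:/new-drive/photos"))
```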

Anyway, you get the point, but the Redditor chanced this way of doing things - figuring, no doubt, that as a temporary measure, it was fine to rely solely on OneDrive - but clearly, that wasn't the case.

There are a number of issues with the scenario presented here where Microsoft has fallen short of the standards that a customer would rightly expect.

Why did this happen?

First, there's the fact that the Microsoft account was simply locked with no notification or message provided as to why. The OneDrive user can only guess at why this ban was enacted (and the obvious guess is that some copyrighted material, or other content that contravened Microsoft's policies, was flagged in the uploaded files, which would trigger the account to be automatically locked). It's worth making it clear that we (obviously) don't have any idea about the contents of this data.

Secondly, with this having happened, the most worrying part here is the Redditor's description of how they feel like they're banging their head against a brick wall in trying to talk to Microsoft's support staff about how to resolve this. After all, this is essentially their whole life's worth of data, and there should be some way to at least find out what the problem is - and give the person who's been locked out a chance to explain, and potentially regain access.

For all we know, it could be a bug that's caused this. But if nobody at Microsoft is listening, then presumably nobody is investigating. And if you do use OneDrive as a cloud backup, not having access to your data at a critical time is a frightening prospect indeed. (Which is why you must sort out those other local backups as an alternative, or indeed another cloud service, if you really want to push the 'data redundancy' boat out.)

Hopefully, the Redditor will eventually get to speak to a Microsoft support agent - an actual person - to iron this out. In theory, all that data could still be on Microsoft's servers somewhere.

This incident has occurred at a time when Microsoft is pushing its account services on Windows 11 users, as you can't install the OS without one (well, you can by using loopholes, although the company is busy eradicating some of those fudges). Not to mention pushing OneDrive, Microsoft 365, and other services with ads in Windows, of course.

That broad drive is an unfortunate backdrop here when you consider another misstep recently brought to light: a potential problem with deleted Microsoft accounts (deleted by the user, that is), which could result in the loss of the key for the default drive encryption applied with new installations of Windows 11 24H2.

Again, that nasty little (albeit niche) scenario could lead to all the data on your drive disappearing into a black hole, never to be seen again. It's another odd situation you could end up in with no recourse at all - and this, like the Redditor's awful plight, is a predicament that Microsoft clearly should not be inflicting on consumers.

We've contacted Microsoft for comment about this specific case, and will update this story if we get a response from the company.

This island is getting the world’s first AI government, but I’ve read this story before – and it doesn’t end well

Techradar - Wed, 06/18/2025 - 11:00

Sensay, a creator of AI-powered digital replicas of people, has established an AI-powered government on a real island it purchased off the coast of the Philippines. Previously known as Cheron Island, it's been renamed Sensay Island.

The Head of State (effectively, the President) of Sensay Island is Roman Emperor Marcus Aurelius, one of The Five Good Emperors of Rome, who was known for his love of Stoic philosophy and good judgement. Wartime British PM Winston Churchill is the Prime Minister, while Sun Tzu, author of the Chinese strategic classic, The Art of War, takes the reins at Defence. Alexander Hamilton is the new Treasury Secretary.

According to Sensay, “Each AI replica is designed to emulate the personality, values, and decision-making patterns of the historical figure it represents, providing a governance style infused with timeless wisdom and ethical principles.

“To truly emulate the character of these historical figures, each recreation is uniquely trained on the literature, teaching, philosophies, and speeches of the real-life counterparts they represent.”

How easily AI replicas from such disparate periods and with such strong characters will be able to work together in government remains to be seen, since their contrasting values must surely clash at points, not to mention be at odds with modern-day values.

The full cabinet

Here’s the full list of Sensay Island cabinet members:

  • Head of State (President): Marcus Aurelius
  • Prime Minister: Winston Churchill
  • Foreign Affairs Minister: Eleanor Roosevelt
  • Defense Minister: Sun Tzu
  • Treasury Secretary: Alexander Hamilton
  • Justice Minister: Nelson Mandela
  • Science & Technology Minister: Ada Lovelace
  • Education Minister: Confucius
  • Health Minister: Florence Nightingale
  • Agriculture Minister: George Washington Carver
  • Environment Minister: Wangari Maathai
  • Culture Minister: Leonardo da Vinci
  • Ethics Advisor: Mahatma Gandhi
  • Innovation Advisor: Nikola Tesla
  • Infrastructure Director: Queen Hatshepsut
  • Chief Strategist: Zhuge Liang
  • Intelligence Chief: T.E. Lawrence

Personally, I think da Vinci was a wise choice for Culture Minister, and it’s nice to see Nikola Tesla being recognized as Innovation Advisor, but I have to say I’m a little disappointed not to see Queen Cleopatra anywhere in the mix.

Confucius also presents some challenges as Education Minister, considering his unfamiliarity with modern technology, like AI.

Sensay Island

Sensay Island is neighbor to Guinlep Island and Bamboo Private Island. (Image credit: Sensay)

A real island

Sensay Island is indeed a real island off the coast of the Philippines. You can find it on Google Maps. It has a surface area of around 3.4 km², comprising beaches, rainforest, and coral lagoons.

From what we can see, there doesn’t seem to be any infrastructure of any kind on the island, so if you’re thinking of a visit, be aware that there’s probably no Wi-Fi.

While an AI government feels like something of a publicity stunt, there are serious reasons why Sensay has created an AI island:

“Sensay is looking to demonstrate that AI can be deployed in national governance to aid policymaking free from political partisanship and bureaucratic delays, and with unprecedented transparency and participation”, it says.

A fly on the wall

According to Marisol Reyes, the (AI-powered) Tourism Manager for Sensay Island, who you can chat with at its website, you can visit the island whenever you like:

“Absolutely, you can visit our beautiful island! We're thrilled to welcome visitors to experience this unique blend of cutting-edge AI governance and traditional Filipino hospitality. Sensay Island is open to tourists who want to explore our pristine beaches, vibrant coral sanctuaries, and witness history in the making with our groundbreaking AI Council.”

For those without the means to visit, the good news is that you can still get involved. You will soon be able to register as an E-resident of Sensay Island, allowing you to propose new policies for its AI-powered administration via an open-access platform:

“This will combine direct democracy with AI-enhanced decision-making”, says Sensay.

Dan Thomson, CEO and founder of Sensay, added, “This project shows Sensay’s commitment to pushing the boundaries of AI in a responsible direction. I hope our approach will show the public and world leaders that AI is a feasible and efficient way to develop and implement policies."

Despite an AI-controlled civilization leading to (attempted) human extinction in just about every major Sci-Fi movie I’ve watched in the last 40 years, from Logan’s Run to The Terminator, it seems that humans are still determined to give it a go.

But could AI actually provide a more balanced and sane government than our elected officials can? There’s only one way to find out...

Windows 11’s new Start menu falls short in one key area – and it’s making people angry

Techradar - Wed, 06/18/2025 - 05:52
  • Microsoft has a Start menu redesign in testing
  • This introduces new layouts for the list of all apps
  • One of those layouts is a category view, and we’ve had confirmation from Microsoft that it won’t be possible to customize this to your liking

We’ve just learned more about how Microsoft’s revamped Start menu will work when it arrives in Windows 11, and not everyone is happy about the new info aired here.

Windows Latest reports on an element of customization that falls short of what some Windows 11 users were hoping for, and it pertains to one of the new layouts being introduced for the list of apps.

As you may recall, with the redesigned Start menu – which is in test builds of Windows 11 now – the long list of apps installed on the PC can be set to a couple of more compact alternative layouts, one of which is a grid and the other a category view.

It’s the latter we’re interested in here, whereby apps are grouped into different categories such as Games, Productivity, Creativity, Social, Utilities and so forth. Each of these categories has a box in which up to four icons for the most commonly-used apps appear, and the full roster of apps is found within if you open the category – all of which allows for an easier way to locate the app you’re looking for, rather than scrolling through a lengthy alphabetical list.

So, what’s the beef that’s been raised here? Windows Latest has received confirmation from Microsoft that it won’t be possible to create your own category types.

Windows 11 will, of course, make the decisions on how to categorize apps and where they belong, but there are some interesting, and less than ideal, nuances picked up by Windows Latest here.

Any app that Windows 11 isn’t sure about will go in the ‘Other’ category, for one thing. Also, if there aren’t three apps for any given category – because you don’t have enough creativity apps installed on your machine, say – then a stray creativity app (like Paint) will be dumped in Other.
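
To picture the rule Windows Latest describes, here's a toy sketch of that grouping logic, purely illustrative and in no way Microsoft's code: apps land in fixed categories, and any category that ends up with fewer than three apps is emptied into 'Other'. The category names and the three-app threshold come from the report above; everything else is a made-up example.

```python
# Toy illustration of the reported Start menu grouping rule (not Microsoft's code):
# apps land in fixed categories, and any category with fewer than three apps
# gets its contents dumped into "Other".
from collections import defaultdict

FIXED_CATEGORIES = {"Games", "Productivity", "Creativity", "Social", "Utilities"}
MIN_APPS_PER_CATEGORY = 3

def group_apps(apps: dict[str, str]) -> dict[str, list[str]]:
    """apps maps an app name to the category the system assigned it."""
    groups: dict[str, list[str]] = defaultdict(list)
    for app, category in apps.items():
        groups[category if category in FIXED_CATEGORIES else "Other"].append(app)
    for category in list(groups):
        if category != "Other" and len(groups[category]) < MIN_APPS_PER_CATEGORY:
            groups["Other"].extend(groups.pop(category))  # e.g. a lone Paint install
    return dict(groups)

print(group_apps({"Paint": "Creativity", "Steam": "Games",
                  "Chess": "Games", "Solitaire": "Games"}))
# -> {'Games': ['Steam', 'Chess', 'Solitaire'], 'Other': ['Paint']}
```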

Analysis: improved customization could still be offered with any luck

If Microsoft gave folks the ability to make their own category folders, they could have a few alternative dumping grounds to Other – categories named so that the user could better remember what apps they contain.

However, with Windows 11 overseeing category allocation, it seems like Microsoft wants to keep a tight rein on the groups that are present in this part of the interface. Sadly, it isn’t possible to move an app from one category to another, either (as Windows Latest has highlighted in the past), should you disagree with where it’s been placed – and this latter ability is a more telling shortcoming here.

The new Start menu remains in testing, so Microsoft may make changes before it arrives in the finished version of Windows 11. That’s entirely possible, especially seeing as Microsoft has (again) been stressing how it’s listening to user feedback in order to better inform Windows 11’s design, the Start menu overhaul included.

So, simply being able to drag and drop icons between these categories is something we can hope for, in order to reclassify any given app – it’s a pretty basic piece of functionality, after all. We may eventually get to define our own categories, too, but for now it appears that Microsoft is taking a rather rigid approach to customization with this part of the menu.

Expect this Start menu makeover to be one of the central pillars of Windows 11 25H2 when it pitches up later this year.
