Feed aggregator

Valve leak hints that its next VR headset could be landing sooner than we realized

Techradar - Thu, 04/10/2025 - 10:34
  • Valve is reportedly ramping up facial interface production
  • These parts are essential for VR wearables, suggesting that Valve Deckard is coming soon
  • Valve's move could also be teasing future VR headsets beyond Deckard

Nearly six years after the launch of the smash hit Valve Index, it appears the Steam team is gearing up to start making the long-awaited standalone sequel, according to new rumors.

Codenamed the Valve Deckard in leaks, the headset was initially believed to have a late 2025 release date, but details were few and far between, suggesting it might slip later.

However, new details from SadlyItsBradley – a source of several VR hardware leaks in the past – suggest that Valve could be steaming ahead with production, as the company has been importing the equipment needed to manufacture VR headset facial interfaces.

In a later post, SadlyItsBradley pointed out that Valve has been receiving facial interface shipments since last year – suggesting that this new equipment may not just be for producing Deckard interfaces, but also prototype designs for its next VR headset.

Alternatively, this might be an attempt by Valve to avoid some of the increased tariff costs that US President Donald Trump has been horsing around with recently.

"FWIW, Valve also received pallet shipments of new facial interfaces (not machinery) since last year from the same company. The machinery Valve recently received was probably* an injection moulder tweaked to focus on making future gaskets." (1/3) – SadlyItsBradley, April 10, 2025: https://t.co/0qlgi2UcfD pic.twitter.com/D7YweQuRqJ

Admittedly, facial interfaces don’t tell us much about the headset itself, but given they're only used for devices like VR headsets and nothing else in Valve’s hardware line-up (the Steam Deck doesn't need one, for example), Valve investing in their production suggests it has plans to make a new VR headset.

And since the Valve Index reportedly exited production sometime last year (via SadlyItsBradley, again) it’s unlikely these interfaces would be for its existing tech.

The Valve Steam Deck running Cyberpunk 2077

Valve Deckard could be a wearable Steam Deck (Image credit: Future / Roland Moore-Colyer)

Beyond this latest rumor, other leaks are teasing something special, with Deckard said to not only be a standalone headset (like the Meta Quest 3) but also a wearable Steam Deck that you can use for playing flat games.

Just be prepared for it to be pricey. We’re talking around $1,200 (around £950 / AU$1,950), though this would include a few games in a bundle.

This isn’t as cheap as a Meta Quest 3, but with Steam’s incredible software support the Valve Deckard could be a shoo-in for our best VR headsets list.

It’s worth remembering that all leaks should be taken with a pinch of salt, but with Valve Deckard it feels like where there’s smoke there's likely fire.

We’ll just have to wait and see if that’s the case as we roll through 2025 and the rumored Deckard release date approaches. If Deckard is announced, you can be sure we’ll be ready to keep you up to date on everything you need to know.


How Do the iPhone 16E and Google Pixel 9A Compare to More Expensive Models?

NYT Technology - Thu, 04/10/2025 - 09:55
With tariffs threatening to drive up the costs of most things, the new entry-level phones from Apple and Google present a timely opportunity to save some bucks.

Reddit Answers just got a huge Google Gemini upgrade, and now I can’t wait to try it

Techradar - Thu, 04/10/2025 - 06:14
  • Reddit Answers is getting Google Gemini integration
  • The upcoming Reddit AI platform will pull results from Google AI
  • Currently in beta in the United States, Reddit Answers does not have a release date as of yet

Reddit Answers, the upcoming AI platform, is getting a major update that adds integration with Google Gemini.

Reddit Answers uses AI and Reddit's almost endless community-driven knowledge to provide quick responses to any question you ask it. Think of it as an AI-powered Quora or Yahoo Answers from back in the day.

The new AI platform is currently in beta, and now it's getting even better before its official release to the wider public.

"By integrating Google Cloud's Vertex AI Search along with Reddit-built technology into Reddit Answers, we are bringing the same innovation that powers Google to our users," said Matt Snelham, SVP of Infrastructure, Reddit.

"Users are seeing improved search relevance and as a result, we've seen a growth in users directly navigating to the Reddit homepage through Reddit Answers, increasing platform engagement."

Reddit Answers is currently available to beta testers on the web and iOS devices in the United States. The service initially launched in December 2024, and we're yet to get an official release date.

Adding Gemini to the mix gives Reddit Answers added credibility thanks to Google's impressive AI models, although it remains to be seen whether Reddit users will take to the new AI abilities.

One Reddit user, Panxma, said three months ago in a thread, "I feel like the Answers thing takes away the community aspect of Reddit. I probably won’t ever use the thing since I just Google what I need and find a Reddit post rather than asking the AI itself."

Now that Reddit Answers has Google Gemini integration, perhaps those who flock to Google to find Reddit posts will be more inclined to use Reddit's AI.

I'm excited by Reddit Answers

As an avid Reddit user who flocks to the platform to answer the majority of random questions that pop into my brain, I'm pretty intrigued by an AI chatbot that pulls from the community aspect of Reddit and merges it with Google Gemini.

Reddit is yet to confirm when, or if, a beta of Answers will launch in the UK, but I'm patiently waiting to give the AI chatbot a go.

I do, however, share the same sentiment as the Reddit user above. One of the best things about the website to begin with is the ability to get information from a wide variety of people from all walks of life.

Bringing AI into the mix kind of feels like the antithesis of Reddit, and I'm interested to see how it works without losing what makes Reddit great in the first place.

Google Gemini integration is a huge upgrade for Reddit Answers; now Reddit just needs to launch the platform so we can all give it a try.


Windows 11 update reportedly creates a mysterious folder on your system drive, which is certainly confusing – but ultimately harmless

Techradar - Thu, 04/10/2025 - 04:41
  • The monthly patch for Windows 11 24H2 reportedly has a bug that creates an enigmatic folder
  • This ‘inetpub’ folder appears in the system drive but it’s empty
  • It’s harmless, and you can seemingly delete it safely – though there appears to be a history of ‘inetpub’ popping up on PCs

If you’re scratching your head because a mysterious folder has appeared on your system drive in Windows 11, you wouldn’t be alone.

According to Windows Latest, the issue with the strange folder being created pertains to the latest April update for Windows 11 24H2.

After installing that upgrade, apparently a good many users, including Windows Latest, have spotted that an empty folder called ‘inetpub’ has been created on the drive that Windows 11 is installed on (the C: drive normally).

This folder is empty and doesn’t do anything, but its appearance may cause some bewilderment among Windows 11 users who open their drive for whatever reason and happen to notice it.

As Windows Latest explains, this appears to relate to IIS (Internet Information Services), which is Microsoft’s web server software for developers. The ‘inetpub’ folder is used to store the likes of web pages that a developer might be testing locally on the host PC.

If you’re panicking at this point, thinking that this update has installed an entire piece of software you don’t need on Windows 11 without your permission, then don’t fret. Nothing has actually been installed, only the blank folder has been mistakenly created.

In other words, there’s no harm done as such. Windows Latest has deleted the folder – as have others – and reports that it’s quite safe to remove it from your drive. Still, those who are more paranoid may just want to leave it (as it’s empty, and not doing anything – unless its presence annoys you, of course).
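If you do want to act on that, here's a small, hypothetical Python sketch (run at your own risk, from an elevated prompt) that only removes C:\inetpub if it exists and is genuinely empty. It isn't an official fix from Microsoft or Windows Latest, just a cautious way to check before deleting anything.

  from pathlib import Path

  # The folder the April Windows 11 24H2 update reportedly leaves behind.
  inetpub = Path(r"C:\inetpub")

  if not inetpub.exists():
      print("No inetpub folder found - nothing to do.")
  elif any(inetpub.iterdir()):
      # A non-empty inetpub usually means IIS is actually in use; leave it alone.
      print("inetpub is not empty - it may belong to a real IIS install, so it was left in place.")
  else:
      inetpub.rmdir()  # rmdir only succeeds on empty directories, which is the safety net here
      print("Removed the empty inetpub folder.")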

AOC Agon Pro AG276FK gaming monitor tilted slightly to the side, showing the Windows desktop screen

(Image credit: Future / Jeremy Laird)

Analysis: Folder whack-a-mole?

This is an odd quirk indeed, and yet another in a long line of weird bugs with Windows 11. As to what causes it, as Windows Latest theorizes, we can presume that somewhere within the April cumulative update, Microsoft is making some tweaks to IIS components. This process likely triggered the creation of the folder related to IIS storage errantly on machines that don’t use the software – but that’s all that has happened.

So, this is a relatively benign bug, and more a source of confusion than anything else. However, the danger is that strange behavior on a PC like this may cause a Windows 11 user to believe that they have perhaps been hit by a virus. Mysterious things appearing on your drive are, of course, a telltale sign of a virus infection (along with your PC becoming a lot more sluggish, or suffering from weird popups).

The bug also appears to be widespread, so quite a number of Windows 11 users may be annoyed, or bothered by, the folder’s appearance (if they notice it at all, that is). Windows Latest observes that it appeared on all of their Windows 11 machines, and there are a fair few reports from confused folks on Reddit, too.

At any rate, rest assured that nothing bad has happened to your system here, and hopefully Microsoft will be able to clean up this glitch relatively quickly. Running against that idea, though, is the fact that there are historical reports of the seemingly random appearance of this folder on system drives, and that doesn’t fill me with confidence that the software giant will stamp this out.

So, even if you do go ahead and delete it as Windows Latest suggests (at your own risk, I might add) – and that works fine – there’s no telling if at some point in the future, this folder might reappear on your Windows 11 drive.

It does indeed seem that some folks have been playing ‘inetpub’ whack-a-mole for some years now, and if that really is the case, it’s down to Microsoft to come up with a permanent solution.


OpenAI Asks Court to Bar Elon Musk From Unfairly Attacking It

NYT Technology - Wed, 04/09/2025 - 20:27
In a legal filing, OpenAI asked a federal court to hold Mr. Musk responsible for any damage he has caused the firm, in the latest sign of their bitter feud.

Launch of First Amazon Project Kuiper Internet Satellites Is Scrubbed

NYT Technology - Wed, 04/09/2025 - 20:00
The spacecraft are the online giant’s entry into beaming wireless service from space, but the company has much to do before it can compete with SpaceX’s Starlink.

How Musk and Trump Are Working to Consolidate Government Data About You

NYT Technology - Wed, 04/09/2025 - 15:38
Elon Musk’s team is leading an effort to link government databases, to the alarm of privacy and security experts.

Google Gemini could soon get a super-useful 'Power up' button – here's what it does

Techradar - Wed, 04/09/2025 - 15:00
  • Google is testing a new “Power Up” button in its Gemini app that upgrades your typed prompt
  • The button will produce a clearer, more detailed prompt without multiple attempts
  • The aim is to get better responses from Gemini immediately without needing to write perfect prompts

If you've ever used an AI chatbot, you know that figuring out how to phrase your prompts can make all the difference in getting a useful answer or gibberish. You can spend a long time fiddling with phrasing, word order, and detail level before stumbling upon the right way to ask the AI a question. Google is testing a new button for its Gemini AI assistant to help you get to that ideal prompt immediately. The upcoming “Power Up” button, found by Android Authority, gives your first attempt at a prompt a glow-up before you submit it to Gemini.

The idea is that instead of sweating over how to phrase your prompt to Gemini perfectly, you tap this button and let Gemini polish or 'power up' your initial attempt into something more detailed, more specific, and more likely to convey what you want to the AI model.

Gemini power

This matters a lot when you think about how much of the AI experience hinges on you and your ability to craft a prompt. You have to be specific but not too detailed, thorough but not so much as to distract from your main point. Sometimes, you even have to psychoanalyze the AI, figuring out weird quirks that may affect the result, like being polite or telling the AI not to be lazy.

I've often found it helpful to straight-up ask an AI chatbot for help crafting a prompt if I'm not sure what the best phrasing is to coax the information I want from the model. There are also some cases where the AI will automatically, but invisibly, reshape your prompt before answering. That can be helpful, but it might also be the culprit behind some of the more erratic responses you have seen.

The Power Up button would make it faster to get the right prompt and more transparent than just doing a behind-the-scenes polish. You write your prompt as usual, even if it's only half-formed, then hit Power Up and let Gemini heat up your scattered thoughts into a sharp inquiry worth submitting to the AI. The improved prompt then gets sent, and voilà, your AI assistant has a much better idea of how to help.
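Google hasn't said how Power Up works under the hood, but the general idea of having a model rewrite a rough prompt before it's submitted is easy to sketch. Here's a hypothetical example using the google-generativeai Python library; the model name, the rewriting instruction, and the power_up helper are assumptions for illustration, not Gemini's actual Power Up logic.

  import google.generativeai as genai

  genai.configure(api_key="YOUR_API_KEY")  # placeholder key
  model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

  def power_up(rough_prompt: str) -> str:
      """Ask the model to rewrite a half-formed prompt into a clearer, more specific one."""
      rewrite_request = (
          "Rewrite the following prompt so it is clearer, more specific and more detailed, "
          "without changing what the user is asking for. Return only the rewritten prompt.\n\n"
          f"Prompt: {rough_prompt}"
      )
      return model.generate_content(rewrite_request).text.strip()

  # The polished prompt is then sent to the model as usual.
  improved = power_up("plan a cheap weekend trip, somewhere warm, with my dog")
  answer = model.generate_content(improved)
  print(improved)
  print(answer.text)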

In some ways, this just expands on the suggestions for prompts you see from Gemini when you first open the chatbot. Those are much more generic than what the Power Up button might produce. It would also fit well into the other features providing variations on Gemini's output, like Deep Research, Canvas, and image creation with Imagen 3.

The Power Up button would be a relatively quiet kind of upgrade, but one that might serve Google's interests in preventing frustration among Google Gemini users who feel like they can't get the AI to fulfill their requests properly. It might also encourage those using other AI chatbots facing similar annoyances to come check out Gemini and its powered-up prompts.


Pico just updated its best VR headset feature – and now I'm even more jealous my Meta Quest 3 doesn't have it too

Techradar - Wed, 04/09/2025 - 14:30
  • Pico just debuted waist trackers that work just like its feet trackers
  • It's out now for £39.99 (around $50 / AU$85)
  • When we reviewed the Pico 4 Ultra, its motion trackers were the headset's best feature

Pico has just announced an upgrade to its best VR headset feature: a new motion tracker for your waist, and yet again, I’m left wishing my Meta Quest 3 could get this upgrade, too.

When I tested the Pico 4 Ultra last year, its best feature was the pair of motion trackers you could buy as an add-on. These often came included for free as part of a bundle during sales of the VR headset.

You’d attach them to your feet, and you could bring surprisingly accurate foot tracking to supported games. Combined with hand tracking, this led to supremely immersive experiences, as you could interact with your whole body as if you were really in virtual space.

The new Pico Waist tracker next to a person using it to dance in VR

(Image credit: Pico)

Now, that simple yet effective solution is coming to a brand new Pico Motion Tracker for your waist. In fact, the device is launching today (April 9, 2025) for £39.99 (around $50 / AU$85).

We haven’t been able to test the tracker for ourselves, but given how impressive the foot trackers are, we expect Pico’s new waist trackers will be solid, too, when used in supported titles such as VRChat and compatible PCVR titles through the Pico Connect feature.

I can see this waist tracker being perfect for VR exercising, dancing games, and allowing players to use new VR props like a hula hoop, but we’ll have to wait and see how it’s implemented.

The tech inside the Pico 4 Ultra motion tracker

(Image credit: Pico)

Meta needs better body tracking

Meta does offer its own basic body tracking via your headset’s downward-facing cameras, but it’s not the most robust solution. The AI it relies on is fairly good at predicting where your legs and body should be, but it isn’t always perfect. Further, it hasn’t been implemented into many VR apps in the way that Pico’s sophisticated solution has been.

While Pico’s motion tracking is superb, I’m still not convinced it’s the best headset option for most people.

It doesn’t have the same impressive software catalog you’ll find on Meta’s Quests, and it’s pricier than the Meta Quest 3S and Meta Quest 3 without offering a significant performance bump.

Couple that with the fact that it isn't available to buy in every region the Meta Quest 3 is (including the United States), and that it's a device that has struggled to build a name for itself except amongst VR enthusiasts who are staunchly anti-Meta.

Meta has yet to announce physical motion trackers for Quest – with its CTO shooting the idea down last year (via UploadVR) – but I seriously hope that changes. They’re easily the best Pico feature, and the Meta Quest 3 would be much more versatile if it had access to the same tools.


CodeCrew's Audrey Willis takes to the airwaves

Memphis Business Journal - Wed, 04/09/2025 - 14:16
CodeCrew's Audrey Willis: “There's so many tech stories in Memphis, and nobody tells them because nobody knows that they're here."

Pilot Files Defamation Lawsuit Against Matt Wallace, X Influencer

NYT Technology - Wed, 04/09/2025 - 13:40
Jo Ellis, a National Guard pilot, is suing an influencer who falsely identified her as the captain of a helicopter that collided with a passenger plane in January.

You might get a free Meta Quest 3 to use on your next flight, but I'm not keen on the advertising it'll serve you

Techradar - Wed, 04/09/2025 - 08:00

The next time you fly, you could be handed a Meta Quest 3 to keep you entertained with mixed reality experiences and movies following the success of Meta’s recent pilot program. Though it’s somehow already being ruined by being used for some next-gen in-flight advertising.

Travel Mode landed on Quest headsets a little less than a year ago to allow you to use your VR device while on a flight (and later while on a train journey). Normally, the vehicle’s movements would confuse your headset’s sensors, but travel mode uses a “tuned” algorithm, according to Meta, that accounts for your airplane’s motion so it doesn’t cause disruption.

At the time, Meta announced a partnership with Lufthansa to provide in-flight entertainment to people traveling in their Allegris Business Class Suite (on select flights) so they could enjoy activities like virtual chess, meditation exercises, and virtual sightseeing previews.

Now, 4,000 travelers later, Meta and Lufthansa are hailing that trial as a success and have announced that the service will be expanding to “more airlines and routes” in the near future. Something I’m super excited about.

A Meta Quest 3 headset in a case

(Image credit: Meta / Lufthansa)

Beyond more immersive in-flight entertainment – which could lift your movie off that tiny screen on the seat in front of you and suspend it on a giant virtual display instead – I’m particularly interested in those in-flight meditation exercises and other techniques that could help nervous fliers.

I’m fine with flying, but I know plenty of people who find the experience stress-inducing. A VR headset that can whisk you away to somewhere more relaxing, paired with useful mindfulness exercises, could be just what they need to make flights a less nerve-wracking experience.

One feature I’m not keen on, though, is how the Quest headsets could be used for in-flight advertising – something Meta also just announced in its blog post.

Lufthansa Chess on Travel Mode

(Image credit: Meta/Lufthansa)

Lufthansa and Cupra (a brand in the Volkswagen Group) have partnered to create an “in-flight test-drive app.” Meta explains that headset users will be able to customize their own Cupra car and “engage with the CUPRA Tavascan” as they explore virtual recreations of the streets of Barcelona and a Cupra garage – where you can learn more about the cars the company offers.

Presumably, this will be an opt-in experience rather than a feature that will be forced onto users, but I still can’t help but feel like it’s already cheapening the revolutionary in-flight entertainment system VR headsets could offer by reducing it to another boring way to sell you stuff. A cool Cupra-sponsored in-flight driving sim would be one thing; this is something way more icky-feeling.

I still believe in-flight virtual and mixed reality will be an awesome thing – I got a taste when using the Xreal One AR glasses on a few recent trips – but we’ll have to wait and see if it evolves in a fun way or if it just becomes another tool to sell us stuff.


No Phone, No Internet: A First-Time Visit to Casablanca

NYT Technology - Wed, 04/09/2025 - 06:58
On her first visit to Morocco’s largest city, a visitor swears off her phone, the internet and even printed guides. Her aim? To get lost, learn as she goes, and reclaim the serendipity of travel.

The Wizard of Oz is coming to the Las Vegas Sphere in 16K thanks to the power of Google DeepMind AI

Techradar - Wed, 04/09/2025 - 05:49
  • 1939 classic The Wizard of Oz is coming to the Las Vegas Sphere
  • Using the power of AI, Google is reimagining the film for the 16K spherical screen
  • The Wizard of Oz at The Sphere opens on August 28

The Wizard of Oz is coming to the Las Vegas Sphere, and it's all thanks to Google's incredible AI technology.

Following last week's announcement that the 1939 classic The Wizard of Oz is being reimagined for Las Vegas' iconic 16K LED screen spherical theater, set to open on August 28, Google is now giving us a behind-the-scenes look at the magic behind the production.

While The Wizard of Oz was not the first film to be shot in color, it's often referenced as one of the first true movie experiences to use color effectively, thanks to its incredible mix of colors and the use of black-and-white in the film's Kansas scenes.

In Google's blog post, the company says, "Likewise, “The Wizard of Oz” may not be the first film to be reconceptualized with AI, but it may soon be known for that, too."

This is a massive project combining the teams at Google DeepMind, Google Cloud, Sphere Studios, Magnopus, and Warner Bros. Discovery to create an incredible experience, coming off the success of Wicked, which is set in the same world as The Wizard of Oz.

With the launch of Wicked: For Good set for November 2025, it's the perfect time to put eyes on the movie that inspired Elphaba and Glinda's epic two-part musical.

The power of tech and AI will showcase The Wizard of Oz in the "venue's 17,600-seat spherical space to create an immersive sensory experience," and Google says "generative AI will take center stage, alongside Dorothy, Toto and more munchkins than could ever fit in a multiplex."

The Wizard of Oz The Sphere

(Image credit: Google)

How to turn a classic into a modern epic

Elphaba and Glinda looking at something magical off-camera in Universal's Wicked Part One movie

(Image credit: Universal Pictures)

Google's blog post on the work that has gone into bringing The Wizard of Oz to The Sphere is nothing short of mind-blowing.

The man behind the project, Buzz Hays, is the global lead for entertainment industry solutions at Google Cloud and a veteran producer in the world of Hollywood.

He said, "We’re starting with the original four-by-three image on a 35mm piece of celluloid — it’s actually three separate, grainy film negatives; that’s how they shot Technicolor,” Hays says. “That obviously won’t work on a screen that is 160,000 square feet. So we’re working with Sphere Studios, Magnopus and visual effects artists around the world, alongside our AI models, to effectively bring the original characters and environments to life on a whole new canvas — creating an immersive entertainment experience that still respects the original in every way.”

The Sphere has the highest resolution screen in the world, which means The Wizard of Oz's grainy 1939 imagery would've caused a huge issue for the experience. Luckily, the teams found solutions using Veo, Imagen, and Gemini to completely transform the movie using an "AI-based 'super resolution' tool to turn those tiny celluloid frames from 1939 into ultra-ultra-high definition imagery that will pop inside Sphere."

Following the upscaling, the teams then perform a process called AI outpainting, which essentially expands the scenes of The Wizard of Oz to fit the larger space found on the massive screen. AI then generates elements of the performances to fill out the created space and make the shots look and feel seamless.
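Google hasn't published the pipeline behind its Veo and Imagen work here, but outpainting as a general technique is well established: pad the frame with blank space, mask the padding, and let a generative inpainting model fill it in. A rough, hypothetical sketch using the open source diffusers library, with an assumed Stable Diffusion inpainting checkpoint standing in for Google's models, looks like this:

  from PIL import Image
  import torch
  from diffusers import StableDiffusionInpaintPipeline

  # Assumed open checkpoint for illustration; not the models Google is actually using.
  pipe = StableDiffusionInpaintPipeline.from_pretrained(
      "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
  ).to("cuda")

  # Hypothetical 4:3 source frame, resized so every dimension stays a multiple of 8.
  frame = Image.open("oz_frame.png").convert("RGB").resize((512, 384))
  pad = 128  # how much new picture to invent on each side

  # Place the original frame in the centre of a wider canvas...
  canvas = Image.new("RGB", (512 + 2 * pad, 384))
  canvas.paste(frame, (pad, 0))

  # ...and mark only the new side panels (white) as regions for the model to fill in.
  mask = Image.new("L", canvas.size, 255)
  mask.paste(Image.new("L", (512, 384), 0), (pad, 0))

  result = pipe(
      prompt="yellow brick road winding through Munchkinland, 1939 Technicolor film style",
      image=canvas,
      mask_image=mask,
      height=canvas.height,
      width=canvas.width,
  ).images[0]
  result.save("oz_frame_outpainted.png")

The production pipeline is obviously far more involved, but that padded-canvas-plus-mask idea is the core of what "expanding the scenes" means.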

Keeping the soul of the original

While I don't blame you if you think this sounds like an AI-generated catastrophe, ruining a classic that shouldn't be messed with, Google emphasizes that the team has kept the traditions of cinema at the forefront of every decision.

"In addition to old footage, the team scoured archives to build a vast collection of supplementary material, such as the shooting script, production illustrations, photographs, set plans and scores."

Then, these materials were uploaded to Veo and Gemini to train the models and build on the "specific details of the original characters, their environments and even elements of the production, like camera focal lengths for specific scenes."

"With far more source material than just the 102-minute film to work with, the quality of the outputs dramatically improved. Now, Dorothy’s freckles snap into focus, and Toto can scamper more seamlessly through more scenes."


Sora needs to up its game to match the new Runway AI video model

Techradar - Tue, 04/08/2025 - 21:00

I always enjoy a chance to mess with AI video generators. Even when they're terrible, they can be entertaining, and when they pull it off, they can be amazing. So, I was keen to play with Runway's new Gen-4 model.

The company boasted that the Gen-4 (and its smaller, faster sibling model, Gen-4 Turbo) can outperform the earlier Gen-3 model in quality and consistency. Gen-4 supposedly nails the idea that characters can and should look like themselves between scenes, along with more fluid motion and improved environmental physics.

It’s also supposed to be remarkably good at following directions. You give it a visual reference and some descriptive text, and it produces a video that resembles what you imagined. In fact, it sounded a lot like how OpenAI promotes its own AI video creator, Sora.

Though the videos Sora makes are usually gorgeous, they are also sometimes unreliable in quality. One scene might be perfect, and the next might have characters floating like ghosts or doors leading to nowhere.

Magic movie

Runway Gen-4 pitched itself as video magic, so I decided to test it with that in mind and see if I could make videos telling the story of a wizard. I devised a few ideas for a little fantasy trilogy starring a wandering wizard.

I wanted the wizard to meet an elf princess and then chase her through magic portals. Then, when he encounters her again, she's disguised as a magical animal, and he transforms her back into a princess.

The goal wasn’t to create a blockbuster. I just wanted to see how far Gen-4 could stretch with minimal input. Not having any photos of real wizards, I took advantage of the newly upgraded ChatGPT image generator to create convincing still images.

Sora may not be blowing up Hollywood, but I can't deny the quality of some of the pictures produced by ChatGPT. I made the first video, then used Runway's option to "fix" a seed so that the characters would look consistent in the videos. I pieced the three videos into a single film below, with a short break between each.
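Runway hasn't documented exactly what "fixing" a seed does internally, but the principle is standard across generative models: the same seed plus the same inputs reproduces the same sampling noise, and therefore the same (or closely related) output, which is what helps characters look consistent between clips. A tiny, hypothetical illustration of the idea with the diffusers library, which is not Runway's implementation:

  import torch
  from diffusers import StableDiffusionPipeline

  pipe = StableDiffusionPipeline.from_pretrained(
      "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
  ).to("cuda")

  prompt = "a wandering wizard in a grey cloak, cinematic lighting"

  # Re-using the same seeded generator makes the sampling noise identical,
  # so the two runs below produce the same image; change the seed and they diverge.
  image_a = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]
  image_b = pipe(prompt, generator=torch.Generator("cuda").manual_seed(42)).images[0]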

AI Cinema

You can see it's not perfect. There are some odd object movements, and the character consistency isn't flawless. Some background elements shimmered oddly, and I wouldn’t put these clips on a theater screen just yet. However, the characters' actual movements, expressions, and emotions felt surprisingly real.

Further, I liked the iteration options, which didn't overwhelm me with too many manual options but also gave me enough control so that it felt like I was actively involved in the creation and not just pressing a button and praying for coherence.

Now, will it take down Sora and OpenAI's many professional filmmaker partners? No, certainly not right now. But I'd probably at least experiment with it if I were an amateur filmmaker who wanted a relatively cheap way to see what some of my ideas could look like. At least, before spending a ton of money on the people needed to actually make movies look and feel as powerful as my vision for a film.

And if I grew comfortable enough with it and good enough at using and manipulating the AI to get what I wanted from it every time, I might not even think about using Sora. You don't need to be a wizard to see that's the spell Runway is hoping to cast on its potential user base.


Robert W. McChesney, Who Warned of Corporate Media Control, Dies at 72

NYT Technology - Tue, 04/08/2025 - 15:12
In over a dozen books, he explored the failures of journalism and the internet, blaming capitalism and calling for the nationalization of Facebook and Google.

Justice Dept. Disbands Cryptocurrency Enforcement Unit

NYT Technology - Tue, 04/08/2025 - 14:30
The Trump administration is dialing back its enforcement of cryptocurrency, and criticizing Biden-era prosecutions.

Trump’s Tariffs Are Already Reducing Car Imports and Idling Factories

NYT Technology - Tue, 04/08/2025 - 14:21
A few carmakers have closed factories, laid off workers or shifted production in response to the auto tariffs that took effect last week.

iOS 18.4 is quietly a big iPhone upgrade – here are 5 features you may have missed

Techradar - Tue, 04/08/2025 - 07:21

Apple fans are still discovering the new upgrades found in the latest iOS 18.4 software update since it came out last week – and now that we’ve had more time to experiment with it, we think it’s safe to say that Apple’s mid-year update is bigger than you might think.

We’ve already drawn your attention to the more obvious iOS 18.4 changes such as the new Apple Intelligence Priority Notifications feature, and are patiently waiting for bigger upgrades like the delayed next-gen Siri. But iOS 18.4 is still a glimmer of hope that hints at better things to come from Apple Intelligence, and the update contains five other handy little features that'll help tide us over for now.

While these new tricks haven’t gone unnoticed, they’re still small enough to easily fly under the radar. So in case you missed them, here are five other new features in iOS 18.4 that could help change the way you use your iPhone...

1. New Shortcuts actions

A screen shot of new Apple Shortcuts functions

(Image credit: Future)

The new upgrades to Shortcuts are small but effective, pointing to the possibility that changes to Siri could be next on Apple’s agenda, since Shortcuts could serve as the foundation for Siri’s upcoming upgrades.

When you go to the Shortcuts app, there’s a new action that allows you to change settings for a number of different Apple apps, including Safari, Apple Maps and Apple News, with each one packing another layer of actions you can perform.

2. Lots of new emojis

Apple emoji face with under eye bags

(Image credit: Unicode / Emojipedia)

Everyone uses Apple’s extensive keyboard of emojis and iOS 18.4 introduces eight new emojis to brighten up your texts and social media captions. It’s a very minor upgrade, but new emojis are always exciting.

Out of all the newest additions to Apple’s emoji gallery, which includes a new fingerprint, harp, and funky splatter emoji, one has already proven itself to be the next most-used emoji: the new smiley with sinking under-eye bags. Relatable? I think so.

3. Ambient Music

An option to add Ambient Music buttons to the iOS 18.4 Control Center.

(Image credit: Future)

One of the more well-known, but easily missed, additions to iOS 18.4 is the expansion of Apple’s existing Background Sounds function.

The Ambient Music feature packs four different playlists – Chill, Sleep, Productivity, and Wellbeing – perfect if you rely on instrumental music for studying, working, or relaxing.

To use it, just add the Ambient Music icon to your Control Center during customization, then choose your relaxing playlist.

4. Apple Photos improvements

A screen shot of Apple Photos' new filter options

(Image credit: Future)

iOS 18.4 is doubling down on organizational tools, bringing a shed load of new improvements to the Photos app.

For starters, you’ll have the freedom to enable and disable your ‘Recently Viewed’ and ‘Recently Shared’ galleries, as well as two new filtering options, ‘Shared With You’ and ‘Not in an Album’, saving you having to scroll for ages trying to find a specific photo.

The new Photos functions also include a new album-sorting category ‘Sort by Date Modified’ and the ability to delete or recover photos all at once. It’s a helpful software improvement for iPhone photographers everywhere.

5. Visual Intelligence for iPhone 15 Pro

Visual Intelligence on an iPhone 16

(Image credit: Apple)

The new Apple Intelligence Priority Notifications are one of iOS 18.4’s most handy new features, but you may have missed that Apple has also added a new Action Button option that opens Visual Intelligence. Also, while this feature was previously exclusive to the iPhone 16 range, Apple has now brought it to the iPhone 15 Pro and Pro Max models, too.

In a nutshell, it’s an Apple Intelligence feature that's similar to Google Lens and allows you to take a photo of something in front of you, before finding out more about it. You can get more information using ChatGPT, Google Search or by highlighting any text within the snap.

Although the iPhone 15 Pro doesn’t pack the same Camera Control button as the superior iPhone 16, Visual Intelligence can be accessed through the Control Center or Action Button.


We've tried Google Pixel 9's new Gemini Astra upgrade, and users are in for a real treat

Techradar - Tue, 04/08/2025 - 05:46
  • Pixel 9 smartphones now have access to Gemini Live Astra capabilities
  • Astra can answer questions related to what you see or what's on your device's screen
  • The powerful AI tool is free, and it arrived on Samsung S25 devices yesterday

Google Pixel 9, 9 Pro, and 9a owners just got a huge free Gemini upgrade that adds impressive Astra capabilities to their smartphones.

As we reported yesterday (April 7), Gemini Visual AI capabilities have started to roll out for Samsung S25 devices, and now Pixel 9s are also getting the awesome features.

So what is Gemini Astra? Well, you can now launch Gemini Live and grant it access to your camera, and it can then chat about what you see as well as what's on your smartphone's screen.

Gemini Astra has been hinted at for a long time, and it's immensely exciting to get access to it via a free update.

You should see the option to access Gemini's Astra capabilities from the Gemini Live interface. If you don't have access yet, be patient, as it should be available to all Pixel 9 users in the coming days.

While I don't personally have access to a Google Pixel 9 to test Gemini Live's Astra prowess, my colleague and TechRadar's Senior AI Editor, Graham Barlow, does.

I asked him to test out Gemini Astra and give me his first impressions of the new Pixel 9 AI capabilities, and you can see what he made of it below.

Hands-on impressions with Pixel 9's new Gemini Astra capabilities

Once you’re in Gemini Live you’ll notice two new icons at the bottom of the screen – a camera icon and a screen share icon.

Tap the camera icon and Gemini switches to a camera mode, showing you video of what your phone is looking at, but the Gemini Live icons remain at the bottom of the screen.

There’s also a camera reverse button, so you can get Gemini to look directly at you. I tapped that, and asked Gemini what it thought of my hair, to which it replied that my hair was “a lovely natural brown color”. Gee, thanks Gemini!

I tested Gemini Live with a few objects on my desk – a water bottle, a magazine, and a laptop, all of which it identified correctly and could tell me about. I pointed the phone at the window towards a fairly nondescript car park and asked Gemini which city I was in, and it instantly, and correctly, told me it was Bath, UK, because the architecture was quite distinctive, and there was a lot of greenery.

Gemini Live Astra Google Pixel 9

(Image credit: Future)

Gemini can’t use Google search while going live, so for now it’s great for brainstorming, chatting, coming up with ideas, or simply identifying what you’re looking at.

For example, it could chat with me about Metallica, and successfully identified the Kirk Hammett Funko Pop I’ve got on my desk, but it couldn’t go online and find out how much it would cost to buy.

The screen share icon comes up with a message prompting you to share the screen with Google, then when you say "Share screen" it puts a little Gemini window at the top of the screen that looks like the phone call window you get when you start to use your phone while you’re on a call.

As you start to interact with your phone the window minimizes even further into a tiny red time counter that counts how long you’ve been live for.

You can keep using your phone and talking to Gemini at the same time, so you could ask it, "What am I looking at?", and it will describe what’s on your phone screen, or "Where are my Bluetooth settings?", and it will tell you which parts of the Settings app to look in.

It’s pretty impressive. One thing it can’t do, though, is interact with your phone in any way, so if you ask it to take you to the Bluetooth settings it can’t do it, but it will tell you what to tap to get you there.

Overall I’m impressed by how well Gemini Live works in both of these new modes. We’ve had features like Google Lens that can use your camera like this for a while now, but having it all inside the Gemini app is way more convenient. It’s fast, it’s bug-free, and it just works.

