Feed aggregator

Why Antitrust Breakups of Google and Meta Could Be Difficult

NYT Technology - Tue, 04/15/2025 - 17:24
For the first time since the late 1990s Microsoft case, federal trials are weighing antitrust breakups, a tactic that harks back to Standard Oil.

Google’s new AI model could someday let you understand and talk to dolphins

Techradar - Tue, 04/15/2025 - 17:00
  • Google and the Wild Dolphin Project have developed an AI model trained to understand dolphin vocalizations
  • DolphinGemma can run directly on Pixel smartphones
  • It will be open-sourced this summer

For most of human history, our relationship with dolphins has been a one-sided conversation: we talk, they squeak, and we nod like we understand each other before tossing them a fish. But now, Google has a plan to use AI to bridge that divide. Working with Georgia Tech and the Wild Dolphin Project (WDP), Google has created DolphinGemma, a new AI model trained to understand and even generate dolphin chatter.

The WDP has been collecting data on a specific group of wild Atlantic spotted dolphins since 1985. The Bahamas-based pod has provided huge amounts of audio, video, and behavioral notes as the researchers have observed them, documenting every squawk and buzz and trying to piece together what it all means. This treasure trove of audio is now being fed into DolphinGemma, which is based on Google’s open Gemma family of models. DolphinGemma takes dolphin sounds as input, processes them using audio tokenizers like SoundStream, and predicts what vocalization might come next. Imagine autocomplete, but for dolphins.
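That "autocomplete" framing can be made concrete with a toy sketch. To be clear, this is not DolphinGemma itself – the real system uses a neural audio tokenizer (SoundStream) and a Gemma-based transformer – the call-type tokens and the bigram "model" below are invented stand-ins that only illustrate the idea of predicting the next vocalization from previous ones:

```python
from collections import Counter, defaultdict

# Invented token sequences standing in for tokenized dolphin recordings.
training_sequences = [
    ["whistle_A", "click_burst", "whistle_B"],
    ["whistle_A", "click_burst", "squawk"],
    ["whistle_A", "click_burst", "whistle_B"],
    ["buzz", "whistle_A", "click_burst"],
]

# Count how often each token follows each other token (a bigram model,
# playing the role a transformer would play in the real system).
follows = defaultdict(Counter)
for seq in training_sequences:
    for cur, nxt in zip(seq, seq[1:]):
        follows[cur][nxt] += 1

def predict_next(token):
    """Return the most likely next vocalization, or None if unseen."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]

print(predict_next("click_burst"))  # "whistle_B": it followed twice, "squawk" once
```

Swap the bigram counts for a trained neural network and the labels for learned audio tokens, and you have the shape of the actual pipeline.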

The model is very slim and can run on a Google Pixel. WDP is deploying DolphinGemma in the field this summer, using Pixel 9s in waterproof rigs. The models will listen in, identify vocal patterns, and help researchers flag meaningful sequences in real time.

Flipper speaks

But the ultimate goal here isn’t just passive listening. WDP and Georgia Tech are also working on a system called CHAT (short for Cetacean Hearing Augmentation Telemetry), which is essentially a two-way communication system for humans and dolphins. CHAT lets researchers assign synthetic whistles to objects dolphins like, including seagrass and floating scarves, and then waits to see if the dolphins mimic those sounds to request them. It’s kind of like inventing a shared language, except with underwater microphones instead of flashcards.
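The CHAT loop described above can be sketched as a toy matching problem. The objects (seagrass, scarves) come from the article, but the whistle "signatures", the numbers, and the matching rule below are all invented placeholders – the real system works on actual underwater audio, not two-number summaries:

```python
# Each object is assigned a synthetic whistle; here a whistle is faked as
# a (peak frequency in kHz, duration in seconds) pair. Values are made up.
WHISTLE_FOR_OBJECT = {
    "seagrass": (9.0, 0.40),
    "floating_scarf": (12.5, 0.25),
}

def closest_object(heard_freq_khz, heard_duration_s, tolerance=1.0):
    """Match a heard sound to the nearest assigned whistle, if close enough."""
    best, best_dist = None, tolerance
    for obj, (freq, dur) in WHISTLE_FOR_OBJECT.items():
        dist = abs(freq - heard_freq_khz) + abs(dur - heard_duration_s)
        if dist < best_dist:
            best, best_dist = obj, dist
    return best  # None means: no recognizable request

print(closest_object(12.3, 0.30))  # a near-imitation of the scarf whistle
print(closest_object(20.0, 1.00))  # nothing close enough: None
```

The interesting research question is exactly the part this sketch waves away: deciding, in noisy open water, whether a dolphin's sound is a deliberate imitation at all.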

DolphinGemma doesn’t just analyze dolphin sounds after the fact; it helps anticipate what sounds might be coming, enabling faster response times and smoother interactions. In essence, it’s like a predictive keyboard for dolphins. The whole project is still in an early stage, but Google plans to open-source DolphinGemma later this year to accelerate progress.

The initial model is trained on the vocalizations of Atlantic spotted dolphins, but it could theoretically be adapted to other species with some tuning. The idea is to hand other researchers the keys to the AI so they can apply it to their own acoustic datasets. Of course, this is still a long way from chatting with dolphins about philosophy or their favorite snacks. There’s no guarantee that dolphin vocalizations map neatly to human-like language. But DolphinGemma will help sift through years of audio for meaningful patterns.

Dolphins aren't the only animals humans may use AI to communicate with. Another group of scientists developed an AI algorithm to decode pigs' emotions based on their grunts, squeals, and snuffles to help farmers understand their emotional and physical health. Dolphins are undeniably more charismatic, though. Who knows, maybe someday you'll be able to ask a dolphin for directions while you're sailing, at least if you don't drop your phone in the water.

You might also like

At Trial, Mark Zuckerberg Avoids Explaining Takeovers of Instagram and WhatsApp

NYT Technology - Tue, 04/15/2025 - 16:49
The Meta chief executive testified in a landmark antitrust trial that it was business as usual when he bought rival apps. He denied he was trying to snuff out competitors.

OpenAI might build its own social network, and we think we know why

Techradar - Tue, 04/15/2025 - 13:55

In what we can only assume is a potential thumb in the eye of Elon Musk, Sam Altman's OpenAI is reportedly considering building a social network, possibly inside ChatGPT.

This comes via a new report from The Verge, which claims this week that the social network possibly being built on top of OpenAI's AI services is only in the "early stages." Still, it could set up ChatGPT and other OpenAI platforms for a head-to-head battle with Grok, a generative AI platform built on top of Elon Musk's X (formerly Twitter).

There are essentially no details about what this social platform might look like, and OpenAI has little experience with shareable content beyond what its models can generate and the feed of other people's creations visible in Sora (its video generation system).

Take that, X

The fact that this rumor is out there might have little to do with behind-the-scenes development and more to do with Altman's ongoing battle with former partner Musk.

The pair founded OpenAI together before Musk walked away in 2018. He has since criticized and sued OpenAI for, among other things, becoming, in part at least, a for-profit entity (see OpenAI's partnership with Microsoft and the rise of Copilot).

Let's assume for a moment, though, that this is real. Why would OpenAI want to build a social network? In a word: data.

If millions flock to the platform and then start, I guess, sharing AI-generated memes on it, they'll be dropping a ton of rich data into the OpenAI system. If users allow it, future versions of the GPT model could be trained on it. Real data and activities that show how real people think, talk, act, create, etc., can be invaluable to a young generative model.

Social timing is everything

I wonder if this might've made more sense a year or two ago, when Musk took over Twitter, transformed it into X, removed many of the protective content guardrails, and turned it into a social media hellscape. It was in that moment that Meta's Threads first rushed in, followed in notoriety by Bluesky. Both are decentralized social networks, meaning no single company owns your identity or your data.

Their early growth was remarkable, and it stands in contrast to X's fortunes – depending on who you talk to, X's active user base is stagnant or shrinking. But that doesn't mean the public's appetite for alternative platforms is still growing: Threads' growth has slowed, and Bluesky remains relatively small compared to X and Threads.

The action is mostly on image and video-based social platforms like Snapchat, TikTok, Instagram Reels, and YouTube Shorts. The Verge report does not mention video, which leads us to assume this could be another micro-blogging-style network – something no one necessarily needs or, perhaps, wants.

Even so, as an opportunity to cause Elon Musk a little more agita, it's probably a worthy trial balloon from Altman.

You might also like

Trump Tariffs Could Raise iPhone Prices, But Affordable Options Remain

NYT Technology - Tue, 04/15/2025 - 10:47
Even if gadget prices surge, we have plenty of cheaper options, like buying last year’s phone model instead of the latest and greatest.

Couldn’t install Windows 11 24H2 because of your wallpaper? Microsoft has finally lifted blocks on upgrades due to customization apps – with some catches

Techradar - Tue, 04/15/2025 - 09:29
  • Users of some wallpaper customization apps couldn’t install Windows 11 24H2
  • This was because of compatibility issues with said apps and 24H2
  • Microsoft has now resolved those problems, for the most part anyway

Microsoft has finally lifted a compatibility block preventing some Windows 11 users from upgrading to the latest version of the OS because they had certain third-party wallpaper apps installed.

These are apps that let you install custom wallpaper on your Windows 11 machine, and the applications in question didn’t play nice with Windows 11 24H2, misfiring in various ways. That included causing errors, glitches with the wallpapers themselves, vanishing desktop icons, or even issues with virtual desktops.

Windows Central noticed that Microsoft has now marked these problems as fixed in a Windows 11 release health status update. So, those with said wallpaper apps are okay to go ahead and install the 24H2 update.

Well, in theory anyway, although there are some caveats here, which I’ll come onto next.

You may recall that this compatibility block was put in place right when 24H2 first came out, at the end of September last year, so it has taken quite some time to smooth over these issues – and there’s a reason for that, too.

Samsung Galaxy Book 4 Edge

(Image credit: Future/Jacob Krol)

Analysis: Slight complications

As noted, there are some catches here, as Microsoft tells us in its release health dashboard update.

For starters, the compatibility block is only being gradually removed, so you may be waiting a bit longer yet, depending on your PC configuration and the exact app you have installed.

Microsoft explains: “Note that several wallpaper applications are currently included in this compatibility hold. For this reason, the specific timing of the resolution of this issue on a given device may depend on the application being used and the timing of that application’s update. As new versions and updates are released for these applications by their respective developers, it’s expected that these issues will be resolved.”

Hence the lengthy wait for the resolution of this affair: it seems that Microsoft wasn’t tinkering with Windows 11 24H2 itself to make these apps work, or at least not doing much in that direction. Rather, it was apparently waiting on the individual app developers to make their software compatible with 24H2 themselves.

Microsoft further notes that when you fire up the Windows 11 24H2 installation process, you might see a message telling you to uninstall a wallpaper app. You’re advised to either do this – and just dump the wallpaper app for now – or try updating the app, as said prompt might have appeared because you’re running an older version of the program.

In other words, updating the wallpaper app and trying to install Windows 11 24H2 again may work – but if not, you’ll likely have to remove the application.

Windows 11 24H2 has a history of issues with third-party customization software going back well before release, deep into its testing phase, when some popular utilities were banned (to the chagrin of some Windows Insiders). Because 24H2 is built on an entirely new underlying platform, Germanium, it has caused far more problems than any other update for Windows 11 thus far.

And while such a big shift could be expected to be a headache and trigger more bugs than normal, the number we’ve witnessed has essentially been an avalanche, and a distinctly unpleasant experience for some Windows 11 users.

You may also like...

Google Photos is getting a big overhaul - here are 3 new features you should look out for

Techradar - Tue, 04/15/2025 - 09:00
  • As well as making room for more Gemini, Google has been updating its photo and video sharing service
  • One of the new features includes Google Photos integration in the Gemini app
  • While one of the features is still a rumor, we hope that it will be rolled out soon

It’s a busy and certainly interesting time for Google at the moment, and not just because Gemini is slowly taking over. While the company is doubling down on Gemini’s integration across its services, it’s also taking the time to seriously upgrade other platforms under its broad umbrella - Google Photos being one of them.

Just as Google Messages has been testing some useful upgrades, the company hasn’t forgotten about its photo sharing and storage software, which has received its fair share of updates and new features, big and small. Two of them are very recent rollouts, and though the third is only speculation for now, we’re hoping to see it come to fruition in the near future.

Google Photos gets Gemini integration on Android

A screenshot of the new Google Photos integration in the Android Gemini app

(Image credit: 9to5Google)

In October 2024, Google Photos rolled out ‘Ask Photos,’ an AI search tool powered by Gemini that lets you filter through your gallery in the Photos app using natural language questions. With this new integration, Android users will be able to connect Gemini to Google Photos and find their photos inside the Gemini app itself.

According to 9to5Google, there are two sides to this new integration, the first allowing you to find images and videos based on the following:

  • Your saved faces and relationships in Google Photos
  • The location or date a photo or video was taken
  • A description of what’s in a photo
  • Conversation with the Gemini app

For example, you can use prompts such as ‘Find my photos of Alex’, ‘Show me recent selfies’, and ‘Show my photos from my most recent trip’. The second part allows you to ask about specific details in your photos and videos, such as ‘What are the top 10 things we saw on our last trip?’ - similar to the Ask Photos function in the Google Photos app.

Dark mode for Google Photos’ web version

Whether you use Google Photos or not, you probably use dark mode settings on other platforms - for me, it’s always turned on when I'm using YouTube or TikTok. For a while, dark mode was exclusive to the Google Photos mobile app, but just a few weeks ago, Google finally brought it to the web version.

It’s a small upgrade for Google Photos, but one that will be very popular with users for sure. You can activate dark mode for Google Photos on the web very easily:

  • Head to photos.google.com in your web browser
  • Click Settings, and then go into the Appearance section
  • From there, you can select your choice from different options, including Light, Dark, or Use Device Default

Google Photos tipped for a big redesign

The current Google Photos design

(Image credit: Future)

While this is still speculation, it could be a great design overhaul, and one that could make managing your photo library a little smoother.

We first spotted this a few weeks back, following a leak shared by Android Authority that pointed to possible changes we could see in the future. One of them shows the ‘Today’ heading no longer having a checkmark next to it, but what looks like a filter icon instead. Additionally, the leak shows a floating search bar in place of the usual Photos Collections and Search tabs.

You might also like

Starship blooper: Windows 10 update gets weirdest bug yet

Techradar - Tue, 04/15/2025 - 08:39
  • Windows 10 recently got a patch tinkering with the Recovery Environment
  • Some users are seeing an error message saying the patch failed to install
  • In fact, the update installed just fine, and the error message is the error

Some Windows 10 users are encountering an error message after applying a fresh patch for the operating system, informing them that the update failed – when in fact it didn’t.

Neowin spotted the update in question (known as KB5057589) which was released last week (separately from the main cumulative update for April) and tinkers with the Windows Recovery Environment (WinRE) on some Windows 10 PCs (versions 21H2 and 22H2).

Far from all Windows 10 users will get this, then, but those who do might be confronted by an error message after it has installed (which is visible in the Windows Update settings page).

It reads: “0x80070643 – ERROR_INSTALL_FAILURE.”

That looks alarming, of course, and seeing this, you’re going to make the fair assumption that the update has failed. However, as mentioned, the error isn’t with the update, but the actual error message itself.

Microsoft explains: “This error message is not accurate and does not impact the update or device functionality. Although the error message suggests the update did not complete, the WinRE update is typically applied successfully after the device restarts.”

Microsoft further notes that the update may continue to display as ‘failed’ (when it hasn’t) until the next check for updates, after which the error message should be cleared from your system.

Frustrated person with their head buried underneath their open laptop

(Image credit: Lipik Stock Media / Shutterstock)

Analysis: Bugs in the bugs

There’s nothing wrong here, in short, except the error itself, but that’s going to confuse folks, and maybe send them down some unnecessary – and potentially lengthy – rabbit holes in order to find further information, or a solution to a problem that doesn’t exist.

What’s compounding this is that the whole WinRE debacle has been a long-running affair. The most recent previous patch was released in January 2025, and there were others before that, with some folks witnessing repeated installations of this WinRE fix – confusing in itself.

That’s why those rabbit holes that you might get lost down could end up seeming so deep if you don’t manage to catch Microsoft’s clarification on this matter.

Microsoft says it’s working to resolve this errant error and will let us know when that happens. At least you’re now armed with the knowledge that the update should be fine despite what the error – in block capitals plastered across your screen – tells you (and it should be cleared from your PC in a short time).

You may also like...

James Cameron thinks VR is the future of cinema, but Meta needs to solve a major content problem first

Techradar - Tue, 04/15/2025 - 08:28
  • James Cameron sat down with Meta's CTO Andrew Bosworth on Bosworth's podcast to talk VR cinema
  • Cameron says VR allows him to finally show his films the way they should be seen
  • He teased the Quest 4, but Bosworth stopped Cameron from revealing too much

James Cameron – the Hollywood director behind Titanic, The Terminator and Avatar – believes VR headsets are the future of cinema, based on his experience with next-gen Meta Quest headsets that he isn’t allowed to talk about.

And, Meta Quest 4 teasers aside, I think he’s got a point – but Meta needs to make some big changes before Cameron's vision can become a reality (and I’m not talking about its hardware).

In a sit-down interview with Meta CTO Andrew Bosworth on Bosworth’s Boz to the Future podcast, Cameron was keen to serve as VR cinema’s hype-man.

Cameron explained that VR headsets allow you to get all the benefits of a movie theatre – the curated, 3D, immersive experience – without the downsides – such as a “dim and dull” picture – resulting in an end product that much more closely matches the creator’s vision for the film.

“It was like the heavens parted, light shone down,” Cameron told Bosworth. “There was an angel choir singing. It's like, 'Ah'! This is how people can see the movie the way I created it to be seen!”

It seems that Cameron isn’t simply using a Meta Quest 3 or Quest 3S to enjoy his 3D movies either; instead he’s using some Quest prototypes that Andrew Bosworth wasn’t keen for him to talk more about.

While he couldn’t reveal much beyond the prototypes’ existence – which isn’t much considering that Meta very openly develops VR headset prototypes to inspire future designs (and even lets people try them from time to time at tech events) – we do know that the experience is apparently “at least as good as Dolby Laser Vision Cinema” according to Cameron.

It’s the “ne plus ultra” (read: ultimate) theater option according to the director, suggesting that Meta is focusing on visual performance with its prototypes, and therefore possibly making that the main upgrade for the Meta Quest 4 or Meta Quest Pro 2.

As with all leaks and rumors, we can’t read too much into Cameron’s comments. Even with these prototypes, Meta could focus on other upgrades instead of the display, or it could be designing for the Quest 5 or Quest Pro 3 – but given that previous leaks have teased that upcoming Meta headsets will pack an OLED screen, it feels safe to assume that visual upgrades are inbound.

That will certainly be no bad thing – in fact it would be a fantastic improvement to Meta’s headsets – but if Meta wants to capture the home cinema experience it shouldn’t just focus on its screens, it needs to focus on content too.

VR's 3D film problem

A screenshot of Disney Plus' 3D movies functionality on the Apple Vision Pro headset

Apple Vision Pro has the easy 3D film access Meta Quest needs (Image credit: Walt Disney Company/Apple Inc.)

I’ve previously discussed how it’s an open secret that the simplest (and really the only) way to watch blockbuster 3D movies on a Quest headset involves some level of digital piracy.

3D movie files are difficult to acquire, and 3D movie rental services from the likes of Bigscreen aren’t currently available. I’ve also complained about how absurd this is because, as James Cameron points out, using your VR headset for cinema is superb – it immerses you in your own portable, private theater.

So while the prospect of the Meta Quest 4 boasting high-end displays for visual excellence is enticing, I’m more concerned about how Meta will tackle its digital content library issue.

The simplest solution would be to form streaming deals like Apple did with Disney Plus. Disney’s service on the Vision Pro allows users to watch Disney’s 3D content library at no additional charge – though it frustratingly appears to be some kind of exclusivity deal, based on the fact the same benefits are yet to roll out to other headsets or the best AR smart glasses for entertainment.

Another option – which Cameron points to – is for Meta to make exclusive deals with creatives directly, so they create new 3D films just for Quest, although worthwhile films take time (and a lot of money) to produce, meaning that Meta’s 3D catalog can’t rely on fresh exclusives alone.

A Meta Quest 3 headset in a case

Offline films would be perfect for in-flight entertainment (Image credit: Meta / Lufthansa)

Hopefully this podcast is a sign that Meta is looking to tackle the 3D movies in VR problem from all sides – both hardware and software – as VR entertainment can be superb.

While it is more isolated than the usual home theater experience, the immersive quality of VR, combined with its ability to display your show or film of choice on a giant virtual screen, is a blast.

At the moment the big drawback is the lack of content – but here’s hoping that’s about to change.

You might also like

The Techno-Utopian Seasteaders Who Want to Colonize the Ocean

NYT Technology - Tue, 04/15/2025 - 04:01
Libertarians have long looked at ocean living as the next frontier. Some wealthy men are testing the waters.

What Are Rare Earth Metals, the Exports Halted by China?

NYT Technology - Mon, 04/14/2025 - 23:00
China’s new restrictions on exports of the metals could have an impact on the production of everything from LED lights to fighter jets.

Grok may start remembering everything you ask it to do, according to new reports

Techradar - Mon, 04/14/2025 - 20:00
  • xAI’s Grok looks set to gain a memory feature, with a "Personalise with Memories" toggle spotted in settings
  • The memory is expected to be user-controlled, letting you delete specific memories or wipe them all at once
  • Other upgrades, including vision in voice mode and Grok Workspaces, are also on the way

xAI’s Grok may be about to start remembering your conversations as part of a broader slate of updates rolling out, all of which seek to match ChatGPT, Google Gemini, and other rivals. Elon Musk’s company tends to pitch Grok as a plucky upstart in a world of staid AI tools; it also seems to be aiming for parity on features like memory, voice, and image editing.

As spotted by one user on X, it appears that Grok will get a new "Personalise with Memories" switch in settings. If it works, this would be a big deal, marking a shift from momentary utility to long-term usefulness. Grok's reported memory system, which is still in development but already appearing in the web app, will allow Grok to reference previous chats.

This means if you’ve been working with it on something like planning a vacation, writing a screenplay, or just keeping track of the name of that documentary you wanted to watch, Grok could say, “Hey, didn’t we already talk about this?”

Grok’s memory is expected to be user-controlled as well, which means you’ll be able to manage what the AI remembers and delete specific memories or everything Grok has remembered all at once. That’s increasingly the standard among AI competitors, and it’ll likely be essential for trust, especially as more people start using these tools for work, personal planning, and remembering which child prefers which bedtime story.
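As a thought experiment, the user-controlled behavior described above – store, recall, delete one memory, delete all – can be sketched in a few lines. This is purely hypothetical and reflects nothing about xAI's actual implementation; the keyword lookup stands in for whatever retrieval the real system uses:

```python
from dataclasses import dataclass, field
from itertools import count

@dataclass
class MemoryStore:
    """Hypothetical user-controlled memory: every memory is addressable
    so the user can delete it individually or wipe the lot."""
    _memories: dict = field(default_factory=dict)
    _ids: object = field(default_factory=count)

    def remember(self, text: str) -> int:
        """Store a memory and return its id so the user can delete it later."""
        mem_id = next(self._ids)
        self._memories[mem_id] = text
        return mem_id

    def recall(self, keyword: str) -> list:
        """Naive keyword lookup standing in for real semantic retrieval."""
        return [m for m in self._memories.values() if keyword.lower() in m.lower()]

    def forget(self, mem_id: int) -> None:
        """Delete one specific memory."""
        self._memories.pop(mem_id, None)

    def forget_all(self) -> None:
        """Wipe everything at once."""
        self._memories.clear()

store = MemoryStore()
trip = store.remember("Planning a vacation to Lisbon in June")
store.remember("Documentary to watch: My Octopus Teacher")
print(store.recall("vacation"))  # the Lisbon memory comes back
store.forget(trip)
print(store.recall("vacation"))  # [] once that memory is deleted
```

The hard part, as the article notes, is not the bookkeeping but deciding which memories are worth surfacing in a given conversation.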

This should put Grok more or less on par with what OpenAI has done with ChatGPT’s memory rollout, albeit on a much shorter timeline. The breakneck pace is part of the pitch for Grok, even when it doesn't quite work yet. Some users have reported already seeing the memory feature available, but it's not available to everyone yet, and the exact rollout schedule is unclear.

Remember Grok

Of course, giving memory to a chatbot is a bit like giving a goldfish a planner, meaning it’s only useful if it knows what to do with it. Even so, xAI seems to be layering memory into Grok Web in tandem with a handful of other upgrades that lean toward making it feel more like an actual assistant and less like a snarky trivia machine.

This memory update is starting to appear as a range of other Grok upgrades loom on the horizon. Grok 3.5 is expected any day now, with Grok 4 slated for the end of the year.

There’s also a new vision feature in development for Grok’s voice mode, allowing users to point their phones at things and hear a description and analysis of what's around them.

It's another feature that ChatGPT and Gemini users will find familiar, and Grok’s vision tool is still being tested. Upgrades are also coming to the recently released image editing feature that lets users upload a picture, select a style, and ask Grok to modify it.

It’s part of the ongoing competition among AI chatbots to make AI models artistically versatile. Combine that with the upcoming Google Drive integration, and Grok starts to look a little more serious as a competitor.

Also on the horizon is Grok Workspaces, a kind of digital whiteboard for collaborating with Grok on a more significant project. These updates suggest that xAI is pivoting to make Grok seem less like a novelty and more like a necessity. xAI clearly sees Grok’s future as being more useful than just a set of sarcastic and mean voice responses.

Still, even as Grok gains these long-awaited features, questions remain about whether it can match the depth and polish of its more established counterparts. It’s one thing to bolt a memory system onto a chatbot. It’s another thing entirely to make that memory meaningful.

Whether Grok becomes your go-to assistant or stays a curious toy used only when some aspect goes viral depends on how well xAI can connect all these new capabilities into something cohesive, intuitive, and a little less chaotic. But for now, at least, it finally remembers your name.

You might also like

Apple has a plan for improving Apple Intelligence, but it needs your help – and your data

Techradar - Mon, 04/14/2025 - 16:56

Apple Intelligence has not had the best year so far, but if you think Apple is giving up, you're wrong. It has big plans and is moving forward with new model training strategies that could vastly improve its AI performance. However, the changes do involve a closer look at your data – if you opt in.

In a new technical paper from Apple's Machine Learning Research, "Understanding Aggregate Trends for Apple Intelligence Using Differential Privacy," Apple outlined new plans for combining data analytics with user data and synthetic data generation to better train the models behind many of Apple Intelligence features.

Some real data

Up to now, Apple's been training its models on purely synthetic data, which tries to mimic what real data might be like, but there are limitations. With Genmoji, for instance, Apple's use of synthetic data doesn't always reflect how real users engage with the system. From the paper:

"For example, understanding how our models perform when a user requests Genmoji that contain multiple entities (like “dinosaur in a cowboy hat”) helps us improve the responses to those kinds of requests."

Essentially, if users opt in, the system can poll the device to see if it has seen a data segment. However, your phone doesn't respond with the data itself; instead, it sends back a noisy, anonymized signal, which is apparently enough for Apple's models to learn from.

The process is somewhat different for models that work with longer texts, like Writing Tools and summarization. In this case, Apple generates synthetic data and then sends representations (embeddings) of that synthetic data to users who have opted into data analytics.

On the device, the system then compares these representations against samples of recent emails.

"These most-frequently selected synthetic embeddings can then be used to generate training or testing data, or we can run additional curation steps to further refine the dataset."

A better result

It's complicated stuff. The key, though, is that Apple applies differential privacy to all the user data, which is the process of adding noise that makes it impossible to connect that data to a real user.
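To see how adding noise can hide individuals while keeping the aggregate useful, here is a minimal sketch of randomized response, a classic local differential-privacy technique. Apple's actual mechanism, as described in its paper, is more elaborate; this only illustrates the principle behind the noisy yes/no signal described above:

```python
import random

def noisy_answer(truth: bool, rng: random.Random) -> bool:
    """Flip a coin: half the time answer honestly, half the time at random.
    Any single answer is deniable, so it can't be pinned on the user."""
    if rng.random() < 0.5:
        return truth               # honest answer
    return rng.random() < 0.5      # random answer, hides the real one

def estimate_true_rate(answers: list) -> float:
    """Invert the noise: observed_yes = 0.5 * true_rate + 0.25."""
    observed = sum(answers) / len(answers)
    return (observed - 0.25) / 0.5

rng = random.Random(42)
# Simulate 100,000 devices where 30% have truly "seen the data segment".
truths = [i < 30_000 for i in range(100_000)]
answers = [noisy_answer(t, rng) for t in truths]
print(round(estimate_true_rate(answers), 2))  # close to 0.30
```

No individual answer reveals anything reliable, yet across a large fleet of devices the estimate converges on the true proportion – the same trade-off Apple is relying on, just with a much more sophisticated mechanism.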

Still, none of this works if you don't opt into Apple's Data Analytics, which usually happens when you first set up your iPhone, iPad, or MacBook.

Doing so does not put your data or privacy at risk, but that training should lead to better models and, hopefully, a better Apple Intelligence experience on your iPhone and other Apple devices.

It might also mean smarter and more sensible rewrites and summaries.

You might also like

What If Mark Zuckerberg Had Not Bought Instagram and WhatsApp?

NYT Technology - Mon, 04/14/2025 - 15:40
Meta’s antitrust trial, in which the government contends the company killed competition by buying young rivals, hinges on unknowable alternate versions of Silicon Valley history.

Meta’s Antitrust Trial Begins as FTC Argues Company Built Social Media Monopoly

NYT Technology - Mon, 04/14/2025 - 15:05
Mr. Zuckerberg went to court on Monday in a trial focused on his social media company’s acquisitions of Instagram and WhatsApp. The case could reshape Meta’s business.

5 reasons why Apple making iPadOS 19 more like macOS is a great idea – and 3 reasons why it could be a disaster

Techradar - Mon, 04/14/2025 - 12:57

Notorious Apple leaker Mark Gurman has reported that Apple is planning a major overhaul of iPadOS (the operating system iPads use) to make it work a lot like macOS – and I think this could be a great move, though one that also comes with plenty of danger.

Gurman is very well respected when it comes to Apple leaks, so while we probably won’t get any official idea of how iPadOS 19 is shaping up until Apple’s WWDC event in June, this could still be a big hint at the direction Apple is planning to take its tablet operating system.

In his weekly Power On newsletter for Bloomberg, Gurman claims that “this year’s upgrade will focus on productivity, multitasking and app window management — with an eye on the device operating more like a Mac,” and that Apple is keen to make its operating systems (macOS, iPadOS, iOS and visionOS primarily) more consistent.

As someone who uses the M4-powered iPad Pro, this is music to my ears. Ever since I reviewed it last year, I’ve been confused by the iPad Pro. It was Apple’s first product to come with the M4 chip, a powerful bit of hardware that is now more commonly found in Macs and MacBooks (previous M-class chips were only used in Apple’s Mac computers, rather than iPad tablets).

However, despite offering the kind of performance you’d expect from a MacBook, I found the M4 chip’s power largely went to waste on the iPad Pro: because it still runs iPadOS, it’s confined to simplified iPad apps rather than full desktop applications.

Even if this move still means you can’t run macOS apps on the iPad Pro, it could still make a massive difference, especially when it comes to multitasking (running multiple apps at the same time and switching between them). If Apple nails this, it would go a long way to making the iPad Pro a true MacBook alternative.

But, making iPadOS more like macOS could bring downsides as well, so I’ve listed five reasons why this could be a great move – and three reasons why it could all go wrong.

5 reasons why making iPadOS more like macOS is a great idea

1. It means the iPad Pro makes more sense

iPad Pro 13-inch with M4 chip on a wooden table

(Image credit: Future)

The biggest win when it comes to making iPadOS more like macOS is with the powerful iPad Pro. Hardware-wise, the iPad Pro is hard to fault, with a stunning screen, thin and light design, and powerful components.

However, despite its cutting-edge hardware, it can only run iPad apps. These are generally simple, straightforward apps designed for a touchscreen, and they also need to run on less powerful iPads.

This means advanced features are often left for the desktop version of the app, and any performance improvement owners of the iPad Pro get over people using, say, the iPad mini will be modest. Certainly, when I use the iPad Pro, it feels like a lot of its power and potential is limited by this – so a lot of the expensive hardware is going to waste.

Making iPadOS more like macOS could – in an ideal world – lead to the ability to run Mac applications on the iPad Pro. At the very least, it could mean some app designers make their iPad apps come with a Mac-like option.

If it means multitasking is easier, then that will be welcome as well. One of the things I struggled with when I tried using the iPad Pro for work instead of my MacBook was having multiple apps open at once and quickly moving between them. Cutting and pasting content between apps was particularly cumbersome, not helped by the web browser I was using (Chrome) being the mobile version that doesn’t support extensions.

It made tasks that would take seconds on a MacBook a lot more hassle – a critical problem that meant I swiftly moved back to my MacBook Pro for work.

2. It could be just in time for M5-powered iPad Pros

Woman using an iPad Pro

(Image credit: Shutterstock / Prathankarnpap)

If, as rumored, this major change to iPadOS will be announced at Apple’s WWDC 2025 event, then it could nicely coincide with the rumored reveal of a new iPad Pro powered by the M5 chip.

While I’m not 100% convinced about an M5 iPad Pro, seeing as Apple is still releasing M4 devices, the timing would make sense. If Apple does indeed announce an even more powerful iPad Pro, then iPadOS, in its current form, would feel even more limiting.

However, if Apple announces both a new M5 iPad Pro and an overhaul of iPadOS to make use of this power, then that could be very exciting indeed. And, with WWDC being an event primarily aimed at developers, it could be a great opportunity for Apple to show off the new-look iPadOS and encourage those developers to start making apps that take full advantage of the new and improved operating system.

3. It makes it easier for Mac owners to get into iPad ecosystem

MacBook Air 15-inch with M4 chip on a creative's desk with screen open

(Image credit: Future)

Gurman’s mention of Apple wanting to make its operating systems more consistent is very interesting. One of Apple’s great strengths is in its ecosystem. If you have an iPhone, it’s more likely that you’ll get an Apple Watch over a different smartwatch, and it means you might also have an Apple Music subscription and AirPods as well.

Making iPadOS more like macOS (and iOS and other Apple operating systems) can benefit both Apple and its customers.

If a MacBook owner decides to buy an iPad (Apple’s dream scenario) and the software looks and works in a similar way, then they’ll likely be very happy as it means their new device is familiar and easy to use. And that could mean they buy even more products, which will again be just what Apple wants.

4. It would give iPadOS more of an identity

iPadOS 17

(Image credit: Apple)

I don’t know about you, but I think of iPadOS as just iOS (the operating system for iPhones) with larger icons. Maybe that’s unfair, but when the iPad first launched, it was running iOS, and even with the launch of iPadOS in 2019, there are only a handful of features and apps that don’t work on both operating systems.

Making iPadOS a combination of iOS and macOS would, ironically, give it a more distinct identity of its own, letting it finally step out of the shadow cast by iOS while still benefiting from access to almost all of the iPhone’s massive app library.

5. It could mean macOS becomes a bit more like iPadOS

Man using macOS Monterey on a MacBook

(Image credit: Kaspars Grinvalds / Apple)

iPadOS getting macOS features could work both ways – so could we get some iPad-like features on a Mac or MacBook? There are things that iPadOS does better, such as being more user-friendly for beginners and turning an iPad into a second display for a nearby MacBook. All this would be great to see in macOS.

Having the choice of a larger interface that works well with touchscreens could even pave the way for one of the devices people most request from Apple: a touchscreen MacBook.

3 reasons why making iPadOS more like macOS is a bad idea

1. It could overcomplicate things

iPad mini 2021

(Image credit: TechRadar)

One of iPadOS’ best features is its simplicity, and while I feel that simplicity holds back a device like the iPad Pro, for more casual users on their iPad, iPad mini, or iPad Air, that ease-of-use is a huge bonus.

If iPadOS were to become more like macOS, that could delight iPad Pro owners, but let’s not lose sight of the fact that the iPad Pro is a niche device that’s too expensive for most people. macOS-like features on an iPad mini, for example, just don’t make sense, and Apple would be silly to make a major change that annoys the majority of its customers to please just a few.

2. It could cause a divergence with iOS – and lead to fewer apps

Apple App Store

(Image credit: Apple)

The iPad initially launching with iOS was an excellent decision by Apple, as it meant that people who had bought the new product had instant access to thousands of iPhone apps.

While it wasn’t perfect at first – some apps didn’t work well with the iPad’s larger screen – it was likely much easier than if the iPad had launched with a completely new operating system that would have needed developers to create bespoke applications for it.

Think of it this way: if you were an app developer with limited resources (both time and money), would you make an app for a system that already had millions of users, or risk making an app for a new product with a tiny user base? The answer is simple – you’d go for the large user base (almost) every time. So if it hadn’t launched with iOS and access to the App Store, the original iPad could have been a flop.

Just look at Microsoft’s attempts with Windows Phone – it needed developers to create a third version of their apps, alongside iOS and Android versions. Very few developers wanted to do that, which meant Windows Phone devices launched with far fewer apps than their Android and iPhone rivals.

If iPadOS moves closer to macOS, could we see fewer apps make it to iPad? While iPads are incredibly popular, they are still nowhere near as popular as iPhones, so if devs have to choose between which audience to make an app for, you can bet it’ll be for the iPhone.

That said, if future iPadOS apps remain essentially iOS apps with an optional macOS-like interface, the new look could still be dead on arrival, as developers will prefer to concentrate on the interface that reaches the widest audience rather than just iPad Pro users.

3. You’ll probably need expensive peripherals to make the most of it

iPad Pro 13-inch with M4 chip on a wooden table

(Image credit: Future)

iPadOS works so well because it’s been designed from the ground up to be used on a touchscreen device. You can buy a new iPad, and all you need to do is jab the screen to get going.

However, macOS is designed for keyboard and mouse/trackpad, so if you want to make the most out of a future version of iPadOS that works like macOS, you’re going to need to invest in peripherals – and some of them can be very expensive.

The Magic Keyboard for iPad Pro is a brilliant bit of kit that quickly attaches to the iPad and turns it into a laptop-like device with a physical keyboard and touchpad, but it also costs $299 / £299 / AU$499 – a hefty additional expense – and I can almost guarantee that you’ll need some sort of peripheral to use any macOS-like features in iPadOS. This will either make things too expensive for a lot of people, or, if you choose a cheaper alternative such as a Bluetooth keyboard and mouse, it takes away from the simplicity of using an iPad.

This could mean fewer people actually use the macOS-like elements, which in turn would mean there’s less incentive for app developers to implement features and designs that only a small proportion of iPad users will use.

So, I’m all for more macOS features for my iPad Pro – but I am also very aware that I am in the minority when it comes to iPad owners, and Apple needs to be careful not to lose what made the iPad so successful in the first place just to placate people (like me) who moan about iPads being too much like iPads. Maybe it would just be better if I stuck with my MacBook instead.

You might also like

Meta’s Antitrust Trial to Put Mark Zuckerberg, Serial Witness, to the Test Again

NYT Technology - Mon, 04/14/2025 - 12:57
Meta’s chief has grown accustomed to tough questioning in courts and hearings, but an antitrust trial that started Monday could be more grueling, experts said.

Google Messages is testing some useful upgrades – here are 5 features that could be coming

Techradar - Mon, 04/14/2025 - 12:00
  • Google is testing even more new features in its Messages beta app
  • These include an expanded 14-line message view and new RCS message labels
  • While these are still in beta testing, they could start rolling out to users this month

Over the past couple of months, Google has been doubling down on eradicating all traces of Google Assistant to make Gemini its flagship voice assistant, but amidst the organized Gemini chaos, Google has been paying a lot of attention to improving its Messages app, giving it some much-needed TLC.

It’s safe to say the recent revisions to the Google Messages app have significantly improved its UI – its new snooze function for group chats also comes to mind – but Google isn’t done testing yet. For a while it has been experimenting with an easier way to join group chats, following in WhatsApp’s footsteps, and now it’s testing five more features that could make up the next wave of Google Messages upgrades this month.

Although these features are in beta, Google hasn’t confirmed whether or when they’ll officially roll out to users. With that said, we’ll be keeping an eye out for any further updates.

Google expands its 4-line text field limit

A screen shot of Google Message's expanded text lines

(Image credit: 9to5Google)

Just a few weeks ago, we reported on a new upgrade found in Google Messages beta indicating that Google would get better at handling lengthy text messages.

For a while, Google Messages users have been restricted to a four-line view when composing texts, meaning you’d need to scroll to review your entire message before sending it. This is particularly frustrating when sending long URLs.

But that could soon be a thing of the past, as 9to5Google has spotted new code in the beta that reveals an expanded message composition field on the Pixel 9a, now reaching up to 14 lines.

New RCS labels

Recently, Google has been testing new in-app labels that could distinguish whether you’re sending an SMS or RCS message.

Thanks to an APK teardown from Android Authority, the labels found in beta suggest that soon you’ll be able to see which of your contacts are using RCS in Messages, adding a new RCS label to the right side of a contact’s name or number.

Unsubscribe from automated texts

This is a feature we’re quite excited to see, and we’re hoping for a wider rollout this month. A few weeks ago, an unsubscribe button was spotted at the bottom of some messages, which could give users an easier way of unsubscribing from automated texts, as well as the option to report spam.

When you tap this, a list of options will appear asking you for your reasons for unsubscribing, which include ‘not signed up’, ‘too many messages’, and ‘no longer interested’ as well as an option for ‘spam’. If you select one of the first three, a message reading ‘STOP’ will be sent automatically, and you’ll be successfully unsubscribed.
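The flow described above can be sketched in a few lines. This is purely an illustrative model of the behavior the article describes, not Google's code: three of the four reasons trigger an automatic "STOP" reply (the standard SMS opt-out keyword), and we assume 'spam' routes to a report flow instead.

```python
# Hypothetical sketch of the Messages unsubscribe flow described above.
# The option labels come from the article; the function and return
# strings are illustrative, not Google's actual implementation.

STOP_REASONS = {"not signed up", "too many messages", "no longer interested"}

def handle_unsubscribe(reason: str) -> str:
    """Return the action taken for a chosen unsubscribe reason."""
    if reason in STOP_REASONS:
        return "send: STOP"    # auto-reply with the standard opt-out keyword
    if reason == "spam":
        return "report: spam"  # assumed: routes to the spam-reporting flow
    raise ValueError(f"unknown reason: {reason}")

print(handle_unsubscribe("too many messages"))  # send: STOP
```

The key point is that the user never types "STOP" themselves; picking a reason sends it on their behalf.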

Read receipts get a new look

A screen shot of Google Messages read receipts redesign

(Image credit: 9to5Google)

Google could introduce another revamp of how you can view read receipts in the Messages app. In November 2024, Google tested a redesign of its read receipts that placed the checkmark symbols inside the message bubbles, which used to appear underneath sent messages.

In January, Google tested another small redesign, introducing a new white background that could roll out soon. While this isn’t a major change, it’s effective enough to make read receipts stand out more.

Camera and gallery redesign, and sending ‘original quality’ media

We first noticed that Google Messages was prepping a new photo and video quality upgrade. In March, the feature started appearing for more users, but it’s still not fully rolled out, meaning it could be one of the next updates to arrive in the coming weeks.

Essentially, Google could be rolling out a new option that allows you to send media, such as photos and videos, in their original quality. This will give you the choice of the following two options:

‘Optimize for chat’ - sends photos and videos at a faster speed, compromising quality.

‘Original quality’ - sends photos and videos as they appear in your phone’s built-in storage.

You might also like

How Geo Group’s Surveillance Tech Is Aiding Trump’s Immigration Agenda

NYT Technology - Mon, 04/14/2025 - 11:12
Geo Group, a private prison firm that makes digital tools to track immigrants, becomes one of the Trump administration’s big business winners as its tech is increasingly used in deportations.

OpenAI promises new ChatGPT features this week – all the latest as Sam Altman says ‘we've got a lot of good stuff for you’

Techradar - Mon, 04/14/2025 - 08:51

Another week, another OpenAI announcement. Just last week the company announced ChatGPT would get a major memory upgrade, and now CEO Sam Altman is hinting at more upgrades coming this week.

On X (formerly Twitter), Altman wrote last night, "We've got a lot of good stuff for you this coming week! Kicking it off tomorrow."

Well, tomorrow has arrived, and we're very excited to see what the world's leading AI company has up its sleeve.

We're not sure when to expect the first announcement, but we'll be live blogging throughout the next week as OpenAI showcases what it's been working on. Could we finally see the next major ChatGPT AI model?

Good afternoon everyone, TechRadar's Senior AI Writer, John-Anthony Disotto, here to take you through the next few hours in the lead-up to OpenAI's first announcement of the week.

Will we see something exciting today? Time will tell.

Let's get started by looking at what Sam Altman said on X yesterday. The OpenAI CEO hinted at a big week for the company, and it's all "kicking off" today!

"we've got a lot of good stuff for you this coming week! kicking it off tomorrow." – April 13, 2025

ChatGPT logo /Sam Altman

(Image credit: Shutterstock/EI Editorial)

One of the announcements I expect to see this week is GPT-4.1, the successor to GPT-4o. Just last week, a report from The Verge said the new AI model was imminent, and considering Altman's tweet, it could well arrive today.

GPT-4.1 will be the successor to GPT-4o, and while we're not sure exactly what it will be called, it could set a new standard for general-use AI models as OpenAI's competitors, like Google Gemini and DeepSeek, continue to catch up with – and sometimes surpass – ChatGPT.

ChatGPT was the most downloaded app in the world for March, surpassing Instagram and TikTok to take the crown.

That's an impressive feat for OpenAI's chatbot, which has become the go-to AI offering for most people. The recently released native 4o image generation has definitely helped boost the user count, as I've started to see more and more of my friends and family jump on the latest trends.

Whether that's creating a Studio Ghibli-esque image, an action figure of yourself, or turning your pet into a human, ChatGPT is thriving thanks to its image generation tools.

French Bulldog and its human AI counterpart

(Image credit: Future / ChatGPT)

Speaking of the pet-to-human trend, I tried it earlier, and I was horrified by the results.

If you've not been on social media over the weekend, you may have missed thousands of people sharing images of what their dogs or cats would look like as humans.

This morning I decided to give it a go, and then I went even further and converted an image of myself into a dog. Let's just say this is one of my least favorite AI trends of 2025 so far, and I don't want to think about my French Bulldog as a human ever again!

When will we get an announcement today?

There's no information on when to expect OpenAI's announcement today, but based on previous announcements we should get something around 6 PM BST / 1 PM ET / 11 AM PT.

Your guess is as good as mine on whether we'll get daily announcements this week like the 12 days of OpenAI announcements in December.

We'll be running this live blog over the next few days, so as soon as Altman and co. make an announcement, you'll get it here. Stay tuned!

One hour to go?

A businessman interacting with a virtual ChatGPT / AI graphic

(Image credit: Shutterstock/SomYuZu)

Around an hour to go until the expected OpenAI announcement. What will it be?

Could we see GPT-4.1? Or will we see some new agentic AI capabilities that take OpenAI's offerings to a whole new level?

Last week's memory upgrade was a huge deal, will today's announcement top that, or are we getting excited over a fairly minimal update? Stay tuned to find out!

Hello, Jacob Krol – TechRadar's US Managing Editor News – stepping in here as we await whatever OpenAI has in store for us today.

As my colleague John-Anthony has explained thus far, CEO Sam Altman teased, "We've got a lot of good stuff for you" this week, and it's a waiting game now.

OpenAI drops GPT-4.1 in the API, which is purpose-built for developers

This one is for developers – unless OpenAI has something else up its sleeve for later today. The AI giant has just unveiled GPT-4.1 in the API, a model purpose-built for developers, as a family of three models: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano.

While Sam Altman is not on the live stream for this unveiling, the OpenAI team – Michelle Pokrass, Ishaan Singal, and Kevin Weil – is walking through the news. GPT-4.1 is specifically designed for coding, following instructions, and long-context understanding.

With instruction following, the focus is on getting the models to understand a prompt as intended. I've seen shortcomings here with the standard GPT-4o mini on ChatGPT Plus, but going by the evaluation metrics, GPT-4.1 in the API is much better at following instructions, since it has been specifically trained for it.

The ideal result is a much more straightforward experience, where you shouldn't need to go back and forth as much to get the results you were after.

On the live stream, the OpenAI team walks through several demos showing how GPT-4.1 focuses on its three specialties – coding, following instructions, and long-context understanding – and how it's less prone to degenerate output and less verbose.

OpenAI says these are the fastest and cheapest models it has ever built, and all three are available in the API right now.

Sam Altman, OpenAI's CEO, wasn't on the livestream today but still took to X (formerly Twitter) to discuss the updates. He later retweeted a few others, including one user who said GPT-4.1 has already helped with workflows.

Again, it's not available to the general consumer, but it is out in the API already with major enhancements promised. So, while you might not encounter it, some websites or services you use might be employing it.

"GPT-4.1 (and -mini and -nano) are now available in the API! these models are great at coding, instruction following, and long context (1 million tokens). benchmarks are strong, but we focused on real-world utility, and developers seem very happy. GPT-4.1 family is API-only." – April 14, 2025
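For developers wondering what targeting the new family might look like, here's a minimal sketch. The three model names come from the announcement; the request shape assumes OpenAI's existing Chat Completions format, and the `build_request` helper and the "suited for" descriptions are our own illustration (nothing is actually sent to the API here).

```python
# Hypothetical sketch: picking one of the three GPT-4.1 API models.
# Model identifiers are from the announcement; the helper is illustrative.

GPT_41_FAMILY = {
    "gpt-4.1":      "full model: coding and long-context work",
    "gpt-4.1-mini": "mid tier: cheaper general tasks",
    "gpt-4.1-nano": "smallest: fastest, lowest-cost calls",
}

def build_request(model: str, prompt: str) -> dict:
    """Assemble a Chat Completions-style request body (not sent here)."""
    if model not in GPT_41_FAMILY:
        raise ValueError(f"not a GPT-4.1 family model: {model}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("gpt-4.1-nano", "Summarize this changelog in one line.")
print(req["model"])  # gpt-4.1-nano
```

In practice you'd pass a body like this to an API client with your key set; since the family is API-only, you won't see these model names in the ChatGPT picker.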

Yesterday's announcements

Digital transformation

(Image credit: Wichy / Shutterstock)

Just in case you missed yesterday's major announcements: OpenAI unveiled GPT-4.1 in the API — a model purpose-built for developers. It's a family consisting of GPT-4.1, GPT-4.1 Mini, and GPT-4.1 Nano.

These models are available in the API now, and could eventually supersede GPT-4o there. While consumers won't see any immediate benefits, there's plenty to be excited about going forward.

It's unclear whether OpenAI will announce more updates today, but there's a chance, especially given Sam Altman’s suggestion that this could be a full week of announcements.

Either way, stay tuned to the live blog — we’ll update it as soon as we hear anything new in the world of ChatGPT.
