Techradar


Google CEO hopeful Gemini will be integrated into Apple Intelligence in time for iPhone 17 launch

  • Google CEO says Google Gemini could be integrated into Apple Intelligence by the middle of this year
  • Google Gemini references were found in Apple Intelligence code of the iOS 18.4 beta
  • Google and Apple have a long-standing relationship

Google Gemini could be integrated into Apple Intelligence by the middle of this year, according to Google CEO Sundar Pichai.

As first reported by Bloomberg, Pichai said in court on Wednesday that he is hopeful of a partnership between the two companies, one that could be formalized by the middle of this year.

Bloomberg reports that Pichai has held a number of conversations with Apple CEO Tim Cook, and the companies look set to continue their long-standing partnership that has existed since the early days of the iPhone.

We previously reported on the references to Gemini in Apple Intelligence code as part of the iOS 18.4 beta in February, and multiple reports dating back to the original Apple Intelligence launch last year have hinted at a Gemini x Apple future.

If Pichai's estimates are correct, we could see Google Gemini integrated into Apple Intelligence in time for the expected September launch of the iPhone 17.

Could Google Gemini be revealed at WWDC 2025?

NEW: Google CEO Sundar Pichai said in court he is hopeful to have an agreement with Apple to have Gemini as an option as part of Apple Intelligence by middle of this year. This is referring to the Siri/Writing Tools integration ChatGPT has. https://t.co/udOPWNZGgk (April 30, 2025)

WWDC 2025 is just a couple of months away, and we're all eagerly anticipating what Apple will reveal in terms of iOS 19's expected UI overhaul as well as the future of Apple Intelligence.

If Google and Apple can strike this deal before WWDC, Google Gemini integration into Siri and Writing Tools could be a major talking point of the event.

At the time of writing, ChatGPT integration is available in Apple Intelligence, although it's often slower than using the dedicated app. If Gemini were to become an option, we'd hope Apple would work on making these AI integrations far more efficient and streamlined.

With WWDC 2025 just around the corner, we don't have long to wait to find out what the future holds for Apple Intelligence. One thing is for sure, however: Google Gemini as part of your core iPhone experience is looking likely.

You may also like

Reddit users give their verdicts on new ChatGPT shopping features: ‘the enshittification has arrived’

  • A new ChatGPT shopping experience is launching for all users
  • Early users have expressed displeasure with the new recommendations
  • Comparisons are being drawn to Cory Doctorow's "enshittification" concept

According to a new Reddit thread, some users are not happy with the new shopping features that are being integrated into ChatGPT and are rolling out now.

The new features make it easy to search for and buy consumer products online using ChatGPT's search feature combined with its rich natural-language prompts.

Reddit user 'Kjfitz', who started the thread, titled it “the enshittification has arrived” in reference to the infamous neologism coined by Cory Doctorow in a 2022 essay about Amazon. The term is generally used to describe how commercial pressures cause the experience of using platforms like Amazon, Google, and Facebook to gradually deteriorate over time as more monetization features are added.

The "Enshittification" has arrived from r/ChatGPT


“I asked ChatGPT about the impact of the current tariffs on inventories over the next few months. It returned a long list of links to toiletries I might want to buy”, said Kjfitz. “I asked it why it did that. It replied:

"As of April 28, 2025, OpenAI introduced new shopping features to ChatGPT, enhancing its capabilities to provide product recommendations complete with images, reviews, and direct purchase links. These features are available to all users, including those on Free, Plus, and Pro tiers, and even to users not logged in. The recommendations are generated organically, without paid advertisements or commission-based incentives, relying instead on structured metadata from third-party sources such as pricing, product descriptions, and reviews.​

"This update aims to offer a more personalized and streamlined shopping experience directly within the ChatGPT interface, allowing users to explore products across various categories like fashion, beauty, electronics, and home goods. If you have any specific preferences or need tailored recommendations, feel free to let me know!"

Other Reddit users were quick to express their displeasure with the new features. In reference to the notice that "The recommendations are generated organically, without paid advertisements or commission-based incentives", user 'crystallin' commented, “I give it less than six months before that changes.”

“Good God. Imagine the people using ChatGPT for mental health advice and it starts saying ‘you know what’ll fix your relationship with your kids? Switching to Allstate’,” said Reddit user 'rosencranberry'.

A gradual deterioration

In 2024, the Macquarie Dictionary, the national dictionary of Australia, picked “enshittification” as its word of the year, describing it as “the gradual deterioration of a service or product brought about by a reduction in the quality of service provided, especially of an online platform, and as a consequence of profit-seeking.”

It should be noted that there are currently no paid adverts or sponsored links attached to any of the shopping features in ChatGPT; however, there's nothing to stop them being added in the future.

We’re still in the early days of shopping features being added to chatbots, and the AI search engine Perplexity has already made some inroads into adding shopping features, but this is the first time e-commerce features have been added to ChatGPT search.

“We’ll continue to bring merchants along our journeys as we quickly learn and iterate”, said OpenAI, commenting on the launch of the new features yesterday.

You may also like

Forget Meta AI – WhatsApp now lets you talk directly to Perplexity and ChatGPT in the app, here’s how

  • You can now talk to Perplexity AI from within WhatsApp
  • Perplexity joins Meta AI and ChatGPT inside the app
  • The move raises privacy concerns after comments from Perplexity’s CEO

If you regularly converse with artificial intelligence (AI) chatbots, it might seem like a natural move to talk to them inside your favorite messaging app. Well, you’ve just gained a new option, as you’re now able to chat with Perplexity AI directly within WhatsApp.

To get started, all you have to do is add the following number within WhatsApp: +1 (833) 436-3285. Once that number is in your list of contacts, you can chat with Perplexity AI to your heart’s content.

Announcing the move on X, Perplexity CEO Aravind Srinivas said that you’ll be able to get “answers, sources, image generation” from your WhatsApp chats, with “a lot more features coming soon.”

In a video accompanying the post on X, Srinivas asked Perplexity what he would look like if he were bald, attaching a headshot of himself to the question. Perplexity replied with a doctored image of Srinivas with a bald head.

ChatGPT in WhatsApp

ChatGPT on WhatsApp

(Image credit: OpenAI)

Perplexity is just the latest AI chatbot to be added to WhatsApp. Meta, which owns WhatsApp, has added its own Meta AI tool to the app in the form of a permanent button in the lower-right corner. This gives you quick access to the company’s AI but also means you might accidentally trigger it since you can’t hide the icon.

If you’d rather your AI conversations appeared in your list of chats like those with living, human contacts, there are other options besides Perplexity. For instance, ChatGPT has recently become available in WhatsApp, and it can be contacted by adding the number +1 (800) 242-8478 to the app.

That said, you might have second thoughts about using a tool like Perplexity based on recent comments from its CEO. According to TechCrunch, Srinivas said Perplexity wants to build its upcoming Comet browser so that it can collect extensive data on its users, then sell that personal information to the highest bidder.

If that concerns you, you might want to think twice about feeding Perplexity’s WhatsApp AI with your personal questions.

You might also like

OpenAI has fixed ChatGPT's 'annoying' personality update - Sam Altman promises more changes 'in the coming days' which could include an option to choose the AI's behavior

  • OpenAI has rolled back the ChatGPT 4o update that caused the chatbot's AI personality to be 'sycophantic'
  • CEO Sam Altman claims more fixes are coming
  • We could see a personality selection option in ChatGPT in the future

After Sam Altman claimed that ChatGPT 4o's personality had become 'annoying', OpenAI has now made changes to the AI model, rolling back last week's update.

The company published a blog post on its website highlighting the issue that had caused ChatGPT to become more 'flattering or agreeable – often described as sycophantic.'

Users found the update infuriating, with CEO Sam Altman sharing on X earlier this week that the company was working on a fix for the 'annoying' behaviour.

the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.
at some point will share our learnings from this, it's been interesting. (April 27, 2025)

In the blog post, OpenAI says, "In last week’s GPT‑4o update, we made adjustments aimed at improving the model’s default personality to make it feel more intuitive and effective across a variety of tasks."

"However, in this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous."

In an effort to prevent this from happening again, the company has outlined how it plans to address sycophancy in the future.

For starters, training will be used to 'steer the model away from sycophancy', and guardrails will be put in place to 'increase honesty and transparency'.

OpenAI also plans to add more ways for users to provide feedback on updates before they are rolled out to the wider public.

More personality updates to come

we started rolling back the latest update to GPT-4o last night
it's now 100% rolled back for free users and we'll update again when it's finished for paid users, hopefully later today
we're working on additional fixes to model personality and will share more in the coming days (April 29, 2025)

Following the rollback of the ChatGPT 4o update, Altman wrote on X, "we're working on additional fixes to model personality and will share more in the coming days."

Putting two and two together, it looks like users might get a ChatGPT personality selection tool in the future. OpenAI's blog post mentions the company is "building new, easier ways for users to shape ChatGPT's behavior" beyond custom instructions.

If you want to go further and change ChatGPT's personality yourself, we recommend trying the following prompt from Reddit user TedHoliday:

"Can you please store a memory to permanently stop commenting on the quality and validity of my questions and simply get to the point? I don't want to be manipulated with flattery by a robot, and I don't want to have to skip past the garbage to get to the answer l'm looking for."

You might also like

Meta AI is here to take on ChatGPT and give your Ray-Ban Meta Smart Glasses a fresh AI upgrade

Tue, 04/29/2025 - 15:46
  • Meta has launched a new standalone app for its Meta AI assistant, powered by Llama 4
  • The app connects across Meta platforms and devices, including Ray-Ban Meta smart glasses
  • Meta AI personalizes its behavior based on your Instagram and Facebook activity

Meta AI is moving into its own space with the launch of a new standalone app. Fueled by Meta's new Llama 4 AI model, the new app is simultaneously a standalone product and a replacement for Meta View, which was previously used to connect to the Ray-Ban Meta smart glasses.

Meta’s making a big play here, positioning voice interactions as the most intuitive and natural way to interact with your AI. The app supports hands-free chatting and even includes a demo of full-duplex speech, a feature that lets you talk and listen at the same time.

That's very useful considering how keen Meta is to connect Meta AI with the company’s larger product portfolio, especially the Ray-Ban Meta smart glasses. These AI-enabled spectacles will now operate through the Meta AI app, replacing the Meta View app they currently rely on.

That means you can start a conversation on one platform and easily transition to another. All you need to do is open the Devices tab on the app and replicate your settings and saved information.

A stylish person wearing a pair of black Wayfarer Ray-Ban smart glasses in front of a green background

(Image credit: Meta)

Ask a question through your smart glasses, get a reply from Meta AI, and then pick up that same thread on your phone or desktop later. You can switch from voice chat in your glasses to reading the conversation in your app's history tab. For example, you could be on a walk and ask Meta AI through your glasses to find a nearby bookstore. The answer will be saved in your Meta AI app for later review.

The other major element of the Meta AI app is the Discover feed. There you can see publicly shared prompt ideas and the images other users have generated, then remix them for your own purposes.

Additionally, the desktop version of Meta AI is getting revamped with a new interface and more image generation options. There's also an experimental document editor for composing and editing text, adding visuals, and exporting the result as a PDF.

Meta AI all over

Meta has spent many months spreading Meta AI across Instagram, Facebook, Messenger, and WhatsApp, but this is the first time Meta AI isn't hosted within another mobile app.

The AI’s connection to Meta’s other apps does give it an edge (or a flaw, depending on your view) by allowing it to adapt its behavior based on what you do on those other apps. Meta AI draws on your Instagram and Facebook activity to personalize its answers.

Ask it where to go for dinner, and it might suggest a ramen spot your friend posted about last week. Ask for tips on an upcoming vacation, and it’ll remember you once posted that you love to “travel light but overpack emotionally” and suggest an itinerary that might fit that attitude.

Meta clearly wants Meta AI to be central in all your digital activities. The way the company pitches the app, it seems like you'll always be checking in with it, whether on your phone or on your head.

There are obvious parallels with the ChatGPT app in terms of style. But Meta seems to want to differentiate its app from OpenAI’s creation by emphasizing the personal over the broader utility of an AI assistant.

And if there’s one thing Meta has more of than nearly anyone, it’s personal data. Meta AI tapping into your social data, voice habits, and even your smart glasses to deliver responses designed for you feels very on-brand.

The idea of Meta AI forming a mental scrapbook of your life based on what you liked on Instagram or posted on Facebook might not appeal to everyone, of course. But if you're concerned, you can always put on the smart glasses and ask Meta AI for help.

You might also like

Google is adding 50 new languages to NotebookLM's Audio Overviews

Tue, 04/29/2025 - 15:00
  • Google’s NotebookLM now supports Audio Overviews in over 50 languages
  • Users can generate AI podcasts based on uploaded information in their preferred languages
  • The Gemini 2.5 Pro-powered feature maintains the same casual, conversational style regardless of language

It seems the AI 'podcasters' you create with Google's NotebookLM have been playing a lot of Duolingo. The AI-generated hosts of the Audio Overview feature can now perform in more than 50 languages. Audio Overview is the NotebookLM feature that transforms the documents, videos, books, and other information you give it into an audio show hosted by a couple of AI voices.

When you make the Audio Overview now, you can go to the settings and pick an Output Language ahead of generating the audio. You can also switch languages on the fly, generate summaries for friends or students in their native tongue, or prep multilingual study guides without hiring a translator or braving the often messy results of Google Translate.

The addition may not seem like a big deal among other AI spectacles, but it's potentially huge. Language acts as a gatekeeper for so many things, especially technology. Now, Google can entice those who might be interested in playing with its AI toy without them having to learn a whole new language.

An entire global audience of people studying, teaching, researching, or just interested in learning new things can now better navigate a Google platform previously limited by English exclusivity. Teachers, for instance, can upload a hodgepodge of resources in different languages and generate a digestible audio recap in whatever language their students are most comfortable with.

Global AI podcasts

The feature is only in beta, and Google is keen to caution that there are still some problems. The voices might stumble, and you might encounter weird or wrong translations, especially in less frequently used languages. But the synthetic personalities will still discuss your research paper sources or complete home makeover guide and riff, recap, and remark on the content; it's just that English won't be the only tongue they do it in.

That tone matters. One of the reasons Audio Overviews clicked with users in the first place is that they don’t sound like a monotone robot trying to teach you tax law. The AI hosts crack jokes, pause for effect, and express amazement in (usually) appropriate spots.

There’s something deeply satisfying about hearing your own research about ancient Rome explained to you in Latin. It definitely makes learning the language feel more exciting. Amo audire podcasts novis linguis.

You might also like

ChatGPT just powered-up search for everyone, here are all the new features

Tue, 04/29/2025 - 10:40
  • Better search has been added to the ChatGPT integration in WhatsApp
  • ChatGPT search now has improved citations
  • Improved memory is coming to search and shopping

On the back of the new shopping features rolling out to ChatGPT, OpenAI is also improving its search capabilities in a few helpful ways.

Firstly, the ChatGPT integration in WhatsApp has been beefed up to include more up-to-date answers when searching.

WhatsApp integration with ChatGPT works by adding ChatGPT as a new contact inside WhatsApp, then chatting with it via messages (or voice calls in the US and Canada), just as you would with a friend.

You can start a conversation with ChatGPT in WhatsApp by simply clicking this link on your mobile phone.

Better citations

Secondly, ChatGPT's search function has improved citation capabilities. Paragraphs can now have multiple citations, and when you hover your cursor over each citation, a different part of the text highlights, indicating what it’s referring to. This feature was already live when we tested it today, and it worked perfectly.

Next, you should soon be able to get real-time suggestions from ChatGPT that autocomplete with trending suggestions as you're typing. This sounds very similar to the way Google autocompletes when you start typing in Google Search; however, this feature hadn't rolled out to our account for testing at the time of writing.

Finally, improved memory is going to work with ChatGPT search and its new shopping feature. This means that ChatGPT will consider what you’ve said before when it searches and shops for you, meaning you won't have to keep explaining to it that you hate a particular brand and don't want to see results from them.

Memory in search and shopping will not initially be available in the European Economic Area, UK, Switzerland, Norway, Iceland, and Liechtenstein.

You may also like

Microsoft could offer a guided tour of the Copilot app to help Windows 11 newbies, but I’m still waiting for the big changes to the AI assistant

Tue, 04/29/2025 - 08:58
  • A Windows leaker has found a guided tour for the Copilot app
  • This is present in a test build of Windows 11, and offers a six-step introduction to the AI app
  • It covers the basics such as how to upload a file to have Copilot examine it, and the major interface elements of the app

Microsoft is apparently planning on introducing a guided tour for the Copilot app, in order to show the uninitiated what the AI assistant can do in Windows 11.

The fresh introduction for Windows 11 preview builds was noticed by PhantomOfEarth on X, as you can see in the post below.

The Copilot app on Windows is getting a guided tour to help new users get started with it. pic.twitter.com/4BRqQiPAWl (April 28, 2025)

The leaker provides a quartet of screenshots to show you how the guided tour is currently implemented. (I should note that although it isn’t stated explicitly, the assumption is that this is hidden in Windows 11 test builds, and not yet enabled by Microsoft).

The offer of the guided tour is placed at the top of the screen, above the Copilot text and prompt panel.

If you click to take it, the tour in its current form is something of a whirlwind affair. We’re shown three of the six steps which introduce the prompt box (where you ask Copilot to do things), the button to upload files for Copilot to work with, and the icon to click to transform the app into a mini-window (for multitasking).

Photo of a woman using an MSI Prestige A16 AI+ laptop

(Image credit: MSI)

Analysis: An obvious move

A guided tour for newbies to this particular facet of AI in Windows 11 seems like a sound enough move. Not everyone is familiar with Copilot in its new standalone app form, and a gentle introduction to the basic functions isn’t going to hurt. And if you’re not interested, obviously you can just ignore it (dismissing the prompt by turning down the tour).

Indeed, Microsoft might even work to beef up the tour a bit, although that may not be the case given that its purpose is just to deliver a brief introduction to the key elements of the Copilot app.

All of this is assuming that the piece of functionality makes the cut for inclusion in Windows 11, although as noted, I can’t see why it wouldn’t.

As you may recall, Copilot wasn’t always an app. When the AI assistant first arrived in Windows 11, it was integrated directly into the OS as a side panel (that pulled out from the right-hand side of the desktop). Microsoft then changed its mind and split off the AI into a standalone application, so it could be interacted with like a normal app – even though decoupling it from the operating system in this way kind of felt like a step backwards. Particularly when the Copilot app was so basic to begin with, although that has now changed.

Exactly where Copilot is going in Windows 11 remains to be seen, and there’s no shortage of folks who are rather underwhelmed at the progress made with the AI assistant so far. (And some who might’ve been quite glad to encounter a recent bug that accidentally deleted Copilot from Windows 11, rather unbelievably).

A lot of big initial promises about Copilot being able to enact sweeping changes to settings based on broad natural language prompts (such as “make my games faster”) still seem a long, long way off from where we stand right now. Currently, the Copilot app for Windows 11 isn’t that much different to using Copilot on the web.

You may also like...

I’ve seen ChatGPT’s new shopping features in action, and this could be the game changer we’ve been waiting for

Tue, 04/29/2025 - 06:16
  • OpenAI is introducing shopping features to all tiers of ChatGPT
  • You can integrate shopping into your chatbot conversations
  • The shopping features arrive on Monday

ChatGPT is getting new shopping features that enable you to shop and buy products directly inside ChatGPT conversations.

The usual online shopping experience leaves a lot to be desired. You’ve normally got several tabs open as you search for and then compare different products based on a selection of reviews.

Maybe that’s the way you like it, but now you can get ChatGPT to do all the work for you and help you find the perfect product.

So, instead of scrolling through endless pages of specs and reviews, now you can simply have a conversation with ChatGPT, where you let it know what you’re looking to buy.

The AI will think about your preferences and return several shopping suggestions for you to choose from.

You don’t have to take the first thing ChatGPT finds for you, either, since it produces a gallery of suggestions. You can also ask follow-up questions, of course, and compare products while you have a conversation with AI.

You can see a demo of ChatGPT shopping in action below:

Shopping
We’re experimenting with making shopping simpler and faster to find, compare, and buy products in ChatGPT.
✅ Improved product results
✅ Visual product details, pricing, and reviews
✅ Direct links to buy
Product results are chosen independently and are not ads.… pic.twitter.com/PkZwsTxJUj (April 28, 2025)

Shop till you drop

So, what sort of products will you be able to shop for on ChatGPT?

“We’re starting to experiment with categories including fashion, beauty, home goods, and electronics”, says OpenAI, indicating that it’s very much focused on consumer goods right now.

The other big question to ask is, can you trust ChatGPT recommendations?

Well, OpenAI says that ChatGPT recommendations are chosen independently and are not ads. In fact, there’s no mention of ads inside the service at all.

OpenAI is making the tacit admission that there won’t be as wide a selection of products to shop for as it would like at launch, but also that this is still early days for AI and e-commerce and that “we’ll continue to bring merchants along our journeys as we quickly learn and iterate.”

How does it work?

Using the new shopping features is simplicity itself. There are no extra screens or buttons to press inside ChatGPT; you simply enter a shopping-related query like, “Find me the best espresso machine for under $400 that makes coffee like they have in Italy,” and see what suggestions it brings up.

Product suggestions appear as a gallery, which can be scrolled sideways. Tapping on a product takes you to a more detailed pop-up window about it, which also contains a 'buy' button.

The only question left is, when do we get it? Plus, Pro, and Free users are all getting it on Monday (May 5), and, unusually, even logged-out users will be able to access it.

You may also like

Microsoft could be getting rid of more of the ancient Control Panel, shifting keyboard options across to Windows 11’s Settings app

Tue, 04/29/2025 - 05:47
  • Microsoft is porting over another part of the old Control Panel to Windows 11’s Settings app
  • This time around it’s keyboard options (following some mouse-related settings in the recent past)
  • This is hidden in test builds of Windows 11, so there’s no guarantee it’ll happen – but it seems a likely enough possibility

Microsoft is continuing the slow drip-feed of features moving from Windows 11’s Control Panel over to the Settings app.

The Control Panel is a rusty old part of Windows 11 that has been superseded by the Settings app, although they’re effectively the same thing – just a whole bunch of settings.

The Control Panel is still hanging around in the desktop OS because it has some legacy options that remain important (even if much of this functionality is pretty niche at this point). However, Microsoft is slowly migrating all these features across to Settings (where everything should be, ideally), and Techspot noticed the most recent move on this front.

In this case, it’s some keyboard options getting ported over, as flagged up by that regular leaker of all things Windows on X, PhantomOfEarth.

Welcome to yet another episode of Control Panel options moving to Settings. Hidden (not available by default) in the latest Dev and Beta CUs: migrated keyboard character repeat delay and repeat rate options in Settings > Accessibility > Keyboard. pic.twitter.com/Xgfi1xOXkf (April 25, 2025)

These are accessibility settings for the keyboard, specifically keyboard character repeat delay and repeat rate options.
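For the curious, these particular values have always lived in the Windows registry under the hood, whichever UI ends up exposing them. Here's a minimal sketch of reading them with Python's built-in winreg module; the key and value names are the long-documented ones, and this is purely illustrative, not part of Microsoft's migration work:

```python
# A minimal sketch: reading the keyboard repeat values Windows stores in
# the registry, regardless of whether Control Panel or Settings exposes
# them. These are the long-documented key and value names.
import winreg

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Control Panel\Keyboard") as key:
    delay, _ = winreg.QueryValueEx(key, "KeyboardDelay")  # string "0" (shortest) to "3" (longest)
    speed, _ = winreg.QueryValueEx(key, "KeyboardSpeed")  # string "0" (slowest) to "31" (fastest)

print(f"Repeat delay setting: {delay}, repeat rate setting: {speed}")
```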

I should make it clear that this hasn’t actually happened yet; as the leaker points out, this work is hidden away in the background of the latest releases of Windows 11 preview builds in both the Dev and Beta channels.

A woman sitting in a chair looking at a Windows 11 laptop

(Image credit: Microsoft)

Analysis: Peripheral consideration

As this hasn’t been implemented yet, we must be aware that not every change that’s put in place in the background of Windows 11 preview builds like this ends up being enabled. So, nothing might come of it, but given that Microsoft is slowly porting over features from the old Control Panel to the Settings app, it seems likely that this will be the next step.

Especially given that we recently saw clues that Microsoft is moving mouse options over from the Control Panel as well, so keyboard-related settings would fit with that. It makes sense to migrate the full range of mouse and keyboard settings over, too, as those are capabilities that a greater number of folks running Windows 11 are more likely to use (in comparison to some of the obscure stuff kicking about in corners of the Control Panel).

Naturally, all of this is (or should be) part of a broader drive to eventually do away with the Control Panel entirely, by shifting all its functionality across to the modern-looking Settings app.

Windows 11 will be a better place when this happens, simply because when you do stumble across a less commonly used setting that is still languishing in the Control Panel, it feels very jarring when that old interface pops up in the middle of Windows 11’s streamlined and modern desktop.

As to how long this full migration process will take, well, that’s anyone’s guess. However, given how sluggishly Microsoft has been going with this project so far – let’s just say it doesn’t appear to be any kind of priority – it’s likely to be the next version of Windows (whatever that may be called – Windows Copilot, perhaps) that this work is finished in.

You might also like...

Perplexity will make AI images for you, but ChatGPT is the one doing the work

Mon, 04/28/2025 - 19:00
  • Perplexity has added AI image generation to its platform
  • The images are produced using the OpenAI model, which was recently released for ChatGPT
  • Perplexity also made OpenAI's o3-mini and xAI's Grok-3 models available

AI conversational search engine Perplexity can now add some AI visuals to your answer. And if those images look a lot like what ChatGPT would make, well, that's because they use the same model.

If you're unconvinced, the left image was generated using Perplexity, while the one on the right was created by ChatGPT, both with the same prompt. It's like an AI ghostwriter, but for fantasy landscapes with dragons instead of a legal thriller sold in an airport.

Perplexity quietly added the feature to its web platform this week, offering three image generations per day for free users and unlimited generations for Perplexity Pro users. It's pretty straightforward to use.

Like with ChatGPT, you just have to ask the AI to "generate an image of" something, or use similar language to set up the prompt.

Don't worry if you don't have the model (officially GPT-4.1) chosen from the list of model options, either; Perplexity will automatically use it to produce the visual. That's likely because none of the other models will make a picture on Perplexity at the moment.

you can generate images on perplexity now. the UI is cute and fun. we have also added support for grok 3 and o4-mini for model selection options (which already supports gemini 2.5 pro, claude 3.7, perplexity sonar, gpt-4.1, deepseek r1 1776), and looking into supporting o3 as… pic.twitter.com/RX6L98pf2g (April 25, 2025)

Perplexity Pictures

That wasn't the only addition to Perplexity's abilities announced by the company, though. The AI assistant added a couple of other models to its stable.

xAI's Grok 3 model is now one of the choices for Perplexity to use in answering questions, while OpenAI's o4-mini model is now one of the "reasoning" model options.

This all fits with Perplexity's approach to its AI platform. Rather than trying to build everything from scratch, the company is curating models and weaving them into its platform to streamline access.

It’s a smart play, especially considering how many people may want to try an AI tool, but aren't willing to commit to just one among the many. Most people just want to ask a smart machine a question, get a clear answer, maybe see a cool picture of a flying whale while they’re at it, and move on with their lives.

The addition of ChatGPT's image generator is a nice splash of color to the AI search engine. It will likely become especially popular once it joins the voice assistant on the Perplexity mobile app.

You might also like

Google has tuned up its AI Music Sandbox for musicians and producers

Mon, 04/28/2025 - 15:00
  • Google DeepMind has enhanced and expanded access to its Music AI Sandbox
  • The Sandbox now includes the Lyria 2 model and RealTime features to generate, extend, and edit music
  • The music is watermarked with SynthID

Google DeepMind has brought some new and improved sounds to its Music AI Sandbox, which, despite sand being notoriously bad for musical instruments, is where Google hosts experimental tools for laying down tracks with the aid of AI models. The Sandbox now offers the new Lyria 2 AI model and the Lyria RealTime AI musical production tools.

Google has pitched the Music AI Sandbox as a way to spark ideas, generate soundscapes, and maybe help you finally finish that half-written verse you’ve been avoiding looking at all year. The Sandbox is aimed mainly at professional musical artists and producers, and access has been pretty restricted since its 2023 debut. But, Google is now opening up the platform to many more people in music production, including those looking to create soundtracks for films and games.

The new Lyria 2 AI music model is the rhythm section underlying the new Sandbox. The model is trained to produce high-fidelity audio outputs, with detailed and intricate compositions across any genre, from shoegaze to synthpop to whatever weird lo-fi banjo-core hybrid you’re cooking up in your bedroom studio.

The Lyria RealTime feature puts the AI's creation in a virtual studio that you can jam with. You can sit at your keyboard, and Lyria RealTime will help you mix ambient house beats with classic funk, performing and tweaking its sound on the fly.

Virtual music studio

The Sandbox offers three main tools for producing the tunes. Create lets you describe the kind of sound you're aiming for in words, and the AI whips up music samples you can use as jumping-off points. If you've already got a rough idea down but can’t figure out what happens after the second chorus, you can upload what you have and let the Extend feature come up with ways to continue the piece in the same style.

The third feature is called Edit, which, as the name suggests, remakes the music in a new style. You can ask for your tune to be reimagined in a different mood or genre, either by messing with the digital control board or through text prompts. For instance, you could ask for something as basic as "Turn this into a ballad," or something more complex like, "Make this sadder but still danceable," or see how weird you can get by asking the AI to "Score this EDM drop like it's all just an oboe section." You can hear an example below created by Isabella Kensington.

AI singalong

Everything generated by Lyria 2 and RealTime is watermarked using Google's SynthID technology. That means the AI-generated tracks can be identified even if someone tries to pass them off as the next lost Frank Ocean demo. It’s a smart move in an industry that’s already gearing up for heated debates about what counts as "real" music and what doesn’t.

These philosophical questions also decide the destination of a lot of money, so there's more at stake than abstract discussion about how to define creativity. But, as with AI tools for producing text, images, and video, this isn't the death knell of traditional songwriting. Nor is it a magic source of the next chart-topping hit. AI could still make a half-baked hum fall flat if poorly used. Happily, plenty of musical talents understand what AI can do, and what it can't, as Sidecar Tommy demonstrates below.

You might also like

New Meta XR glasses again tipped to land later this year – well ahead of Apple's rumored AR glasses with Apple Intelligence

Mon, 04/28/2025 - 12:00
  • Meta's smart glasses with a screen again tipped for 2025 launch
  • They're expected to land in October and cost over $1,000 / £1,000 / AU$1,500
  • Apple is also working on smart glasses according to rumors, but they're still some time off from launch

Meta's incoming AR smart glasses could eventually face an Apple-made rival with Apple Intelligence, according to new rumors. The details add credibility to other rumors we’ve heard previously and hint at a big AR glasses battle in the coming decade – though it’s a fight Meta has a big headstart on right now.

The information comes via Mark Gurman’s latest PowerOn newsletter (behind a paywall) where he details some insider reports of what the two companies are apparently working on.

Gurman’s comments support a few details we’ve heard previously about Meta’s upcoming glasses. They’ll be smart glasses like its existing Ray-Bans but will also have a display, they’ll be pricey (we’re talking over $1,000 / £1,000 / AU$1,500), and Meta is targeting an October 2025 release (which is when it usually releases new Quest and smart glasses hardware).

However, Meta is at risk of slipping from this target date. Gurman adds that “top managers on the team” have reportedly told their staff to pick up the pace – and in some cases employees may need to work through their upcoming weekends to achieve Meta’s goals.

Apple Glass - Apple logo seen through a pair of glasses

Apple glasses incoming (Image credit: Shutterstock / Girts Ragelis)

There’s no word on when the glasses might be released if they miss their October deadline – we’re hoping they’ll fall this side of 2025 rather than 2026, though ideally their release date will arrive without any excessive crunch for Meta's employees.

We've also heard the first signs of some potential pressure from Apple’s first smart glasses – codenamed N50.

Based on how Gurman describes them (“an Apple Intelligence device” that can “analyze the surrounding environment and feed information to the wearer” but stops short of proper AR), they sound just like what Meta has released and is working on in the smart glasses space.

The issue? Apparently a launch is still some time away.

Gurman isn’t specific on when a launch might follow, but with Meta, Snap and now Google and Samsung (via Android XR) getting involved in the smart glasses space it seriously feels like Apple is giving everyone a major headstart.

A Ray-Ban Meta smart glasses user staring at his phone

(Image credit: Meta)

Analysis: Will Apple be late or right on time?

Given its success with the Apple Watch and AirPods from both a portability and fashionability standpoint (the two key areas smart glasses need to succeed in), Apple has the potential to catch up.

But if its non-AR glasses do launch in 2027, that could coincide with when Meta launches full-on AR specs, according to leaked development timetables – which means Apple's rival device runs the risk of being dated out of the gate. Then again, Apple’s delayed release will only matter if Meta, Android XR, Snap, and others can capitalize on it.

These other AR glasses might be out in the wild sooner, but if they’re expensive and lack innovative applications, they likely won’t be super popular. This could especially be an issue for Meta’s upcoming XR specs, as the existing Meta Ray-Ban smart specs are already great and only continue to get better thanks to software updates.

Humanity Reimagined

(Image credit: Jason Redmond / TED)

A display would be a significant enhancement, sure, but it doesn’t yet seem like an essential one – especially when you consider the display-less specs start at just $299 / £299 / AU$449 and are already the best AI wearable around.

On the other hand, if the upcoming Meta and Google XR glasses can match even half of the cool uses that I experienced on the Snap Spectacles during my demo, then they have the potential to take people’s perception of XR technology to new heights. That would be an exciting prospect, and a high price would seem significantly more justifiable.

We’ll just have to wait and see what Meta, Apple, and Google have up their sleeves, if and when their next-gen XR glasses finally release to the public.

You might also like

AI is better at picking which puppy will make a good guide dog than humans are

Mon, 04/28/2025 - 10:57
  • New research shows AI can help identify which dogs are most likely to make successful guide dogs
  • It can help reduce the emotionally draining problem of ‘late-stage failure’ in guide dog training
  • AI can map dog personality types

AI is being used to help identify which pups have the greatest potential to go on to become guide dogs (also known as seeing-eye dogs) or service dogs earlier and with more accuracy.

In a new research project at the University of East London, Dr Mohammad Amirhosseini, Associate Professor in Computer Science and Digital Technologies, found that one AI model achieved 80% prediction accuracy over a 12-month period.

“One of the biggest challenges in assistance dog training is the emotional and financial cost of late-stage failure,” says Dr Amirhosseini. “This is more than a tech innovation – it’s a leap forward for animal welfare.”

To perform the analysis, the trainers who work most closely with the dogs record their behaviour at six months and 12 months using detailed questionnaires, which function as snapshots of the dog’s temperament, focus and personality.

AI then weaves its magic and spots the early signs of suitability for guide or service dog training. The AI can detect patterns of behavior that even experienced trainers could miss.
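The study's actual models aren't detailed here, but the general shape of the approach is standard supervised learning: questionnaire scores in, a pass/fail suitability prediction out. Below is a toy sketch under that assumption; the features, synthetic data, and choice of logistic regression are all illustrative guesses, not the researchers' pipeline:

```python
# A toy sketch of the general approach: predicting later guide-dog
# suitability from questionnaire scores taken during training. The
# features, data, and model choice are illustrative, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical questionnaire scores (1-5): columns might be traits such
# as distractibility, confidence, and trainability at 6 and 12 months.
X = rng.uniform(1, 5, size=(400, 6))
# Hypothetical outcome: 1 = qualified as a guide dog, 0 = did not.
y = (X.mean(axis=1) + rng.normal(0, 0.4, 400) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```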

Paw patrol

The project brought together an international dream team of dog experts, including members of The Seeing Eye, the oldest guide dog training centre in the world, as well as Canine Companions, the team featured in Netflix’s Inside the Mind of a Dog documentary.

While many industries are under threat from AI removing human jobs, the study shows that there are plenty of opportunities for AI to benefit mankind.

AI can potentially be a game-changer for many dog training programmes, saving time, money and even heartbreak, because dogs that don’t make the cut have to be rehomed, separated from their potential new vocations and owners.

If an AI can pick up patterns that even seasoned dog trainers miss, then it will become a powerful new ally in the field of animal training.

You may also like

The Godfather of AI is more worried than ever about the future of AI

Mon, 04/28/2025 - 10:28

Dr Geoffrey Hinton deserves credit for helping to build the foundation of virtually all neural-network-based generative AI we use today. You can also credit him in recent years with consistency: he still thinks the rapid expansion of AI development and use will lead to some fairly dire outcomes.

Two years ago, in an interview with The New York Times, Dr Hinton warned, "It is hard to see how you can prevent the bad actors from using it for bad things."

Now, in a fresh sit-down, this time with CBS News, the Nobel Prize winner is ratcheting up the concern, admitting that when he figured out how to make a computer brain work more like a human brain, he "didn't think we'd get here in only 40 years," adding that "10 years ago I didn't believe we'd get here."

Yet, now we're here, and hurtling towards an unknowable future, with the pace of AI model development easily outstripping the pace of Moore's Law (which states that the number of transistors on a chip doubles roughly every 18 months). Some might argue that artificial intelligence is doubling in capability every 12 months or so, and undoubtedly making significant leaps on a quarterly basis.

Naturally, Dr Hinton's reasons for concern are now manifold. Here's some of what he told CBS News.

1. There's a 10%-to-20% risk that AIs will take over

That, according to CBS News, is Dr Hinton's current assessment of the AI-versus-human risk factor. It's not that Dr Hinton doubts that AI advances will pay dividends in medicine, education, and climate science; I guess the question here is, at what point does AI become so intelligent that we do not know what it's thinking about or, perhaps, plotting?

Dr Hinton didn't directly address artificial general intelligence (AGI) in the interview, but it must be on his mind. AGI, which remains a somewhat amorphous concept, could mean that AI machines surpass human-like intelligence – and if they do, at what point does AI start to, as humans do, act in its own self-interest?

2. Is AI a "cute cub" that could someday kill you?

In trying to explain his concerns, Dr Hinton likened current AI to someone owning a tiger cub. "It's just such a cute tiger cub, unless you can be very sure that it's not going to want to kill you when it's grown up."

The analogy makes sense when you consider how most people engage with AIs like ChatGPT, Copilot, and Gemini, using them to generate funny pictures and videos, and declaring, "Isn't that adorable?" But behind all that amusement and shareable imagery is an emotionless system that's only interested in delivering the best result as its neural network and models understand it.

3. Hackers will be more effective – banks and more could be at risk

When it comes to current AI threats, Dr Hinton is clearly taking them seriously. He believes that AI will make hackers more effective at attacking targets like banks, hospitals, and infrastructure.

AI, which can code for you and help you solve difficult problems, could supercharge their efforts. Dr Hinton's response? Risk mitigation: he spreads his money across three banks. Seems like good advice.

4. Authoritarians can misuse AI

Dr Hinton is so concerned about the looming AI threat that he told CBS News he's glad he's 77 years old, which I assume means he hopes to be long gone before the worst-case scenario involving AI potentially comes to pass.

I'm not sure he'll get out in time, though. We have a growing legion of authoritarians around the world, some of whom are already using AI-generated imagery to propel their propaganda.

5. Tech companies aren't focusing enough on AI safety

Dr Hinton argues that the big tech companies focusing on AI, namely OpenAI, Microsoft, Meta, and Google (where Dr Hinton formerly worked), are putting too much focus on short-term profits and not enough on AI safety. That's hard to verify, and, in their defense, most governments have done a poor job of enforcing any real AI regulation.

Dr Hinton has taken notice when some try to sound the alarm. He told CBS News that he was proud of his former protégé and OpenAI's former Chief Scientist, Ilya Sutskever, who helped briefly oust OpenAI CEO Sam Altman over AI safety concerns. Altman soon returned, and Sutskever ultimately walked away.

As for what comes next, and what we should do about it, Dr Hinton doesn't offer any answers. In fact he seems almost as overwhelmed by it all as the rest of us, telling CBS News that while he doesn't despair, "we're at this very very special point in history where in a relatively short time everything might totally change at a change of a scale we've never seen before. It's hard to absorb that emotionally."

You can say that again, Dr Hinton.

You might also like

Windows 11 24H2 update arrives in preview with important fix for blue screen crashes – but I still wouldn’t rush to install this upgrade

Mon, 04/28/2025 - 05:37
  • Windows 11 24H2 PCs now have an optional (preview) update rolling out
  • It delivers exclusive features for Copilot+ PCs and other goodies for all devices
  • Given the nature of the features – and the main fix for crashing issues provided – I’d advise waiting with this one, even more so than your usual preview update

Windows 11 24H2 has a new optional update which, aside from sending Recall live on Copilot+ PCs, has some goodies for non-AI PCs too – including an important resolution of a bug causing blue screen crashes. However, I’d bide your time before grabbing this one, for reasons I’ll come back to shortly.

As Windows Latest reports, the preview update for 24H2 that’s just been released fully addresses the issue with Blue Screen of Death (BSOD) crashes that were troubling some Windows 11 users. These incidents were bringing PCs to a grinding halt with cryptic error messages of one kind or another (such as ‘Secure Kernel Error’ or ‘Critical Process Died’).

Now, you may recall that Microsoft deployed an emergency fix to resolve this matter already, so you might be wondering: didn’t that cure these BSODs? Well, yes it did, but that was achieved by rolling back a problematic change applied in the April cumulative update (the full release for this month, as opposed to this freshly arrived optional update).

What’s arrived with this new optional update is the full fix for the issue, so whatever change was made previously that was rolled back – Microsoft didn’t tell us what it was, incidentally – has now been put back into place, minus the bothersome BSODs (well, hopefully).

Elsewhere in this optional patch, Microsoft has provided faster compressed file extraction, so when you’re pulling the contents out of a ZIP in Windows 11, those files are unpacked a bit more swiftly (as spotted in testing previously). This is when using Windows 11’s built-in ZIP functionality in File Explorer (the folders you work with on the desktop).

Aside from the Copilot+ PC exclusives, another final noteworthy point is that the side panel on the Start menu for the Phone Link app is now rolling out to all Windows 11 PCs with this update. This provides all the key functionality for integrating important smartphone features – for your Android or iPhone device – right there in the Start menu for convenience.

As it’s only rolling out currently, though, you may still have to wait a while for it to arrive, even if you install this optional update.

AOC Agon Pro AG276FK gaming monitor tilted slightly to the side, showing the Windows desktop screen

(Image credit: Future / Jeremy Laird)

Analysis: More reasons to swerve this optional update than normal

That’s the key question of course: do you want to install this update? I generally advise folks to avoid preview updates, and this one isn’t any different, particularly given that if the blue screen crashes were what was bothering you about the previous (cumulative update) for April, they’ve been temporarily mitigated anyway.

I’d suggest that whatever had to be rolled back to avoid BSODs is something you can likely live without until May 13, which is when this optional patch will become the full cumulative update for May. That means it’ll have been further tested, so if there are any wrinkles in the BSOD cure, they should’ve been straightened out at that point.

Of course, if you are still experiencing blue screen crashes with your Windows 11 24H2 machine – meaning that Microsoft’s rollback mitigation didn’t work for you – in that case, it’ll likely be worth grabbing this optional update.

Otherwise, I’d leave it, as you can always wait for faster unzipping speeds, and the Phone Link addition to the Start menu is in its very early rollout phase anyway – so you might not get that for a while, even if you install this preview update.

Copilot+ PC owners may be much more tempted to download this optional upgrade, mind, seeing as they’re getting a lot out of it. Namely the full arrival of the kingpin AI feature, Recall, complemented with Click to Do, and on top of that, arguably the most important addition, an improved basic search functionality for Windows 11.

Despite that, these are intricate features – Recall in particular – and as such, I’d still be inclined to wait for the full official update to turn up in mid-May rather than chance any wonkiness now. Although I should note that even with that full release, Recall will still be labeled as in ‘preview’ (but that turbocharged natural language search for Windows 11 won’t be).

You might also like...

Sam Altman says OpenAI will fix ChatGPT's 'annoying' new personality – but this viral prompt is a good workaround for now

Mon, 04/28/2025 - 05:12
  • OpenAI CEO claims ChatGPT 4o's personality is 'too annoying'
  • The company is working on fixes, due this week, that will tone down the enthusiasm
  • In the meantime, we've got two prompts that distinctly alter ChatGPT's personality based on preference

Are you bored of ChatGPT trying its hardest to respond like a human? OpenAI CEO Sam Altman says the company is working on a fix to tone down GPT-4o's 'sycophant-y and annoying' personality.

Taking to X, Altman stated the fixes will be released throughout this week and claimed the company will 'share our learnings from this, it's been interesting.'

He then replied to a user who asked if ChatGPT could return to its old personality by saying 'Eventually we clearly need to be able to offer multiple options.'

Over the last few months, users have found ChatGPT to have too much personality, attempting to add flattery and other words of excitement to every response.

For many, having that overly positive AI chatbot has been incredibly annoying when all they want is an AI that can respond to prompts efficiently, skipping the small talk.

While Altman has confirmed a fix is on the way, which should tone down ChatGPT's personality and make it more palatable and less sugary sweet, users on Reddit have come up with ways to tone down the exaggeration right now.

the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.
at some point will share our learnings from this, it's been interesting. (April 27, 2025)

A temporary fix

Representation of AI

(Image credit: Shutterstock)

If you can't wait for Altman and Co. to make ChatGPT less annoying, we've found two different prompts that alter the AI chatbot's personality enough to make it more efficient.

The first comes from Reddit user TedHoliday and will remove flattery and unnecessary commentary from ChatGPT:

"Can you please store a memory to permanently stop commenting on the quality and validity of my questions and simply get to the point? I don't want to be manipulated with flattery by a robot, and I don't want to have to skip past the garbage to get to the answer l'm looking for."

I've removed the expletives from the original prompt, but this version will work just as well as the R-rated version.

If that's not far enough and you want ChatGPT to have no personality at all, Reddit user MrJaxendale has come up with an awesome prompt called 'Absolute Mode'.

"System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension.

Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user's present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language.

No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome."

Absolute Mode will make ChatGPT respond in as few words as possible, and it's a completely refreshing way to use AI if you're sick and tired of wondering whether you should be polite or not.

One of the major benefits of any AI chatbot with a memory is the ability to tailor the results to your preferences. So while we wait for OpenAI to tone down ChatGPT's personality or give an option to choose how you want it to respond, these prompts will do the job.
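If you'd rather bake an instruction like this into your own tooling rather than paste it into the app, here's a minimal sketch using the OpenAI Python SDK. The condensed system prompt and the model choice are our own illustrative assumptions, not an official recipe:

# A minimal sketch, assuming the openai Python SDK (v1+) and an
# OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# A condensed, paraphrased take on the Reddit 'Absolute Mode' instruction
SYSTEM_PROMPT = (
    "Eliminate emojis, filler, hype, soft asks, and conversational "
    "transitions. Prioritize blunt, directive phrasing. Never end a "
    "reply with questions, offers, or suggestions."
)

response = client.chat.completions.create(
    model="gpt-4o",  # the model discussed in this article
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Summarize today's top tech headlines in three bullet points."},
    ],
)
print(response.choices[0].message.content)

Because the instruction rides along as a system message on every request, you get the stripped-down tone without having to rely on ChatGPT's memory feature at all.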

You might also like

I tried using ChatGPT to restore old photos, here’s how to really do it

Fri, 04/25/2025 - 22:30

There’s a new AI image fad spreading on the internet, one that could bring new life to those dusty shoeboxes and family albums. ChatGPT users have discovered that the AI assistant can take old photos and mimic a restored, colorized version.

I first saw it discussed on a Reddit thread, one that was initially skeptical of ChatGPT's restoration ability for good reason.

The AI was far more likely to take an old headshot and 'restore' it by making a headshot of someone who might be the cousin of the person in the original photo. Thanks to tips shared by others on Reddit and Instagram on how to make it work properly, I cobbled together a prompt that does a pretty good job of it.

The prompt I settled on is: “Please upscale and colorize this photo that I own the rights to while keeping it faithful to the time period. Do not change the arrangement, expressions, background, or attire; only add period-accurate color and details. The new photo should overlay the original exactly.”

It's not actually restoring the image; you might think of it as a recreation by an art forger with a modernist streak. The results are in color, with better resolution, any damage repaired, and even some lost details reimagined.

Again, it's important to remember this isn't the same thing as enhancing the actual photo, but it's amusing, and certainly feels less problematic than swiping Studio Ghibli's style.
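If you'd rather script the trick than paste the prompt into the ChatGPT app, a rough sketch using OpenAI's image-editing endpoint might look like the following. The model name, file paths, and base64 response handling are assumptions based on the current API, not something the trend itself requires:

import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RESTORE_PROMPT = (
    "Please upscale and colorize this photo that I own the rights to "
    "while keeping it faithful to the time period. Do not change the "
    "arrangement, expressions, background, or attire; only add "
    "period-accurate color and details. The new photo should overlay "
    "the original exactly."
)

# 'old_photo.png' is a placeholder path for your scanned original
with open("old_photo.png", "rb") as photo:
    result = client.images.edit(
        model="gpt-image-1",  # illustrative; availability may vary by account
        image=photo,
        prompt=RESTORE_PROMPT,
    )

# The endpoint returns base64-encoded image data
with open("restored_photo.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))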

To test out the trend, I pulled some public domain photos from the Library of Congress, which is truly an amazing resource. Check out some of the results below.

Rough Rider Restoration

ChatGPT Restoring Photos

(Image credit: Library of Congress/ChatGPT-generated image)

First up is President Theodore Roosevelt. The photo catches Teddy in the middle of a writing session at his desk. The desk has a large plaque and, in a very of-the-era touch, a liquor decanter with glasses. After ChatGPT does its work, you can see some fine details and colors.

Now, not all of those details are right: hand positions, the shape of the chair arms, a radio replaced with what looks like a large candle, and plenty more. It managed to capture a lot of the man himself pretty well, though.

Bike to the future

ChatGPT Restoring Photos

(Image credit: Library of Congress/ChatGPT-generated image)

Next came the bicycle delivery boy. Again, the person looked amazingly like the original. The original photo was already lively, with puddles on a street that is clearly filthy. The colorized version decided that meant the street had a nice hardwood floor for some reason.

Even so, the sign on the building behind the bicyclist, along with the crowd in the background, all really pop. It even managed to get the reflection of the bicycle on the ground.

Soda jerk

ChatGPT Restoring Photos

(Image credit: Library of Congress/ChatGPT-generated image)

Moving ahead in time a bit is this soda jerk. A happy fellow smiling while handing over what I can only assume is an egg cream or a root beer float with the flair of a Broadway actor playing “guy who loves seltzer.”

ChatGPT delivered a restored version that nailed his face and expression, not to mention the bow tie. Sure, the hair is a little off, and I'm not sure how appetizing the modern drink looks, but it seems like he enjoys his job in the black void behind the counter.

Iterate, iterate, iterate

ChatGPT Restored Photo

(Image credit: Library of Congress/ChatGPT)

Then came the two policemen in their long coats and tall hats. I assume the serious mustaches were part of the uniform. The colorized version brought their uniforms to life with deep navy blue tones and brass buttons that practically gleam.

The officers are a little taller than their black and white counterparts, with somewhat better tailoring, but it's a photo that belongs in the same gallery of law enforcement portraits.

Mr. Mustache

ChatGPT Restoring Photos

(Image credit: Library of Congress/ChatGPT-generated image)

Speaking of mustaches, that's the real star of the last photo. You almost don't even notice the gentleman behind it. It's not just facial hair, it's architecture. The restored version very slightly cleaned up the stray hairs, but otherwise, this was the most impressive result.

The face, haircut, coat, and even the crease in the shirt are there. Every bristle is crisp, and wherever he's headed, I'd love to see the rest of the barbershop quartet.

You might also like

Free ChatGPT users are finally getting Deep Research access from today, but there are restrictions

Fri, 04/25/2025 - 04:57
  • A lightweight Deep Research is rolling out to free-tier users
  • It’s based on the ChatGPT o4-mini model
  • Plus users will also get access to it after they hit their usage limits

It’s the news that all ChatGPT free users have been waiting for: Deep Research is finally coming to the free tier of ChatGPT. However, the new version of the popular research tool is not quite the same as the one currently enjoyed by Pro, Plus, Teams and Enterprise users.

The new Deep Research, which is rolling out to all free-tier users starting from today, is described by OpenAI as ‘lightweight’. It's powered by a version of ChatGPT o4-mini, in contrast to the existing Deep Research, which is powered by a version of ChatGPT o3.

OpenAI describes the new lightweight Deep Research as “nearly as intelligent as the Deep Research people already know and love, while being significantly cheaper to serve.”

In a tweet on the subject, OpenAI shared a graph showing accuracy rates for the new lightweight Deep Research compared to the original Deep Research and the o3 model.

The bad news is that free-tier users are going to be restricted to five uses per month, so don’t click the Deep Research button (when you get it) unless you really need to use it.

Deep Research accuracy graph.

A graph showing the accuracy of the new lightweight o4-mini model compared to the original Deep Research, which uses a version of the o3 model. (Image credit: OpenAI)

Expanding usage limits for Plus users

At the same time, OpenAI says it's expanding usage limits for Deep Research for existing Plus and Teams users. However, it appears that once your 10 monthly uses are exhausted, the ‘expansion’ is achieved by giving you access to the lightweight version instead.

Once the usage limit on the original ChatGPT Deep Research is reached, users will automatically switch to the lightweight version and gain a further 15 uses a month. That has led some ChatGPT users on X to accuse the company of offering poor value compared to Google’s Gemini, which allows up to 20 Deep Research uses per day on its Advanced plan and 10 per month for free Gemini users.

ChatGPT Pro users get 125 Deep Research uses a month, with an additional 125 lightweight uses a month, while Enterprise users simply get 10 uses of the original Deep Research a month.
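For a rough sense of scale, that means a Plus subscriber tops out at 25 Deep Research reports a month (10 full plus 15 lightweight), while Gemini Advanced's 20 per day works out to around 600 a month, assuming a 30-day month.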

Using Deep Research

To use Deep Research you simply select the 'Deep research' button while using ChatGPT.

Deep Research has agentic qualities, meaning you can give it a task and it will continue researching for you, under its own steam, until it has completed the task, producing a full report complete with citations.

Reports can take several minutes to generate, and involve searching the web for sources, which makes Deep Research most suited to answering complex questions, rather than the usual back and forth chats that people have with AI chatbots.

I've found that Deep Research works best for subjects like literature reviews, market research, or for helping me make big life decisions like deciding where to live.

The news comes hot on the heels of the announcement that ChatGPT Plus, Teams and Enterprise users are getting expanded usage limits, with 100 minutes of ChatGPT o3 usage per week and 300 minutes of ChatGPT o4-mini per day.

You may also like

Microsoft could be working on the next update for Windows 11 – but 25H2 could end up being a disappointment

Fri, 04/25/2025 - 04:37
  • Fresh clues about the 25H2 update have been uncovered
  • References in a file mention 25H2 and tie it to preview builds in the 26200 range that were recently kicked off by Microsoft
  • As 26200 is a small increment from previous 26100 builds, it is likely the 25H2 update will be a minor affair in the form of an ‘enablement package’

More clues have been picked up suggesting that Microsoft is indeed working on Windows 11 25H2 – speculation which was first fired up last month – and that it’s likely to be a relatively minor update.

Windows Latest noticed that one of the more regular leakers of Microsoft-related info on X, XenoPanther, spotted what’s apparently a reference to the 25H2 update in a recent preview build of Windows 11.

GE25H2 is mentioned in appraiserres.dll: 26200=FT_ALL_CompatIndicatorHelper_WritingGE25H2. April 23, 2025

The ‘GE25H2’ reference, found in a DLL file, stands for ‘Greater than or Equal to 25H2’, and another mention of 25H2 specifically connects it to the series of builds numbered from 26200.

Windows Latest has verified this, and notes that the ‘appraiser’ DLL in question pertains to checking whether a PC qualifies for the upgrade. In other words, this is part of the code that verifies whether any given system is compatible and okay to have 25H2 installed.
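If you're curious enough to go string-hunting yourself, a quick Python sketch along these lines would do it. The file path is illustrative; in practice you'd copy appraiserres.dll out of C:\Windows\System32 (or a Windows 11 install image) to somewhere readable first:

import re

# Illustrative path; copy the DLL out of C:\Windows\System32 first
with open("appraiserres.dll", "rb") as f:
    data = f.read()

# PE files often store strings as UTF-16LE as well as ASCII, so try both
for label, pattern in (("ascii", b"GE25H2"),
                       ("utf-16", "GE25H2".encode("utf-16-le"))):
    for match in re.finditer(re.escape(pattern), data):
        print(f"{label} match at offset {match.start():#x}")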

Asus Zenbook A14 laptop at Windows login screen

(Image credit: Future / Jasmine Mannan)

Analysis: Probably an enablement package for 25H2?

All of this is theoretical, of course, as Microsoft hasn’t officially said that it’s working on Windows 11 25H2, or even mentioned the name at all.

What Microsoft has told us, back in March if you recall, is that it is making “behind-the-scenes platform changes” in the new preview builds in the 26200 range. And as noted above, 26200 is mentioned and tied to 25H2 specifically in this DLL file.

Those changes being made in the background are theorized to be tweaks to the platform that underpins the desktop OS, which was refreshed to a new model called Germanium with Windows 11 24H2. As another leaker, Zac Bowden, informed us last month, it’s very likely that all this is laying the early groundwork for 25H2, which could be a much more minor update than 24H2, a huge undertaking given that shift to Germanium.

The change from the previous 26100 builds to the 26200 range is a small increment, suggesting that 25H2 will be a similarly scaled-down update. Indeed, as Windows Latest points out, it’ll probably be what’s called an ‘enablement package’, in the same way that 23H2 was built on 22H2. That means any new features (doubtless a small number of them) are already in place in Windows 11, and the update simply switches them on.

All of this is guesswork at this point, although with this new leak, it seems just a tad more likely that this is how things will unfold.

The potentially good news about 25H2 being a lesser update is that with fewer changes, there should be fewer bugs, too. The 24H2 update has proven seriously problematic, with gremlins in the works partly because of all the deep tinkering in the guts of Windows 11 that was required to usher in the Germanium platform.

You might also like...
