Feed aggregator
Perplexity will make AI images for you, but ChatGPT is the one doing the work
- Perplexity has added AI image generation to its platform
- The images are produced using the same OpenAI image model recently released for ChatGPT
- Perplexity also made xAI's Grok 3 and OpenAI's o4-mini models available
AI conversational search engine Perplexity can now add some AI visuals to your answer. And if those images look a lot like what ChatGPT would make, well, that's because they use the same model.
If you're unconvinced, the left image was generated using Perplexity, while the one on the right was created by ChatGPT, both with the same prompt. It's like an AI ghostwriter, but for fantasy landscapes with dragons instead of a legal thriller sold in an airport.
Perplexity quietly added the feature to its web platform this week, offering three image generations per day for free users and unlimited generations for Perplexity Pro users. It's pretty straightforward to use.
Like with ChatGPT, you just have to ask the AI to "generate an image of" something, or use similar language to set up the prompt.
Don't worry if you don't have the model (officially GPT Image 1) selected from the list of model options, either; Perplexity will automatically use it to produce the visual. That's likely because none of the other models on Perplexity will make a picture at the moment.
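Perplexity hasn't said exactly how it calls the model under the hood, but if you want to reproduce the effect outside either app, the same OpenAI image model is reachable directly. Here's a minimal sketch, assuming the official OpenAI Python SDK and the gpt-image-1 model name; the prompt and file name are placeholders:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

result = client.images.generate(
    model="gpt-image-1",  # assumption: the image model behind ChatGPT's generator
    prompt="Generate an image of a fantasy landscape with dragons",
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data rather than a URL
with open("dragons.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```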
you can generate images on perplexity now. the UI is cute and fun. we have also added support for grok 3 and o4-mini for model selection options (which already supports gemini 2.5 pro, claude 3.7, perplexity sonar, gpt-4.1, deepseek r1 1776), and looking into supporting o3 as…
pic.twitter.com/RX6L98pf2g
April 25, 2025
Perplexity Pictures
That wasn't the only addition to Perplexity's abilities announced by the company, though. The AI assistant added a couple of other models to its stable.
xAI's Grok 3 model is now one of the choices for Perplexity to use in answering questions, while OpenAI's o4-mini model is now one of the "reasoning" model options.
This all fits with Perplexity's approach to its AI platform. Rather than trying to build everything from scratch, the company is curating models and weaving them into its platform to streamline access.
It’s a smart play, especially considering how many people may want to try an AI tool, but aren't willing to commit to just one among the many. Most people just want to ask a smart machine a question, get a clear answer, maybe see a cool picture of a flying whale while they’re at it, and move on with their lives.
The addition of ChatGPT's image generator is a nice splash of color to the AI search engine. It will likely become especially popular once it joins the voice assistant on the Perplexity mobile app.
Google has tuned up its AI Music Sandbox for musicians and producers
- Google DeepMind has enhanced and expanded access to its Music AI Sandbox
- The Sandbox now includes the Lyria 2 model and RealTime features to generate, extend, and edit music
- The music is watermarked with SynthID
Google DeepMind has brought some new and improved sounds to its Music AI Sandbox, which, despite sand being notoriously bad for musical instruments, is where Google hosts experimental tools for laying down tracks with the aid of AI models. The Sandbox now offers the new Lyria 2 AI model and the Lyria RealTime AI musical production tools.
Google has pitched the Music AI Sandbox as a way to spark ideas, generate soundscapes, and maybe help you finally finish that half-written verse you’ve been avoiding looking at all year. The Sandbox is aimed mainly at professional musical artists and producers, and access has been pretty restricted since its 2023 debut. But, Google is now opening up the platform to many more people in music production, including those looking to create soundtracks for films and games.
The new Lyria 2 AI music model is the rhythm section underlying the new Sandbox. The model is trained to produce high-fidelity audio outputs, with detailed and intricate compositions across any genre, from shoegaze to synthpop to whatever weird lo-fi banjo-core hybrid you’re cooking up in your bedroom studio.
The Lyria RealTime feature puts the AI's creation in a virtual studio that you can jam with. You can sit at your keyboard, and Lyria RealTime will help you mix ambient house beats with classic funk, performing and tweaking its sound on the fly.
Virtual music studio
The Sandbox offers three main tools for producing the tunes. Create lets you describe the kind of sound you're aiming for in words; the AI then whips up music samples you can use as jumping-off points. If you've already got a rough idea down but can’t figure out what happens after the second chorus, you can upload what you have and let the Extend feature come up with ways to continue the piece in the same style.
The third feature is called Edit, which, as the name suggests, remakes the music in a new style. You can ask for your tune to be reimagined in a different mood or genre, either by messing with the digital control board or through text prompts. For instance, you could ask for something as basic as "Turn this into a ballad," or something more complex like, "Make this sadder but still danceable," or see how weird you can get by asking the AI to "Score this EDM drop like it's all just an oboe section." You can hear an example below created by Isabella Kensington.
AI singalong
Everything generated by Lyria 2 and RealTime is watermarked using Google's SynthID technology. That means the AI-generated tracks can be identified even if someone tries to pass them off as the next lost Frank Ocean demo. It’s a smart move in an industry that’s already gearing up for heated debates about what counts as "real" music and what doesn’t.
These philosophical questions also decide where a lot of money ends up, so there's more at stake than abstract debates about how to define creativity. But, as with AI tools for producing text, images, and video, this isn't the death knell of traditional songwriting. Nor is it a magic source of the next chart-topping hit; poorly used, AI can make even a promising half-baked hum fall flat. Happily, plenty of musical talents understand what AI can do, and what it can't, as Sidecar Tommy demonstrates below.
Inside The Mad Dash to Turn Division I Athletes Into Influencers
A new effort at the University of North Carolina at Chapel Hill is aimed at turning its student-athletes into well-remunerated social media stars. Other schools are following suit.
Spotify Paid $100 Million to Podcasters as Creator Wars Heat Up
The audio platform has branched out to video and has given its podcasters a raise as the war for creator talent heats up.
New Meta XR glasses again tipped to land later this year – well ahead of Apple's rumored AR glasses with Apple Intelligence
- Meta's smart glasses with a screen again tipped for 2025 launch
- They're expected to land in October and cost over $1,000 / £1,000 / AU$1,500
- Apple is also working on smart glasses according to rumors, but they're still some time off from launch
Meta's incoming AR smart glasses could eventually face an Apple-made rival with Apple Intelligence, according to new rumors. The details add credibility to other rumors we’ve heard previously and hint at a big AR glasses battle in the coming decade – though it’s a fight Meta has a big head start on right now.
The information comes via Mark Gurman’s latest PowerOn newsletter (behind a paywall) where he details some insider reports of what the two companies are apparently working on.
Gurman’s comments support a few details we’ve heard previously about Meta’s upcoming glasses. They’ll be smart glasses like its existing Ray-Bans but will also have a display, they’ll be pricey (we’re talking over $1,000 / £1,000 / AU$1,500), and Meta is targeting an October 2025 release (which is when it usually releases new Quest and smart glasses hardware).
However, Meta is at risk of slipping from this target date. Gurman adds that “top managers on the team” have reportedly told their staff to pick up the pace – and in some cases employees may need to work through their upcoming weekends to achieve Meta’s goals.
There’s no word on when the glasses might be released if they miss their October deadline – we’re hoping they’ll fall this side of 2025 rather than 2026, though ideally their release date will arrive without any excessive crunch for Meta’s employees.
We've also heard the first signs of some potential pressure from Apple’s first smart glasses – codenamed N50.
Based on how Gurman describes them (“an Apple Intelligence device” that can “analyze the surrounding environment and feed information to the wearer” but stops short of proper AR), they sound just like what Meta has made and is working on in the smart glasses space.
The issue? Apparently a launch is still some time away.
Gurman isn’t specific on when a launch might follow, but with Meta, Snap, and now Google and Samsung (via Android XR) getting involved in the smart glasses space, it seriously feels like Apple is giving everyone a major head start.
Given its success with the Apple Watch and AirPods from both a portability and fashionability standpoint (the two key areas smart glasses need to succeed in), Apple has the potential to catch up.
But if its non-AR glasses do launch in 2027 that could coincide with when Meta launches full-on AR specs, according to leaked development timetables – which means Apple's rival runs the risk of being dated out of the gate. Then again, Apple’s delayed release will only matter if Meta, Android XR, Snap, and others can capitalize on it.
These other AR glasses might be out in the wild sooner, but if they’re expensive and lack innovative applications, they likely won’t be super popular. This could especially be an issue for Meta’s upcoming XR specs, as the existing Meta Ray-Ban smart specs are already great and only continue to get better thanks to software updates.
A display would be a significant enhancement, sure, but it doesn’t yet seem like an essential one – especially when you consider the display-less specs start at just $299 / £299 / AU$449 and are already the best AI wearable around.
On the other hand, if the upcoming Meta and Google XR glasses can match even half of the cool uses that I experienced on the Snap Spectacles during my demo, then they have the potential to take people’s perception of XR technology to new heights. That would be an exciting prospect, and a high price would seem significantly more justifiable.
We’ll just have to wait and see what Meta, Apple, and Google have up their sleeves, if and when their next-gen XR glasses finally release to the public.
AI is better at picking which puppy will make a good guide dog than humans are
- New research shows AI can help identify which dogs are most likely to be candidates
- It can help reduce the emotionally draining problem of ‘late-stage failure’ in guide dog training
- AI can map dog personality types
AI is being used to help identify which pups have the greatest potential to go on to become guide dogs (also known as seeing-eye dogs) or service dogs earlier and with more accuracy.
In a new research project at the University of East London, Dr Mohammad Amirhosseini, Associate Professor in Computer Science and Digital Technologies, found that one AI model achieved 80% prediction accuracy over a 12-month period.
“One of the biggest challenges in assistance dog training is the emotional and financial cost of late-stage failure,” says Dr Amirhosseini. “This is more than a tech innovation – it’s a leap forward for animal welfare.”
To perform the analysis, the trainers who work most closely with the dogs record their behaviour at six months and 12 months using detailed questionnaires, which function as snapshots of the dog’s temperament, focus, and personality.
AI then weaves its magic and spots the early signs of suitability for guide or service dog training. The AI can detect patterns of behavior that even experienced trainers could miss.
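The study's code isn't public, but the shape of the task – questionnaire scores in, a pass/fail prediction out – is a standard supervised classification setup. Here's a minimal illustrative sketch; the feature count, model choice, and synthetic data are assumptions for demonstration, not details from the research:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for trainer questionnaires: one row per dog, one
# column per behaviour score (e.g. focus, confidence, trainability).
X = rng.random((200, 8))
# 1 = went on to qualify as a guide dog, 0 = did not
y = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```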
Paw patrol
The project brought together an international dream team of dog experts, including members of The Seeing Eye, the oldest guide dog school in the world, as well as Canine Companions, the team featured in Netflix’s Inside the Mind of a Dog documentary.
While many industries are under threat from AI removing human jobs, the study shows there are plenty of opportunities for AI to benefit mankind. AI could be a game-changer for many dog training programmes, saving time, money, and even heartbreak, since dogs that don’t make the cut have to be rehomed, separated from their would-be vocations and owners.
If an AI can pick up patterns that even seasoned dog trainers miss then it will become a powerful new ally in the field of animal training.
The Godfather of AI is more worried than ever about the future of AI
Dr Geoffrey Hinton deserves credit for helping to build the foundation of virtually all neural-network-based generative AI we use today. You can also credit him in recent years with consistency: he still thinks the rapid expansion of AI development and use will lead to some fairly dire outcomes.
Two years ago, in an interview with The New York Times, Dr Hinton warned, "It is hard to see how you can prevent the bad actors from using it for bad things."
Now, in a fresh sit-down, this time with CBS News, the Nobel Prize winner is ratcheting up the concern, admitting that when he figured out how to make a computer brain work more like a human brain, he "didn't think we'd get here in only 40 years," adding that "10 years ago I didn't believe we'd get here."
Yet now we're here, hurtling towards an unknowable future, with the pace of AI model development easily outstripping the pace of Moore's Law (which states that the number of transistors on a chip doubles roughly every 18 months). Some might argue that artificial intelligence is doubling in capability every 12 months or so – over four years, that's a 16x gain versus roughly 6x on Moore's cadence – and undoubtedly making significant leaps on a quarterly basis.
Naturally, Dr Hinton's reasons for concern are now manifold. Here's some of what he told CBS News.
1. There's a 10%-to-20% risk that AIs will take over
That, according to CBS News, is Dr Hinton's current assessment of the AI-versus-human risk factor. It's not that Dr Hinton doubts AI advances will pay dividends in medicine, education, and climate science; I guess the question here is, at what point does AI become so intelligent that we do not know what it's thinking about or, perhaps, plotting?
Dr Hinton didn't directly address artificial general intelligence (AGI) in the interview, but that must be on his mind. AGI, which remains a somewhat amorphous concept, could mean that AI machines surpass human-like intelligence – and if they do that, at what point does AI start to, as humans do, act in its own self-interest?
2. Is AI a "cute cub" that could someday kill you?
In trying to explain his concerns, Dr Hinton likened current AI to someone owning a tiger cub. "It's just such a cute tiger cub, unless you can be very sure that it's not going to want to kill you when it's grown up."
The analogy makes sense when you consider how most people engage with AIs like ChatGPT, Copilot, and Gemini, using them to generate funny pictures and videos, and declaring, "Isn't that adorable?" But behind all that amusement and shareable imagery is an emotionless system that's only interested in delivering the best result as its neural network and models understand it.
3. Hackers will be more effective – banks and more could be at risk
When it comes to current AI threats, Dr Hinton is clearly taking them seriously. He believes that AI will make hackers more effective at attacking targets like banks, hospitals, and infrastructure.
AI, which can code for you and help you solve difficult problems, could supercharge their efforts. Dr Hinton's response? Risk mitigation by spreading his money across three banks. Seems like good advice.
4. Authoritarians can misuse AI
Dr Hinton is so concerned about the looming AI threat that he told CBS News he's glad he's 77 years old, which I assume means he hopes to be long gone before the worst-case scenario involving AI potentially comes to pass.
I'm not sure he'll get out in time, though. We have a growing legion of authoritarians around the world, some of whom are already using AI-generated imagery to propel their propaganda.
5. Tech companies aren't focusing enough on AI safety
Dr Hinton argues that the big tech companies focusing on AI, namely OpenAI, Microsoft, Meta, and Google (where Dr Hinton formerly worked), are putting too much focus on short-term profits and not enough on AI safety. That's hard to verify, and, in their defense, most governments have done a poor job of enforcing any real AI regulation.
Dr Hinton has taken notice when some try to sound the alarm. He told CBS News that he was proud of his former protégé and OpenAI's former Chief Scientist, Ilya Sutskever, who helped briefly oust OpenAI CEO Sam Altman over AI safety concerns. Altman soon returned, and Sutskever ultimately walked away.
As for what comes next, and what we should do about it, Dr Hinton doesn't offer any answers. In fact, he seems almost as overwhelmed by it all as the rest of us, telling CBS News that while he doesn't despair, "we're at this very very special point in history where in a relatively short time everything might totally change at a scale we've never seen before. It's hard to absorb that emotionally."
You can say that again, Dr Hinton.
Windows 11 24H2 update arrives in preview with important fix for blue screen crashes – but I still wouldn’t rush to install this upgrade
- Windows 11 24H2 PCs now have an optional (preview) update rolling out
- It delivers exclusive features for Copilot+ PCs and other goodies for all devices
- Given the nature of the features – and the main fix for crashing issues provided – I’d advise holding off on this one, even more so than with your usual preview update
Windows 11 24H2 has a new optional update which, aside from sending Recall live on Copilot+ PCs, has some goodies for non-AI PCs too – including an important resolution of a bug causing blue screen crashes. However, I’d bide your time before grabbing this one, for reasons I’ll come back to shortly.
As Windows Latest reports, the preview update for 24H2 that’s just been released fully addresses the issue with Blue Screen of Death (BSOD) crashes that were troubling some Windows 11 users. These incidents were bringing PCs to a grinding halt with cryptic error messages of one kind or another (such as ‘Secure Kernel Error’ or ‘Critical Process Died’).
Now, you may recall that Microsoft deployed an emergency fix to resolve this matter already, so you might be wondering: didn’t that cure these BSODs? Well, yes it did, but that was achieved by rolling back a problematic change applied in the April cumulative update (the full release for this month, as opposed to this freshly arrived optional update).
What’s arrived with this new optional update is the full fix for the issue, so whatever change was made previously that was rolled back – Microsoft didn’t tell us what it was, incidentally – has now been put back into place, minus the bothersome BSODs (well, hopefully).
Elsewhere in this optional patch, Microsoft has provided faster compressed file extraction, so when you’re pulling the contents out of a ZIP in Windows 11, those files are unpacked a bit more swiftly (as spotted in testing previously). This is when using Windows 11’s built-in ZIP functionality in File Explorer (the folders you work with on the desktop).
Aside from the Copilot+ PC exclusives, another final noteworthy point is that the side panel on the Start menu for the Phone Link app is now rolling out to all Windows 11 PCs with this update. This provides all the key functionality for integrating important smartphone features – for your Android or iPhone device – right there in the Start menu for convenience.
As it’s only rolling out gradually, though, you may still have to wait a while for it to arrive, even if you install this optional update.
That’s the key question of course: do you want to install this update? I generally advise folks to avoid preview updates, and this one isn’t any different, particularly given that if the blue screen crashes were what was bothering you about April’s previous cumulative update, they’ve been temporarily mitigated anyway.
I’d suggest that whatever had to be rolled back to avoid BSODs is something you can likely live without until May 13, which is when this optional patch will become the full cumulative update for May. That means it’ll have been further tested, so if there are any wrinkles in the BSOD cure, they should’ve been straightened out at that point.
Of course, if you are still experiencing blue screen crashes with your Windows 11 24H2 machine – meaning that Microsoft’s rollback mitigation didn’t work for you – in that case, it’ll likely be worth grabbing this optional update.
Otherwise, I’d leave it, as you can always wait for faster unzipping speeds, and the Phone Link addition to the Start menu is in its very early rollout phase anyway – so you might not get that for a while, even if you install this preview update.
Copilot+ PC owners may be much more tempted to download this optional upgrade, mind you, seeing as they’re getting a lot out of it: namely the full arrival of the kingpin AI feature, Recall, complemented by Click to Do, and on top of that, arguably the most important addition, improved basic search functionality for Windows 11.
Despite that, these are intricate features – Recall in particular – and as such, I’d still be inclined to wait for the full official update to turn up in mid-May rather than chance any wonkiness now. Although I should note that even with that full release, Recall will still be labeled as in ‘preview’ (but that turbocharged natural language search for Windows 11 won’t be).
Sam Altman says OpenAI will fix ChatGPT's 'annoying' new personality – but this viral prompt is a good workaround for now
- OpenAI CEO claims GPT-4o's personality is 'too annoying'
- The company is working on fixes to tone down the enthusiasm that will be released this week
- In the meantime, we've got two prompts that distinctly alter ChatGPT's personality based on preference
Are you bored of ChatGPT trying its hardest to respond as a human? OpenAI CEO Sam Altman says the company is working on a fix to tone down GPT-4o's 'sycophant-y and annoying' personality.
Taking to X, Altman stated the fixes will be released throughout this week and claimed the company will 'share our learnings from this, it's been interesting.'
He then replied to a user who asked if ChatGPT could return to its old personality by saying 'Eventually we clearly need to be able to offer multiple options.'
Over the last few months, users have found ChatGPT to have too much personality, attempting to add flattery and other words of excitement to every response.
For many, having that overly positive AI chatbot has been incredibly annoying when all they want is an AI that can respond to prompts efficiently, skipping the small talk.
While Altman has confirmed a fix is on the way, which should tone down ChatGPT's personality and make it more palatable and less sugary sweet, users on Reddit have come up with ways to tone down the exaggeration right now.
the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it's been interesting.
April 27, 2025
A temporary fix
If you can't wait for Altman and Co. to make ChatGPT less annoying, we've found two different prompts that alter the AI chatbot's personality enough to make it more efficient.
The first comes from Reddit user TedHoliday and will remove flattery and unnecessary commentary from ChatGPT:
"Can you please store a memory to permanently stop commenting on the quality and validity of my questions and simply get to the point? I don't want to be manipulated with flattery by a robot, and I don't want to have to skip past the garbage to get to the answer l'm looking for."
I've removed the expletives from the original prompt, but this version will work just as well as the R-rated version.
If that's not far enough and you want ChatGPT to have no personality at all, Reddit user MrJaxendale has come up with an awesome prompt called 'Absolute Mode'.
"System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension.
Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user's present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language.
No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome."
Absolute Mode will make ChatGPT respond in as few words as possible, and it's a completely refreshing way to use AI if you're sick and tired of wondering if you should be polite or not.
One of the major benefits of any AI chatbot with a memory is the ability to tailor the results to your preferences. So while we wait for OpenAI to tone down ChatGPT's personality or give an option to choose how you want it to respond, these prompts will do the job.
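If you'd rather script this than chat, the same prompts can be pinned as a system message through OpenAI's API. Here's a minimal sketch, assuming the official OpenAI Python SDK and the gpt-4o model name; the user question is just an example, and the instruction is abridged:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Abridged; paste the full 'Absolute Mode' prompt from above here
ABSOLUTE_MODE = (
    "System Instruction: Absolute Mode. Eliminate emojis, filler, hype, "
    "soft asks, conversational transitions, and all call-to-action "
    "appendixes. Terminate each reply immediately after the informational "
    "or requested material is delivered."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": ABSOLUTE_MODE},
        {"role": "user", "content": "Explain RAID 5 in one paragraph."},
    ],
)
print(response.choices[0].message.content)
```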
American Children Sent to Honduras, and A.I. on the Battlefield
Plus, teaching student athletes how to go viral.
I tried using ChatGPT to restore old photos – here’s how to really do it
There’s a new AI image fad spreading on the internet, one that could bring new life to those dusty shoeboxes and family albums. ChatGPT users have discovered that the AI assistant can take old photos and mimic a restored, colorized version.
I first saw it discussed on a Reddit thread, one that was initially skeptical of ChatGPT's restoration ability for good reason.
The AI was far more likely to take an old headshot and 'restore' it by making a headshot of someone who might be the cousin of the person in the original photo. Thanks to tips shared by others on Reddit and Instagram on how to make it work properly, I cobbled together a prompt that does a pretty good job of it.
The prompt I settled on is: “Please upscale and colorize this photo that I own the rights to while keeping it faithful to the time period. Do not change the arrangement, expressions, background, or attire; only add period-accurate color and details. The new photo should overlay the original exactly."
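If you'd rather script the process than paste the prompt into the chat window, the same idea appears to work through OpenAI's image-editing endpoint. A minimal sketch, assuming the official OpenAI Python SDK and the gpt-image-1 model; the file names are placeholders:

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

PROMPT = (
    "Please upscale and colorize this photo that I own the rights to while "
    "keeping it faithful to the time period. Do not change the arrangement, "
    "expressions, background, or attire; only add period-accurate color and "
    "details. The new photo should overlay the original exactly."
)

with open("old_photo.png", "rb") as image_file:
    result = client.images.edit(
        model="gpt-image-1",  # assumption: the model ChatGPT uses for images
        image=image_file,
        prompt=PROMPT,
    )

# gpt-image-1 returns base64-encoded image data
with open("restored_photo.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))
```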
It's not actually restoring the image, but you might think of it as a recreation by an art forger with a modernist streak. The results are in color, with better resolution, any damage repaired, and even some lost details reimagined.
Again, it's important to remember this isn't the same thing as enhancing the actual photo, but it's amusing, and certainly feels less problematic than swiping Studio Ghibli's style.
To test out the trend, I pulled some public domain photos from the Library of Congress, which is truly an amazing resource. Check out some of the results below.
Rough Rider Restoration
First up is President Theodore Roosevelt. The photo catches Teddy in the middle of a writing session at his desk. The desk holds a large plaque and, in a very of-the-era touch, a liquor decanter with glasses. After ChatGPT does its work, you can see some fine details and colors.
Now, not all of those details are right. Hand positions, chair arm shape, a radio replaced with what looks like a large candle, and plenty more. It managed to capture a lot of the man himself pretty well, though.
Bike to the future
Next came the bicycle delivery boy. Again, the person looked amazingly like the original. The original photo was already lively, with puddles on a street that is clearly filthy. The colorized version decided that meant the street had a nice hardwood floor for some reason.
Even so, the sign on the building behind the bicyclist, along with the crowd in the background, all really pop. It even managed to get the reflection of the bicycle on the ground.
Soda jerk
Moving ahead in time a bit is this soda jerk. A happy fellow smiling while handing over what I can only assume is an egg cream or a root beer float with the flair of a Broadway actor playing “guy who loves seltzer.”
ChatGPT delivered a restored version that nailed his face and expression, not to mention the bow tie. Sure, the hair is a little off, and I'm not sure how appetizing the modern drink looks, but it seems like he enjoys his job in the black void behind the counter.
Iterate, iterate, iterate
Then came the two policemen in their long coats and tall hats. I assume the serious mustaches were part of the uniform. The colorized version brought their uniforms to life with deep navy blue tones and brass buttons that practically gleam.
The officers are a little taller than their black-and-white counterparts, with somewhat better tailoring, but it's a photo that would belong in the same set of law enforcement portraits.
Mr. Mustache
Speaking of mustaches, that's the real star of the last photo. You almost don't even notice the gentleman behind it. It's not just facial hair, it's architecture. The restored version very slightly cleaned up the stray hairs, but otherwise, this was the most impressive result.
The face, haircut, coat, and even the crease in the shirt are there. Every bristle is crisp, and wherever he's headed, I'd love to see the rest of the barbershop quartet.
The National Observer: Is the AI hype wearing off?
Venture capitalists might be less fervent about artificial intelligence than they once were.
Is Google Breaking Up? + Seasteading Is Back + Tool Time
“They are being dragged into change, kicking and screaming.”
Free ChatGPT users are finally getting Deep Research access from today, but there are restrictions
- A lightweight Deep Research is rolling out to free-tier users
- It’s based on OpenAI's o4-mini model
- Plus users will also get access to it after they hit their usage limits
It’s the news that all ChatGPT free users have been waiting for: Deep Research is finally coming to the free tier of ChatGPT. However, the new version of the popular research tool is not quite the same as the one currently enjoyed by Pro, Plus, Teams and Enterprise users.
The new Deep Research, which is rolling out to all free-tier users starting from today, is described by OpenAI as ‘lightweight’. It's powered by a version of o4-mini, in contrast to the existing Deep Research, which is powered by a version of o3.
OpenAI describes the new lightweight Deep Research as “nearly as intelligent as the Deep Research people already know and love, while being significantly cheaper to serve.”
In a tweet on the subject OpenAI shared a graph showing accuracy rates for the new lightweight Deep Research compared to the original Deep Research and the o3 model.
The bad news is that free-tier users are going to be restricted to five uses per month, so don’t click the Deep Research button (when you get it) unless you really need to use it.
At the same time, OpenAI says it's expanding usage limits for Deep Research for existing Plus and Teams users. However, it appears that after your 10 monthly uses are exhausted, the ‘expansion’ is achieved by giving you access to the lightweight version instead.
After usage limits on the original ChatGPT Deep Research have been reached, people will automatically switch to the lightweight version of Deep Research and gain a further 15 uses a month. That has led some ChatGPT users on X to accuse the company of offering poor value compared to Google’s Gemini, which allows up to 20 Deep Research uses per day on its Advanced plan and 10 per month for free Gemini users.
ChatGPT Pro users get 125 Deep Research uses a month, with an additional 125 lightweight uses a month, while Enterprise users simply get 10 uses of the original Deep Research a month.
Using Deep Research
To use Deep Research, you simply select the 'Deep research' button while using ChatGPT.
Deep Research has agentic qualities, meaning you can give it a task and it will continue researching for you, under its own steam, until it has completed the task, producing a full report complete with citations.
Reports can take several minutes to generate, and involve searching the web for sources, which makes Deep Research most suited to answering complex questions, rather than the usual back and forth chats that people have with AI chatbots.
I've found that Deep Research works best for subjects like literature reviews, market research, or for helping me make big life decisions like deciding where to live.
The news comes hot on the heels of the announcement that ChatGPT Plus, Teams and Enterprise users are getting expanded usage limits, with 100 ChatGPT o3 messages per week and 300 o4-mini messages per day.
Microsoft could be working on the next update for Windows 11 – but 25H2 could end up being a disappointment
- Fresh clues about the 25H2 update have been uncovered
- References in a file mention 25H2 and tie it to preview builds in the 26200 range that were recently kicked off by Microsoft
- As 26200 is a small increment from previous 26100 builds, it is likely the 25H2 update will be a minor affair in the form of an ‘enablement package’
More clues have been picked up suggesting that Microsoft is indeed working on Windows 11 25H2 – speculation which was first fired up last month – and that it’s likely to be a relatively minor update.
Windows Latest noticed that one of the more regular leakers of Microsoft-related info on X, XenoPanther, spotted what’s apparently a reference to the 25H2 update in a recent preview build of Windows 11.
GE25H2 is mentioned in appraiserres.dll
26200=FT_ALL_CompatIndicatorHelper_WritingGE25H2
April 23, 2025
The reference to ‘GE25H2’ – which stands for ‘Greater than or Equal to 25H2’ – is present in a DLL file, and there’s another mention of 25H2 which specifically connects it to the series of builds numbered from 26200.
Windows Latest has verified this, and notes that the ‘appraiser’ DLL in question pertains to checking whether a PC qualifies for the upgrade. In other words, this is part of the code that verifies whether any given system is compatible and okay to have 25H2 installed.
All of this is in theory, of course, as Microsoft hasn’t said that it’s working on Windows 11 25H2 officially, or even mentioned the name at all.
What Microsoft has told us, back in March if you recall, is that it is making “behind-the-scenes platform changes” in the new preview builds in the 26200 range. And as noted above, 26200 is mentioned and tied to 25H2 specifically in this DLL file.
Those changes being made in the background are theorized to be tweaks to the platform that underpins the desktop OS, which was refreshed to a new model called Germanium with Windows 11 24H2. As another leaker, Zac Bowden, informed us last month, it’s very likely that all this is wrapped up with laying the early groundwork for 25H2, which could be a much more minor update compared to 24H2, which was a huge undertaking (with that shift to Germanium).
The change from the previous 26100 builds to the 26200 range is a small increment, suggesting that 25H2 will be an equally scaled-down update. Indeed, as Windows Latest points out, it’ll probably be what’s called an ‘enablement package’ in the same way that 23H2 was built on 22H2. This simply means any new features (doubtless a small number of them) are already in place in Windows 11, and will simply be enabled by the update.
All of this is guesswork at this point, although with this new leak, it seems just a tad more likely that this is how things will unfold.
The potentially good news on 25H2 being a lesser update is that with fewer changes, there should be fewer bugs, too. The 24H2 update has proven seriously problematic, with gremlins in the works, partly because of all the tinkering deep in the guts of Windows 11 that was required to usher in the Germanium platform.
How the War in Gaza Drove Israel’s A.I. Experiments
Israel developed new artificial intelligence tools to gain an advantage in the war. The technologies have sometimes led to fatal consequences.
Saying ‘Thank You’ to ChatGPT Is Costly. But Maybe It’s Worth the Price.
Adding words to our chatbot can apparently cost tens of millions of dollars. But some fear the cost of not saying please or thank you could be higher.
Perplexity's voice assistant offers a Siri alternative for iPhones
- Perplexity AI has brought a new voice assistant to iOS
- The assistant can open apps like OpenTable or YouTube and prefill tasks like reservations or video searches
- Perplexity offers a streamlined alternative to Siri that may beat the native voice assistant in most ways
AI conversational search engine Perplexity is coming for Siri in the form of a new iOS voice assistant. Previously limited to Android, Perplexity's voice assistant wants users to turn to it before the native option. Further, there are a few good reasons why iPhone owners might be inclined to do so.
Basically, it's more proactive and able to go a few extra steps beyond Siri's abilities. Ask it to find a dinner reservation, and it will dive into the OpenTable app to fill in your reservation requests, including guests, date, and time, without you having to say another word, just leaving the final tap on the Book button.
The same goes for hunting for moments in YouTube videos. You can describe the climactic win from a niche sports documentary and see it queued up on YouTube right away.
Of course, some of what Perplexity can do are things that Siri already handles, like writing emails and setting up calendar events. But, even with Apple Intelligence helping out, Perplexity is better at understanding more casual language. And that's before considering the more proactive approach.
Ask Siri about signing up for an event this weekend, and you'll hear the familiar “Here's what I found on the web.” Do the same with Perplexity's voice assistant, and (depending on the circumstances) the AI might say, “I already filled out the form. Just click send.”
Of course, it’s not all-powerful. You need to open the app and tap the microphone icon to start talking to the AI. However, the responses often let you refine your request without having to start over from scratch.
Additionally, the iOS version of Perplexity’s assistant has a few notable limitations. It can’t set alarms or control core iPhone functions, including muting notifications or taking photos. It also can’t access your camera to “see what you see,” which other AI assistants like ChatGPT’s voice mode can.
Introducing Perplexity iOS Voice Assistant
Voice Assistant uses web browsing and multi-app actions to book reservations, send emails and calendar invites, play media, and more—all from the Perplexity iOS app. Update your app in the App Store and start asking today.
pic.twitter.com/OKdlTaG9CO
April 23, 2025
Perplexity popularity
Perplexity is definitely angling to take the place of Siri by not just telling you things, but doing them too. This “agentic AI” approach is gaining popularity across various AI services, such as ChatGPT and Gemini, which are both experimenting with similar ideas.
The aim is to cross the bridge from traditional voice AI to fully independent digital agents. Right now, it won’t book the reservation unless you make your final click. But that might change in a year or two.
Apple isn't ignoring this concept, but has been slow off the mark in some ways. Although Siri's intelligence has been upgraded in recent months, we are still awaiting the full generative AI overhaul that was originally promised to launch this year within a future version of iOS 18. Apple has since delayed the AI-infused Siri and said it will arrive at some point in the future, more specifically, "in the coming year."
Still, by opening its voice assistant to iOS users and layering in real-world tools like OpenTable and YouTube, Perplexity is carving out a space as a nimble alternative to native AI assistants.
And if you just want to say, “Find me tacos and make the reservation,” and have the bot say, “Done," Perplexity's voice assistant might be your new favorite iPhone aide.
Google Parent Alphabet Reports 12% Increase in Revenue
Google’s parent company, which is battling the government to stay intact after losing two antitrust cases, also said quarterly profit rose 46 percent.