Trump Is Selling a Phone + The Start-Up Trying to Automate Every Job + Allison Williams Talks ‘M3GAN 2.0’
“They’re calling it the T1 Phone 8002 Gold Version, which sounds kind of like a Taylor Swift album.”
New research says using AI reduces brain activity – but does that mean it's making us dumber?
Amid all the debates about how AI affects jobs, science, the environment, and everything else, there's a question of how large language models impact the people using them directly.
A new study from the MIT Media Lab implies that using AI tools reduces brain activity in some ways, which is understandably alarming. But I think that's only part of the story. How we use AI, like any other piece of technology, is what really matters.
Here's what the researchers did to test AI's effect on the brain: They asked 54 students to write essays using one of three methods: their own brains, a search engine, or an AI assistant, specifically ChatGPT.
Over three sessions, the students stuck with their assigned tools. Then they swapped, with the AI users going tool-free, and the non-tool users employing AI.
EEG headsets measured their brain activity throughout, and a group of humans, plus a specially trained AI, scored the resulting essays. Researchers also interviewed each student about their experience.
As you might expect, the group relying on their brains showed the most engagement, the best memory, and the strongest sense of ownership over their work, as evidenced by how much they could quote from their own essays.
The ones using AI at first had less impressive recall and brain connectivity, and often couldn’t even quote their own essays after a few minutes. When writing manually in the final test, they still underperformed.
The authors are careful to point out that the study has not yet been peer-reviewed. It was limited in scope, focused on essay writing, not any other cognitive activity. And the EEG, while fascinating, is better at measuring overall trends than pinpointing exact brain functions. Despite all these caveats, the message most people would take away is that using AI might make you dumber.
But I would reframe that: maybe AI isn’t dumbing us down so much as letting us opt out of thinking. Perhaps the issue isn’t the tool, but how we’re using it.
AI brains
If you use AI, think about how you use it. Did you get it to write a letter, or maybe brainstorm some ideas? Did it replace your thinking, or support it? There’s a huge difference between outsourcing an essay and using an AI to help organize a messy idea.
Part of the issue is that "AI" as we refer to it is not literally intelligent, just a very sophisticated parrot with an enormous library in its memory. But this study didn’t ask participants to reflect on that distinction.
The LLM-using group was encouraged to use the AI as they saw fit, which in practice probably meant copying without reading rather than thoughtful, judicious use. That’s why context matters.
Because the "cognitive cost" of AI may be tied less to its presence and more to its purpose. If I use AI to rewrite a boilerplate email, I’m not diminishing my intelligence. Instead, I’m freeing up bandwidth for things that actually require my thinking and creativity, such as coming up with this idea for an article or planning my weekend.
Sure, if I use AI to generate ideas I never bother to understand or engage with, then my brain probably takes a nap, but if I use it to streamline tedious chores, I have more brainpower for when it matters.
Think about it like this. When I was growing up, I had dozens of phone numbers, addresses, birthdays, and other details of my friends and family memorized. I had most of it written down somewhere, but I rarely needed to consult it for those I was closest to. But I haven't memorized a number in almost a decade.
I don't even know my own landline number by heart. Is that a sign I’m getting dumber, or just evidence I've had a cell phone for a long time and stopped needing to remember them?
We’ve offloaded certain kinds of recall to our devices, which lets us focus on different types of thinking. The skill isn’t memorizing, it’s knowing how to find, filter, and apply information when we need it. It's sometimes referred to as "extelligence," but really it's just applying brain power to where it's needed.
That’s not to say memory doesn’t matter anymore. But the emphasis has changed. Just like we don’t make students practice long division by hand once they understand the concept, we may one day decide that it’s more important to know what a good form letter looks like and how to prompt an AI to write one than to draft it line by line from scratch.
Humans are always redefining intelligence. There are a lot of ways to be smart, and knowing how to use tools and technology is one important measure of smarts. At one point, being smart meant knowing how to knap flint, decline Latin nouns, or work a slide rule.
Today, it might mean being able to collaborate with machines without letting them do all the thinking for you. Different tools prioritize different cognitive skills. And every time a new tool comes along, some people panic that it will ruin us or replace us.
The printing press. The calculator. The internet. All were accused of making people lazy thinkers. All turned out to be a great boon to civilization (well, the jury is still out on the internet).
With AI in the mix, we’re probably leaning harder into synthesis, discernment, and emotional intelligence – the human parts of being human. We don't need the kind of scribes who are only good at writing down what people say; we need people who know how to ask better questions.
It means knowing when to trust a model and when to double-check, and it means turning a tool that’s capable of doing the work into an asset that helps you do it better.
But none of it works if you treat the AI like a vending machine for intelligence. Punch in a prompt, wait for brilliance to fall out? No, that's not how it works. And if that's all you do with it, you aren't getting dumber, you just never learned how to stay in touch with your own thoughts.
In the study, the LLM group’s lower essay ownership wasn’t just about memory. It was about engagement. They didn’t feel connected to what they wrote because they weren’t the ones doing the writing. That’s not about AI. That’s about using a tool to skip the hard part, which means skipping the learning.
The study is important, though. It reminds us that tools shape thinking. It nudges us to ask whether we are using AI tools to expand our minds or to avoid using them. But to claim AI use makes people less intelligent is like saying calculators made us bad at math. If we want to keep our brains sharp, maybe the answer isn’t to avoid AI but to be thoughtful about using it.
The future isn't human brains versus AI. It’s about humans who know how to think with AI and any other tool, and avoiding becoming someone who doesn't bother thinking at all. And that’s a test I’d still like to pass.
Midjourney just dropped its first AI video model and Sora and Veo 3 should be worried
- Midjourney has launched its first AI video model, V1.
- The model lets users animate images into five-second motion clips.
- The tool is relatively affordable and a possible rival for Google Veo or OpenAI’s Sora.
Midjourney has long been a popular AI image wizard, but now the company is making moves and movies with its first-ever video model, simply named V1.
This image-to-video tool is now available to Midjourney's 20 million-strong community, who can turn their images into five-second clips and extend them in five-second increments to a total of 20 seconds.
Despite being a brand new venture for Midjourney, the V1 model has enough going on to at least draw comparisons to rival models like OpenAI’s Sora and Google’s Veo 3, especially when you consider the price.
For now, Midjourney V1 is in web beta, where you can spend credits to animate any image you create on the platform or upload yourself.
To make a video, you simply generate an image in Midjourney like usual, hit “Animate,” choose your motion settings, and let the AI go to work.
The same goes with uploading an image; you just have to mark it as the start frame and type in a custom motion prompt.
You can let the AI decide how to move it, or you can take the reins and describe how you want the motion to play out. You can pick between low motion or high motion depending on whether you want a calm movement or a more frenetic scene, respectively.
The results I've seen certainly fit into the current moment in AI video production, both good and bad. The uncanny valley is always waiting to ensnare users, but there are some surprisingly good examples from both Midjourney and initial users.
AI video battles
Midjourney isn’t trying to compete head-on with Sora or Veo in terms of technical horsepower. Those models are rendering cinematic-quality 4K footage with photorealistic lighting and long-form narratives based solely on text. They’re trained on terabytes of data and emphasize frame consistency and temporal stability that Midjourney is not claiming to offer.
Midjourney’s video tool isn’t pretending to be Hollywood’s next CGI pipeline. The pitch is more about being easy and fun to use for independent artists or tinkerers in AI media.
And it really is pretty cheap. According to Midjourney, one video job costs about the same as upscaling an image, which works out to one image's worth of cost per second of video.
That’s 25 times cheaper than most AI video services on the market, according to Midjourney and a cursory examination of other alternatives.
That's probably for the best, since a lot of Hollywood is going after Midjourney in court. The company is currently facing a high-stakes lawsuit from Disney, Universal, and other studios over claims it trained its models on copyrighted content.
For now, Midjourney's AI generators for images and video remain active, and the company has plans to expand its video production capabilities. Midjourney is teasing long-term plans for full 3D rendering, scene control, and even immersive world exploration. This first version is just a stepping stone.
Advocates for Sora and Veo probably don't have to panic just yet, but maybe they should be keeping an eye on Midjourney's plans, because while they’re busy building the AI version of a studio camera crew, Midjourney just handed a magic flipbook to anyone with a little cash for its credits.
Hybrid Cars, Once Derided and Dismissed, Have Become Popular
Automakers and car buyers are taking a second, harder look at hybrids after leaving them behind for electric vehicles.
Chinese Companies Set Their Sights on Brazil
Confronted with tariffs and scrutiny in the United States and Europe, Chinese consumer brands are betting that they can become household names in Latin America’s biggest economy.
TikTok Hits Cannes, Where a U.S. Ban Seems a Distant Dream
TikTok executives hosted happy hours and played pickleball with influencers on the French Riviera this week, even as a U.S. ban loomed over the company.
‘My kids will never be smarter than AI’: Sam Altman’s advice on how to use ChatGPT as a parent leaves me shaking my head
Sam Altman has appeared in the first episode of OpenAI’s brand new podcast, called simply the OpenAI Podcast, which is available to watch now on Spotify, Apple Podcasts, and YouTube.
The podcast is hosted by Andrew Mayne and in the first episode, OpenAI CEO Sam Altman joins the host to talk about the future of AI: from GPT-5 and AGI to Project Stargate, new research workflows, and AI-powered parenting.
While Altman's thoughts on AGI are always worth paying attention to, it was his advice on AI-powered parenting that caught my ear this time.
You have to wonder if Altman’s PR advisors have taken the day off. After being asked the softball question, “You’ve recently become a new parent, how is ChatGPT helping you with that?”, Altman somehow draws us into a nightmare scenario: a generation of AI-reared kids who have lost the ability to communicate with regular humans in favor of their parasocial relationships with ChatGPT.
“My kids will never be smarter than AI,” says Altman in a matter-of-fact way. “But also they will grow up vastly more capable than we were when we grew up. They will be able to do things that we cannot imagine, and they’ll be really good at using AI. And obviously, I think about that a lot, but I think much more about what they will have that we didn’t… I don’t think my kids will ever be bothered by the fact that they’re not smarter than AI.”
That all sounds great, but then later in the conversation he says: “Again, I suspect this is not all going to be good, there will be problems and people will develop these problematic, or somewhat problematic, parasocial relationships.“
In case you’re wondering what "parasocial relationships" are, they develop when we start to consider media personalities or famous people as friends, despite having no real interactions with them; the way we all think we know George Clooney because he’s that friendly doctor from ER, or from his movies or the Nespresso advert, when, in fact, we have never met him, and most likely never will.
Mitigating the downsides
Altman is characterizing a child’s interactions with ChatGPT in the same way, but interestingly he doesn’t offer any solutions for a generation weaned on ChatGPT Advanced Voice mode rather than human interaction. Instead he sees it as a problem for society to figure out.
“The upsides will be tremendous, and society in general is good at figuring out how to mitigate the downsides,” Altman assures the viewer.
Now I’ll admit to being of a more cynical bent, but this does seem awfully like he’s washing his hands of a problem that OpenAI is creating. Any potential problems that a generation of kids brought up interacting with ChatGPT are going to experience are, apparently, not OpenAI’s concern.
In fact, earlier in the podcast, when the host brought up the story of a parent using ChatGPT’s Advanced Voice Mode to talk to their child about Thomas the Tank Engine, instead of doing it themselves because they were bored of talking about it endlessly, Altman simply nods and says, “Kids love Voice Mode in ChatGPT.”
Indeed they do, Sam, but is it wise to let your child loose on ChatGPT’s Advanced Voice Mode without supervision? As a parent myself (although of much older children now), I’m uncomfortable hearing of young kids being given what sounds like unsupervised access to ChatGPT.
AI comes with all sorts of warnings for a reason. It can make mistakes, it can give bad advice, and it can hallucinate things that aren’t true. Not to mention that “ChatGPT is not meant for children under 13” according to OpenAI’s own guidelines, and I can’t imagine there are many kids older than 13 who are interested in talking about Thomas the Tank Engine!
I have no problem using ChatGPT with my kids, but by the time ChatGPT was available, they were both older than 13. If I were using it with younger children, I’d always make sure they weren’t using it on their own.
I'm not suggesting that Altman is in any way a bad parent, and I appreciate his enthusiasm for AI, but I think he should leave the parenting advice to the experts for now.
Your A.I. Queries Come With a Climate Cost
When it comes to artificial intelligence, more intensive computing uses more energy, producing more greenhouse gases.
Can A.I. Quicken the Pace of Math Discoveries?
Breakthroughs in pure mathematics can take decades. A new Defense Department initiative aims to speed things up using artificial intelligence.
Google Gemini’s super-fast Flash-Lite 2.5 model is out now - here’s why you should switch today
- Google’s new Gemini 2.5 Flash-Lite model is its fastest and most cost-efficient
- The model is for tasks that don't require much processing, like translation and data organization
- The new model is in preview, while Gemini 2.5 Flash and Pro are now generally available
AI chatbots can respond at a pretty rapid clip at this point, but Google has a new model aimed at speeding things up even more under the right circumstances. The tech giant has unveiled the Gemini 2.5 Flash-Lite model as a preview, joining the larger Gemini family as the smaller, yet faster and more agile sibling to the Gemini 2.5 Flash and Gemini 2.5 Pro.
Google is pitching Flash-Lite as ideal for tasks where milliseconds matter and budgets are limited. It's intended for tasks that may be large but relatively simple, such as bulk translation, data classification, and organizing information.
Like the other Gemini models, it can still process requests and handle images and other media, but the principal value lies in its speed, which is faster than that of the other Gemini 2.5 models. It's an update of the Gemini 2.0 Flash-Lite model. The 2.5 iteration has performed better in tests than its predecessor, especially in math, science, logic, and coding tasks. Flash-Lite is about 1.5 times faster than older models.
The budgetary element also makes Flash-Lite unique. While other models may turn to more powerful, and thus more expensive, reasoning tools to answer questions, Flash-Lite doesn’t always default to that approach. You can actually flip that switch on or off depending on what you’re asking the model to do.
And just because it can be cheaper and faster doesn't mean Flash-Lite is limited in the scale of what it can do. Its context window of one million tokens means you could ask it to translate a fairly hefty book, and it would do it all in one go.
Flash-Lite lit
The preview release of Flash-Lite isn't Google's only AI model news. The Gemini 2.5 Flash and Pro models, which have been in preview, are now generally available. The growing catalogue of Gemini models isn't just a random attempt by Google to see what people like. The variations are tuned for specific needs, making it so Google can pitch Gemini as a whole to a lot more people and organizations, with a model to match most needs.
Flash-Lite 2.5 isn’t about being the smartest model, but in many cases, its speed and price make it the most appealing. You don’t need tons of nuance to classify social media posts, summarize YouTube transcripts, or translate website content into a dozen languages.
That’s exactly where this model thrives. And while OpenAI, Anthropic, and others are releasing their own fast-and-cheap AI models, Google’s advantage in integration with its other products likely helps it pull ahead in the race against its AI rivals.
BYD and Other Chinese Carmakers Expand Sales in Europe Despite Tariffs
BYD and other companies doubled their share of the car market after the European Union imposed higher tariffs on electric vehicles from China.
Tesla’s Robotaxi, Long Promised by Elon Musk, Joins a Crowded Field
Mr. Musk says the driverless taxis could begin ferrying passengers on Sunday in Austin, Texas, where other companies already have similar cars on the road.
Windows 11 user has 30 years of 'irreplaceable photos and work' locked away in OneDrive - and Microsoft's silence is deafening
- A Redditor was moving a huge slab of data from old drives to a new one
- They used OneDrive as a midpoint in an ill-thought-out strategy that left all the data in Microsoft's cloud service temporarily
- When they came to download the data, they were locked out of OneDrive, and can't get Microsoft support to address this issue
A cautionary tale shared on Reddit tells the story of a Windows PC owner who used OneDrive to store 30 years' worth of their data and lost the lot when their Microsoft account was locked, with no apparent way to regain access.
This is a nasty-sounding predicament (highlighted by Neowin), to say the least: the loss of what's described as three decades of "irreplaceable photos and work", all of which had been transferred to OneDrive as a temporary storage facility.
The idea the Redditor had was that they needed to move that huge collection of files from multiple old drives where they were stored to a large new drive, and OneDrive was selected as the midpoint in that data migration journey.
So, they moved all the files off the old drives onto Microsoft's cloud storage service and prepared to transfer the data to the new drive, when they ran into a huge stumbling block. The Redditor was suddenly locked out of their Microsoft account (and therefore OneDrive, and all Microsoft services).
Now, this isn't a sensible way to manage this data transfer, of course (and I'll come back to outline why in a moment, in case you're not sure), but the point here is that the mistake happened, and the Redditor can't get any joy whatsoever from Microsoft in terms of trying to resolve the problem.
In their Reddit post, which is gaining a lot of attention, they say: "Microsoft suspended my account without warning, reason, or any legitimate recourse. I've submitted the compliance form 18 times - eighteen - and each time I get an automated response that leads nowhere. No human contact. No actual help. Just canned emails and radio silence."
They continue: "This feels not only unethical but potentially illegal, especially in light of consumer protection laws. You can't just hold someone's entire digital life hostage with no due process, no warning, and no accountability," adding that Microsoft is a "Kafkaesque black hole of corporate negligence."
Analysis: Microsoft needs to do better
Okay, so first up, very quickly - because I don't want to labor on the mistakes made by the unfortunate Redditor - this is not a good way to proceed with a drive migration.
In transferring a large slab of data like this, you should never have a single point of failure in the process - by which I mean shoving all the data into the cloud, on OneDrive, and having that as the sole copy. That's obviously the crux of the problem here, because once the user was locked out of OneDrive, they no longer had access to their data at all.
When performing such an operation, or as a general rule for any data, you should always keep multiple copies. Typically, that would be the original data on your device, a backup on a separate external drive at home (preferably two drives, in fact), and an off-site copy in a cloud storage locker like OneDrive. The point is that if you lose the original data, you can resort to, say, the external drive, but if that's also gone to the great tech graveyard in the sky somehow, you can go to the second drive (or the cloud).
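To make that multiple-copies idea concrete, here's a minimal sketch of how the migration could have been sequenced, using throwaway temporary folders standing in for the real drives (the paths and file are purely illustrative, not a recommendation of specific tools):

```python
# Sketch of a safer migration: the cloud is never the only copy, and the
# original is kept until at least two local copies have been verified.
import filecmp
import shutil
import tempfile
from pathlib import Path

src = Path(tempfile.mkdtemp())   # stands in for the old drive
dst1 = Path(tempfile.mkdtemp())  # stands in for the new drive
dst2 = Path(tempfile.mkdtemp())  # stands in for a second external backup drive

(src / "photo.txt").write_text("irreplaceable photo")

# copy 1: the new drive; copy 2: the backup drive
shutil.copytree(src, dst1, dirs_exist_ok=True)
shutil.copytree(src, dst2, dirs_exist_ok=True)

# verify both copies against the original before touching it; only after
# this would you add an off-site cloud copy and finally retire the old drive
def verified(a: Path, b: Path) -> bool:
    cmp = filecmp.dircmp(a, b)
    return not (cmp.left_only or cmp.right_only or cmp.diff_files)

migration_safe = verified(src, dst1) and verified(src, dst2)
```

Only once `migration_safe` is true does wiping the old drive become defensible; a OneDrive lockout at any point in this sequence costs you nothing, because the cloud was never the sole copy.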
Anyway, you get the point, but the Redditor chanced this way of doing things - figuring, no doubt, that as a temporary measure, it was fine to rely solely on OneDrive - but clearly, that wasn't the case.
There are a number of issues with the scenario presented here where Microsoft has fallen short of the standards that a customer would rightly expect.
Why did this happen?
First, there's the fact that the Microsoft account was simply locked with no notification or message provided as to why. The OneDrive user can only guess at why this ban was enacted (and the obvious guess is that some copyrighted material, or other content that contravened Microsoft's policies, was flagged in the uploaded files, which would trigger the account to be automatically locked). It's worth making it clear that we (obviously) don't have any idea about the contents of this data.
Secondly, with this having happened, the most worrying part here is the Redditor's description of how they feel like they're banging their head against a brick wall in trying to talk to Microsoft's support staff about how to resolve this. After all, this is essentially their whole life's worth of data, and there should be some way to at least find out what the problem is - and give the person who's been locked out a chance to explain, and potentially regain access.
For all we know, it could be a bug that's caused this. But if nobody at Microsoft is listening, then probably nobody is investigating. And if you do use OneDrive as a cloud backup, not having access to your data at a critical time is a frightening prospect indeed (which is why you must sort out those other local backups as an alternative - or indeed another cloud service, if you really want to push the 'data redundancy' boat out).
Hopefully, the Redditor will eventually get to speak to a Microsoft support agent - an actual person - to iron this out. In theory, all that data could still be on Microsoft's servers somewhere.
This incident has occurred at a time when Microsoft is pushing its account services on Windows 11 users, as you can't install the OS without one (well, you can by using loopholes, although the company is busy eradicating some of those fudges). Not to mention pushing OneDrive, Microsoft 365, and other services with ads in Windows, of course.
That broad drive is an unfortunate backdrop here when you consider another misstep recently brought to light: a potential problem with Microsoft accounts deleted by the user, which could result in the loss of the key for the default drive encryption applied with new installations of Windows 11 24H2.
Again, that nasty little (albeit niche) scenario could lead to all the data on your drive disappearing into a black hole, never to be seen again. It's another odd situation you could end up in with no recourse at all - and this, along with the Redditor's awful plight, is the kind of predicament Microsoft clearly should not be inflicting on consumers.
We've contacted Microsoft for comment about this specific case, and will update this story if we get a response from the company.
This island is getting the world’s first AI government, but I’ve read this story before – and it doesn’t end well
Sensay, a creator of AI-powered digital replicas of people, has established an AI-powered government on a real island it purchased off the coast of the Philippines. Previously known as Cheron Island, it's been renamed Sensay Island.
The Head of State (effectively, the President) of Sensay Island is Roman Emperor Marcus Aurelius, one of The Five Good Emperors of Rome, who was known for his love of Stoic philosophy and good judgement. Wartime British PM Winston Churchill is the Prime Minister, while Sun Tzu, author of the Chinese strategic classic, The Art of War, takes the reins at Defence. Alexander Hamilton is the new Treasury Secretary.
According to Sensay, “Each AI replica is designed to emulate the personality, values, and decision-making patterns of the historical figure it represents, providing a governance style infused with timeless wisdom and ethical principles. To truly emulate the character of these historical figures, each recreation is uniquely trained on the literature, teachings, philosophies, and speeches of the real-life counterparts they represent.”
How easily AI replicas from such disparate periods and with such strong characters will be able to work together in government remains to be seen, since their contrasting values must surely clash at points, not to mention be at odds with modern-day values.
The full cabinet
Here’s the full list of Sensay Island cabinet members:
Head of State (President): Marcus Aurelius
Prime Minister: Winston Churchill
Foreign Affairs Minister: Eleanor Roosevelt
Defense Minister: Sun Tzu
Treasury Secretary: Alexander Hamilton
Justice Minister: Nelson Mandela
Science & Technology Minister: Ada Lovelace
Education Minister: Confucius
Health Minister: Florence Nightingale
Agriculture Minister: George Washington Carver
Environment Minister: Wangari Maathai
Culture Minister: Leonardo da Vinci
Ethics Advisor: Mahatma Gandhi
Innovation Advisor: Nikola Tesla
Infrastructure Director: Queen Hatshepsut
Chief Strategist: Zhuge Liang
Intelligence Chief: T.E. Lawrence
Personally, I think da Vinci was a wise choice for Culture Minister, and it’s nice to see Nikola Tesla being recognized as Innovation Advisor, but I have to say I’m a little disappointed not to see Queen Cleopatra anywhere in the mix.
Confucius also presents some challenges as Education Minister, considering his unfamiliarity with modern technology, like AI.
Sensay Island is indeed a real island off the coast of the Philippines. You can find it on Google Maps. It has a surface area of around 3.4 km², comprising beaches, rainforest, and coral lagoons.
From what we can see, there doesn’t seem to be any infrastructure of any kind on the island, so if you’re thinking of a visit, be aware that there’s probably no Wi-Fi.
While an AI government feels like something of a publicity stunt, there are serious reasons why Sensay has created an AI island:
“Sensay is looking to demonstrate that AI can be deployed in national governance to aid policymaking free from political partisanship and bureaucratic delays, and with unprecedented transparency and participation”, it says.
A fly on the wall
According to Marisol Reyes, the (AI-powered) Tourism Manager for Sensay Island, who you can chat with at its website, you can visit the island whenever you like:
“Absolutely, you can visit our beautiful island! We're thrilled to welcome visitors to experience this unique blend of cutting-edge AI governance and traditional Filipino hospitality. Sensay Island is open to tourists who want to explore our pristine beaches, vibrant coral sanctuaries, and witness history in the making with our groundbreaking AI Council.”
For those without the means to visit, the good news is that you can still get involved. You will soon be able to register as an E-resident of Sensay Island, allowing you to propose new policies for its AI-powered administration via an open-access platform:
“This will combine direct democracy with AI-enhanced decision-making”, says Sensay.
Dan Thomson, CEO and founder of Sensay, added, “This project shows Sensay’s commitment to pushing the boundaries of AI in a responsible direction. I hope our approach will show the public and world leaders that AI is a feasible and efficient way to develop and implement policies."
Despite an AI-controlled civilization leading to (attempted) human extinction in just about every major Sci-Fi movie I’ve watched in the last 40 years, from Logan’s Run to The Terminator, it seems that humans are still determined to give it a go.
But could AI actually provide a more balanced and sane government than our elected officials can? There’s only one way to find out...
Windows 11’s new Start menu falls short in one key area – and it’s making people angry
- Microsoft has a Start menu redesign in testing
- This introduces new layouts for the list of all apps
- One of those layouts is a category view, and we’ve had confirmation from Microsoft that it won’t be possible to customize this to your liking
We’ve just learned more about how Microsoft’s revamped Start menu will work when it arrives in Windows 11, and not everyone is happy about the new info aired here.
Windows Latest reports on an element of customization that falls short of what some Windows 11 users were hoping for, and it pertains to one of the new layouts being introduced for the list of apps.
As you may recall, with the redesigned Start menu – which is in test builds of Windows 11 now – the long list of apps installed on the PC can be set to a couple of more compact alternative layouts, one of which is a grid and the other a category view.
It’s the latter we’re interested in here, whereby apps are grouped into different categories such as Games, Productivity, Creativity, Social, Utilities, and so forth. Each of these categories has a box in which up to four icons for the most commonly used apps appear, and the full roster of apps is found within if you open the category. All of this offers an easier way to locate the app you’re looking for than scrolling through a lengthy alphabetical list.
So, what’s the beef that’s been raised here? Windows Latest has received confirmation from Microsoft that it won’t be possible to create your own category types.
Windows 11 will, of course, make the decisions on how to categorize apps and where they belong, but there are some interesting, and less than ideal, nuances picked up by Windows Latest here.
Any app that Windows 11 isn’t sure about will go in the ‘Other’ category, for one thing. Also, if a category contains fewer than three apps – because you don’t have enough creativity apps installed on your machine, say – those stray apps (Paint, for instance) will be dumped in Other as well.
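The grouping behavior described above amounts to a simple fallback rule. Here is a minimal sketch of that rule, purely for illustration – Microsoft hasn’t published the actual logic, and the function name, data shape, and three-app threshold here are assumptions based on the reported behavior:

```python
# Hypothetical sketch of the reported Start menu grouping rule:
# apps with an unknown category, or whose category ends up with fewer
# than three members, fall into "Other".
from collections import defaultdict

MIN_APPS_PER_CATEGORY = 3  # threshold reported by Windows Latest

def group_apps(apps):
    """apps: dict mapping app name -> category name (or None if unknown)."""
    groups = defaultdict(list)
    for app, category in apps.items():
        groups[category or "Other"].append(app)

    final = defaultdict(list)
    for category, members in groups.items():
        # Undersized categories get merged into "Other".
        if category != "Other" and len(members) < MIN_APPS_PER_CATEGORY:
            final["Other"].extend(members)
        else:
            final[category].extend(members)
    return dict(final)
```

With this rule, a lone creativity app like Paint lands in "Other" alongside anything Windows couldn’t classify – exactly the behavior users are objecting to.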
Analysis: improved customization could still be offered, with any luck
If Microsoft gave folks the ability to make their own category folders, they could have a few alternative dumping grounds to Other – categories named so that the user could better remember which apps they contain.
However, with Windows 11 overseeing category allocation, it seems like Microsoft wants to keep a tight rein on the groups that are present in this part of the interface. Sadly, it isn’t possible to move an app from one category to another, either (as Windows Latest has highlighted in the past), should you disagree with where it’s been placed – and this latter ability is a more telling shortcoming here.
The new Start menu remains in testing, so Microsoft may make changes before it arrives in the finished version of Windows 11. That’s entirely possible, especially seeing as Microsoft has (again) been stressing how it’s listening to user feedback in order to better inform Windows 11’s design, the Start menu overhaul included.
So, simply being able to drag and drop icons between these categories is something we can hope for, in order to reclassify any given app – it’s a pretty basic piece of functionality, after all. We may eventually get to define our own categories, too, but for now it appears that Microsoft is taking a rather rigid approach to customization with this part of the menu.
Expect this Start menu makeover to be one of the central pillars of Windows 11 25H2 when it pitches up later this year.
You might also like...
- Windows 11's hidden PC migration feature proves Microsoft isn't messing around when it comes to killing off Windows 10
- macOS Tahoe 26 is official - here's everything you need to know about all the new features
- Can’t upgrade to Windows 11? This Linux project wants to save your old PC from the scrapheap when Windows 10 support ends
Lawmakers Demand Palantir Provide Information About U.S. Contracts
Ten Democratic lawmakers sent a letter to the tech company this week asking about its expanding contracts under the Trump administration.
I don't like the idea of my conversations with Meta AI being public – here's how you can opt out
- Meta AI prompts you to choose to post publicly in the app's Discovery feed by default
- Meta has a new warning pop-up, but accidental sharing remains a possibility
- You can opt out of having your conversations go public entirely through the Meta AI app’s settings
The Meta AI app's somewhat unique contribution to the AI chatbot app space is the Discovery feed, which allows people to show off the interesting things they are doing with the AI assistant.
However, it turns out that many people were unaware that they weren't just posting those prompts and conversation snippets for themselves or their friends to see. When you tap "Share" and "Post to feed," you're sharing those chats with everyone, much like a public Facebook post.
The Discovery feed is an oddity in some ways, a graft of the AI chatbot experience on a more classic social media structure. You’ll find AI-generated images of surprisingly human robots, terribly designed inspirational quote images, and more than a few examples of the kind of prompts the average person does not want just anyone seeing.
I've scrolled past people asking Meta AI to explain their anxiety dreams, draft eulogies, and brainstorm wedding proposals. It's voyeuristic, and not in the performative way of most social media; it's real and personal.
It seems that many people assumed sharing those posts was more like saving them for later perusal, rather than offering the world a peek at whatever awkward experiments with the AI you are conducting. Meta has hastily added a new pop-up warning to the process, making it clear that anything you post is public, visible to everyone, and may even appear elsewhere on Meta platforms.
If that warning doesn't seem enough to ensure your AI privacy on the app, you can opt out of the Discovery feed completely. Here's how to ensure your chats aren’t one accidental tap away from public display.
- Open the Meta AI app.
- Tap your profile picture or initials, whichever represents your digital self.
- Tap on "Data and Privacy" and "Manage Your Information."
- Tap on "Make all public prompts visible to only you," and then "Apply to all" in the pop-up. This ensures that when you share a prompt, only you will be able to see it.
- If that doesn't seem like enough, you can completely erase the record of any interaction you've had with Meta AI by tapping "Delete all prompts." That includes any prompt you've written, regardless of whether it's been posted, so be certain.
Of course, even with the opt-out enabled and your conversations with Meta AI no longer public, Meta still retains the right to use your chats to improve its models.
That practice is common among all the big AI providers. The data is supposedly anonymized and doesn't involve essentially publishing your private messages, but theoretically, something you and Meta AI say to each other could appear, in some form, in a chat with someone else entirely.
It's a paradox: the more data AI models have, the better they perform, yet people are reluctant to share too much with an algorithm. There was a minor furor when, for a brief period, ChatGPT conversations became visible to other users under certain conditions. It's the other edge of the ubiquitous “we may use your data to improve our systems” statement in every terms of service.
Meta’s Discovery feed simply removes the mask, inviting you to post and making it easy for others to see. AI systems are evolving faster than our understanding of them, hence the constant drumbeat about transparency. The idea is that the average user, unaware of the hidden complexities of AI, should be informed of how their data is being saved and used.
However, given how most companies typically address these kinds of issues, Meta is likely to stick to its strategy of fine-tuning its privacy options in response to user outcry. And maybe remember that if you’re going to tell your deepest dreams to an AI chatbot, make sure it’s not going to share the details with the world.
You might also like
- I tried the new Meta AI app and it's like ChatGPT for people who like to overshare
- Mark Zuckerberg wants everyone to have AI friends, but I think he's missing the point of AI, and the point of friendship
- Meta AI is now the friend that remembers everything about you
- Meta wants to fill your social media feeds with bots – here's why I think it's wrong
Trump to Again Extend TikTok’s Reprieve From U.S. Ban
The president plans to sign another executive order this week that would give the popular video app more time to change its ownership structure.
Senate Passes Cryptocurrency and Stablecoin Rules Bill
The bill was a significant step toward giving the cryptocurrency industry the credibility and legitimacy it has sought, without limitations it has worked to head off.
China’s Spy Agencies Are Investing Heavily in A.I., Researchers Say
A new report comes amid rising concern about how China will use new tools to power covert actions, as Western intelligence services also embrace the technology.