Feed aggregator
Google Researchers Win Nobel Prize Amid Company’s Antitrust Battle
Two of the company's A.I. researchers shared the Nobel Prize in Chemistry, just hours after the Justice Department started spelling out plans that could lead to its breakup.
Gemini is taking over some more Google Assistant tasks
Google continues to replace Google Assistant features with its Gemini AI models and branding, as discovered by Android Authority in as-yet-unreleased code. The tech giant's infusion of Gemini is rolling out across its product line.
Upcoming Gemini Extensions will take over Google Assistant's integrations with apps including Google Messages, Spotify, and WhatsApp, while promising to personalize those interactions beyond the capacity of the older AI assistant.
Right now, if you ask your Android device or Google app to play a song on Spotify or send a message via WhatsApp, Google Assistant completes the task. The code in the beta version of the Google app lays out how Gemini will pick up that role through Extensions running independently from Google Assistant.
It's not just a cosmetic shift. The Spotify Extension works like the YouTube Music Gemini Extension, playing a song upon request and showing an image of the song that you can tap to open Spotify. The Gemini Extensions also allow you to use much more casual language than Google Assistant when asking Google to play music on Spotify or call someone with WhatsApp. They also speed up the process by eliminating some of the follow-up clarifications required by Google Assistant.
The AI will adapt to your preferences, remembering what apps you favor for carrying out tasks. So, if you always use WhatsApp to call certain people and your phone to call others, you'll be able to ask the AI to call one of them without specifying which app to use, and it will automatically pick the right one. The same goes for asking Gemini to play a song on YouTube Music or Spotify. You can also manually adjust which apps are used for which tasks.
Extension Stars
The Gemini Extensions are still in beta, which means they can't yet do everything you might want. Gemini can't read or show old messages when asked, though that ability may arrive when the final version of the update rolls out. When it does, you might not find the update groundbreaking, but if you pay attention, you'll likely notice how much better Google's voice AI is at understanding what you want it to do, and how much faster it completes your requests.
Gemini Extensions are key to Google's efforts to encourage people to use Gemini by spreading the AI everywhere. Gemini is becoming the foundation for both Google products and how they link to third-party apps and services. Google Assistant as both a brand and a product is being superseded, though marketing is definitely as much a part of that shift as actual technical upgrades.
I created this creepy avatar and now I might never stand in front of a camera again
I've shot thousands of hours of video over the years of my career and I can tell you that it takes lots of preparation, work, and energy. I can also tell you that if you use an AI avatar video generator like HeyGen, it takes almost none of the above, and that scares the heck out of me.
With the advent of high-quality generative video, these AI video avatars are popping up everywhere. I haven't paid much attention, mostly because I like being on camera and am happy to do it for TV and social video. Even so, I know not everyone loves the spotlight and would happily hand the duties over to an avatar, and when I got a glimpse of the apparent quality of HeyGen's avatars, I was intrigued enough to give it a try. Now I honestly wish I hadn't.
HeyGen, which you can use on mobile or desktop, is a simple and powerful platform for creating AI avatars that can, based on scripts you provide, speak to the camera for you. They're useful for video presentations, social media, interactive avatars, training videos, and essentially anything where an engaging human face might help sell the topic or information.
HeyGen lets you create digital twins that can appear in relatively static videos or ones in which the other you is on the move. For my experience, I chose the 'Still' option.
Setting up another me
There are some rules for creating your avatar, and I think following them as I did may have resulted in the slightly off-putting quality of my digital twin.
HeyGen recommends you start the process by shooting a video of yourself using either a professional camera or one of the best smartphones; either way, the video should be at least 1080p. If you use the free version as I did, you'll note that the final videos are only 720p. Upgrade later and you can start producing full HD video avatars (more on the pricing structure later).
There are other bits of guidance, like using a "nice background," avoiding "harsh shadows" and background noise, and a few that are key to selling the digital twin version of you. HeyGen asked that I look directly (but not creepily, I assume) at the camera, make normal (open to interpretation) gestures below chest level, and take pauses between sentences. The last bit is actually good advice for making real videos: I have a habit of speaking in a stream of consciousness and forgetting to pause to create obvious soundbites for editing.
Here, though, the pauses are not about what you're saying, at least for the training video. They seem to be about teaching the system how to manage your twin's face and mouth when you're talking and when you're not.
In any case, I could say anything I wanted to the camera, as long as I spoke for at least two minutes. More footage helps improve the quality of the videos that will later feature your avatar.
Training to be me
I set up my iPhone 16 Pro Max and a couple of lights and filmed myself in my home office for two minutes, speaking about nonsense, all the while making sure to take one-second pauses and to keep my gestures from being too wild. After I AirDropped it to my MacBook Air, I uploaded the video. It was at this point that it became clear that, as a non-paying user, I was handing over virtually all video rights to HeyGen. Not optimal at all, but I was not about to start paying $24 a month for the basic plan just to regain control of my image.
The HeyGen system took considerable time to ingest the video and prepare my digital twin. Once it was ready, I was able to create my first 3-minute video. Paying customers can create 5-minute videos or longer, depending on which service tier they choose. Paying also grants access to faster video processing.
To create a video, I selected the video format: portrait or landscape. I shot my training video in portrait but that did not seem to matter. I also had to provide a script that I could type or paste into a field that accepts a maximum of 2000 characters.
For someone who writes for a living, I struggled with the script, finally settling on a brief soliloquy from Hamlet. After checking the script length, the system went to work and slowly generated my first HeyGen Digital Twin video. I must've accidentally kept some blank spaces at the end of my script, because about half of it is the digital me silently vamping for the camera. It's unsettling.
I followed this up with a tight TikTok video where I revealed that the video they were watching was not really me. My third video, and the last of my free monthly allotment, was of me telling a joke: "Have you ever played quiet tennis? It's the same as regular tennis but without the racket. Ha ha ha ha ha ha ha ha!" As you might've guessed, the punchline doesn't really land, and because my digital twin never smiles and delivers the "laughter" in a completely humorless way, none of it is even remotely funny.
In all of these videos, I was struck by the audio quality. It's the essence of my voice, but also not my voice. It's too robotic and lacking in emotion. At least it's properly synced with the mouth. The visuals, on the other hand, are almost perfect. My digital twin looks just like me, or at least a very emotionless version of me who is into Tim Cook keynote-style hand gestures. To be fair, I didn't know what to do with my hands when I originally recorded my training video, worrying that if I didn't control my often wild hand gestures, they would look bizarre on my digital twin. I was wrong. This overly controlled twin is the bizarre one.
Just nope
Can an AI version of me tell a joke? Sort of. #heygen @HeyGen_Official (October 9, 2024)
On TikTok, someone wrote, "Nobody likes this. Nobody wants this." When I posted the video on Threads, the reactions ranged from shock to dismay. People noticed my "distracting" hand gestures, called it "creepy", and worried that such videos represented the "death of truth."
But here's the thing. While the AI-generated video is concerning, it did not say anything I did not write or copy and paste. Yes, my digital twin is well past uncanny and deep into unnervingly accurate, but at least it's doing my bidding. The real concern is whether someone with a good two-minute video of somebody else speaking could upload it and make that person say whatever they want. Possibly.
HeyGen gets credit for effectively creating a no-fuss digital twin video generator. It's far from perfect, and it could be vastly improved if HeyGen also had users train it on emotions (the right looks for 'funny', 'sad', 'mad', you get it) and a wider variety of facial expressions (a smile or two would be nice). Until then, these digital twins will be our emotionless doubles, waiting to do our video bidding.
Nobel Prize in Chemistry Goes to 3 Scientists for Predicting and Creating Proteins
The Nobel, awarded to David Baker of the University of Washington and Demis Hassabis and John M. Jumper of Google DeepMind, is the second this week to involve artificial intelligence.
U.S. Weighs Forcing Google to Break Off Parts of the Company
They include making Google’s data available to rivals and forcing it to break off parts of the company, the Justice Department said in a court filing.
Does Your School Use Suicide Prevention Software? We Want to Hear From You.
Concerned about anxiety and depression among students, some schools are monitoring what children type into their devices to detect suicidal thinking or self-harm.
After 147 years, Wimbledon is getting rid of line judges in favor of AI – and adding VAR, which always goes well
Wimbledon, the oldest tennis tournament in the world, is replacing around 300 line judges with artificial intelligence at next year’s tournament - saying goodbye to a 147-year tradition.
The line judges at Wimbledon have for years stood around the court watching the lines with laser focus to determine whether a tennis ball is in or out. But at Wimbledon in 2025, you won't spot the cream berets and navy blazers. The system the All England Lawn Tennis Club (AELTC) has opted for instead is an evolution of the Hawk-Eye technology that has been used for tight calls since 2007. The technology is called electronic line calling (ELC) and will be used on all 18 of Wimbledon's courts throughout the 2025 competition.
The AELTC confirmed in a statement on Wednesday that "officiating technology will be in place for all Championships and qualifying match courts and cover the 'out' and 'fault' calls that have previously been made by line umpires."
This artificial intelligence technology is nothing new to tennis, having been implemented at other major tournaments like the US Open following the COVID-19 pandemic. The Australian Open became the first Grand Slam to remove line judges on all courts back in 2021, and the ATP Tour will bring in the technology in 2025.
Wimbledon is founded on tradition, so today’s announcement, while not surprising, is indicative of the new AI-driven world we now live in. Back in 2014, IBM, one of Wimbledon’s major sponsors, didn’t think we’d be replacing humans at Wimbledon anytime soon. But a lot can change in 10 years, and now we’ll have an AI on Centre Court.
What is ELC and how does it work?
The system set to be implemented at Wimbledon for the 2025 tournament works by tracking the ball's movement through 12 cameras strategically placed around every court. There are also microphones on the court to listen for the sound of the ball, as well as a computer to interpret the ball's location in real time. A video operator, similar to the Video Assistant Referee in soccer, will oversee the technology from an external room, communicating with the chair umpire on the court.
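For readers curious about the underlying geometry, here is a minimal, hypothetical sketch of the core step behind multi-camera ball tracking: each camera contributes a ray toward the detected ball, the rays are combined into a single 3D position estimate, and that position is compared against the court's lines. This is only an illustration of the general principle, not the actual Hawk-Eye/ELC implementation; the camera positions, the court constants, and the function names are assumptions made for the example.

```python
import numpy as np

# Approximate singles-court half-dimensions in meters (baseline at y = 11.885,
# singles sideline at x = 4.115). Used only for this toy illustration.
HALF_LENGTH = 11.885
HALF_WIDTH_SINGLES = 4.115

def triangulate(origins, directions):
    """Estimate the 3D point closest to a set of camera rays.

    origins: (N, 3) camera positions; directions: (N, 3) vectors pointing
    from each camera toward the detected ball. Solves the least-squares
    problem sum_i ||(I - d_i d_i^T)(x - o_i)||^2.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

def call_baseline(bounce_xy):
    """Toy 'in'/'out' call: is the bounce beyond the baseline or sideline?"""
    x, y = abs(bounce_xy[0]), abs(bounce_xy[1])
    return "out" if (y > HALF_LENGTH or x > HALF_WIDTH_SINGLES) else "in"

if __name__ == "__main__":
    # Three hypothetical cameras looking at a bounce just beyond the baseline.
    true_point = np.array([1.0, 11.95, 0.0])
    origins = np.array([[0.0, -15.0, 6.0], [10.0, 0.0, 7.0], [-10.0, 5.0, 8.0]])
    directions = true_point - origins
    estimate = triangulate(origins, directions)
    print(estimate.round(3), call_baseline(estimate[:2]))   # roughly [1. 11.95 0.] 'out'
```

The real system adds ball detection in each camera frame, trajectory modeling, and skid/compression corrections, but the ray-intersection idea above is the basic building block.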
This isn't the first AI technology introduced at Wimbledon. Earlier this year, the AELTC unveiled a new Catch Me Up tool, powered by IBM's Watson generative AI platform. Catch Me Up lets fans watch highlights from matches on a second screen, making it the perfect couch companion for tennis.
Can You Turn Off Google and Meta's AI Tools? Sometimes, and Here's How.
Google, Microsoft and Meta are shoving A.I. chatbots into our faces. Sometimes, there’s a way out.
Can Your Electric Vehicle Catch Fire During a Hurricane?
E.V. batteries that are submerged in saltwater can catch fire after the floods subside, but experts say it’s a rarity.
Bitcoin Documentary ‘Money Electric’ Reopens Search for Satoshi Nakamoto
The identity of the pseudonymous Bitcoin creator has eluded sleuths for years. But does finding the real Mr. Nakamoto really matter?
Instacart's AI-powered shopping cart turns shopping into a side quest
Grocery shopping without checkout lines, and with a game-like twist, could make a sometimes tedious errand more bearable thanks to an AI infusion into Instacart's Caper Carts. Instacart has augmented the smart grocery carts with new AI features: a touchscreen that provides navigation guidance to the products you're looking for, as well as personalized recommendations, discounts, and even a way to turn shopping into a kind of treasure hunt.
Instacart created the Caper Carts to skip the need for checkout lines. They have cameras, a scale, and other sensors to identify items placed inside, while the screen shows what the cart thinks you just pulled from the shelf, using Nvidia's product-recognition AI libraries and its on-device AI processing platform. You can pay for everything before you even get to the front of the store and simply walk out when you're done.
The new Caper Carts offer a lot more than just a faster checkout, though. They come with real-time location tracking to note where you are and lead you to what you want to buy. You can send your grocery list to the cart by scanning a QR code in the Instacart app, which then displays the list on the cart's screen. The cart can also flag products on sale that might pique your interest, offering discounts based on your location.
Shopping Games
If you want more fun in your shopping tasks, there's always gamification. The Caper Cart has a feature that creates quests for items on your shopping list as though you're in a video game. There are real prizes, too. Adding one item to your cart can lead to products with additional discounts related to what you're buying, such as an extra percentage off soda when buying disposable cups. Anything that leads to more purchases is likely to make grocery store owners happy, of course, even if the Caper Cart is more expensive than the traditional metal basket.
“Caper Carts are ushering in a new era at the grocery store – making shopping more delightful while delivering a seamless experience for customers, retailers, and brands,” Instacart Chief Connected Stores Officer David McIntosh said. “With Caper Cart’s digital screen, we’re now delivering an unmatched omnichannel experience for retailers and brands in-store. Today’s news is further proof of how we’re truly transforming grocery shopping from a chore to a fun adventure, giving customers a one-of-a-kind, interactive experience in every aisle of the grocery store.”
Grocery stores might find the Caper Cart appealing not just because it makes shopping faster for customers, but also for the on-screen advertising they can use for in-store promotions and sponsored product recommendations from brands stocked in the store – a kind of mobile impulse-purchase aisle. More than 70 locations have rolled out Caper Carts, including big grocery chains like Fairway Market, Kroger, and ShopRite, as well as ALDI stores in Austria.
A.I. Pioneer Geoffrey Hinton Reflects on Winning the Nobel Prize
The computer scientist Geoffrey Hinton spoke with The Times shortly after learning he had won the Nobel Prize for Physics.
Nobel Physics Prize Awarded for Pioneering A.I. Research by 2 Scientists
With work on machine learning that uses artificial neural networks, John J. Hopfield and Geoffrey E. Hinton “showed a completely new way for us to use computers,” the committee said.
Apple has updated iCloud.com to look shiny and new - here are all the new features you can use right now
Apple has unveiled a refreshed iCloud.com. The update introduces a modest range of new features, as well as aesthetic updates to enhance the user experience.
Users can now customize their home page’s background, using a selection of colors, and the site now honors users’ existing dark mode settings.
Navigation within iCloud Photos has been streamlined, for a more efficient browsing experience. The homepage now includes tile options showcasing photo albums more prominently. A calendar icon has been added, enabling users to quickly jump to specific months or years, and adjust the time, date, and location of their photos directly from the info pane.
Apple has also upgraded iCloud Notes. Now – with a right click or Control-click – users can pin important notes to the top of their list for easy access.
iCloud Calendar’s design has been elevated to improve usability, and support for the Hijri calendar has been added. And iCloud Drive now has a Shared View tab, which makes it easy for users to see files that are shared with them.
These improvements are not going to knock anyone's socks clean off, but they will certainly make life just that little bit easier for the site's users - and frankly, some of the changes have been a long time coming. I mean, how does a platform not have dark mode in 2024?
Apple claims to have found a balance between user privacy and convenience with a tweak in macOS Sequoia: “fewer permission alerts” for screen recording
If you rely on screen recording tools, Apple's macOS Sequoia 15.1 update could have some good news for you: fewer permission pop-ups. One of Apple's key goals with Sequoia was to strengthen security and privacy for its users. Unfortunately, these protective measures felt a little too much like an overbearing, over-worrying parent.
Early Sequoia beta testers were effectively being made to reauthorize screen recording apps on a weekly basis, which quickly became irritating for many users. Just before the launch of Sequoia, Apple addressed this by making the requests monthly instead. Now, Apple is reportedly taking it down a further notch.
The reminders serve an important role: ensuring users are aware of the real risks of screen recording – if an app can see your screen directly, it can see all sorts of sensitive data. There's no doubt about the importance of such measures, but when you've already granted permission to an app and use it daily, those frequent reminders are a nightmare.
Pop-ups, begone
Previously, the Amnesia app was developed to disable the monthly reminders on an app-by-app basis. Fortunately, Apple now seems to have found a balance between security and convenience. In the latest beta release notes, Apple explains there's been a change in how macOS handles older content capture tech, which means that trusted apps will trigger fewer interruptions.
Mind you, this doesn’t mean the prompts will disappear altogether – privacy is still Apple’s top priority – they just won’t be as obnoxious for frequent users of the apps.
If you screen-record daily, you can be slightly less grumpy now, though some users may still want to reserve judgment until they see the update in action when it’s released on October 28.
Windows 11 PCs could soon get the ability to set up much faster Wi-Fi hotspots to share their internet with other devices
Windows 11 received support for Wi-Fi 7 in the recent 24H2 update, but Microsoft is working to extend its wireless functionality further by allowing users to establish 6GHz Wi-Fi hotspots.
Currently, Windows 11 lets you set up a hotspot – to allow other devices to connect to your PC on the Wi-Fi network, and use its internet connection – on the 5GHz or 2.4GHz bands.
But as spotted by leaker PhantomOfEarth on X, there's now the ability to set up such a hotspot over 6GHz – a band brought in with Wi-Fi 6E – although this isn't live for everyone in testing yet.
"To enable support for 6 GHz mobile hotspot connections on devices with the right hardware and drivers in the latest Dev CUs (started rolling out in .1912), run: vivetool /enable /id:40466470" (October 7, 2024)
The feature is currently rolling out in the most recent preview builds in the Dev channel, so some testers may have it, and others may not. In the latter case, Windows 11 testers can enable 6GHz support using a Windows configuration utility (ViVeTool), as the leaker mentions.
Support is required across the board with your hardware
Note that to use this feature when it arrives in Windows 11, you will of course need a PC that supports Wi-Fi 6E, and a router that supports the standard too – plus your connecting devices will need 6GHz support.
The 6GHz band offers benefits over the traditional 5GHz and 2.4GHz bands used by Wi-Fi 6 and earlier, including faster Wi-Fi speeds and more bandwidth, with less potential for interference in crowded environments (like apartment blocks).
We wouldn't recommend diving in to install a test build of Windows 11 just to see this feature, mind. While 6GHz hotspot support is still in the early stages of preview – not all testers in the Dev channel even have it yet – hopefully it won't be too long before it debuts in the full version of Windows 11.
Via Neowin
'Godfather of AI' Geoffrey Hinton just won a Nobel even though he's now scared of AI
Geoffrey Hinton, the oft-recognized 'Godfather of AI' and now-vocal alarm ringer for an AI-infused future, just won a Nobel Prize in Physics for his work in – wait for it – training artificial neural networks using physics.
That's right, the brilliant Turing Award-winning scientist most afraid of how artificial intelligence might harm humanity has won the world's biggest science award for his foundational work in AI.
As The Royal Swedish Academy of Sciences (the group that awards the Nobel Prize) describes it, "Geoffrey Hinton invented a method that can autonomously find properties in data, and so perform tasks such as identifying specific elements in pictures." Hinton shares his Nobel with John J. Hopfield of Princeton University. Hinton's work built upon Hopfield's breakthrough work where he created a network system that could save and recreate patterns.
Combined, their work led to later breakthroughs in machine learning (systems that learn and improve from data without being explicitly programmed) and to the concept of artificial neural networks, which is often at the core of modern AI.
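To give a flavor of the "save and recreate patterns" idea described above, here is a minimal, hypothetical sketch of a classic Hopfield network in Python: binary patterns are stored in a weight matrix with a Hebbian rule, and a corrupted input is iteratively updated until it settles back onto the nearest stored pattern. This is a textbook-style illustration, not code from Hopfield or Hinton, and the pattern data and function names are made up for the example.

```python
import numpy as np

def train_hopfield(patterns):
    """Store +/-1 patterns in a weight matrix using the Hebbian rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)        # strengthen weights between co-active units
    np.fill_diagonal(W, 0)         # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Asynchronously update units until the network settles on a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two made-up 16-unit patterns to memorize.
    patterns = np.array([
        [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
        [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
    ])
    W = train_hopfield(patterns)

    noisy = patterns[0].copy()
    noisy[rng.choice(16, size=3, replace=False)] *= -1   # corrupt three units
    print(np.array_equal(recall(W, noisy), patterns[0]))  # usually True
```

Modern deep learning goes far beyond this, but the sketch captures the prize-winning insight that a network of simple units with learned connections can store and retrieve patterns on its own.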
Hinton, who is currently teaching computer science at the University of Toronto, has a storied AI history that started with those early breakthroughs and led him to Google, where he and his team helped lay the groundwork for today's chatbots like OpenAI's ChatGPT and Google Gemini. However, when Hinton left in 2023, he sounded the alarm, worrying that Google was no longer, as he told The New York Times, "a proper steward" for AI.
The warnings ranged from companies moving too fast and acting recklessly to AI flooding the world with fake content, gutting the job market, and outthinking us. A year later, some of those fears seem to be coming true: companies are increasingly employing AI to handle basic writing tasks; our feeds are flooded with AI-generated content that only sometimes carries AI watermarks; and we are racing toward the unknown of artificial general intelligence, which may mean computers that can think as well as, or better than, we do.
I emailed Hinton for comment on his win and how that affects his thinking about the current state of AI and will update this article when I hear back.
Still, it makes sense to honor Hinton for his pioneering work. AI as we know it would probably not exist without Hinton and Hopfield. Applying physics to the problem of pattern recognition was a novel solution that, in some ways, helped computers operate more like the human brain. The concept of neural networks, arguably AI's most powerful tool, would not exist without Hinton.
Arguably, Hinton's other accomplishment is waking us up to the notion that AI is a double-edged sword: a vastly powerful tool that is already changing our lives, and one that desperately needs guardrails to protect humanity from AI run amok. Hinton may not have understood what he was unleashing when he first developed these concepts in the 1970s, but he's now an honored beacon of light and reason in a confusing and fast-moving world of AI.
Google Gemini could soon get a big AI image generator upgrade to match ChatGPT's DALL-E
Google has upgraded Gemini’s AI image generator in the latest Android beta, adding tweaking options to help users create the perfect image.
First reported by Android Authority, the Android app beta (v15.40.31.29) adds a precise editing feature that lets you make small tweaks to any image generated by Gemini. In Android Authority's demo, Gemini generates an image of 'a cute dog wearing hat and sunglasses', and the user then asks it to 'change the hat with a birthday hat', which Gemini does with ease.
This kind of fine tweaking isn’t anything new in the world of the best AI image generators, but it’s cool to see Google add more image-generation tools to Gemini to compete with the likes of ChatGPT’s DALL-E.
With Apple Intelligence’s Image Playground set to arrive before the end of the year, adding more features to image generation in Gemini will help cement Google’s AI as a fantastic alternative for iPhone users who want to generate images with third-party options.
From Android Authority's demo, we can see that these precise image-editing tools are still very much in beta, and the author who tested the new features wrote that "the edits aren't always precise and reliable". It's also worth noting that the video has been edited to remove wait times, so when this feature launches on Android, it's unlikely to be as fast as the demo suggests.
A better AI image generator
It's impressive to see how far AI image generators have come in such a short space of time. Midjourney, our pick for the best AI image generator, definitely has its work cut out to keep the top spot, with new tools and updates to existing ones arriving almost daily. This upcoming update to Google Gemini looks very promising, but we'll need to wait until it officially releases to properly test its capabilities.
Latest Windows 11 24H2 bug performs a vanishing act on your mouse cursor – and I hope Microsoft fixes it soon
Windows 11’s 24H2 update recently arrived and it comes with some problems, as we’ve seen, but here’s another issue with the upgrade, and it’s a strange one – the case of the vanishing cursor.
Windows Latest reports that it faced the odd problem after installing the 24H2 update on an HP Spectre PC, and some others have reported the bug too – although admittedly it doesn’t seem to be that widespread.
As the tech site observed, the mouse pointer disappeared when they clicked in text fields in certain apps, notably Google Chrome, Microsoft Edge, Slack, and Spotify.
The common theme here? These are all pieces of software built on Chromium (the open-source browser project that Chrome is based on, and Edge too, as well as some of the other best web browsers out there).
No, it’s not the worst bug in the world – and it’ll hardly bring your PC to its knees – but it’s a rather off-putting quirk if you’re affected.
As noted, though, it doesn’t seem to have hit that many folks, at least not yet. Part of the reason why could be the limited number of those upgrading to the 24H2 update so far (which is still in the early stages of its phased rollout).
Windows Latest points out that there are some folks posting about the bug on Microsoft's Feedback Hub and its Answers support website. We've also seen the occasional affected Windows 11 user on Reddit.
Analysis: There is an unofficial fix of sorts
Microsoft is yet to acknowledge the problem, sadly, perhaps because it isn't making big enough waves in the Windows 11 community to be fully on the radar for the software giant.
Windows Latest made some valiant attempts to cure the bug, including reinstalling mouse drivers and trying a different mouse, none of which worked, but it eventually stumbled on a fudge of a fix – resetting the mouse pointer to use the default icon.
To do this, Windows Latest explains that in the taskbar search box, you should search for ‘main.cpl’ and click it to bring up the legacy Mouse Properties panel. Head to the Pointers tab, and in the ‘Customize’ panel, find and click on Text select and then click on the Browse button. Now scroll through the list and choose ‘beam_r.cur’ (the default pointer) and click Open, then click OK.
The caveat is, of course, that while this worked on the tech site’s HP computer, it may not work on yours – who knows. Hopefully this is a bug Microsoft is now looking into, and we may hear about it soon enough if that’s the case. Either that, or the next Windows 11 update could find the issue magically cured without any fanfare (that has certainly happened before).
We’ve experienced the cursor disappearing at times on our PC, in Microsoft Word notably. Usually simply closing the app, and reopening it, fixes things, but this is a much trickier beast of a bug to deal with, clearly.
Google Must Open Android to Other App Stores, Judge Says
The internet giant was ordered by a federal judge to make a series of changes to address its anticompetitive conduct.