Feed aggregator

This AI tool lets you confront your future self – and you might like what you find

Techradar - Mon, 10/07/2024 - 17:00

Imagining what you'll be like in the future is a common game for kids, full of the sometimes unlikely hopes and fears we all feel when contemplating what's yet to come. Researchers at the Massachusetts Institute of Technology (MIT) have leveraged AI to make that concept a little more realistic through the new Future You project. The AI-powered chatbot simulates your older self, specifically a version from 30 years in the future. 

MIT's Media Lab built Future You with the idea of encouraging thoughtful introspection about who you are, who you want to be, and how to develop and pursue long-range goals. With some digital aging technology, you can even see how you (potentially) will look decades from now.

"Our system allows users to chat with a relatable yet AI-powered virtual version of their future selves that is tuned to their future goals and persona qualities," the scientists explain in the abstract for their research paper on Future You. "The "Future You" character also adopts the persona of an age-progressed image of the user's present self."

To try out Future You, you just run through the initial setup, answering questions about your current life. That includes relationships, professional situations, goals, and your history up to now. It might seem personal, but the more information the AI has about who you are now, the better it can project who you might become. Once the survey is done, the AI builds a profile of your future self and links it to a customized version of OpenAI’s GPT-3.5 AI model. You also have the option of uploading a current photo of yourself that the AI will then make look 30 years older.

When you talk to that persona, you'll find it has a synthetic memory of the last 30 years. That way, it can talk to you about what led it to become the (projected, fictional) version of yourself. That might mean reaching the career goals you've mentioned or your dreams of family life. The AI won't just say that those goals have been achieved, but will have a whole timeline explaining how it reached that point. The result should be far from generic, and the AI should be able to perform as a convincing potential version of your future self.

Future AI Self

The idea of interacting with a digital doppelganger from the future at first seems like nothing more than a game of what if, and one without any real value beyond entertainment. However, those who have tried out Future You have reported feeling like they have new insight into their lives and more motivation to pursue current goals. In fact, even a short interaction with their 'future' self has users saying they feel less anxiety about the future as a whole.

The point is to make the future feel more tangible. The MIT researchers believe that even though the AI simulation is very clearly not predicting anyone's real future, it can shorten the psychological distance you might feel toward that future self and encourage better decision-making, because you can now envision how your choices today affect who you will become.

Future You is still experimental, but its effect on people is encouraging. Making the future real with deliberately synthetic versions of someone is not the most obvious deployment of AI models, but, according to my own future self, it's a great start toward better choices and will make sure I never go bald. 

Goodnotes adds an AI that can read and explain even the worst handwriting

Techradar - Mon, 10/07/2024 - 13:00

AI has become very good at holding up its end of a conversation with humans, but a set of new AI features from the digital notetaking app Goodnotes performs an even more impressive stunt by reading handwriting well enough to discuss it and even answer questions about what's been scribbled. Goodnotes, which claims 24 million monthly active users, debuted handwriting editing capabilities along with a math-specific AI helper and the Ask Goodnotes assistant that serves as a kind of secretary for your notetaking. 

The handwriting editing tools impressively link human writing to digital understanding. They're based on the proprietary Goodnotes Smart Ink technology, which takes down your handwriting and attempts to turn it into typed text. Now, though, the app lets you edit what you've handwritten the way you would edit something typed in a document. That includes aligning notes, copying and pasting some of the handwriting, and reflowing the text so it reads more logically when you go back through it.

That's on top of the Spellcheck and Word Complete tools already available for handwritten text. Goodnotes pitches these AI handwriting editing capabilities as a way to combine the flexibility of pen-and-paper notetaking with the ease of editing offered by digital tools. You can see how it works below.

Goodnotes AI (Image credit: Goodnotes)

AI Secretary

Ask Goodnotes, as the name implies, lets you ask questions about what's in your notes, get summaries, have concepts you jotted down explained, and even put together quizzes to test you on the knowledge. So, if you are a student or at a work presentation, the AI can take your hastily scrawled notes and, days later, explain what you were writing about, including researching any concepts you noted too vaguely to remember what you meant. It can then help you study for a test on the topic or prepare to talk about it with others.

It works with more than just handwritten notes, so you can augment what you wrote with printed text, images, and PDFs. The answers from the AI are personalized and will link to your notes to ensure you understand the context of what it is saying and what you wrote earlier.

The Math Assist feature zeroes in on helping with mathematical equations written out by hand in your notebook. Math Assist recognizes handwritten math problems and can perform calculations to give the answer. It can also show the steps for solving the problem for anything from arithmetic to calculus. If you don't want the full answer, the AI can also restrict itself to hints so you can solve the equations on your own. Goodnotes is available on Apple devices with up to three notebooks for free. All features are available for $10 a year or a lifetime fee of $30.

"We're constantly inspired by the sheer volume of ideas and knowledge that our users capture in their Goodnotes notebooks. Our aim with Ask Goodnotes is to give users new powers to interact with their notes, documents, and PDFs, and unlock fresh possibilities for productivity, creativity, and learning," said Steven Chan, founder and CEO of Goodnotes. "With our new handwriting editing and math features, we focused on how our proprietary machine learning models could be leveraged behind the scenes to make everyday note-taking and document annotation more seamless and intuitive."

Microsoft is tightening its Windows 11 restrictions - but hackers swoop in to save the day, again

Techradar - Mon, 10/07/2024 - 11:29

The Microsoft Windows 11 24H2 update dropped on October 1, 2024, and with it came the need for another workaround to address persistent hardware upgrade issues.

Since Windows 11 was originally released in 2021, Microsoft has required users to run its latest OS on a machine with Trusted Platform Module (TPM) 2.0, and with a sufficiently new 64-bit CPU supporting Secure Boot. Microsoft must be aware that users are unhappy with Windows 11’s strict hardware requirements, but the company has only become more inflexible.

Users have already been using creative solutions to run Windows 11 on their machines. One popular method involves the 'Rufus' utility software, which can be used to make bootable OS disks. Rufus circumvents Microsoft’s checks by the simple expedient of replacing the code used to do them – contained in the file appraiserres.dll – with an empty file.

It’s this particular method that no longer works, resulting in users running Windows 11 on older machines being left unable to install the 24H2 update.

Thankfully, Rufus developer Pete Batard dug deep into his hacker's toolbox and pulled out a new solution. If you want to update right now, you can head to GitHub and follow the instructions, which involve a set of registry fixes. Future versions of Rufus will contain code to do this automatically.
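To give a sense of what a registry-based workaround like this looks like, here's a minimal Python sketch using the standard-library winreg module. It sets Microsoft's separately documented MoSetup flag for upgrading on unsupported TPM/CPU hardware; the actual values needed for the 24H2 fix are the ones listed in the Rufus GitHub instructions and may differ from this example.

```python
# Illustrative only: sets the Microsoft-documented MoSetup flag that relaxes
# the TPM/CPU check for in-place upgrades. The Rufus developer's 24H2 workaround
# uses its own set of registry values (see the Rufus GitHub instructions).
# Run on Windows from an elevated (administrator) prompt.
import winreg

KEY_PATH = r"SYSTEM\Setup\MoSetup"

def allow_unsupported_upgrade() -> None:
    # Create (or open) the key, then set the DWORD flag to 1.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "AllowUpgradesWithUnsupportedTPMOrCPU", 0,
                          winreg.REG_DWORD, 1)

if __name__ == "__main__":
    allow_unsupported_upgrade()
    print("Flag set - re-run Windows Setup to retry the upgrade.")
```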

Rufus to the rescue?

This is all well and good if tinkering with Windows 11's innards is your thing, but the average user probably won't relish how difficult and inconvenient the process is.

At least the official method for upgrading systems appears untouched. If your PC uses Secure Boot and UEFI, and has a TPM 2.0 module, upgrading to Windows 11 should be relatively pain-free. Additionally, if you already have Windows 11 on your PC there's no need for any new compatibility upgrades, so your system should continue to receive updates with no problems.

Microsoft seems hell-bent on making a huge chunk of its users miserable: such strict hardware checks, especially for TPM, make things unnecessarily hard for many people. Sure, TPM and Secure Boot add more security, but perfectly functioning older hardware shouldn't be excluded from receiving the most basic Windows updates.

Microsoft's insistence on forcing users to buy new hardware when they have perfectly working older systems is another example of the company proving, again and again, that it is not customer-focused enough and wants to dictate what its users can or can't do with their own systems. Add the growing mountain of e-waste that we'll have to deal with eventually, and you've got a perfect storm of upset Windows users.

Can’t upgrade your PC to Windows 11? Buy a new one, is Microsoft’s laughable solution

Techradar - Mon, 10/07/2024 - 08:19

Windows 11 adoption has been way slower than Microsoft would like, no doubt, and part of the reason for that is that some PCs (particularly older models) can't upgrade due to system requirements – and if you're in that boat, the software giant has some simple advice for you: buy a new PC.

Neowin noticed that Microsoft has updated a help document about what it means if you’re using an unsupported version of Windows (spoiler alert: if you’re online at all, it’s a huge security risk), which currently means PCs running Windows 8.1 (or 8) and Windows 7, or earlier.

It’s worth noting, however, that this will also be the case for Windows 10 devices in a year’s time if their owners don’t take any action, as the end of support rolls around for that OS in October 2025.

Microsoft’s article takes the form of a short discussion followed by a FAQ, and the main update applied to the document pertains to the options for staying supported with Windows, with a new choice added here: ‘Recommended: New PC with Windows 11.’

So, this is Microsoft’s primary recommendation if your unsupported PC isn’t up to scratch, hardware-wise, for Windows 11 – get a new computer.

Microsoft elaborates: “Windows 11 is the most current version of Windows. If you have an older PC, we recommend you move to Windows 11 by buying a new PC. Hardware and software have improved a lot, and today’s computers are faster, more powerful, and more secure.”

Then there's a link to 'view Windows 11 PCs' which takes you to Microsoft's hub showcasing new devices from itself and its partners.

A man at a desk using a laptop, holding his hands up with a confused look on his face (Image credit: Shutterstock/fizkes)

Analysis: That enormous landfill blot looming on the horizon

That first ('recommended') choice of buying a new PC is not the only option covered in the FAQ, of course. Microsoft also lists a couple of other possibilities, including upgrading your old computer to Windows 11 – maybe via Windows 10 first – but this may not be possible with older PCs. Indeed, a PC running Windows 8 (from pre-2015, when Windows 10 started arriving on new hardware) will very likely not meet the needed system specs for Windows 11 (the CPU will probably be too old, and the TPM security requirement may not be met either).

And, in fairness to Microsoft, an upgrade of such an ailing PC to Windows 11 may indeed be relatively impractical (as you'll likely have to replace a bunch of components – the CPU, the motherboard to get a socket that fits the new CPU, probably memory too, and maybe more besides). By the time you reach the end of the component shopping list, you may as well be buying a new PC (with a new warranty to boot), and of course some PC owners won't want to take on such an upgrade, or may lack the technical know-how to do so.

So, in this case, Microsoft's foremost recommendation to get a new PC makes at least some sense for those with rapidly aging PCs, as frankly, in some scenarios they might not have much choice, particularly if they're not tech-savvy, or they have a laptop (or all-in-one PC) which can't easily be opened up and upgraded anyway.

However, it’s equally true that some folks (perhaps quite a few) could upgrade to Windows 10 (with its lighter system requirements) if not Windows 11, a possibility Microsoft touches on – while simultaneously observing that support runs out in a year for Windows 10, a fairly sizeable caveat. And indeed, therein lies the rub – we must bear in mind that this article’s advice will apply to Windows 10 PC owners next year, when they find themselves in the unsupported boat.

Given that, it'd be nice to see Microsoft working toward a solution for these somewhat newer PCs, one that starts to tackle the alarming stats we've heard about the number of Windows 10 machines heading to landfill in the future. This is a potential environmental disaster that could see hundreds of millions of PCs lumped unceremoniously on the scrapheap.

And ever since those concerns have been raised, we haven’t heard anything from Microsoft as to how they might be mitigated. What Windows 10 users (who can’t, or won’t, upgrade) can do is pay for extended support beyond October 2025 – but that could turn out to be an expensive way to go, particularly beyond the first year if Microsoft’s previous pricing in these schemes is anything to go by.

Logically, then, Microsoft needs to be looking at a way of keeping Windows 10 alive – for those totally blocked by Windows 11’s more demanding requirements on the security front and elsewhere – which works out to be way more cost-friendly for users, in an effort to save what might be a much heavier price to pay for the planet. In short, ‘buy a new PC’ will soon not be the answer we need frontloaded here, and pushing folks to make a purchase of a new computer is already a very dubious first port of call given what we’re facing down the road.

Watch out – your Ray-Ban smart glasses photos are helping to train Meta AI

Techradar - Mon, 10/07/2024 - 06:55

If you use your Ray-Ban Meta smart glasses all the time, you might want to be careful about what you're snapping pictures of, and what you're asking Meta AI through them, as Meta has confirmed that it may use these visual and audio inputs to train its smart assistant.

That’s by its own admission, in a statement it sent to TechCrunch in which Meta’s policy communications manager Emil Vazquez explained that “Images and videos shared with Meta AI may be used to improve it per our Privacy Policy.”

It's worth highlighting that Meta only trains its AI on images and videos that you share with it – such as through the Look and Ask feature, which has the glasses take a picture that the AI uses to contextualize a request like "Look and tell me more about this landmark" or "Look and translate this sign."

So if you live in an area that doesn't yet have access to Meta AI (i.e. outside the US and Canada), or you simply never interact with the Ray-Ban smart glasses' AI analysis tools, then your snaps should be staying private; that is, unless you post the image on Facebook or Instagram and you live in a region where Meta now has permission to train its AI on your posts.

Unfortunately, there's no way to use the AI image analysis and also keep your submitted pictures private. You have to consent to sharing your images to opt in to the feature, and you can't currently opt out beyond not using the AI analysis at all.

Not the biggest surprise

While I feel that there's something distinctly off-putting about Meta using my pictures to train its AI, this news isn't all that surprising. Other AI creators openly train their assistants on user inputs, and given how much Google and Apple have hyped up the privacy of their own on-device AI, the Ray-Ban glasses' reliance on cloud-based AI was clearly going to involve the sharing of data.

Also, for anyone confused about me saying my snaps have probably trained Meta's AI: even though I live in the UK, I have access to Meta AI on my Ray-Bans (somehow; I think it might have something to do with my VPN). I've used it quite a lot, so I've likely also agreed to the Privacy Policy giving Meta permission to use my submitted images for training purposes.

I guess the difference between using, say, ChatGPT to analyze an image and using the glasses is that you aren’t always wearing ChatGPT on your face. Even with all the safeguards – you can turn the glasses off completely with an on-device switch, and the AI only uses the images you choose to feed it – I feel this news still adds another layer of concern for users.

The Meta Orion glasses on a table next to a wristband and controller.

The Meta Orion AR glasses (Image credit: Meta)

And for smart glasses like the newly announced Meta Orion AR glasses to take off, these layers need to be peeled back, not added to. Because while most of us do carry around much of the same tech now in smartphones, there's a big psychological difference between a handset and something you're always wearing.

It's also becoming easier to activate the AI with more natural speech. While this is handy for people who want to use the Meta assistant, it does open up the possibility that people may share images they didn't intend to if they aren't careful.

We’ll have to see what measures Meta introduces to better alert users about how their data is used by AI – and perhaps offer more comprehensive opt-out options that don’t strip away functionality. For now, we recommend being a little more careful what you share with your Ray-Ban Meta smart glasses, and other AI for that matter, as it might not be as private a conversation as you thought.

How a Lobbying Group Is Arguing That Big Tech Protects Free Speech

NYT Technology - Mon, 10/07/2024 - 04:01
NetChoice, backed by tech giants including Meta and Google, has successfully argued in court that Big Tech hosts protected speech.

Apple's smart AR glasses are rumored to be arriving in 2026 – with microLED tech

Techradar - Sun, 10/06/2024 - 09:30

Meta's impressive demo of its Orion AR glasses has got us interested in augmented reality specs again, and a small tidbit of rumored information has come our way that suggests Apple could have its own device in this category by 2026 – with microLED tech included.

This comes from tipster @Jukanlosreve (via Wccftech), who has a decent record for leaks (though that was under a different username). The source says Apple "has not given up" on microLED tech, which it's been exploring for several years now.

The tech promises the brightness of LED-backlit displays with the deep blacks of OLED, making it superior to both existing display technologies. It's also very difficult and expensive to manufacture, which is why we haven't seen it on smaller gadgets to date.

As our feature on microLED TVs will tell you, this is an innovation that has shown potential for years – but again, the problem is in getting prices down to a point where people are actually going to be able to afford these devices.

Specs appeal

"Apple has not given up on Micro LED technology. 1. They are preparing Micro LED for AR glasses, with mass production expected in 2026. 2. The plan to include Micro LED in the Apple Watch Ultra is also still in place, with a target launch in 2026." – @Jukanlosreve, October 5, 2024

The Apple Watch Ultra should also get a microLED display, according to this tipster. Somewhat surprisingly, we didn't get an Apple Watch Ultra 3 this year, so we're not sure what number Apple might be up to in a couple of years.

When it comes to the Apple Glasses, we've had several waves of rumors around these AR specs in recent years. The previous estimate for a launch window was 2027 – so it's possible that Apple has accelerated development on this new device.

Last year there was word that Apple's AR glasses had been delayed indefinitely, with focus switching to a cheaper VR headset. It's possible that Meta's success with the Ray-Ban Meta Smart Glasses has shifted Apple's thinking in this respect.

There's also the Apple Vision Pro of course: the mixed reality headset is very powerful and very expensive, and hasn't sold in huge numbers, and perhaps Apple wants to get something smaller and more affordable out of the door next.

8 of the Most Celebrated Awards in Science Outside of Nobel Prizes

NYT Technology - Sun, 10/06/2024 - 04:03
The Nobel Foundation offers prizes in only three disciplines, but other awards have been created to honor scientists in different fields.

You'll want to try Meta's amazing new AI video generator

Techradar - Fri, 10/04/2024 - 17:00

Meta has shared another contestant in the AI video race that's seemingly taken over much of the industry in recent months. The tech giant released a new model called Movie Gen, which, as the name indicates, generates movies. Its initial feature list is notably more comprehensive than many rivals', and comparable to OpenAI's Sora model, which garnered so much attention upon its unveiling. That said, Movie Gen shares another limitation with Sora: access is restricted to specific filmmakers partnering with Meta rather than being rolled out to the public.

Movie Gen is impressive based on Meta's demonstrations of its ability to produce movies from text prompts. The model can make 16-second videos and upscale them to 1080p resolution. The caveat is that the video comes out at 16 frames per second, a speed slower than any filming standard. For a more normal 24 fps, the clip can't be more than 10 seconds long.

Movie Gen Action

Still, 10 seconds can be plenty with the right prompt. Meta gave Movie Gen a fun personalization feature reminiscent of its Imagine tool for making images with you in them. Movie Gen can do the same with a video, using a reference image to put a real person into a clip. If the model can regularly match the demonstration, a lot of filmmakers might be eager to try it.

And you aren't limited to rewriting the whole prompt and hoping the next video turns out better. Movie Gen has a text-based editing feature where a prompt can narrowly adjust one bit of the film or change an aspect of it as a whole. You might ask for characters to wear different outfits or set the background to a different location. That flexibility is impressive. The adjustments extend to camera moves too, with panning and tracking requests understood by the AI and incorporated into the video or its later edits. The awareness of objects and their movements likely builds on the SAM 2 model Meta recently released, which is capable of tagging and tracking objects in videos.

Audio AI Future

Good visuals are all too common now among AI video makers, but Meta is going for the audio side of filmmaking too. Movie Gen will use the text prompts for the video to produce a soundtrack that matches the visuals, putting rain sounds in a rainy scene or revving car engines in a film set in a traffic jam. It will even create new background music that tries to match the mood of the prompted video. Human speech is not currently part of Movie Gen's repertoire.

Meta has kept impressive AI engines from the public before, most notably with an AI song generator it said was too good to release due to concerns over misuse. The company didn't claim that as the reason for keeping Movie Gen away from most people, but it wouldn't be surprising if it was a contributing factor. 

Still, going the OpenAI Sora route means Meta risks a more open rival winning some of its market share. And there are an awful lot of AI video generators out or coming soon. That includes new or recently upgraded models from Runway, Pika, Stability AI, Hotshot, and Luma Labs' Dream Machine, among many others.

Qualcomm's AI Conductor wants to harmonize your schedule and, maybe, your life

Techradar - Fri, 10/04/2024 - 15:00

AI tools usually require cloud computing access to have enough power to run, but your future AI usage may end up not needing more than is available on your device if Qualcomm has its way. The tech giant has unveiled a new system called the Qualcomm AI Orchestrator aimed at integrating AI tools and experiences and keeping the process on your devices. 

Qualcomm AI Orchestrator incorporates all of your AI usage, including what you do on your computer, mobile device, and even in your car. The orchestrator takes personal preferences and the surrounding context into account when running to make the best use of what various accessible AI apps and services can provide. 

It's the individual adaptation aspect that stands out as the biggest appeal of the AI Orchestrator. The AI uses information on your device about your contacts, where you travel regularly, what you do in a day, and even your go-to apps to personalize the experience and make a personal knowledge graph. For instance, if you have an app that you rely on to reserve seats at a restaurant, the AI Orchestrator will use that app when recommending places to go and reserve a time if you ask. It's a more proactive approach than the standard query and response system you might be familiar with when doing text or voice searches. 

The whole process is faster and safer because the AI is run on the device. That means you can store personal information without worrying about it being stolen or shared from a cloud server by malicious actors. It also makes the AI faster in its responses to you, even when taking up more power for multimodal interactions with voice and visuals. 

Orchestrated Life

“Imagine a scenario where you start your day with a bunch of notifications on your phone,” Qualcomm described in a blog post. “You don’t have time to read them until your lunch break, so instead of reading all the notifications yourself, your generative AI assistant automatically creates a notification summary and can pick out important ones.”

The Qualcomm AI Orchestrator is essentially the conductor of an orchestra of AI instruments, not only within a device but across multiple interfaces. So, the AI using your phone to set up your dinner reservation is also linked to your car navigation and your calendar on your home computer, so that you can get to the restaurant and have an alert up that you're not responding to emails at that time.

In some ways, this sounds a lot like the personal intelligence and contextual relevance Apple Intelligence is set to bring to Siri in the coming months. The difference is that Apple Intelligence only supports certain iPhones, Macs, and iPads. Qualcomm AI Orchestrator could end up on all sorts of devices running the latest Qualcomm chips.

Qualcomm hopes to further expand the orchestrator as AI tools continue to evolve. It may end up helping run your smart home devices and even take over the phone calls with customer service agents you don’t want to make. Even if Qualcomm’s dream of redefining the way you interact with AI is unlikely to happen any time soon, the concept of interconnected but on-device AI processing could prove popular enough that others take a cue from Qualcomm in their next line of AI products. 

Meta Unveils New Instant A.I. Generator

NYT Technology - Fri, 10/04/2024 - 08:57
The tech giant is among the many companies building technology that could remake Hollywood — or help spread disinformation.

Generative AI and ChatGPT are making their way to your Samsung TV

Techradar - Fri, 10/04/2024 - 06:36

Good news if you own one of the best Samsung TVs and are eager to get even more generative AI into your life: Samsung has announced that a bunch of AI features, plus the ChatGPT bot, are heading to the company's televisions in the future.

This news comes out of the Samsung Developer Conference (SDC) from SamMobile, and while details are a little thin on the ground at the moment – as you would expect from a developer conference – we do have some idea of what's coming.

At the show, Samsung emphasized an "AI for all" approach that involves getting AI just about everywhere. For TVs, that means more capabilities for the built-in Bixby assistant, in terms of searching for content and customizing on-board settings.

It sounds as though you'll be able to describe in more detail the sort of show or movie you're after, and Bixby will oblige. The smart AI-powered assistant is also getting more control over other smart-home devices too – as long as they're made by Samsung.

From phone to TV

Samsung AI Cast

Developers will get access to Samsung AI Cast first (Image credit: Samsung)

Samsung has also announced Samsung AI Cast, which makes it easier to get AI results from your Galaxy phone to your Samsung TV. Modern Samsung phones come packed with AI, and we can imagine generating text or images and then being able to quickly beam them across to a big screen.

We can also expect "an integration with ChatGPT" right from the Samsung TV home screen, as part of this Samsung AI Cast feature – so we're presuming that you'll be able to talk to ChatGPT on your phone and see the results on your TV.

Again, Samsung is a bit vague on the specifics – not least when these updates might start rolling out – but it gives you an idea of what's on the way in the next few months or so. You're certainly not going to be able to get away from AI anytime soon.

There was plenty of other news from SDC2024, including the announcement that One UI 7 – Samsung's take on Android 15 – would be making its way to users at the start of next year, most probably with the Samsung Galaxy S25. If you already own a Samsung phone, you might well be able to test the software before then.

Everybody gets 10 minutes a month to talk to ChatGPT on their phone, and you can try it right now

Techradar - Fri, 10/04/2024 - 05:50

OpenAI has decided to grant all ChatGPT users on its Free tier a 10-minute-a-month preview of its Advanced Voice mode, and if you’ve got the ChatGPT app on your phone you can try it right now.

Usually only available to ChatGPT Plus subscribers, Advanced Voice mode gives you the ability to talk to ChatGPT on your smartphone and get it to talk back to you in a voice of your choosing. You can ask it pretty much anything within reason and get a human-like response. In many ways, it’s the natural evolution of the chatbot into something that feels even more futuristic.

So long, Scarlett

It feels like we're now a long way from the launch of ChatGPT Advanced Voice mode, back in May this year, when the actress Scarlett Johansson went to war with OpenAI over the use of its voice called Sky, which sounded very much like her voice from the movie Her, in which she played an AI-powered assistant.

OpenAI denied claims that it had copied her voice. A statement from CEO Sam Altman on May 20, 2024 read: “The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

Even though your time is limited, you get access to all the features of Advanced Voice mode for those 10 glorious minutes, including the ability to choose a voice. In the current version of Advanced Voice mode you’ll find there are nine different voices to choose between, which have different qualities: Sol, Juniper, Vale, Spruce, Breeze, Arbor, Cove, Maple, and Ember.

ChatGPT Advanced Voice mode on a smartphone.

(Image credit: OpenAI)

Advanced Voice mode is a real step up from the previous standard voice mode that you’d find on ChatGPT – it can sense and respond to humor and you can interrupt it at any time while it's talking. Interestingly, once you’ve chosen a voice you can’t ask ChatGPT to then change to another voice. Instead, you have to go into Settings, which you get to by tapping on your name at the bottom of the screen and scrolling down until you find Voice and tapping on it.

How to know you’ve got Advanced Voice mode

It's hard to miss Advanced Voice mode: in the iOS or Android app you'll see a message pop up in the prompt window informing you that you have access to Advanced Voice mode and pointing to a new icon on the far right. Simply tap it to begin. The first time you access it you'll be asked to choose a voice, and once you've done that you're good to go.

When you see a glowing blue orb, somewhat reminiscent of a palantír from Lord of the Rings, you know that ChatGPT is listening and you can start having a conversation - ask it what you should have for lunch today, or where would be a good place to go on a vacation. You’ll discover that the conversation feels startlingly real.

Apple Visual Intelligence is already behind the curve as Google adds video search to Lens

Techradar - Fri, 10/04/2024 - 04:45

Google just announced loads of AI search updates, including video search capabilities in Google Lens, adding a whole new way for people to search the internet.

The new Video Search feature comes at a time when every major tech company is looking to one-up its competitors in the race to have truly useful AI functionality that sticks with consumers - could searching the web via video be it?

Rolling out to all Google app users on iOS and Android, Video Search flexes Google's AI muscles just in time for the launch of Apple's Lens competitor: the Apple Intelligence feature Visual Intelligence. Apple's offering is yet to receive a release date, but it's at the core of the company's marketing for the iPhone 16 and iPhone 16 Pro, taking advantage of Camera Control.

Visual Intelligence lets you snap an image of something and quickly get information on whatever you’re looking at. Whether you’re snapping a photo of a closed restaurant to check opening times (apparently places don’t show opening hours in their windows anymore) or aiming your iPhone’s camera at a friend’s dog to check the breed (we don’t ask questions anymore either), Visual Intelligence is essentially Apple’s competitor to Google Lens - but new video and voice features in Lens leave it behind before the feature even launches.

Search what you record

AI video search results Google Lens

(Image credit: Google)

So how does Video Search work? And would you even want to use it? You’re now able to snap videos on Google Lens and quickly ask questions related to what you’re seeing. The example Google gave was a person recording a school of fish in an aquarium and asking Lens to analyze the species based on search results. It’s pretty cool stuff, but how much more useful is video recording than snapping a quick photo?

At the time of writing, I've not been able to test Google's new Video Search functionality, which is available globally for users enrolled in the Search Labs 'AI Overviews and more' experiment. I've also not had the opportunity to test Visual Intelligence, and as far as I'm aware no one outside the walls of Apple has had the pleasure either. With new video search functionality and even voice search functionality coming to Lens, I can't help but feel that Visual Intelligence is already lagging behind, in the same way Siri was compared to other voice assistants when it launched back in 2011.

There are a lot of questions here and we won’t get answers for at least a few months. But I have to ask, do people even care about Video Search anyway? Or will Visual Intelligence’s dedicated launch button on the side of all iPhone 16s be enough to make people start searching without typing?

Nvidia’s new app gets a major upgrade – and proves the company is finally listening to gamers

Techradar - Fri, 10/04/2024 - 04:33

Nvidia has updated its new Nvidia App, currently in beta, introducing much-needed features, including G-Sync controls, RTX HDR multi-monitor support, driver rollback, and other user-requested tweaks.

Nvidia has been working on the new app since February 2024, aiming to incorporate features of its separate Nvidia Control Panel and GeForce Experience apps into a seamless and simplified one-stop shop for all Nvidia driver updates and game settings.

Nvidia claims the app will be exiting its beta testing phase by the end of the year, with a definite plan to phase out the older ones.

“Your feedback matters,” Andrew Burnes writes in Nvidia’s announcement post, “and we appreciate your continued support. In future updates, we’ll continue to add the remaining Nvidia Control Panel options, with the goal of unifying the Nvidia Control Panel and GeForce Experience’s key features in one app.

“Additionally, we intend to migrate all remaining GeForce Experience users to the new Nvidia app when it exits beta before the end of the year.”

Building a better app

Previous updates to the app have allowed users to alter their monitor refresh rate, resolution, and orientation, which are essential features that have been part of the Nvidia Control Panel app for years. If Nvidia really wants its new app to replace its existing ones, then it needs to make sure the app offers all of the features its users rely on. The new update adds G-SYNC controls and, most interestingly, RTX HDR multi-monitor support.

RTX HDR is a filter that uses AI to bring High Dynamic Range to games that weren't designed for it, which can have a big impact on visual quality. Using it in a game is seamless; it can be activated just by pressing ALT+Z, and its potential is huge: out of Nvidia's 50 most-played GeForce games, only 12 offer HDR support. Games running on Vulkan, DirectX 9, 11, and 12 will benefit from the enhanced experience on multiple HDR-capable monitors at once.

The app also contains updates introduced based on user feedback. Now you can view system stats, latency info, and frame rates in-game and on your desktop from the “heads up display” settings tab. You can also adjust how and which stats are shown, sort and filter your games and apps, hide programs, and remove manually added programs.

The app update was released on October 1, 2024. Users will have to install the new app beta update and the GeForce Game Ready 565.90 WHQL driver to take advantage of these new features.

User feedback certainly appears to have had a positive influence on the app's development. Nvidia encourages users to continue sending feedback, which they can do via a button at the top right of the app. Meanwhile, features still to come include custom resolutions, surround options, and multi-monitor setup.

It's good to see big companies like Nvidia listening to their customers – sometimes it can feel like they forget about us and put their financial interests first. This is especially important when making big changes to apps many of us use every day. Other companies *cough* Microsoft *cough* would do well to remember this.

Can California Regulate A.I.? + Silicon Valley’s Super Babies + System Update!

NYT Technology - Fri, 10/04/2024 - 04:03
“In the United States, we have 50 laboratories of democracy and they’re called states.”

ChatGPT's new 'Canvas' is the AI collaborator you didn't know you needed

Techradar - Thu, 10/03/2024 - 20:00

ChatGPT has been writing text and software code since it debuted, but fine-tuning its output has meant rewriting your prompt and regenerating the whole thing. OpenAI has released a new feature called Canvas that offers a shared, editable page where ChatGPT can mimic a human collaborator and repeatedly edit or offer feedback on the particular parts of the text and code you select.

A useful way to think of Canvas is to imagine ChatGPT as your partner on a writing or coding project (you might even say 'copilot' if you were at Microsoft). Canvas operates on a separate page from the standard chatbot window, where you can ask the AI to write a blog post, code a mobile app feature, and so on. Instead of reading through the result and asking for a change in tone or adjustment to the code, you can highlight the specific bits you want changed and comment on the kind of edits you're looking for. 

So, if you love what ChatGPT wrote for your newsletter except for the introduction, you could highlight those paragraphs and say you want it to be more formal or expand on a preview for the rest of the newsletter that's too short. You can do the same with editing your own writing if you share some text and ask for it to be longer or use less complex language. The suggestions even extend to asking ChatGPT for emoji ideas. 

The same general idea applies to getting ChatGPT to edit code on Canvas, whether AI-generated or written by humans. You can ask ChatGPT to debug code, suggest improvements, or insert comments to make it more useful when sharing with actual humans. While emojis may not be relevant in coding software, you can use Canvas to ask ChatGPT to translate a program into another programming language, switching among them depending on what is most useful. 

Canvas relies on the new GPT-4o model. For now, only ChatGPT Plus and Team subscribers have access, though it will be opened to Enterprise and Education clients soon. OpenAI will also make it available to those relying on the free tier of ChatGPT, but not until the beta stage is complete.

"Canvas opens in a separate window, allowing you and ChatGPT to work on ideas side by side. In canvas, ChatGPT can suggest edits, adjust length, change reading levels, and offer inline feedback. You can also write and edit directly in canvas." pic.twitter.com/yHYVGtwJHV – October 3, 2024

Blank Canvas

Canvas is a logical step in OpenAI's expansion of ChatGPT's features. It's a lot like a text version of the editing tools for AI-produced images made with OpenAI's DALL-E models. Instead of highlighting a part of an image and sending a prompt for how to change it, Canvas centers on text. Often, the text-based features come first and only later become multimodal in some form, so this is an interesting inverse of the standard release pattern.

The appeal to even the most casual ChatGPT users is obvious, as narrowly focused editing and suggestions are a lot more helpful than the one-dimensional conversational approach of ChatGPT's standard form. That's especially true when complex code or long-form text is involved. Of course, this might reasonably raise the hackles of educators and others already concerned with misuse of, or overreliance on, AI-generated writing. It's one thing for a student to ask ChatGPT to write an essay for them; it's another when they can narrowly adjust the result to make it harder to spot any telltale signs of AI composition. Still, making ChatGPT a more active assistant that can be tasked with very specific issues is likely to be a huge boon in more legitimate pursuits, particularly when you need an editor who will never lose patience with you.

"People use ChatGPT every day for help with writing and code. Although the chat interface is easy to use and works well for many tasks, it’s limited when you want to work on projects that require editing and revisions," OpenAI explained in its announcement. "Making AI more useful and accessible requires rethinking how we interact with it. Canvas is a new approach and the first major update to ChatGPT’s visual interface since we launched two years ago."

Picture this – Gemini streamlines image sharing to AI assistant

Techradar - Thu, 10/03/2024 - 16:00

Google has streamlined a key feature of its Gemini AI assistant on Android devices, speeding up image sharing and editing, as spotted by Android Authority. The latest Gemini update lets you send images directly from other apps to Gemini instead of the more cumbersome setup that was in place before. 

Now, if you have a picture in, for instance, Google Photos that you want Gemini to look at in conjunction with a text prompt, you can submit it directly via Android's built-in share sheet, just as you would when sending a text with an image attached. That's much easier than starting in the Gemini app, tapping on the upload image button, locating the image you want, and attaching it. And if your image is in the cloud, you would also need to download it to your device first. The old way might not take more than a minute, but if you want Gemini to explain a photo or use one to inform a new AI-generated image, that extra time and friction might put you off the idea.

It's not a total revolution for Gemini, however. Submitting images to the AI is faster, but only images. You can't use the sharing button to send text or a link to Gemini. It also doesn't encompass the Gemini overlay, which lets you use Gemini without switching out of the app you're currently using. While the image gets sent to the AI app, you still actually need to switch to the app to use Gemini's features. 

Gemini Speed

Though subtle, the update is part of Google's efforts to smooth the road for intuitive engagement with Gemini. If you often use Gemini for multimedia content, the update could save you time in the long run. Gemini will be able to analyze the image and provide insights, descriptions, or even text content based on what it "sees" more quickly than before. This makes the app more useful for users who need to switch between different types of media in their daily workflows.

Even if Gemini is only an occasional part of your mobile usage, a minute of friction can affect whether you decide to skip using it. That's anathema to Google's plans to embed Gemini throughout your mobile device experience and your life in general. It's also another way for users who already rely on Google's ecosystem, such as Photos or Drive, to thread Gemini into how they use those other services. Making Gemini more convenient is clearly a major goal for Google. As ChatGPT and other AI assistants keep upping their multimodal features, Gemini will need this kind of edge to stay ahead of, or at least keep even with, its rivals.

OpenAI Adds $4 Billion Credit Line on Top of $6.6 Billion Investment Round

NYT Technology - Thu, 10/03/2024 - 14:15
The San Francisco company is gathering the billions its executives believe they will need to continue building new A.I. technology.

Microsoft admits Windows 11 24H2 could play havoc with some online games – and it’s blocked the update for affected PCs

Techradar - Thu, 10/03/2024 - 11:06

Windows 11 24H2 is not long out and already there’s trouble brewing in the bug department, with some PC gamers finding themselves affected by frustrating issues.

So far, the 24H2 update has had a limited rollout (to Windows 11 PCs, that is – Copilot+ PCs ran 24H2 from the get-go, though not with all of its features, we should add, plus a bunch of new AI abilities are now inbound). Still, that cautious deployment hasn’t stopped some problems with 24H2 from rearing their heads, predictably enough, and a couple of these are hitting PC gamers specifically.

According to the Windows release health status dashboard, there’s an issue with Asphalt 8, and a bigger potential problem with some games running Easy Anti-Cheat (EAC). That includes some very popular games such as Fortnite and Apex Legends, for example.

As Microsoft explains: “Some devices using Easy Anti-Cheat stop responding and receive a blue screen.”

Note that not every EAC game is affected, and only those titles running an older version of the anti-cheating tool aren’t playing nice with Windows 11 24H2. Tom’s Hardware reports that versions of EAC that date back before April 2024 will get a ‘Memory Management’ Blue Screen of Death (a complete lock-up, in other words).

Also note that AMD Ryzen processors are not affected, just PCs with Intel CPUs (and not older chips either – only Alder Lake processors or newer from Team Blue).

The Asphalt 8 bug is more straightforward: the game could, from time to time, freeze up and stop responding.

As a result, compatibility holds have been put on PCs that have Asphalt 8 installed, or an out-of-date version of Easy Anti-Cheat, to prevent them from running into trouble.

If you fall into those categories, you won’t get Windows 11 24H2 – and won’t be able to see it in Windows Update – until Microsoft irons out these incompatibility flaws.

Asphalt 8 video game screen capture of two cars colliding at high speeds (Image credit: Gameloft)

Analysis: Sugar on the asphalt

There’s not much you can do about Asphalt 8, except remove the game if you’re desperate for Windows 11’s 24H2 update (though you may still have to wait for it, anyway, given the phased rollout).

In the case of Easy Anti-Cheat, you can try installing the latest patch for any given game that uses this tool – in the hope that the utility is updated within that patch. In that scenario, with a more recent Easy Anti-Cheat version, you’ll hopefully no longer suffer from the glitch.

To be fair to Microsoft, in this case, you’d hope that any developer would have bundled the latest version of Easy Anti-Cheat with their game’s most recent update, and games shouldn’t be running an EAC version from six months ago (or older). If the dev hasn’t pushed a recent EAC build with game updates, that isn’t Microsoft’s fault.

Elsewhere there are some non-gaming problems Microsoft has flagged up with Windows 11 24H2. That includes fingerprint sensors becoming erratic, apps that customize wallpapers causing chaos, and other compatibility issues with PCs that have the Intel Smart Sound Technology (SST) driver.

There are no real showstoppers in evidence right off the bat, which is obviously something of a relief, though it's still early days for the 24H2 update. As noted, only a limited number of Windows 11 users have 24H2 thus far.
