Feed aggregator
YouTube Music's web app now gives you offline downloads for travel tunes
YouTube Music’s browser app is giving Premium subscribers the ability to download songs for offline listening.
Details of this upcoming change originate from a Reddit user who posted multiple screenshots of the altered service. There's not much to go on at the moment. The images show a new blue Download button between Save to Library and the three-dot expandable menu above an album's tracklist. Clicking it causes a Downloading window to pop up in the bottom left-hand corner denoting progress.
Downloads on Web App, from r/YoutubeMusic
Once finished, you can head over to the new Downloads tab on the Library page to find the song. A line of text underneath states music will stay on your device indefinitely so long as it connects to the internet "once every 30 days." 9to5Google reports that the feature will have filters allowing users to sort content by "Playlists, Podcasts, Songs, and Albums."
Limited rollout
It's important to mention that offline downloading may only be available to a handful of people. We happened to be one of the lucky few to receive the update on our YouTube Premium subscription (YouTube hasn't made any official announcement). If you look closely at our screenshot, the Download button is actually white instead of blue.
(Image credit: Future)
Some online reports claim people are unable to download podcasts. However, that doesn't seem to be the case, because we were able to grab a couple of episodes. All you have to do is click the three-dot menu to the right of the play button and select Download. The podcast will show up in your Library soon after. This is a big deal, as Google Podcasts will be shutting down this April in the United States, forcing listeners over to YouTube Music. It looks like the platform is preparing for the inevitable flood of new people migrating over.
(Image credit: Future)
It's unknown when this feature will officially roll out, although judging by its recent appearance, a release may be happening soon. YouTube Music users seem to be looking forward to the patch: on another Reddit post discussing the update, you'll see multiple comments from people excited that offline downloading is just over the horizon.
In our opinion, you can't listen to music without a good pair of headphones. For recommendations, check out TechRadar's list of the best headphones for 2024.
You might also like
Oppo's new AI-powered AR smart glasses give us a glimpse of the next tech revolution
- Oppo has shown off its Air Glass 3 AR glasses at MWC 2024
- They're powered by its AndesGPT AI model and can answer questions
- They're just a prototype, but the tech might not be far from launching
While there's a slight weirdness to the Meta Ray-Ban Smart Glasses – they are a wearable camera, after all – the onboard AI is pretty neat, even if some of its best features are still in beta. So it's unsurprising that other companies are looking to launch their own AI-powered specs, with Oppo the latest to do so, unveiling its new Air Glass 3 at MWC 2024.
In a demo video, Oppo shows how the specs have seemingly revolutionized someone's working day. When they boot up, the Air Glass 3's 1,000-nit displays show the user a breakdown of their schedule, and while making a coffee ahead of a meeting they get a message saying that it's started early.
While in the meeting the specs pick up on a question that’s been asked, and Oppo's AndesGPT AI model (which runs on a connected smartphone) is able to provide some possible answers. Later it uses the design details that have been discussed to create an image of a possible prototype design which the wearer then brings to life.
After a good day’s work they can kick back to some of their favorite tunes that play through the glasses’ in-built speakers. All of this is crammed into a 50g design.
Now, the big caveat here is that the Air Glass 3 AR glasses are just a prototype. What's more, neither of the previous Air Glass models was released outside of China – so there's a strong chance the Air Glass 3 won't be either.
But what Oppo is showing off isn’t far from being mimicked by its rivals, and a lot of it is pretty much possible in tech that you can go out and buy today – including those Meta Ray-Ban Smart Glasses.
The future is now
The Ray-Ban Meta Smart Glasses already have an AI that can answer questions like a voice-controlled ChatGPT.
They can also scan the environment around you using the camera to get context for questions – for example, “what meal can I make with these ingredients?” – via their 'Look and Ask' feature. These tools are currently in beta, but the tech is working and the AI features will hopefully be more widely available soon.
They can also alert you to texts and calls that you’re getting and play music, just like the Oppo Air Glass 3 concept.
The Ray-Ban Meta glasses ooze style and have neat AI tools (Image credit: Meta)
Then there's the likes of the Xreal Air 2. While their AR display is a little more distracting than the screen found on the Oppo Air Glass 3, they are a consumer product that isn't mind-blowingly expensive to buy – just $399 / £399 for the base model.
If you combine these two glasses then you’re already very close to Oppo’s concept; you’d just need to clean up the design a little, and probably splash out a little more as I expect lenses with built-in displays won’t come cheap.
The only thing I can’t see happening soon is the AI creating a working prototype product design for you. It might be able to provide some inspiration for a designer to work off, but reliably creating a fully functional model seems more than a little beyond existing AI image generation tools' capabilities.
While the Oppo Air Glass 3 certainly look like a promising glimpse of the future, we'll have to see what they're actually capable of if and when they launch outside China.
Google isn't done trying to demonstrate Gemini's genius and is working on integrating it directly into Android devices
Google’s newly reworked and rebranded family of generative artificial intelligence models, Gemini, may still be very much at the beginning of its development journey, but Google is making big plans for it. It’s planning to integrate Gemini into Android software for phones, and it’s predicted that users will be able to access it offline in 2025, according to a top executive at Google’s Pixel division, Brian Rakowski.
Gemini is a series of large language models that are designed to understand and generate human-like text and more, and the most compact, efficient model of these is Gemini Nano, intended for tasks on devices. This is the model that’s currently built and adapted to run on Pixel phones and other capable Android devices. According to Rakowski, Gemini Nano’s larger sibling models that require an internet connection to run (as they only live in Google’s data centers) are the ones expected to be integrated into new Android phones starting next year.
Google has been able to do this thanks to recent breakthroughs in engineers' ability to compress these bigger, more complex models to a size feasible for use on smaller devices. One of these larger sibling models is Gemini Ultra, which is considered a key competitor to OpenAI's premium GPT-4 chatbot, and the compressed version of it will be able to run on an Android phone with no extra assistance.
This would mean users could access the processing power that Google is offering with Gemini whether they’re connected to the internet or not, potentially improving their day-to-day experience with it. It also means whatever you enter into Gemini wouldn’t necessarily have to leave your phone for Gemini to process it (if Google wills it, that is), thereby making it easier to keep your entries and information private - cloud-based AI tools have been criticized in the past for having inferior digital security compared to locally-run models. Rakowski told CNBC that what users will experience on their devices will be “instantaneous without requiring a connection or subscription.”
(Image credit: Future)
A potential play to win users' favor
MSPowerUser points out that the smartphone market has cooled down as of late, and some manufacturers might be trying to capture potential buyers' attention by offering devices capable of utilizing what modern AI has to offer. While AI is an incredibly rich and intriguing area of research and novelty, it might not be enough to convince people to swap their old phone (which may already be capable of processing something like Gemini or ChatGPT) for a new one. Right now, AI makers hoping to raise trillions of dollars in funding are likely to keep offering versions that run on existing devices so people can try them for themselves, and my guess is that satisfies most people's AI appetites for now.
Google, Microsoft, Amazon, and others are all trying to develop their own AI models and assistants to become the first to reap the rewards. Right now, it seems like AI models are extremely impressive and can be surprising, and they can help you at work (although caution should be heavily exercised if you do this), but their initial novelty is currently the biggest draw they have.
These tools will have to demonstrate continuous quality-of-life improvements to make the kind of impression they're aiming for. I do believe that making these models widely available on users' devices, and giving users the option and capability to use them offline, is a step that could pay off for Google in the long run - and I would like to see other tech giants follow its path.
Microsoft Paint could get Midjourney-like powers soon thanks to a surprise AI upgrade
Microsoft has been paying quite a lot of attention to its once-forgotten Paint app recently, which had gone years without any meaningful updates or new features. Now, it seems like the app is getting yet another upgrade - a Midjourney-like ability to generate AI art in real-time.
So, what does that mean? If you’re unfamiliar with the popular image generator Midjourney, it’s an AI-powered tool that allows you to type in a text prompt to generate an image in a style of your choosing - be it paintwork, photorealism, or even pixel art.
The rumor comes from the credible Windows leaker PhantomOfEarth on X (formerly Twitter), who made a post stating that "The upcoming AI feature for paint may be something known as 'LiveCanvas'". While the leaker isn't entirely sure what exactly the feature will be, it does sound very similar to Leonardo.Ai's Real-Time Canvas.
The upcoming AI feature for Paint might be something known as "LiveCanvas". Not sure what it will do. https://t.co/YwQcC3EPnY (February 26, 2024)
Real-Time Canvas allows you to draw in one window and watch in a second window as generative AI brings your art to life - like a sort of artistic auto-fill. This would fit perfectly in Microsoft Paint - users would be able to sketch out their ideas or create art and use the generative AI technology to add to it. Microsoft already has some basic (and, if I'm being honest, kind of average) AI-powered image generation within Paint, so it would make sense to add a more interactive feature like this rather than simply repeat something it already has.
We're quite excited to see how this tool could help budding artists looking to experiment with generative AI, since it'll be available for free in Windows. With the ability to draw in one window and edit in another, you can create the barebones of your artwork and add finer details with the AI. It's approaching a more 'moral' application of generative AI - one that doesn't simply cut out the human creator entirely.
We don't know much about expected release dates, or even have a rough idea of what the feature would look like outside of PhantomOfEarth's post - and, as always, we should take leaks like this with a pinch of salt. The feature will likely make its way to the Windows Insider Program first, which allows Windows enthusiasts and developers to sign up and get an early look at upcoming releases and new features that may be on the way. So, we'll have to wait and see if it comes to fruition - and get doodling.
Meta could launch an LG OLED VR headset in 2025
- Meta Quest Pro 2 rumored for 2025 release
- LG expected to make OLED displays for it
- OLED could provide the visual boost the next Quest Pro needs
The Apple Vision Pro might be the talk of the town in the high-end VR space right now, but it won't just have to contend with rivals like the Samsung XR headset expected to launch later this year – new reports claim the Meta Quest Pro 2 will launch in early 2025 to compete with it too.
The original Meta Quest Pro was something of a disappointment. At the time it seemed like a decent option for people looking for a high-end standalone VR headset – especially compared to rivals like the HTC Vive XR Elite. But since the launch of the Meta Quest 3 and Vision Pro – the former of which is not only cheaper but actually has some better specs in the mixed-reality department – it’s fallen by the wayside.
Meta is clearly hoping to make its next Quest Pro device a standout VR gadget. A Korea Economic Daily report (translated from Korean) cites unnamed industry sources who said Meta CEO Mark Zuckerberg is set to meet with the CEO of LG Electronics to discuss a partnership for its next Pro devices.
LG OLED coming to Quest?
It's been rumored for some time that LG is looking to make an XR device of some kind – either its own or one in partnership with another brand, like the collaborative Google and Samsung XR headset – and back in February 2023 we first heard whispers that Meta wanted LG to create OLED displays for its headsets.
While we hope these high-end screens will make their way to the more budget-friendly Quest line it’s more reasonable to assume that a pricier Quest Pro line would be upgraded to LG OLEDs first.
LG makes fantastic OLED TVs like the LG C3 (Image credit: LG)
Yes, we know the original Oculus Quest got there first, but since then Meta has relied on LCD panels with better brightness and resolution, because the original OLED Quest couldn't benefit from the display tech's full advantages. Its pixels took too long to turn on and off, so you could never experience true blacks – despite true blacks being the main reason to use an OLED.
LG's next-gen panels should hopefully be able to offer top-of-the-line visuals – one of the four things we want to see from the Meta Quest Pro 2 – but we'll have to wait and see.
Thankfully, the recent Korea Economic Daily report said we might not be waiting too long. The Meta Quest Pro 2 is apparently being prepared for an early 2025 launch, and while this is a slight departure from Meta’s usual October release strategy it makes some sense.
However, as with all rumors, we must remember to take these reports with a pinch of salt. Until Meta or LG make an official announcement there's no guarantee they’re working together on the next Quest Pro or any kind of headset – nor a guarantee of when it’ll launch and what specs it might have.
As soon as we do hear anything more concrete, or we spy any interesting leaks and rumors, we’ll be sure to keep you informed.
Google's Gemini will be right back after these hallucinations: image generator to make a return after historical blunders
Google is gearing up to relaunch its image creation tool that’s part of the newly-rebranded generative artificial intelligence (AI) bot, Gemini, in the next few weeks. The generative AI image creation tool is in theory capable of generating almost anything you can dream up and put into words as a prompt, but “almost” is the key word here.
Google pumped the brakes on Gemini's image generation after the model was observed creating historical depictions and other questionable images considered inaccurate or offensive. However, it looks like Gemini could return to image generation soon: Google DeepMind CEO Demis Hassabis announced that the feature will be rebooted in the coming weeks, after time is taken to address these issues.
Image generation came to Gemini earlier in February, and users were keen to test its abilities. Some people attempted to generate images depicting certain historical periods, and got results that appeared to deviate greatly from accepted historical fact. Some of these users took to social media to share their results and direct criticism at Google.
The images caught many people’s attention and sparked many conversations, and Google has recognized the images as a symptom of a problem within Gemini. The tech giant then chose to take the feature offline and fix whatever was causing the model to dream up such strange and controversial pictures.
Speaking at a panel at the Mobile World Congress (MWC) event in Barcelona, Hassabis confirmed that Gemini was not working as intended, and that it would take a few weeks to amend it and bring it back online.
(Image credit: Shutterstock)
If at first your generative AI bot doesn't succeed...
Google's first attempt at a generative AI chatbot was Bard, which saw a lukewarm reception and didn't win users over from the more popular ChatGPT in the way Google had hoped. The company then changed course and debuted its revamped and rebranded family of generative models, Gemini. Like ChatGPT, Google now offers a premium tier for Gemini, with advanced features for a subscription.
The examples of Gemini's misadventures have also reignited discussions about AI ethics generally, and Google’s AI ethics specifically, and around issues like the accuracy of generated AI output and AI hallucinations. Companies like Microsoft and Google are pushing ahead to win the AI assistant arms race, but while racing ahead, they’re in danger of releasing products with flaws that could undermine their hard work.
AI-generated content is becoming increasingly popular and, especially given their size and resources, these companies can (and really, should) be held to a high standard of accuracy. High-profile fails like the one Gemini experienced aren't just embarrassing for Google – they could damage the product in the eyes of consumers. There's a reason Google rebranded Bard after its much-mocked debut.
There’s no doubt that AI is incredibly exciting, but Google and its peers should be mindful that rushing out half-baked products just to get ahead of the competition could spectacularly backfire.
Nvidia GeForce Now's free tier will soon show you up to two minutes of ads while you wait to play - proving nowhere is safe from commercials
Nvidia’s free tier of GeForce Now, its cloud gaming service, will soon run up to two minutes of ads before you play, according to Nvidia spokesperson Stephanie Ngo.
GeForce Now is a service offered by Nvidia that allows you to connect to digital PC game stores and stream games you already own across a multitude of different devices - including Macs, Windows laptops, iPhones and iPads, Android phones, and more.
It offers three membership tiers, with the free membership offering a queue system with an hour-long gaming session length that will then bring you back to the start of the queue once your time is up. It’s in this waiting time that the ads will be shown, so while it could be a little annoying, your actual gameplay time won’t be interrupted.
The ads will help pay for the free tier and keep it free, with Ngo adding that the change is also expected to reduce wait times for free users in the long run - though it's not entirely clear at this point how that will work. Perhaps Nvidia expects the arrival of ads to push users to pay for the premium tiers, or simply drive some users away from the platform entirely - either would, in theory, help reduce queues for the free tier. GeForce Now users should expect an email on February 27 letting them know about the changes.
Major inconvenience or just … meh?
I'm not a user of Nvidia's game-streaming service myself, but I reached out to GeForce Now members within the TechRadar team and learned that wait times currently fluctuate between five and fifteen minutes - and scrolling through the GeForce Now subreddit shows that wait times can stretch even longer.
Most people who use the free tier of GeForce Now go in aware that they will be spending a not-insignificant amount of time in a queue, so in reality, two minutes of ads when you know you’re likely going to be waiting for longer anyway isn’t much of an inconvenience - it might even help kill some time. Many users are likely to simply do something else while queuing for their free hour timeslot anyway, so why shouldn’t Nvidia get some extra ad revenue from it?
That being said, it is a gloomy example of the inescapable modern torture of being advertised at non-stop. Almost every facet of the internet is packed with ads at this point (this article included - sorry about that, but we’ve got to eat!) and while a lot of platforms offer ad-free paid tiers, it seems like that isn’t enough anymore.
Amazon Prime has received a lot of (well-deserved) flak for slapping ads onto paid memberships, and Netflix's ad-supported tier wasn't very well received either. While Nvidia's latest move seems fairly innocuous right now, who's to say the 'up to two minutes' won't extend further in the future, until you're sat watching a full ten minutes of commercials to play an hour-long session of your current favorite game? Do you just give in and buy a paid membership? I just might, personally - but I wouldn't be happy about it.
Via The Verge
You might be waiting a while yet for Wi-Fi 7 support in Windows 11 – but Microsoft is on the case
Windows 11 is now adding support for Wi-Fi 7 – welcome news for those who want to use the much-improved wireless standard – but it's only in testing currently.
That’s despite the fact that there are already Wi-Fi 7 routers out there, and the standard has been officially finalized by the Wi-Fi Alliance (the Wi-Fi Certified 7 program was announced at the start of January 2024, in fact).
As you might guess, it’ll be some time before official Wi-Fi 7 support comes through to the release version of Windows 11, as it’ll need to progress through testing channels first.
Right now, the support debuted in the Canary (earliest) test channel with build 26063, a preview release that flew under our radar somewhat, but an important one in this respect. It has also been added for Dev channel testers, Microsoft informed us in the usual blog post on build 26063 (as flagged up by XDA Developers).
(Image credit: Microsoft)
As the software giant also pointed out, Wi-Fi 7 (aka 802.11be) is in the order of 4x faster than Wi-Fi 6, and more like 6x quicker than Wi-Fi 5.
If you want to know more about how this new wireless standard takes some big strides forward – and it isn’t just about raw speed, though that is, of course, very important – check out our guide to the ins-and-outs of Wi-Fi 7.
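Those headline multipliers line up roughly with the standards' widely cited theoretical maximum link rates. As a back-of-the-envelope check (these figures are theoretical maxima, not real-world throughput):

```python
# Approximate theoretical maximum link rates in Gbps.
# Real-world speeds are far lower, but the ratios roughly
# match the "4x over Wi-Fi 6, ~6x over Wi-Fi 5" claim.
wifi5 = 6.9    # 802.11ac (Wi-Fi 5)
wifi6 = 9.6    # 802.11ax (Wi-Fi 6)
wifi7 = 46.0   # 802.11be (Wi-Fi 7)

print(f"Wi-Fi 7 vs Wi-Fi 6: {wifi7 / wifi6:.1f}x")  # 4.8x
print(f"Wi-Fi 7 vs Wi-Fi 5: {wifi7 / wifi5:.1f}x")  # 6.7x
```

In practice, achievable speeds depend on channel width, band, and client hardware, so treat these as upper bounds.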
Analysis: Wireless party
In fairness to Microsoft, while it appears to be pretty late to the wireless party, and Wi-Fi 7 may have officially kicked off (at least in some countries, the US, Australia, and UK included), it's still early days for the standard.
The standard may be effectively set in stone now, but that doesn't mean there won't be tweaks going forward. There will inevitably be firmware updates for existing Wi-Fi 7 routers to fix or modify things as needed, although all the big cogs in terms of features are now in place.
Windows 11 is one of the final pieces of the puzzle for Wi-Fi 7 support, then, for laptops that sport Wi-Fi 7 hardware. And of course, as mentioned, you'll need a Wi-Fi 7 router to benefit from the faster wireless speeds. (Those devices are expensive right now, too, it should be noted – though that's generally true of any cutting-edge tech.)
With Wi-Fi 7 we're getting wireless performance close to wired (Ethernet) speeds, making wireless online gaming a genuine reality – and certainly much better than other fudges for PCs that aren't plugged directly into the router (such as powerline adapters, which can be notoriously flaky in some scenarios).
What about Windows 10 support for Wi-Fi 7? We’re still not sure on that score, although the last we heard was that it is inbound – but there’s no sign of that yet.
Your Fitbit app can now show stats from other wearables and services
As part of a recent Android feature drop, the Fitbit app will now show data from third-party sources to provide users with a “more complete picture of [their] health”.
Google stated in a recent announcement that it's effectively expanding Health Connect's reach, allowing it to grab stats "from your favorite wearables and apps". These sources include AllTrails and the Oura Ring, plus nutritional data from MyFitnessPal. Over in the Today tab of the Fitbit app, you'll find a new section called Records where all the Health Connect info is listed out in detail.
Looking at the demo video, there are entries for calories burned in a day, distance traveled, floors climbed, and body measurements among other things. Tapping an entry will take you to a stat readout. For example, going to “Steps” will show you how many steps you’ve taken in a day, week, month, and year with a daily average number on the side.
Data coming from a third-party source will have the service's logo right next to it. The aforementioned Steps section has an Oura Ring symbol next to it, while Elevation Gained has the AllTrails icon alongside it. It's important to mention that there may be discrepancies in the information shown: 9to5Google explains in its coverage that "data in Health Connect may not match the metrics you see on your Fitbit devices."
Android feature drop
The update is currently rolling out alongside eight other features. To briefly go over them: Wear OS smartwatches are receiving public transit directions via Google Maps, plus support for Google Wallet passes. Google Messages will soon host the company's Gemini chatbot so you can have direct conversations with the AI. And the Android home screen will gain an output switcher for Spotify, giving subscribers the ability to change where their media is playing – you'll be able to seamlessly hop between a smartphone, a pair of headphones, or a smart TV.
We reached out to Google to ask whether AllTrails, the Oura Ring, and MyFitnessPal are the only third-party sources Health Connect has access to, or if there are more. We'll update this story when we hear back.
In the meantime, check out TechRadar's list of the best Fitbit trackers for 2024.
I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again
Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.
The six-month-old free platform, which you can find right now under youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.
Yes, he called them "apps", and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach.
He likened GPTs to "bookmarking a prompt" within the GPT sphere. MindStudio, on the other hand, is generative model-agnostic. The system lets you use multiple models within one app.
If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers.
Watch and learn
Choose your template. (Image credit: MindStudio)
To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.
Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow, then click on them to customize, add details, and choose which AI model you want to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). You don't have to use a particular model for each task in your app, but it might be that, for example, GPT-3.5 suits fast chatbots while PaLM is better for math; MindStudio cannot, at least yet, recommend which model to use and when.
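To make that per-block model choice concrete, here's a minimal sketch of how a model-agnostic workflow could be wired up. This is purely illustrative - the function and step names are our own invention, not MindStudio's actual API, and the model backends are stubbed out:

```python
# Hypothetical sketch of a model-agnostic workflow: each step names
# the model it wants, and a dispatcher routes the prompt accordingly.
# The backends are stubs standing in for real model API calls.

def fast_chat_backend(prompt):
    return f"[gpt-3.5-turbo] reply to: {prompt}"

def math_backend(prompt):
    return f"[palm-2] worked answer for: {prompt}"

BACKENDS = {
    "gpt-3.5-turbo": fast_chat_backend,
    "palm-2": math_backend,
}

def run_workflow(steps, user_input):
    """Run each step in order, feeding each step's output into the next."""
    text = user_input
    for step in steps:
        backend = BACKENDS[step["model"]]
        text = backend(step["prompt_template"].format(input=text))
    return text

steps = [
    {"model": "gpt-3.5-turbo", "prompt_template": "Draft a reply: {input}"},
    {"model": "palm-2", "prompt_template": "Check the arithmetic in: {input}"},
]
result = run_workflow(steps, "What is 12 * 7?")
```

The point of the design is that swapping a step's model is a one-line change to the step definition, which is roughly what MindStudio's block editor exposes visually.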
Image 1 of 2: Connect the boxes (Image credit: MindStudio). Image 2 of 2: And then edit their contents (Image credit: MindStudio)
The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files for a single app). MindStudio uses the information to inform the AI, but won't cut and paste information from any of those pages into your app responses.
Most of MindStudio's clients are in business, and it does hide some more powerful features (such as embedding on third-party websites) and models (like GPT-4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).
Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.
One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.
Options include setting your model's 'temperature' (Image credit: MindStudio)
There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'temperature' of your model to control the randomness of its responses; the higher the temperature, the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
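Under the hood, temperature typically works by rescaling the model's output scores before a token is sampled: the logits are divided by the temperature, so low values sharpen the distribution toward the most likely token, while high values flatten it. A quick illustrative sketch (standard softmax math, not MindStudio code):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, scaled by temperature.

    Low temperature -> sharper distribution (more deterministic);
    high temperature -> flatter distribution (more random/creative).
    """
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - peak) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                       # raw scores for 3 tokens
cool = softmax_with_temperature(logits, 0.2)   # near-argmax: top token dominates
hot = softmax_with_temperature(logits, 2.0)    # much flatter: more variety
```

With temperature 0.2, the top token's probability approaches 1; at 2.0 the three tokens are much closer together, which is why high-temperature responses feel more varied.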
The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for the more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes everything in Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.
Image 1 of 2: Look, ma, I made an AI app. (Image credit: MindStudio). Image 2 of 2: It's smarter than I am. (Image credit: MindStudio)
I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.
Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.
I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.
You might also like- What is AI? Everything you need to know about Artificial Intelligence ...
- Best AI tools
- You can now try out Samsung's Galaxy AI on any smartphone ...
- Tiny AI chip designer could become Arm's sibling - Softbank ...
- My jaw hit the floor when I watched an AI master one of the world's ...
- Should you upgrade to Google One AI Premium? Its AI features and ...
Google explains how Gemini’s AI image generation went wrong, and how it’ll fix it
A few weeks ago Google launched a new image generation tool for Gemini (the suite of AI tools formerly known as Bard and Duet) which allowed users to generate all sorts of images from simple text prompts. Unfortunately, Google’s AI tool repeatedly missed the mark and generated inaccurate and even offensive images that led a lot of us to wonder - how did the bot get things so wrong? Well, the company has finally released a statement explaining what went wrong, and how it plans to fix Gemini.
The official blog post addressing the issue states that when designing the text-to-image feature, the team behind Gemini wanted to “ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people.” The post further explains that users probably don’t want to keep seeing people of just one ethnicity or other prominent characteristic.
So, to offer a pretty basic explanation for what’s been going on: Gemini has been throwing up images of people of color when prompted to generate images of white historical figures, giving users ‘diverse Nazis’, or simply ignoring the part of your prompt where you’ve specified exactly what you’re looking for. While Gemini’s image capabilities are currently on hold, when you could access the feature you’d specify exactly who you’re trying to generate - Google uses the example “a white veterinarian with a dog” - and Gemini would seemingly ignore the first half of that prompt and generate veterinarians of all races except the one you asked for.
Google went on to explain that this was the outcome of two crucial failings. Firstly, Gemini’s tuning to show a range of people failed to account for cases that should clearly not show a range. Secondly, in trying to make a more conscious, less biased generative AI, Google admits the “model became way more cautious than we intended and refused to answer certain prompts entirely - wrongly interpreting some very anodyne prompts as sensitive.”
So, what's next?At the time of writing, the ability to generate images of people on Gemini has been paused while the Gemini team works to fix the inaccuracies and carry out further testing. The blog post notes that AI ‘hallucinations’ are nothing new when it comes to complex deep learning models - even Bard and ChatGPT had some questionable tantrums as the creators of those bots worked out the kinks.
The post ends with a promise from Google to keep working on Gemini’s AI-powered people generation until everything is sorted, with the note that while the team can’t promise it won’t ever generate “embarrassing, inaccurate or offensive results”, action is being taken to make sure it happens as little as possible.
All in all, this whole episode puts into perspective that AI is only as smart as we make it. Our editor-in-chief Lance Ulanoff succinctly noted that “When an AI doesn't know history, you can't blame the AI.” With how quickly artificial intelligence has swooped in and crammed itself into various facets of our daily lives - whether we want it or not - it’s easy to forget that the public proliferation of AI started just 18 months ago. As impressive as the tools currently available to us are, we’re ultimately still in the early days of artificial intelligence.
We can’t rain on Google Gemini’s parade just because the mistakes were more visually striking than, say, ChatGPT’s recent gibberish-filled meltdown. Google’s temporary pause and reworking will ultimately lead to a better product, and sooner or later we’ll see the tool as it was meant to be.
You might also like...- What is OpenAI's Sora? The text-to-video tool explained and when you might be able to use it
- Are you a Reddit user? Google's about to feed all your posts to a hungry AI, and there’s nothing you can do about it
- Gemma, Google's new open-source AI model, could make your next chatbot safer and more responsible
The Meta Quest 3’s popularity is proof a cheap Vision Pro can’t come soon enough
The Oculus Quest 2 has been the most popular VR headset in the world for the past couple of years – dominating sales and usage charts with its blend of solid performance, amazing software library and, most importantly, affordability.
Now its successor – the Meta Quest 3 – is following in its footsteps.
Just four months after launch it’s the third most popular headset used on Steam (and will likely be the second most popular in the next Steam Hardware Survey). What’s more, while we estimate the Quest 3’s not selling quite as well as the Quest 2 was at the four-month mark, it still looks to be a hit (plus, lower sales figures are expected considering it’s almost double the launch price of the Quest 2).
Despite its higher cost, $499.99 / £479.99 / AU$799.99 is still relatively affordable in the VR space, and its early success continues the ongoing trend in VR that accessibility is the make or break factor in a VR gadget’s popularity.
The cheap Oculus Quest 2 made VR mainstream (Image credit: Facebook)There’s something to be said for high-end hardware such as the Apple Vision Pro bringing the wow factor back to VR (how can you not be impressed by its crisp OLED displays and inventive eye-and-hand-tracking system?), but I’ll admit I was worried that its launch – and announcement of other high-end, and high-priced, headsets – would see VR return to its early, less affordable days.
Now I’m more confident than ever that we’ll see Apple’s rumored cheaper Vision Pro follow-up and other budget-friendly hardware sooner rather than later.
Rising up the chartsAccording to the Steam Hardware Survey, which tracks the popularity of hardware for participating Steam users, 14.05% of all Steam VR players used a Quest 3 last month. That’s a rise of 4.78 percentage points over the previous month’s results and means it’s within spitting distance of the number two spot, which is currently held by the Valve Index – 15% of users prefer it over other VR headsets, even three-and-a-half years after its launch.
It has a ways to go before it reaches the top spot, however, with the Oculus Quest 2 preferred by 40.64% of Steam VR players. The Quest 3’s predecessor has held this top spot for a couple of years now, and it’s unlikely to lose to the Quest 3 or another headset for a while. Even though the Quest 3 is doing well for itself, it’s not selling quite as fast as the Quest 2.
(Image credit: Future)Using Steam Hardware Survey data for January 2024 (four months after its launch) and data from January 2021 (four months after the Quest 2’s launch) – as well as average Steam player counts for these months based on SteamDB data – it appears that the Quest 3 has sold about 87% as many units as the Quest 2 did at the same point in its life.
Considering the Quest 3 is priced at $499.99 / £479.99 / AU$799.99, a fair bit more than the $299 / £299 / AU$479 the Quest 2 cost at launch, to even come close to matching the sales speed of its predecessor is impressive. And the Quest 2 did sell very well out of the gate.
We don’t have exact Quest 2 sales data from its early days – Meta only highlights when the device passes certain major milestones – but we do know that after five months, its total sales were higher than the total sales of all other Oculus VR headsets combined, some of which had been out for over five years. Meta’s gone on to sell roughly 20 million Quest 2s, according to a March 2023 leak – about as fast as the Xbox Series X, which launched around the same time, is believed to have sold.
This 87% of Quest 2 sales figure can be taken with a pinch of salt – you can find out how I got to this number at the bottom of this piece; it required pulling data from a few sources and making some reasonable assumptions – but that number and the Quest 2 and 3’s popularity on Steam shows that affordability is still the most powerful driving force in the VR space. So, I hope other headset makers are paying attention.
The Apple Vision Pro had me a little concerned (Image credit: Future) A scary expensive VR futureThe Apple Vision Pro is far from unpopular. Reports suggest that between 160,000 and 200,000 preorders were placed for the headset ahead of its release on February 2, 2024 (some of those orders have been put on eBay with ridiculously high markups and others have been returned by some disappointed Vision Pro customers).
The early popularity makes sense. Whatever Mark Zuckerberg says about the superiority of the Quest 3, the Apple Vision Pro is the best of the best VR headsets from a technical perspective. There’s some debate on the comfort and immersive software side of things, but eye-tracking, ridiculously crisp OLED displays, and a beautiful design do make up for that.
Unfortunately, thanks to these high-end specs and some ridiculous design choices – like the outer OLED display for EyeSight (which lets an onlooker see the wearer’s eyes while they're wearing the device) – the headset is pretty pricey coming in at $3,499 for the 256GB model (it’s not yet available outside the US).
Seeing this, and the instant renewed attention Apple has drawn to the VR space – with high-end rivals like the Samsung XR headset now on the way – I’ll admit I was a little concerned we might see a return to VR’s early, less accessible days. In those days, you’d spend around $1,000 / £1,000 / AU$1,500 on a headset and the same again (or more) on a VR-ready PC.
The Valve Index is impressive, but it's damn expensive (Image credit: Future)Apple has a way of driving the tech conversation and development in the direction it chooses. Be it turning more niche tech into a mainstream affair like it did for smartwatches with the Apple Watch or renaming well-established terms by sheer force of will (VR computing and 3D video are now exclusively called spatial computing and spatial video after Apple started using those phrases).
While, yes, there’s something to be said for the wow factor of top-of-the-line tech, I hoped we wouldn’t be swamped with the stuff while more budget-friendly options get forgotten about because this is the way Apple has moved the industry with its Vision Pro.
The numbers in the Steam Hardware Survey have assuaged those fears. They show that meaningful budget hardware – like the Quest 2 and 3, which, despite being newer, have less impressive displays and specs than many older, pricier models – is still too popular to be going anywhere anytime soon.
If anything, I’m more confident than ever that Apple, Samsung, and the like need to get their own affordable VR headsets out the door soon. Especially the non-Apple companies that can’t rely on a legion of rabid fans ready to eat up everything they release.
If they don’t launch budget-friendly – but still worthwhile – VR headsets, then Meta could once again be left as the only real contender in this sector of VR. Sure, I like the Meta headsets I’ve used, but nothing helps spur on better tech and/or prices than proper competition. And this is something Meta is proving it doesn’t really have right now.
(Image credit: Meta) Where did my data come from?It’s important to know where data has come from and what assumptions have been made by people handling that data, but, equally, not everyone finds this interesting, and it can get quite long and distracting. So, I’ve put this section at the bottom for those interested in seeing my work on the 87% sales figure comparison between the Oculus Quest 2 and Meta Quest 3 four months after their respective launches.
As I mentioned above, most of the data for this piece has been gathered from the Steam Hardware Survey. I had to rely on the Internet Archive’s Wayback Machine to see some historical Steam Hardware Survey data because the results page only shows the most recent month’s figures.
When looking at the relative popularity of headsets in any given month, I could just read off the figures in the survey results. However, to compare the Quest 2 and Quest 3’s four-month sales to each other, I had to use player counts from SteamDB and make a few assumptions.
The first assumption is that the Steam Hardware Survey’s data is consistent for all users. Because Steam users have to opt-in to the survey, when it says that 2.24% of Steam users used a VR headset in January 2024, what it really means is that 2.24% of Steam Hardware Survey participants used a VR headset that month. There’s no reason to believe the survey’s sample isn’t representative of the whole of Steam’s user base, and this is an assumption that’s generally taken for granted when looking at Hardware Survey data. But if I’m going to break down where my numbers come from, I might as well do it thoroughly.
Secondly, I had to assume that Steam users only used one VR headset each month and that they didn’t share their headsets with other Steam users. These assumptions allow me to say that if the Meta Quest 3 was used for 14.05% of Steam VR sessions, then 14.05% of Steam users with a VR headset (which is 2.24% of Steam’s total users) owned a Quest 3 in January 2024. Not making these assumptions leads to an undercount and overcount, respectively, so they kinda cancel each other out. Also, without this assumption, I couldn’t continue beyond this step as I’d lack the data I need.
Who needs more than one VR headset anyway? (Image credit: Shutterstock / agencies)Valve doesn’t publish Steam’s total user numbers, and the last time it published monthly active user data was in 2021 – and that was an average for the whole year rather than for each month. It also doesn’t say how many people take part in the Hardware Survey. All it does publish is how many people are using Steam right now. This information is gathered by SteamDB so that I and other people can see Steam’s Daily Active User (DAU) average for January 2021 and January 2024 (as well as other months, but I only care about these two).
My penultimate assumption was that the proportion of DAUs compared to the total number of Steam users in January 2021 is the same as the proportion of DAUs compared to the total number of Steam users in January 2024. The exact proportion of DAUs to the total doesn’t matter (it could be 1% or 100%). By assuming it stays consistent between these two months, I can take the DAU figures I have – 25,295,361 in January 2024 and 24,674,583 in January 2021 – multiply them by the percentage of Steam users with a Quest 3 and Quest 2 during these months, respectively – 0.31% and 0.37% – then finally compare the numbers to one another.
The result is that the number of Steam users with a Quest 3 in January 2024 is 87.05% of the number of Steam users with a Quest 2 in January 2021.
My final assumption was that Quest headset owners haven’t become more or less likely to connect their devices to a PC to play Steam VR. So if the Quest 3 was 87% as popular on Steam four months after launch as the Quest 2 was at the same point, then the Quest 3 has sold 87% as well as the Quest 2 did over its first four months on sale.
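For anyone who wants to check the arithmetic, the whole comparison fits in a few lines of Python. The 0.31% and 0.37% shares quoted above are rounded, which is why this lands a touch under the 87.05% I got from the unrounded survey figures:

```python
# SteamDB average daily active users for the two months being compared
dau_jan_2024 = 25_295_361
dau_jan_2021 = 24_674_583

# Share of all Steam users with each headset (rounded, from the survey)
quest3_share = 0.0031  # Quest 3, January 2024
quest2_share = 0.0037  # Quest 2, January 2021

quest3_users = dau_jan_2024 * quest3_share
quest2_users = dau_jan_2021 * quest2_share

ratio = quest3_users / quest2_users
print(f"Quest 3 vs Quest 2 at four months: {ratio:.0%}")  # roughly 86% with rounded inputs
```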
You might also like
Windows 11 could soon deliver updates that don’t need a reboot
Windows 11 could soon run updates without rebooting, if the rumor mill is right – and there’s already evidence this is the path Microsoft is taking in a preview build.
This comes from a regular source of Microsoft-related leaks, namely Zac Bowden of Windows Central, who first of all spotted that Windows 11 preview build 26058 (in the Canary and Dev channels) was recently updated with an interesting change.
Microsoft is pushing out updates to testers that do nothing and are merely “designed to test our servicing pipeline for Windows 11, version 24H2.” The key part is we’re informed that those who have VBS (Virtualization Based Security) turned on “may not experience a restart upon installing the update.”
Running an update without requiring a reboot is known as “hot patching” and this method of delivery – which is obviously far more convenient for the user – could be realized in the next major update for Windows 11 later this year (24H2), Bowden asserts.
The leaker has tapped sources for further details, and observes that we’re talking about hot patching for the monthly cumulative updates for Windows 11 here. So the bigger upgrades (the likes of 24H2) wouldn’t be hot-patched in, as clearly there’s too much work going on under the hood for that to happen.
Indeed, not every cumulative update would be applied without a reboot, Bowden further explains. This is because hot patching uses a baseline update, one that can be patched on top of, but that baseline model needs to be refreshed every few months.
Add seasoning with all this info, naturally, but it looks like Microsoft is up to something here based on the testing going on, which specifically mentions 24H2, as well.
Analysis: How would this work exactly?What does this mean for the future of Windows 11? Well, possibly nothing. After all, this is mostly chatter from the grapevine, and what’s apparently happening in early testing could simply be abandoned if it doesn’t work out.
However, hot patching is something that is already employed with Windows Server, and the Xbox console as well, so it makes sense that Microsoft would want to use the tech to benefit Windows 11 users. It’s certainly a very convenient touch, though as noted, not every cumulative update would be hot-patched.
Bowden believes the likely scenario would be quarterly cumulative updates that need a reboot, followed by hot patches in between. In other words, we’d get a reboot-laden update in January, say, followed by two hot-patched cumulative updates in February and March that could be completed quickly with no reboot needed. Then, April’s cumulative update would need a reboot, but May and June wouldn’t, and so on.
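That cadence is simple enough to sketch in code – purely illustrative, since Microsoft hasn't published any schedule:

```python
def update_needs_reboot(month: int) -> bool:
    """Hypothetical cadence: a reboot-requiring baseline update every third
    month (January, April, July, October), hot patches in between."""
    return (month - 1) % 3 == 0

# Map the first half of the year to the kind of update it would get
schedule = {month: ("reboot" if update_needs_reboot(month) else "hot patch")
            for month in range(1, 7)}
# January and April take a reboot; February, March, May, and June don't
```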
As mentioned, annual updates certainly wouldn’t be hot-patched, and neither would out-of-band security fixes for example (as the reboot-less updates rely on that baseline patch, and such a fix wouldn’t be based on that, of course).
This would be a pretty cool feature for Windows 11 users, because dropping the need to reboot – or to be forced to restart, in some cases – is obviously a major benefit. Is it enough to tempt upgrades from Windows 10? Well, maybe not, but it is another boon to add to the pile for those holding out on Microsoft’s older operating system. (Assuming they can upgrade to Windows 11 at all, of course, which is a stumbling block for some due to PC requirements like TPM.)
You might also like...
What is OpenAI's Sora? The text-to-video tool explained and when you might be able to use it
ChatGPT maker OpenAI has now unveiled Sora, its artificial intelligence engine for converting text prompts into video. Think Dall-E (also developed by OpenAI), but for movies rather than static images.
It's still very early days for Sora, but the AI model is already generating a lot of buzz on social media, with multiple clips doing the rounds – clips that look as if they've been put together by a team of actors and filmmakers.
Here we'll explain everything you need to know about OpenAI Sora: what it's capable of, how it works, and when you might be able to use it yourself. The era of AI text-prompt filmmaking has now arrived.
OpenAI Sora release date and priceIn February 2024, OpenAI Sora was made available to "red teamers" – that's people whose job it is to test the security and stability of a product. OpenAI has also now invited a select number of visual artists, designers, and movie makers to test out the video generation capabilities and provide feedback.
"We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon," says OpenAI.
In other words, the rest of us can't use it yet. For the time being there's no indication as to when Sora might become available to the wider public, or how much we'll have to pay to access it.
(Image credit: OpenAI)We can make some rough guesses about timescale based on what happened with ChatGPT. Before that AI chatbot was released to the public in November 2022, it was preceded by InstructGPT earlier that year. Also, OpenAI's DevDay typically takes place annually in November.
It's certainly possible, then, that Sora could follow a similar pattern and launch to the public at a similar time in 2024. But this is currently just speculation and we'll update this page as soon as we get any clearer indication about a Sora release date.
As for price, we similarly don't have any hints of how much Sora might cost. As a guide, ChatGPT Plus – which offers access to the newest Large Language Models (LLMs) and Dall-E – currently costs $20 (about £16 / AU$30) per month.
But Sora also demands significantly more compute power than, for example, generating a single image with Dall-E, and the process also takes longer. So it still isn't clear exactly how well Sora, which is still effectively a research project, might convert into an affordable consumer product.
What is OpenAI Sora?You may well be familiar with generative AI models – such as Google Gemini for text and Dall-E for images – which can produce new content based on vast amounts of training data. If you ask ChatGPT to write you a poem, for example, what you get back will be based on lots and lots of poems that the AI has already absorbed and analyzed.
OpenAI Sora is a similar idea, but for video clips. You give it a text prompt, like "woman walking down a city street at night" or "car driving through a forest" and you get back a video. As with AI image models, you can get very specific when it comes to saying what should be included in the clip and the style of the footage you want to see.
To get a better idea of how this works, check out some of the example videos posted by OpenAI CEO Sam Altman – not long after Sora was unveiled to the world, Altman responded to prompts put forward on social media, returning videos based on text like "a wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand".
How does OpenAI Sora work?On a simplified level, the technology behind Sora is the same technology that lets you search for pictures of a dog or a cat on the web. Show an AI enough photos of a dog or cat, and it'll be able to spot the same patterns in new images; in the same way, if you train an AI on a million videos of a sunset or a waterfall, it'll be able to generate its own.
Of course there's a lot of complexity underneath that, and OpenAI has provided a deep dive into how its AI model works. It's trained on "internet-scale data" to know what realistic videos look like, first analyzing the clips to know what it's looking at, then learning how to produce its own versions when asked.
So, ask Sora to produce a clip of a fish tank, and it'll come back with an approximation based on all the fish tank videos it's seen. It makes use of what are known as visual patches, smaller building blocks that help the AI to understand what should go where and how different elements of a video should interact and progress, frame by frame.
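If you're wondering what chopping a video into patches actually looks like, here's a rough NumPy sketch. The frame counts and patch sizes are invented for illustration – OpenAI hasn't published Sora's real dimensions:

```python
import numpy as np

# Toy video: 8 frames of 32x32 RGB pixels (all zeros, just for the shapes)
video = np.zeros((8, 32, 32, 3))
T, H, W, C = video.shape

# Chop it into non-overlapping 'spacetime' patches: 2 frames x 8x8 pixels
t, h, w = 2, 8, 8
patches = (video
           .reshape(T // t, t, H // h, h, W // w, w, C)
           .transpose(0, 2, 4, 1, 3, 5, 6)  # group the patch indices together
           .reshape(-1, t * h * w * C))     # one flat vector per patch

# 4 x 4 x 4 = 64 patches, each holding 2 * 8 * 8 * 3 = 384 values
```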
Sora starts messier, then gets tidier (Image credit: OpenAI)Sora is based on a diffusion model, where the AI starts with a 'noisy' response and then works towards a 'clean' output through a series of feedback loops and prediction calculations. You can see this in the frames above, where a video of a dog playing in the snow turns from nonsensical blobs into something that actually looks realistic.
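The denoising idea boils down to a loop of small corrections. In the toy sketch below the 'model prediction' is faked with the true target – a real diffusion model has to learn that prediction from training data – but it shows how pure noise converges to a clean output:

```python
import random

random.seed(0)

# The 'clean' output the model is implicitly trained to recover
clean = [0.2, 0.8, 0.5, 0.9]

# A diffusion sampler starts from pure noise...
x = [random.gauss(0, 1) for _ in clean]

# ...then repeatedly nudges it toward a prediction of the clean signal
for _ in range(50):
    predicted_clean = clean  # stand-in for the trained model's guess
    x = [xi + 0.2 * (ci - xi) for xi, ci in zip(x, predicted_clean)]

# After 50 steps the noisy start has all but converged on the target
```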
And like other generative AI models, Sora uses transformer technology (the last T in ChatGPT stands for Transformer). Transformers use a variety of sophisticated data analysis techniques to process heaps of data – they can understand the most important and least important parts of what's being analyzed, and figure out the surrounding context and relationships between these data chunks.
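The core of that machinery is scaled dot-product attention, which produces exactly those importance weightings. A minimal NumPy version – toy numbers, nothing from Sora itself – looks like this:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted average
    of V, with the weights saying how much each position attends to the
    others."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # pairwise relevance scores
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax per row
    return weights @ V, weights

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))  # three toy 'tokens', four features each
out, weights = attention(tokens, tokens, tokens)
# Each row of `weights` is a probability distribution over the three tokens
```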
What we don't fully know is where OpenAI found its training data from – it hasn't said which video libraries have been used to power Sora, though we do know it has partnerships with content databases such as Shutterstock. In some cases, you can see the similarities between the training data and the output Sora is producing.
What can you do with OpenAI Sora?At the moment, Sora is capable of producing HD videos of up to a minute, without any sound attached, from text prompts. If you want to see some examples of what's possible, we've put together a list of 11 mind-blowing Sora shorts for you to take a look at – including fluffy Pixar-style animated characters and astronauts with knitted helmets.
"Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt," says OpenAI, but that's not all. It can also generate videos from still images, fill in missing frames in existing videos, and seamlessly stitch multiple videos together. It can create static images too, or produce endless loops from clips provided to it.
It can even produce simulations of video games such as Minecraft, again based on vast amounts of training data that teach it what a game like Minecraft should look like. We've already seen a demo where Sora is able to control a player in a Minecraft-style environment, while also accurately rendering the surrounding details.
OpenAI does acknowledge some of the limitations of Sora at the moment. The physics don't always make sense, with people disappearing or transforming or blending into other objects. Sora isn't mapping out a scene with individual actors and props, it's making an incredible number of calculations about where pixels should go from frame to frame.
In Sora videos people might move in ways that defy the laws of physics, or details – such as a bite being taken out of a cookie – might not be remembered from one frame to the next. OpenAI is aware of these issues and is working to fix them, and you can check out some of the examples on the OpenAI Sora website to see what we mean.
Despite those bugs, further down the line OpenAI is hoping that Sora could evolve to become a realistic simulator of physical and digital worlds. In the years to come, the Sora tech could be used to generate imaginary virtual worlds for us to explore, or enable us to fully explore real places that are replicated in AI.
How can you use OpenAI Sora?At the moment, you can't get into Sora without an invite: it seems as though OpenAI is picking out individual creators and testers to help get its video-generated AI model ready for a full public release. How long this preview period is going to last, whether it's months or years, remains to be seen – but OpenAI has previously shown a willingness to move as fast as possible when it comes to its AI projects.
Based on the existing technologies that OpenAI has made public – Dall-E and ChatGPT – it seems likely that Sora will initially be available as a web app. Since its launch ChatGPT has got smarter and added new features, including custom bots, and it's likely that Sora will follow the same path when it launches in full.
Before that happens, OpenAI says it wants to put some safety guardrails in place: you're not going to be able to generate videos showing extreme violence, sexual content, hateful imagery, or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos that indicates they were generated by AI.
You might also like
Watch out, Apple Vision Pros are reportedly cracking all on their own
If you’ve spent $3,500 or more on the Apple Vision Pro you’d be understandably frustrated if you damaged the outer screen and had to pay $799 (or $299 with Apple Care) to get it fixed. But imagine how much more annoyed you’d be if it cracked for seemingly no reason at all.
That’s what some people are taking to social media to complain about, after they discovered cracks extending upwards from the nose bridge of their pricey Apple headset – which they all claim appeared despite them never dropping, bumping, or damaging the headset.
Reddit user dornbirn explained that after putting their headset away for the night they woke up and found a large crack extending from the nose bridge. u/ContributionFar8997, u/inphenite, and u/Wohinbistdu all shared similar complaints on the Vision Pro subreddit, with images of their Vision Pros showing practically identical cracks extending from the nose bridge.
You should always take posts on the internet with a pinch of salt, but the fact that every crack looks the same and has seemingly appeared while the headset wasn’t in use suggests that this is some kind of manufacturing issue rather than user error.
We’ve reached out to Apple to find out what's causing the apparent cracks and if it has any advice for Vision Pro customers who are worried about their screens breaking.
Why are Vision Pro screens cracking?It’s not clear exactly why the outer screen is cracking, but the reports we’ve seen all come from people who discovered the Vision Pro was damaged after leaving the device charging with the front cover on.
Our best guess right now is that as the headset charges it heats up, and because of the cover this heat doesn’t dissipate quickly. As the outer screen warms it expands, with perhaps one of the inner layers expanding faster than the outer layer causing tension.
Given that the nose bridge is the area with the most complex curved design, it makes sense this would be the place where the tension is at its highest. So when the screen can’t take any more, this is where it would most likely crack – explaining why all the images show near-identical damage.
We're not engineers though, so to know for sure we'll need to wait for an official Apple explanation of what's causing the cracks.
Apple Store support staff should be able to help (Image credit: Apple) I have a Vision Pro, what should I do?Because there are so many unknown factors it’s tough to say exactly what measures you should take to avoid the same issue happening to your Vision Pro.
Based on the current evidence we’d suggest that you don’t charge the headset with the cover on and that you don’t leave it charging for longer than is necessary. However, the best thing to do is to keep an eye out for Apple’s official guidance, and if a crack forms in your Vision Pro contact support as soon as you can.
While some users have said the Apple Care support team hasn’t been the most helpful – asking them to pay to get the screen fixed – u/Wohinbistdu posted an update to their original Reddit post saying that they were able to take their Vision Pro to the Apple Store and get a replacement unit. Their original has apparently been sent off for Apple’s engineers to investigate.
This was 12 days ago at the time of writing so hopefully Apple is close to finding what’s causing the problems, and is almost ready with a fix.
You might also like
Nvidia finally catches up to AMD and drops a new app that promises better gaming and creator experiences
Nvidia has announced plans to bring together the features of the Nvidia Control Panel, GeForce Experience, and RTX Experience apps all in a single piece of software. On February 22, Nvidia explained on its website that this new unified app is being made available as a public beta. This means that the app could still be changed in the hopes of improving it, but you can download it now and try it for yourself.
The app is made specifically to improve the experience of gamers and creators currently using machines equipped with Nvidia GPUs by making it easier to find and use functions that formerly lived in separate programs.
Users with suitable Nvidia GPUs can expect a number of significant improvements that come with this new centralized app. Settings to optimize gaming experiences (by tweaking graphical settings based on your hardware) and downloading and installing new drivers can now be found in one easy interface.
It’ll be easier to understand and keep track of driver updates, such as new features and fixes for bugs, with clear descriptions. While in-game, users should see a redesigned overlay that makes it easier to access features and tools like filters, recording tools, monitoring tools, and more. Speaking of filters, Nvidia is introducing new AI Freestyle Filters which can enhance users’ visuals and allow them to customize the aesthetics of their games. As well as all of these upgrades, users can easily view and navigate bundles, redeem rewards, get new game content, view current GeForce NOW offers, and more.
(Image credit: Future)
Nvidia's vision
It certainly seems like Nvidia has worked hard to create a more streamlined app that makes it easier to use your RTX-equipped PC. It’s specifically intended to make it easier to do things like make sure your PC is updated with the latest Nvidia drivers, and quickly discover and install other Nvidia apps including Nvidia Broadcast, GeForce NOW, and more. The Nvidia team also claims in its announcement that this new centralized app will perform better on RTX-GPU-equipped PCs than its separate predecessors, thanks to reduced installation times, a more responsive user interface (UI), and a smaller disk footprint than the older apps (combined, we assume).
This isn’t the end of the new Nvidia app’s development, and it seems some legacy features didn’t make the cut, including 360/Stereo photo modes and streaming directly to YouTube and Twitch, because they see less use. Clearly, Nvidia felt it wasn't worth including these more niche features in the new app, and anyone who wants to continue to use them can still use the older apps (for now, at least). The new app is focused on improving performance, and making it easier to install and integrate new features into users’ systems.
(Image credit: Future)
By combining its apps into one easy-to-use piece of software, Nvidia is finally catching up to AMD in one aspect where Team Red has the advantage: software. AMD's Radeon Adrenalin app already offers a lot of these features, as well as others, like a built-in browser and HDMI link assurance that can automatically detect and monitor HDMI connectivity issues - all in one single interface.
Finally, AMD doesn’t require users to make an account to be able to use its app. We don’t expect that Nvidia will fully catch up to AMD’s app just yet (though it would be nice not to have to sign in), but this is definitely a push in the right direction and hopefully users will see a lot of use out of the new app.
YOU MIGHT ALSO LIKE...
Are you a Reddit user? Google's about to feed all your posts to a hungry AI, and there’s nothing you can do about it
Google and Reddit have announced a huge content licensing deal, reportedly worth a whopping $60 million - but Reddit users are pissed.
Why, you might ask? Well, the deal involves Google using content posted by users on Reddit to train its AI models, chiefly its newly launched Google Gemini AI suite. It makes sense; Reddit contains a wealth of information and users typically talk colloquially, which Google is probably hoping will make for a more intelligent and more conversational AI service. However, this also essentially means that anything you post on Reddit now becomes fuel for the AI engine, something many users are taking umbrage at.
While the very first thing that came to mind was MIT’s insane Reddit-trained ‘psychopath AI’ from years ago, it’s fair to say that AI model training has come a long way since then - so hooking it up to Reddit hopefully won’t turn Gemini into a raving lunatic.
The deal, announced yesterday by Reddit in a blog post, will have other benefits as well: since many people specifically append ‘reddit’ to their search queries when looking for the answer to a question, Google aims to make getting to the relevant content on Reddit easier. Reddit plans to use Google’s Vertex AI to improve its own internal site search functionality, too, so Reddit users will enjoy a boost to the user experience - rather than getting absolutely nothing in return for their training data.
Do Redditors deserve a cut of that $60 million?A lot of Reddit users have been complaining about the deal in various threads on the site, for a wide variety of reasons. Some users have privacy worries, some voiced concerns about the quality of output from an AI trained on Reddit content (which, let’s be honest, can get pretty toxic), and others simply don’t want their posts ‘stolen’ to train an AI.
Unfortunately for any unhappy Redditors, the site’s Terms of Service do mean that Reddit can (within reason) do whatever it wants with your posts and comments. Calling the content ‘stolen’ is inaccurate: if you’re a Reddit user, you’re the product, and Reddit is the one selling.
Personally, I’m glad to see a company actually getting paid for providing AI training data, unlike the legal grey-area dodginess of previous chatbots and AI art tools that were trained on data scraped from the internet for free without user consent. By agreeing to the Reddit TOS, you’re essentially consenting to your data being used for this.
Google Gemini could stand to benefit hugely from the training data produced by this content use deal. (Image credit: Google)
Some users are positively incensed by this though, claiming that if they’re the ones making the content, surely they should be entitled to a slice of the AI pie. I’m going to hand out some tough love here: that’s a ridiculous and naive argument. Do these people believe they deserve a cut of ad revenue too, since they made a hit post that drew thousands of people to Reddit? This isn’t the same as AI creators quietly nabbing work from independent artists on Twitter.
At the end of the day, you’re never going to please everyone. If this deal has actual potential to improve not just Google Gemini, but Google Search in general (as well as Reddit’s site search), then the benefits arguably outweigh the costs - although I do think Reddit has a moral obligation to ensure that all of its users are fully informed about the use of their data.
A few paragraphs in the TOS aren’t enough, guys: you know full well nobody reads those.
You might also like
Microsoft brings one of the Google Pixel’s best features to Windows 11
The Google Pixel series has given us some of the best phones on the market, and one thing that sets it apart from other phones is its suite of built-in generative AI features, like Best Take and Magic Eraser. Now, thanks to an upcoming tool coming to the Windows Photos app, you won’t need to buy a whole new phone just to get your hands on these types of features.
Microsoft has announced in a blog post that the ‘Spot fix’ tool in the desktop Photos app will be getting an AI boost, and will now be known as ‘Generative erase’.
Generative erase will allow you to remove imperfections from your photos in a more natural-looking way, like removing random people in the background and replacing them with an AI-generated backdrop - basically, the exact same way that Magic Eraser works on a Pixel phone. Microsoft notes in the blog post that “Generative erase creates a more seamless and realistic result after objects are erased from the photo, even when erasing large areas”.
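For intuition, this kind of erase feature boils down to two steps: mark the pixels you want gone (a mask), then fill them with plausible content. Here's a deliberately crude Python sketch of that mask-then-fill structure, using a simple average fill in place of the generative model Microsoft actually uses (the function name and representation are our own illustration, not the Photos app's implementation):

```python
def naive_erase(img, mask):
    """Fill masked pixels with the average of the unmasked ones.

    `img` is a 2D list of grayscale values; `mask` is a 2D list of
    booleans marking the pixels to erase. Real generative erase
    synthesises new, context-aware content rather than averaging --
    this only illustrates the mask-then-fill idea.
    """
    h, w = len(img), len(img[0])
    # Collect the pixels we are keeping...
    keep = [img[r][c] for r in range(h) for c in range(w) if not mask[r][c]]
    # ...and use their average as a crude stand-in for generated content.
    fill = sum(keep) / len(keep)
    return [[fill if mask[r][c] else img[r][c] for c in range(w)]
            for r in range(h)]
```

The point of the sketch is the structure, not the fill: tools like Generative erase replace the averaging step with a model that hallucinates a convincing backdrop, which is why large erased regions still look seamless.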
The before-and-after is quite impressive - the AI alterations are barely noticeable at first glance. (Image credit: Windows)
Keep it coming!
The example ‘before and after’ image in the blog post shows a very cute dog on the beach, wearing a collar, with some people in the background. After using Generative erase, the new photo looks entirely organic, with the dog now collar-free and no people in the background. Even when you zoom in to where the collar and people originally were, you can’t see any visible evidence that the image was altered at all.
It’s an incredibly impressive editing job - considering that it takes very little time and zero effort - and I’m very excited to see it in action when it does make its way over to Windows. It won’t just be Windows 11 users who get to enjoy the new feature, either; Microsoft will be adding the full suite of Photos AI features to Windows 10 too, proving that the older OS isn’t dead just yet.
Currently, the tool is reserved for Windows Insiders, the community of Windows enthusiasts and developers who get early access to potential new features. However, the fact that Microsoft is publicly discussing the feature is a good sign that we will see it sooner rather than later. Alongside Generative erase, the blog notes very briefly that we could also see background blurring and removal features join the Photos app in the same upcoming update.
The company recently announced that Microsoft Paint was getting another string of new AI features as well, so we may be seeing the beginning of a Windows-wide revamp when it comes to creative AI tools. It seems like Microsoft is putting a lot of time and effort into implementing useful generative features into its apps, which is good news for Windows users who want to experiment with artificial intelligence - without having to make a million accounts on different platforms to do so.
Via The Verge.
You might also like...
Microsoft is giving Windows Copilot an upgrade with Power Automate, promising to banish boring tasks thanks to AI
Microsoft has revealed a new plug-in for Copilot, its artificial intelligence (AI) assistant, named Power Automate that will enable users to (as the name suggests) automate repetitive and tedious tasks, such as creating and manipulating entries in Excel, handling PDFs, and file management.
This development is part of a bigger Copilot update package that will see several new capabilities being added to the digital AI assistant.
Microsoft gives the following examples of tasks this new Copilot plug-in could automate:
- Write an email to my team wishing everyone a happy weekend.
- List the top 5 highest mountains in the world in an Excel file.
- Rename all PDF files in a folder to add the word final at the end.
- Move all word documents to another folder.
- I need to split a PDF by the first page. Can you help?
As of now, it seems like this plug-in is only available to some users with access to Windows 11 Preview Build 26058, available to Windows Insiders in the Canary and Dev Channels of the Windows Insider Program. The Windows Insider Program is a Microsoft-run community for Windows enthusiasts and professionals where users can get early access to upcoming versions of Windows, features, and more, and provide feedback to Microsoft developers to improve these before a wider rollout.
Hopefully, the Power Automate plug-in for Copilot will prove a hit with testers - and if it is, we should hopefully see it rolled out to all Windows 11 users soon.
As per the blog post announcing the Copilot update, this is the first release of the plug-in, which is part of Microsoft’s Power Platform, a comprehensive suite of tools designed to help users make their workflows more efficient and versatile - including Power Automate. To be able to use this plug-in, you’ll need to download Power Automate for Desktop from the Microsoft Store (or make sure you have the latest version of Power Automate).
There are multiple options for using Power Automate: the free plan, suitable for personal use or smaller projects, and there are premium plans that offer packages with more advanced features. From what we can tell, the ability to enable the Power Automate plug-in for Copilot will be available for all users, free and premium, but Microsoft might change this.
Once you’ve made sure you have the latest version of Power Automate downloaded, you’ll also need to be signed into Copilot for Windows with a Microsoft account. Then you’ll need to add the plug-in to Copilot: go to the plug-ins section in the Copilot app for Windows and turn on the Power Automate plug-in, which should now be visible. Once enabled, you should be able to ask it to perform a task like one of the above examples and see how Copilot copes for yourself.
Once you try the plug-in for yourself, if you have any thoughts about it, you can share them with Microsoft directly at powerautomate-ai@microsoft.com.
(Image credit: Microsoft)
Hopefully, a sign of more to come
The language Microsoft is using about the plug-in implies that it will see improvements in the future, enabling it (and, therefore, Copilot) to carry out more tasks. Upgrades like this are steps in the right direction if they’re as effective as they sound.
This could address one of the biggest complaints people have about Copilot since it was launched. Microsoft presented it as a Swiss Army Knife-like digital assistant with all kinds of AI capabilities, and, at least for now, it’s not anywhere near that. While we admire Microsoft’s AI ambitions, the company did make big promises, and many users are growing impatient.
I guess we’ll have to just continue to watch whether Copilot will live up to Microsoft’s messaging, or if it’ll go the way of Microsoft’s other digital assistants like Cortana and Clippy.
YOU MIGHT ALSO LIKE...
Microsoft Paint update could make it even more Photoshop-like with handy new tools
Microsoft Paint received a plethora of new features late last year, introducing layers, a dark mode, and AI-powered image generation. These new updates brought Microsoft Paint up to speed with the rest of Windows 11's modern layout look and feel after years of virtually no meaningful upgrades, and it looks like Microsoft still has plans to add even more features to the humble art tool.
X user @PhantomOfEarth made a post highlighting potential changes spotted in the Canary Development channel, and we could see these new features implemented in Microsoft Paint very soon. The Canary Dev channel is part of the Microsoft Insider Program, which allows Windows enthusiasts and developers to sign up and get an early look at upcoming releases and new features that may be on the way.
New Paint app update for WIP Canary/Dev (11.2402.20.0) with some new features:
- A new brush size slider on the left
- Layers panel update with a new tile letting you customize the background (change color/hide it)
pic.twitter.com/KUJOydQH5I (February 21, 2024)
We do have to take the features we see in such developer channels with a pinch of salt, as it’s common for a cool upgrade or piece of software to appear in the channel but never actually make it out of the development stage. That being said, @PhantomOfEarth originally spotted the big changes that came to Windows 11 Paint last year in the same Dev channel, so there’s a good chance that the brush size slider and layers panel update now present in the Canary build will actually come to fruition in a public update soon.
Show my girl Paint some love
It’s great to see Microsoft continue to show some love for the iconic Paint app, as it had been somewhat forgotten about for quite some time. It seems like the company has finally taken note of the app's charm, as many of us can certainly admit to holding a soft spot for Paint and would hate to see it abandoned. I have many memories of using Paint: as a child in IT class learning to use a computer for the first time, or firing it up to do some casual scribbles while waiting for my family’s slow Wi-Fi to connect.
These proposed features won’t make Paint the next Photoshop (at least for now), but they do bring the app closer to being a simple, free art tool that most everyday people will have access to. Cast your mind back to the middle of last year, when Photoshop introduced image generation capabilities - if you wanted to use them, you’d have to have paid for Adobe Firefly access or a Photoshop license. Now, if you’re looking to do something quick and simple with AI image-gen, you can do it in Paint.
Better brush size control and layers may not seem like the most important or exciting new features, especially compared to last year's overhaul of Windows Paint, but it is proof that the team at Microsoft is still thinking about Paint. In fact, the addition of a proper layers panel will do a lot to justify the program’s worth to digital artists. It could also be the beginning of a new direction for Paint if more people flock back to the revamped app. I hope that Microsoft continues to improve it - just so long as it remains a free feature of Windows.