Feed aggregator

Midjourney just changed the generative image game and showed me how comics, film, and TV might never be the same

Techradar - Tue, 03/12/2024 - 16:12

Midjourney, the generative AI platform you can currently use on Discord, just introduced the concept of reusable characters, and I am blown away.

It's a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.

Until recently, Midjourney, which is built on a diffusion model (noise is added to an original image and the model learns to de-noise it, in the process learning about the image), could create some beautiful and astonishingly realistic images based on prompts you put in the Discord channel ("/imagine: [prompt]"). But unless you asked it to alter one of its generated images, every image set and character would look different.
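The add-noise/remove-noise recipe is easy to sketch in code. Below is a toy, self-contained illustration using a standard DDPM-style forward process – an assumption for illustration only, not Midjourney's actual (unpublished) model:

```python
import numpy as np

# Toy sketch of the diffusion idea: blend a clean image with Gaussian
# noise, then recover it. A trained network's job is to predict the noise.
rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # per-step noise schedule
alpha_bar = np.cumprod(1.0 - betas)      # cumulative fraction of signal kept

def add_noise(x0, t, eps):
    """Forward process: mix the clean image x0 with noise eps at step t."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def recover_x0(xt, t, eps_pred):
    """Invert the mix, given a noise estimate (here: the exact noise)."""
    return (xt - np.sqrt(1.0 - alpha_bar[t]) * eps_pred) / np.sqrt(alpha_bar[t])

x0 = rng.standard_normal((8, 8))         # stand-in for a training image
eps = rng.standard_normal((8, 8))
xt = add_noise(x0, t=500, eps=eps)       # heavily noised version of x0

# With a perfect noise prediction, the original comes back exactly;
# training teaches the model to approximate eps from xt alone.
print(np.allclose(recover_x0(xt, t=500, eps_pred=eps), x0))  # → True
```

Generation then runs this removal step repeatedly, starting from pure noise and using the model's noise predictions in place of the true noise.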

Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.

[Image gallery, 3 images: Midjourney AI character creation. Captions: "I guess I don't know how to describe myself" and "Things are getting weird". (Image credit: Future)]

In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney's four generated image options, and upscaled it for more definition. Then, using the new "--cref" (character reference) parameter and the URL of the generated image with the character I liked, I had Midjourney generate new images featuring that same AI character.
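For reference, the pattern looks something like this (the URL is a placeholder; the optional "--cw" character-weight value, 0 to 100, which Midjourney documents for controlling how closely the reference is followed, is my addition):

```
/imagine prompt: the same character flying a kite in a park --cref https://example.com/my-character.png --cw 80
```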

Later, I described a character with Charles Schulz's Peanuts character qualities and, once I had one I liked, reused him in a different prompt scenario where he had his kite stuck in a tree (Midjourney couldn't or wouldn't put the kite in the tree branches).

[Image gallery, 2 images: Midjourney AI character creation. Caption: "An homage to Charles Schulz". (Image credit: Future)]

It's far from perfect. Midjourney still tends to over-adjust the art, but I contend the characters in the new images are the same ones I created in my initial images. The more descriptive you make your initial character-creation prompts, the better the results you'll get in subsequent images.

Perhaps the most startling thing about Midjourney's update is the utter simplicity of the creative process. Writing natural language prompts has always been easy, but training the system to make your character do something might typically take some programming or even AI model expertise. Here it's just a simple prompt, one parameter, and an image reference.

[Image gallery, 2 images: Midjourney AI character creation. Caption: "Got a lot closer with my photo as a reference". (Image credit: Future)]

While it's easier to take one of Midjourney's own creations and use that as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same "--cref" parameter. I found an online photo of myself and entered this prompt: "/imagine: making a pizza --cref [link to a photo of me]".

Midjourney quickly spit out an interpretation of me making a pizza. At best, it's the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL from my favorite me.

[Image: Midjourney AI character creation. Caption: "Oh, hey, Not Tim Cook". (Image credit: Future)]

Unfortunately, when I entered this prompt: "interviewing Tim Cook at Apple headquarters", I got a grizzled-looking Apple CEO eating pizza and another image where he's holding an iPad that looks like it has pizza for a screen.

When I removed "Tim Cook" from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a "CEO" who also looked like me.

[Image: Midjourney AI character creation. Caption: "Midjourney me enjoying pizza with my doppelgänger CEO". (Image credit: Future)]

Midjourney's AI will improve and soon it will be easy to create countless images featuring your favorite character. It could be for comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.

Such a tool could speed storyboarding but also make character animators very nervous.

If it's any consolation, I'm not sure Midjourney understands the difference between me and a pizza and pizza and an iPad – at least not yet.

You might also like

Forget the Apple Car – Porsche has been using the Apple Vision Pro with its record-breaking new Taycan

Techradar - Tue, 03/12/2024 - 12:30

Porsche has just unveiled its most dynamic Taycan so far - the Porsche Taycan Turbo GT. This takes the Taycan Turbo S, gives it more power, reduces the weight and primes it for the track. There are two versions of the new car, the Taycan Turbo GT and the Taycan Turbo GT with Weissach package, which loses the backseats and gains a rear wing to make it a record-breaking track car. 

The unveiling of any new Porsche model wouldn't be complete without some mention of its performance credentials and a portion of the launch presentation included coverage of a record-breaking lap from the Laguna Seca raceway in California. 

It seems that Porsche CEO Oliver Blume couldn't make it to The Golden State himself, so instead, he watched it using Apple Vision Pro. Cut to Tim Cook congratulating Porsche on their record-breaking new car, one of many examples of Porsche and Apple's strong ongoing partnership.

Blume wasn't just watching a video feed on Apple Vision Pro, however. He was in a full-on spatial computing mode, virtual track map, multiple windows of telematics, video feed from the car on the track – even the driver's heart rate was displayed. A celebration of cutting-edge tech at a corporate level? You bet. 

[Image: The Apple Vision Pro being used with a Porsche virtual cockpit. (Image credit: Porsche / Chris Hall)]

"What an amazing experience it was to join the team virtually along with Apple Vision Pro. Thanks to our custom race engineer cockpit app, it felt like I was right there in Laguna Seca with Lars [Kern, Porsche development driver]," said Blume.

"It has been great to bring the best of German engineering and Apple's inspiring product innovations together."

Cue Tim Cook's surprise cameo. "Congratulations to you and the Porsche team on the new record you set with this incredible new vehicle. It's these kinds of extraordinary milestones that show the world what can happen when a team of incredibly dedicated people come together to break new ground on a big idea," said Cook.

"Porsche has always been known for excellence," continued Cook, "and we're proud to see a number of our products play a role in what you do. And it's so great to see Apple Vision Pro helping reimagine track experiences."

The mutual backslapping continued for a little longer, before Blume dropped the next nugget: "We appreciate the great partnership we have established over the years, starting with the My Porsche app on Apple CarPlay and now we're taking it one step further with Porsche's Apple Vision Pro race app to bring the best user experience to our employees and customers."

The appearance of Apple Vision Pro went virtually unremarked, however. There was no mention of any Apple Vision Pro app in the press materials, and when asked at the launch site in Leipzig, there was no more information forthcoming. Porsche, it seems, isn't saying any more about it.

Chalk it down as the ultimate tease perhaps: there doesn't seem to be a name for the app that was used – Oliver Blume himself referred to it in two different ways – but it does demonstrate that Porsche and Apple are continuing to work on technologies together beyond Apple CarPlay and the customisation of the Porsche digital displays.

You might also like

Meta's Ray-Ban smart glasses are becoming AI-powered tour guides

Techradar - Tue, 03/12/2024 - 10:44

While Meta’s most recognizable hardware is its Quest VR headsets, its smart glasses created in collaboration with Ray-Ban are proving to be popular thanks to their sleek design and unique AI tools – tools that are getting an upgrade to turn them into a wearable tourist guide.

In a post on Threads – Meta’s Twitter-like Instagram spinoff – Meta CTO Andrew Bosworth showed off a new Look and Ask feature that can recognize landmarks and tell you facts about them. Bosworth demonstrated it using examples from San Francisco such as the Golden Gate Bridge, the Painted Ladies, and Coit Tower.

As with other Look and Ask prompts, you give a command like “Look and tell me a cool fact about this bridge.” The Ray-Ban Meta Smart Glasses then use their in-built camera to scan the scene in front of you, and cross-reference the image with info in the Meta AI’s knowledge database (which includes access to the Bing search engine). 

The specs then respond with the cool fact you requested – in this case explaining the Golden Gate Bridge (which it recognized in the photo it took) is painted “International Orange” so that it would be more visible in foggy conditions.

[Image: Screenshots from Threads showing the Meta Ray-Ban Smart Glasses giving the user information about San Francisco landmarks. (Image credit: Andrew Bosworth / Threads)]

Bosworth added in a follow-up message that other improvements are being rolled out, including new voice commands so you can share your latest Meta AI interaction on WhatsApp and Messenger. 

Down the line, Bosworth says you’ll also be able to change the speed of Meta AI readouts in the voice settings menu to have them go faster or slower.

Still not for everyone 

One huge caveat is that – much like the glasses’ other Look and Ask AI features – this new landmark recognition feature is still only in beta. As such, it might not always be the most accurate – so take its tourist guidance with a pinch of salt.

[Image: Orange Ray-Ban Meta Smart Glasses. (Image credit: Meta)]

The good news is Meta has at least opened up its waitlist to join the beta so more of us can try these experimental features. Go to the official page, input your glasses serial number, and wait to get contacted – though this option is only available if you’re based in the US.

In his post Bosworth did say that the team is working to "make this available to more people," but neither he nor Meta has given a precise timeline for when the impressive AI features will be more widely available.

You might also like

New Rabbit R1 demo promises a world without apps – and a lot more talking to your tech

Techradar - Tue, 03/12/2024 - 10:00

We’ve already talked about the Rabbit R1 before here on TechRadar: an ambitious little pocket-friendly device that contains an AI-powered personal assistant, capable of doing everything from curating a music playlist to booking you a last-minute flight to Rome. Now, the pint-sized companion tool has been shown demonstrating its note-taking capabilities.

The latest demo comes from Rabbit Inc. founder and CEO Jesse Lyu on X, and shows how the R1 can be used for note-taking and transcription via some simple voice controls. The video (see the tweet below) shows that note-taking can be started with a short voice command, and ended with a single button press.

[Embedded post from X: "another week, another homemade r1 demo. note taking with r1 with playback/download/AI summary. still need bit of touch but it's both intuitive and functional. more to come." pic.twitter.com/3r5hCsYMe1 – March 11, 2024]

It’s a relatively early tech demo – Lyu notes that it “still need bit of touch” [sic] – but it’s a solid demonstration of Rabbit Inc.’s objectives when it comes to user simplicity. The R1 has very little in terms of a physical interface, and doubles down by having as basic a software interface as possible: there’s no Android-style app grid in sight here, just an AI capable of connecting to web apps to carry out tasks.

Once you’ve recorded your notes, you can either view a full transcription, see an AI-generated summary, or replay the audio recording (the latter of which requires you to access a web portal). The Rabbit R1 is primarily driven by cloud computing, meaning that you’ll need a constant internet connection to get the full experience.

Opinion: A nifty gadget that might not hold up to criticism

As someone who personally spent a lot of time interviewing people and frantically scribbling down notes in my early journo days, I can definitely see the value of a tool like the Rabbit R1. I’m also a sucker for purpose-built hardware, so despite my frequent reservations about AI, I truly like the concept of the R1 as a ‘one-stop shop’ for your AI chatbot needs.

My main issue is that this latest tech demo doesn’t actually do anything I can’t do with my phone. I’ve got a Google Pixel 8, and nowadays I use the Otter.ai app for interview transcriptions and voice notes. It’s not a perfect tool, but it does the job as well as the R1 can right now.

[Image: The Rabbit R1's simplicity is part of its appeal – though it does still have a touchscreen. (Image credit: Rabbit)]

As much as I love the Rabbit R1’s charming analog design, it’s still going to cost $199 (£159 / around AU$300) – and I just don’t see the point in spending that money when the phone I’ve already paid for can do all the same tasks. An AI-powered pocket companion sounds like an excellent idea on paper, but when you take a look at the current widespread proliferation of AI tools like Windows Copilot and Google Gemini in our existing tech products, it feels a tad redundant.

The big players such as Google and Microsoft aren’t about to stop cramming AI features into our everyday hardware anytime soon, so dedicated AI gadgets like Rabbit Inc.’s dinky pocket helper will need to work hard to prove themselves. The voice control interface that does away with apps completely is a good starting point, but again, that’s something my Pixel 8 could feasibly do in the future. And yet, as our Editor-in-Chief Lance Ulanoff puts it, I might still end up loving the R1…

You might also like

Fed up with OneDrive in Windows 11? Microsoft clarifies that you can easily remove the cloud storage app

Techradar - Mon, 03/11/2024 - 09:09

Windows 11 users can uninstall OneDrive – in case you weren’t aware of that – and Microsoft has made this clearer with a change to a support document.

Neowin picked up on this alteration Microsoft made to its guide on how to ‘Turn off, disable, or uninstall OneDrive’ which is part of its library of troubleshooting support documentation.

As Neowin informs us, this document previously did not mention Windows 11 – it only referred to Windows 10. That might have given some users the impression that it was only possible to remove OneDrive on Windows 10, and not Windows 11.

This isn’t the case, of course, and you can unhook OneDrive from Windows 11, removing it completely, just as you can with Windows 10. By mentioning both operating systems, Microsoft is now making it clear that this is the case.

Microsoft has also fleshed out this support document with further instructions on how to stop syncing OneDrive and other details relating to its cloud storage service.

Oddly, though, another support document on how to ‘Turn off OneDrive in Windows’ still only mentions Windows 10, and not Windows 11. However, it might be the case that Microsoft is in the process of updating this sprawling library of content, and just hasn’t reached that page yet.

Analysis: Removing OneDrive is a cinch

This is useful confirmation to get from Microsoft, as it was easy enough to make negative assumptions about hidden agendas here – when in truth the likelihood is the software giant just hadn't got round to updating the support info (and still hasn't with some articles, as we just noted). Although, as Neowin also points out, it's possible that this updating process was prompted by Microsoft now having to comply with new European regulations (the Digital Markets Act).

If you haven’t popped over to view the links to the support info, you may be wondering what the process for uninstalling OneDrive from either Windows 10 or Windows 11 is. Fortunately, it’s simple: just go to ‘Add or remove programs’ (type that in the search box on the taskbar, then click on it), scroll down the list of apps to find Microsoft OneDrive (it’s under ‘M’ and not ‘O’ just to clarify), and then select Uninstall.

This doesn’t mean that you’re completely nuking your OneDrive account, in case there’s any doubt. All your files will still be in OneDrive when you visit the site on the web (or from another, say mobile, app), just as normal – all you are doing is removing the app from Windows 11, and this way of accessing your files on your Windows PC (and any syncing or related features therein).

Some of the confusion about not being able to uninstall OneDrive in Windows 11 at all may have sprung from the fact that it wasn’t possible to remove the cloud storage app from Windows 8.1.

You might also like...

Get ready to learn about what Windows 11 of the future looks like at Microsoft’s March 21 event

Techradar - Mon, 03/11/2024 - 07:20

We’ve begun getting hints of what Microsoft is gearing up to announce for Windows 11 at its March event, and now we’ve got new pieces of the puzzle. We’re expecting information about a new feature for the Paint app, Paint NPU, and about a feature that’s being referred to as ‘AI Explorer’ internally at Microsoft. 

Microsoft has put up an official page announcing a special digital event named “New Era of Work” which will take place on March 21, starting at 9 PM PDT. On this page, users are met with the tagline “Advancing the new era of work with Copilot” and a description of the event that encourages users to “Tune in here for the latest in scaling AI in your environment with Copilot, Windows, and Surface.”

It sounds like we’re going to get an idea of what the next iteration of Windows Copilot, Microsoft’s new flagship digital AI assistant, will look like and what it’ll be able to do. It also looks like we might see Microsoft’s vision for what AI integration and features will look like for future versions of Windows and Surface products. 

[Image: A screenshot of the page announcing Microsoft's digital event. (Image credit: Microsoft)]

What we already know and expect

While we’ll have to wait until the event to see exactly what Microsoft wants to tell us about, we do have some speculation from Windows Latest that one feature we’ll learn about is a Paint app tool powered by new-gen machines’ NPUs (Neural Processing Units). These are processing components that enable new kinds of processes, particularly many AI processes.

This follows earlier reports that indicated that the Paint app was getting an NPU-driven feature, possibly new image editing and rendering tools that make use of PCs' NPUs. Another possible feature that Windows Latest spotted was "LiveCanvas," which may enable users to draw real-time sketches aided by AI.

Earlier this week, we also reported about a new ‘AI Explorer’ feature, apparently currently in testing at Microsoft. This new revamped version which has been described as an “advanced Copilot” looks like it could be similar to the Windows Timeline feature, but improved by AI. The present version of Windows Copilot requires an internet connection, but rumors suggest that this could change. 

This is what we currently understand about how the feature will work: it will record previous actions users perform, transform them into 'searchable moments,' and allow users to search and retrieve these. Windows Latest also reinforces the news that most existing PCs running Windows 11 won't be able to use AI Explorer, as it's designed to use the newest available NPUs, which are intended to handle and assist with higher-level computation tasks. The NPU would enable AI Explorer to work natively on Windows 11 devices, and users will be able to interact with it using natural language.

Using natural language means that users can ask AI Explorer to carry out tasks simply and easily, letting them access past conversations, files, and folders with simple commands, and they will be able to do this with most Windows features and apps. AI Explorer will have the capability to search user history and find information relevant to whatever subject or topic is in the user’s request. We don’t know if it’ll pull this information exclusively from user data or other sources like the internet as well, and we hope this will be clarified on March 21. 

[Image: Person working on a laptop in a kitchen. (Image credit: Getty Images)]

What else we might see and what this might mean

In addition to an NPU-powered Paint app feature and AI Explorer, it looks like we can expect the debut of other AI-powered features, including Automatic Super Resolution. This has popped up in Windows 11 24H2 preview builds, and it's said to leverage PCs' AI abilities to improve users' visual experience. This will reportedly be done by utilizing DirectML, an API that also makes use of PCs' NPUs, and will bring improvements to frame rates in games and apps.

March 21 is gearing up to bring what will at least probably be an exciting presentation, although it’s worth remembering that all of these new features will require an NPU. Only the most newly manufactured Windows devices will come equipped with these, which will leave the overwhelming majority of Windows devices and users in the dust. My guess is Microsoft is really banking on how great the new AI-driven features are to convince users to upgrade to these new models, and with the current state of apps and services like Windows Copilot, that’s still yet to be proven in practice.

You might also like...

Windows 11’s next big AI feature could turn your video chats into a cartoon

Techradar - Mon, 03/11/2024 - 06:50

Windows 11 users could get some smart abilities that allow for adding AI-powered effects to their video chats, including the possibility of transporting themselves into a cartoon world.

Windows Latest spotted the effects being flagged up on X (formerly Twitter) by regular leaker XenoPanther, who discovered clues to their existence by digging around in a Windows 11 preview build.

[Embedded post from X: "Potential new camera effects: Video looks like a watercolor painting; Video looks like an animated cartoon; Video looks like an illustrated drawing" (2/2) – March 9, 2024]

These are Windows Studio effects, which is a set of features implemented by Microsoft in Windows 11 that use AI – requiring an NPU in the PC – to achieve various tricks. Currently, one of those is making it look like you’re making eye contact with the person on the other end of the video call. (In other words, making it seem like you’re looking at the camera, when you’re actually looking at the screen).

The new capabilities appear to be the choice to make the video feed look like an animated cartoon, a watercolor painting, or an illustrated drawing (like a pencil or felt tip artwork – we’re assuming something like the video for that eighties classic ‘Take on Me’ by A-ha).

If you’re wondering what Windows Studio is capable of as it stands, as well as the aforementioned eye contact feature – which is very useful in terms of facilitating a more natural interaction in video chats or meetings – it can also apply background effects. That includes blurring the background in case there’s something you don’t want other chat participants to see (like the fact you haven’t tidied up your study in about three years).

The other feature is automatic framing which keeps you centered, with the image zoomed and cropped appropriately, as (or if) you move around.

Analysis: That’s all, folks!

Another Microsoft leaker, Zac Bowden, replied to the above tweet to confirm these are the ‘enhanced’ Windows Studio effects that he’s talked about recently, and that they look ‘super cool’ apparently. They certainly sound nifty, albeit on the more off-the-wall side of the equation than existing Windows Studio functionality – they’re fun aspects rather than serious presentation-related AI powers.

It seems likely, then, that this is something we'll see in testing soon, particularly as two leakers have chimed in here. We might even see these effects arrive in Windows 11 24H2 later this year.

Of course, there’s no guarantee of that, but it also makes sense given that Microsoft is fleshing out pretty much everything under the sun with extra AI capabilities, wherever they can be crammed in – with a particular focus on creativity at the moment (and the likes of the Paint app).

The future is very much the AI PC, complete with NPU acceleration, as far as Microsoft is concerned.

You might also like...

Apple Vision Pro update makes Personas less creepy and can take the creation process out of your hands

Techradar - Fri, 03/08/2024 - 15:02

I finally look slightly less creepy in my Apple Vision Pro mixed reality headset. Oh, no, I don't mean I look less like an oddball when I wear it but if you happen to call me on FaceTime, you'll probably find my custom Persona – digital Lance – a little less weird.

While Apple Vision Pro hasn't been on the market very long and the $3,499 headset is not owned in iPhone numbers (think tens of thousands, not millions) this first big visionOS update is important.

I found it under Settings when I donned the headset for the first time in a week (yes, it's true, I don't find myself using the Vision Pro as often as I would my pocketable iPhone) and quickly accepted the update. It took around 15 minutes for the download and installation to complete.

VisionOS 1.1 adds, among other things, enterprise-level Mobile Device Management (MDM) controls, closed captions and virtual keyboard improvements, enhanced Home View control, and the aforementioned Persona improvements.

I didn't test all of these features, but I couldn't wait to try out the updated Personas. Despite the update, Personas remains a "beta" feature. visionOS 1.1 improves the quality of Personas and adds a hands-free creation option.

Before we start, here's a look at my old Vision Pro Persona. Don't look away.

[Image gallery, 3 images: My original Persona. (Image credit: Future)]

Personas are Vision Pro's digital versions of you that you can use in video conference calls on FaceTime and other supported platforms. The 3D image is not a video feed of your face. Instead, Vision Pro creates this digital simulacrum based on a Spatial Photography capture of your face. Even the glasses I have on my Persona are not real.

During my initial Vision Pro review, I followed Apple's in-headset instructions and held the Vision Pro in front of my face with the shiny glass front facing me. Vision Pro's voice guidance told me to slowly look left, right, up, and down, and to make a few facial expressions. All this lets the stereo cameras capture a 3D image map of my face.

Because there are also cameras inside the headset to track my eyes (and eyebrows), and a pair of cameras on the outside of the headset that point down at my face and hands, the Vision Pro can manipulate my digital persona like a puppet based on how I move my face (and hands).

There's some agreement that Apple Vision Pro Personas look a lot like us but also ride the line between reality and the awful, uncanny valley. This update is ostensibly designed to help with that.

[Image: Scanning my face for my new Persona using the hands-free mode. (Image credit: Future)]

Apple, though, added a new wrinkle to the process. Now I could capture my Persona "hands-free," which sounds great but means putting Vision Pro on a table or shelf and then positioning yourself in front of the headset. Good luck finding a platform that's at the exact right height. I used a shelf in our home office but had to crouch down to get my face to where Vision Pro could properly read it. On the other hand, I didn't have to hold the 600g headset up in front of my face. Hand capture still happens while you're wearing the headset.

[Image gallery, 3 images: My new visionOS 1.1 hands-free Persona. (Image credit: Future)]

It took a minute or so for Vision Pro to build my new Persona (see above). The result looks a lot like me and is, in my estimation, less creepy. It still matches my expressions and hand movements almost perfectly. Where my original Persona looked like it lacked a soul, this one has more warmth. I also noticed that the capture appears more expansive. My ears and bald head look a little more complete and I can see more of my clothing. I feel like a full-body scan and total Persona won't be far behind.

This by itself makes the visionOS 1.1 update worthwhile.

[Image: visionOS 1.1 lets you remove system apps from the Home View. (Image credit: Future)]

Other useful feature updates include the ability to remove system apps from the Home View. To do so, I looked at an app, in this case, Files, and pinched my thumb and forefinger together until the "Remove App" message appeared.

Apple also says it updated the virtual keyboard. In my initial review, I found this keyboard one of the weakest Vision Pro features. It's really hard to type accurately on this floating screen and you can only use two fingers at a time. My accuracy was terrible. In the update, accuracy and the AI that guesses what you intended to type appear somewhat improved.

Overall, it's nice to see Apple moving quickly to roll out features and updates to its powerful spatial computing platform. I'm not sure hands-free spatial scanning is truly useful, but I can report that my digital persona will no longer send you screaming from the room.

You might also like

Google could allow Android users to download up to five apps at once

Techradar - Fri, 03/08/2024 - 14:19

Google is reportedly giving Parallel Downloading another shot after the feature reemerged in a recent Play Store update.

If you’re not familiar with it, parallel downloading would give Android users the ability to install multiple apps at the same time. The tech first appeared about four years ago when a Reddit user noticed they were able to download Chrome, Google Photos, and YouTube onto their mobile device simultaneously. Since then, it seemingly faded into obscurity until it was discovered by industry expert Assemble Debug after diving into the files of Google Play version 40.0.13. 

[Image: Parallel Downloading on the Google Play Store. (Image credit: Assemble Debug/TheSpAndroid)]

Current limitations

He was surprised to see that it was fully functioning. Screenshots on TheSpAndroid blog reveal Assemble Debug could download Adobe Lightroom and Adobe Acrobat without issue. At a glance, the process works similarly to single-app installations. The time it’ll take to get a piece of software on your phone depends on its file size.

As he investigated further, Assemble Debug found the feature was held back by a few limitations. First, parallel downloading does not work for updates. If you want to download patches for multiple apps, you’ll have to do it individually. Nothing is changing on that front. 

Second, Google is restricting the amount of simultaneous installations to just two apps. Assemble Debug points out that the restriction is controlled by an internal flag. He deactivated the flag and was able to increase the download limit to “five apps at once.” 

It's possible Google may alter the maximum number of installs at any time, but it's keeping things small for now. There could be an increase in a future testing period.
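To illustrate the kind of mechanism reportedly at play, here's a minimal Python sketch of a flag-controlled cap on simultaneous downloads. All names here are hypothetical — the Play Store's actual implementation is not public — but the pattern (a concurrency limit gated behind a single internal value) matches what Assemble Debug describes.

```python
import threading
import time

# Hypothetical stand-in for the internal flag: the reported default cap is
# two simultaneous installs; flipping the flag raised it to five.
PARALLEL_DOWNLOAD_LIMIT = 2

def download_apps(apps, limit=PARALLEL_DOWNLOAD_LIMIT):
    """Download `apps` concurrently, never running more than `limit` at once.

    Returns the peak number of concurrent downloads observed, so the cap
    can be verified.
    """
    gate = threading.Semaphore(limit)
    lock = threading.Lock()
    active = 0
    peak = 0

    def worker(app):
        nonlocal active, peak
        with gate:  # blocks while `limit` downloads are already in flight
            with lock:
                active += 1
                peak = max(peak, active)
            time.sleep(0.05)  # stand-in for actually fetching the app
            with lock:
                active -= 1

    threads = [threading.Thread(target=worker, args=(app,)) for app in apps]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return peak
```

With the default limit of two, queueing five apps still only ever runs two downloads at a time; raising the limit to five lets all of them proceed together.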

Joining the early test

For those interested, it is possible to activate parallel downloading on your device by grabbing the latest Play Store patch; however, the process is tricky. TheSpAndroid states you’ll need a rooted Android smartphone. Rooting isn’t super difficult to do, but it does take a while to accomplish and you run the risk of totally bricking the hardware. If you want to learn how to do this, we have a guide with step-by-step instructions on how to root your Android phone.

Once that’s all done, you’ll have to enable a certain flag via the GMS Flags app which you can find over on GitHub. Details on how to do this can be found in TheSpAndroid’s report.

It’s unknown when this feature will officially launch. Considering the company is experimenting with Parallel Downloads again after so long, it could be hinting at an imminent release. Hopefully, this is the case. Being able to install apps in bulk is a nice quality-of-life upgrade. It can help new phone owners save a lot of time when setting up their devices.

Speaking of which, check out TechRadar's list of the best Android phones for 2024 if you're looking to upgrade.

You might also like

Windows 11’s bizarre QR code ad for Copilot met an angry reaction – so Microsoft has halted the experiment

Techradar - Fri, 03/08/2024 - 04:45

Some Windows 11 (and Windows 10) users recently experienced a QR code-powered advert on the lock screen of their PC, but Microsoft has halted these ads following negative feedback from users.

The QR code appeared on the lock screen and when scanned it turned out to be a promotion for the Copilot AI, sending users through to where they could download the relevant mobile app for Copilot.

Needless to say, there were unhappy users, as evidenced in a Reddit thread pointed out by Windows Latest. Windows 10 users were complaining as well as those on Windows 11, and all were displeased that a relatively sizeable advert had been served up in this clunky manner.

The user who started the thread described being confronted by a “lovely QR code plastered across my lock screen,” and others expressed similar sentiments. (More threads on Reddit here and here – and a quick warning, all this gets a bit salty at times).

Microsoft has now dropped this experiment, fortunately, as Windows Latest reported. A Microsoft spokesperson told the tech site via an email: “The notification [QR code] was simply a way to educate users and has since been paused. We value our customer experiences and are always learning to determine what is most valuable and to whom.”

This comes on top of another recent and unwelcome move by Microsoft to once again try to drive better Windows 11 adoption.

Analysis: Wonky implementation

There are a few things that make this episode worse. Firstly, while Windows Latest talks about Microsoft canning the QR adverts, the statement above mentions a “pause” – a halt for now, not forever. Does that mean QR code-powered adverts are still a possibility for the future? We can’t rule that out, sadly.

The second point is that this experiment was rolled out to those running finished versions of Windows 11 (and 10) – not people in testing channels. That rubs salt in the wound, frankly, even if it was only a small subset of users who witnessed the ads.

What compounds the above is that as observed on Reddit, the QR code was slightly obscured by a part of the Windows interface in some cases, which meant some thought the code was actually there due to a bug, not by design or any intention of Microsoft’s. Again, why this wasn’t trialled in testing, particularly given the apparently glitchy implementation some folks witnessed, we don’t know.

It’s all a bit puzzling. When you mention QR codes and Windows 11, what we immediately think of is the Blue Screen of Death, which offers up a code related to the error that has occurred. That’s somewhat ironic as this latest move appears to be a clumsy error on Microsoft’s part, too.

Those who were irritated by this – or any other lock screen suggestions – can turn them off. On either Windows 11 or Windows 10, go to Settings > Personalization > Lock Screen, and at the top of this panel, select either ‘Picture’ or ‘Slideshow.’

You’ll then see the option to ‘Get fun facts…’ on the lock screen, which you need to turn off – job done. No more fun facts, suggestions, or shoddily made QR code adverts piped through at random.

You might also like...

Adobe's new beta Express app gives you Firefly AI image generation for free

Techradar - Thu, 03/07/2024 - 13:56

Adobe has released a new beta version of its Express app, letting users try out its Firefly generative AI on mobile for the first time.

The AI functions much like Firefly on the web since it has a lot of the same features. You can have the AI engine create images from a single text prompt, insert or remove objects from images, and add words with special effects. The service also offers resources like background music tracks, stock videos, and a content scheduler for posting on social media platforms. It’s important to mention that all these features and more normally require a subscription to Adobe Express Premium. But according to the announcement, everything will be available for free while the beta is ongoing. Once it’s over, you’ll have to pay the $10-a-month subscription to keep using the tools.

Adobe Express with Firefly features

(Image credit: Adobe)

Art projects on the current Express app will not be found in the beta – at least not right now. Ian Wang, who is the vice president of product for Adobe Express, told The Verge that once Express with Firefly exits beta, all the “historical data from the old app” will carry over to the new one. 

The new replacement

Adobe is planning on making Express with Firefly the main platform moving forward. It’s unknown when the beta will end. A company representative couldn’t give us an exact date, but they told us the company is currently collecting feedback for the eventual launch. When the trial period ends, the representative stated, “All eligible devices will be automatically updated to the new [app]”.

We managed to gain access to the beta and the way it works is pretty simple. Upon installation, you’ll see a revolving carousel of the AI tools at the top. For this quick demo, we’ll have Firefly make an image from a text prompt. Tap the option, then enter whatever you want to see from the AI.

Adobe Express with Firefly demo

(Image credit: Future)

Give it a few seconds to generate the content, and you’ll be given multiple pictures to choose from. From there, you can edit the image to your liking. After you’re all done, you can publish the finished product on social media or share it with someone.

Availability

Android users can download the beta directly from the Google Play Store. iPhone owners, on the other hand, will have a harder time. Apple has restrictions on how many testers can have access to beta software at a time. iOS users will instead have to join Adobe’s waitlist first and wait to get chosen. If you’re one of the lucky few, the company will guide you through the process of installing the app on your iPhone.

There is a system requirements page listing all of the smartphones eligible for the beta; however, it doesn’t appear to be a super strict list. The device we used was a OnePlus Nord N20, and it ran the app just fine. Adobe’s website also lists all the supported languages, including English, French, Korean, and Brazilian Portuguese.

Check out TechRadar's list of the best photo editor for 2024 if you want more robust tools.

You might also like

Microsoft makes big promises with new ‘AI PCs’ that will come with AI Explorer feature for Windows 11

Techradar - Thu, 03/07/2024 - 09:54

Microsoft has told us that it’s working on embedding artificial intelligence (AI) across a range of products, and it looks like it meant it, with the latest reports suggesting a more fleshed-out ‘AI Explorer’ feature for Windows 11.

Windows Central writes that AI Explorer will be the major new feature of an upcoming Windows 11 update, with Microsoft rumored to be working on a new AI assistance experience that’s described as an ‘advanced Copilot’ that will offer an embedded history and timeline feature. 

Apparently, this will transform the activities you do on your PC into searchable moments. It’s said that this AI Explorer will be able to be used in any app, enabling users to search conversations, documents, web pages, and images using natural language.

That promises a lot, implying you’ll be able to make requests like the following that Windows Central gives:

"Find me that list of restaurants Jenna said she liked.”

"Find me that thing about dinosaurs."

The advanced Copilot should then present everything it deems relevant - including every related word, phrase, image, and topic it can pull. It’s not clear if this means bringing up results from users' data stored locally on their PC or from the internet (or a combination, as we see in Windows 11's Search box). I personally would prefer it if AI Explorer kept to just searching local files stored on a device's hard drive for privacy reasons, or at least gave us the option to exclude internet results.

The feature could also offer up suggestions for things you can do based on what you currently have on your screen. For instance, if you’re viewing a photo, you might see suggestions to remove the background in the Photos app. 

The new Photos app in Windows 11

(Image credit: Microsoft)

When to expect more information

Rumors suggest that on March 21 there will be an announcement for the Surface Laptop 6 and Surface Pro 10, which are being hailed as Microsoft’s first real “AI PCs,” and will offer a range of features and upgrades powered by Microsoft’s next-gen AI tools. Sources say that these will go head-to-head with rivals like the iPad Pro and MacBook Pro in terms of efficiency and performance.

According to Neowin, we can look forward to the official launch of these PCs in April and June, but the AI features aren’t expected to be included right away. They’re forecast to be added in the second half of the year, so the first of these shipped PCs will be much like existing PCs running Windows 11, just with some flashy hardware upgrades. It also seems AI Explorer is intended specifically for these new machines, even if it's not there at launch, and existing device owners won’t be able to use it.

It sounds like we’ll have to keep watching for more information from Microsoft, especially as it’s not clear exactly what to expect on March 21, but there's a lot of hype and excitement that I hope Microsoft can live up to. Copilot’s present form is generally thought to be underwhelming and somewhat disappointing, so Microsoft has a lot to deliver if it wants to impress users and show them that it’s leading the pack with generative AI.

YOU MIGHT ALSO LIKE...

Waze could tempt you from Google Maps with these super-useful driving alerts

Techradar - Wed, 03/06/2024 - 23:00

Waze will receive a nice quality-of-life update that’ll help you drive around more safely as well as let you know of any recent changes to the road.

The patch is slated to be released on Android and iOS devices across the globe, but the rollout won’t happen all at once. Instead, the six features will come out in pieces throughout the coming months. It’s a little complicated, but once you break the announcement down, it all makes sense.

When it comes to safety, the app will notify you in advance of any emergency vehicles on your route. That way you’ll know when to shift lanes or take a detour. This tool is currently making its way to users in the US, Canada, Mexico, and France, with more countries, Waze promises, coming soon.

Waze's new speed limit and emergency vehicle alerts

(Image credit: Waze/Google)

Our favorite update out of the bunch has to be Waze deciding it'll shout out upcoming changes to speed limits in case they’re about to suddenly decrease. It's a pretty helpful tool whenever you want to avoid getting caught in a speed trap. Third, the developers are expanding hazard detection to include speed bumps, sharp turns, and toll booths. The speed limit warnings as well as the hazard detection upgrade are currently rolling out to all users. 

Navigating the world

This next set of features is scheduled to launch down the line.

Normally, whenever someone opens a navigation app, it’s because they want to get to their destination ASAP. Well, later this month, you’ll be given the option to take more scenic routes. They may not be the fastest way to get home, but at least you'll have the opportunity to take your favored path instead.

Most drivers can agree that finding a place to park in a city can be an utter nightmare. To make finding the sweet spot less stressful, Waze is teaming up with software company Flash to provide information on parking garages. The app will tell you how much it costs to park at a location, whether it’s covered or open to the elements, if there’s a valet, and more. 

The announcement states the new data feed is seeing a limited release. It will provide info on a select group of 30,000 parking garages across major cities in the United States and Canada.

Waze's new parking garage feed and alternative routes tool

(Image credit: Google/Waze)

The last feature will teach people how to navigate a roundabout. Waze says it’ll point out when to enter, when to switch lanes, and “where to exit”. Android users will receive the roundabout tool later this month; iPhone owners, however, will have to wait until later in the year for the same upgrade.

We reached out to Google, which is Waze’s parent company, asking if there are plans for future expansions and if it’s going to add the same features to the app’s web page. This story will be updated at a later time.

Waze's latest patch looks like it'll keep a lot of people safe, but accidents happen all the time. To keep your insurance rates from skyrocketing, check out TechRadar's list of the best dash cams for 2024. You never know when you'll need one.

You might also like

Another big reason to install iOS 17.4 right now – it fixes two major security threats

Techradar - Wed, 03/06/2024 - 10:35

Apple has just launched iOS 17.4, and right now everyone’s attention is focused on how it lets you run third-party app stores on your iPhone – although only if you're in the European Union. But there’s another important reason you should upgrade: it fixes two extremely serious security flaws.

In a new security post (via BleepingComputer), Apple says that iOS 17.4 and iPadOS 17.4 resolve two zero-day bugs in the iOS kernel and Apple’s RTKit that might allow an attacker to bypass your device’s kernel memory protections. That could potentially give malicious actors very high-level access to your device, so it’s imperative that you patch your iPhone as soon as possible by opening the Settings app, going to General > Software Update and following the on-screen instructions.

These issues are not just hypothetical; Apple says it is “aware of a report that this issue may have been exploited” in both cases, and if a zero-day flaw has been actively exploited it means hackers have been able to take advantage of these issues without anyone knowing. With that in mind, there’s every reason to update your device now that Apple has issued a set of fixes.

Apple says the bugs affect a wide range of devices: the iPhone XS and later, iPad Pro 12.9-inch 2nd generation and later, iPad Pro 10.5-inch, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 6th generation and later, and iPad mini 5th generation and later. In other words, a lot of people are potentially impacted.

Actively exploited

holding an iphone

(Image credit: Shutterstock)

Zero-day flaws like these are usually exploited in targeted attacks, often by sophisticated state-sponsored groups. Apple didn’t share any details of how or when these vulnerabilities were put to nefarious use, nor whether they were discovered by Apple’s own security teams or by external researchers.

Apple devices are known for their strong defenses, but are increasingly falling under hackers’ crosshairs. Recent research suggests that there were 20 active zero-day flaws targeting Apple products in 2023 – double the number of the previous year. According to BleepingComputer, three zero-day attacks on Apple devices have been patched so far in 2024.

This kind of exploit demonstrates why it’s so important to keep all of your devices updated with the latest patches, especially if they include security fixes. Leaving yourself vulnerable is a dangerous gamble when there are extremely sophisticated hacking groups out there in the wild. With that in mind, make sure you download the latest iOS 17.4 update as soon as you can.

You might also like

Microsoft is axing support for Android apps, leaving users to search for other solutions

Techradar - Wed, 03/06/2024 - 09:31

Another week, another Microsoft feature bites the dust - support for Android apps and games in Windows is getting the chop. Starting next year, users will need a third-party alternative solution to run Android apps in Windows 10 and Windows 11.

This is because Windows Subsystem for Android (WSA), the official Microsoft component that enables Windows 11 to run Android applications natively, will no longer be supported, and Windows users won’t be able to access the Amazon Appstore directly on Windows. Support for WSA is slated to end this time next year, on March 5, 2025.

This news appeared in a notice added to the technical documentation for Windows Subsystem for Android. In this notice, Microsoft states that users can expect to access any Android apps they have installed this way (and from the Amazon Appstore) up until the date support is fully deprecated.

According to Android Authority, after March 5, 2025, users will not be able to access any Android apps that rely on WSA. It also seems reasonable to assume that after this date, users won’t be able to install the WSA app, or install any new Android apps from the Amazon store. 

Man using download manager on laptop

(Image credit: Unsplash)

The impending reality for Android app fans

If you’re looking for a replacement when March 5, 2025 rolls around, you can turn to unofficial third-party apps that will enable you to run Android apps on Windows.

If it’s just Android games you’re interested in, there is an official solution on offer from Google: Google Play Games, which lets hundreds of Android games be played on PCs running Windows 10 and Windows 11. Google Play Games is still in beta, but you can download it from the official website.

The death of WSA is very disappointing news from Microsoft and takes away options for how users can use their PCs, possibly a move made in the name of capping the visibility of competitors within Microsoft’s flagship operating system. This is purely in Microsoft’s interest and comes at the expense of users’ choice, and it will force users who want to run Android apps to find workarounds. One of the main appeals of Windows over competitors like ChromeOS and macOS is the flexibility and customizability of the operating system, and moves like this only serve to kneecap that selling point.

I assume Microsoft hopes this might drive these users to the Microsoft Store to consider Microsoft-issued apps instead, but the Microsoft Store's offerings are somewhat lacking. I hope that Microsoft has substantial plans to improve the Microsoft Store if it’s going to take away what was largely seen as a stable (and more or less straightforward) platform that expanded the apps available to users by a sizeable amount.

YOU MIGHT ALSO LIKE...

No, third-party iPhone app stores won't work outside Europe – even with a VPN

Techradar - Wed, 03/06/2024 - 05:33

After many years of Apple keeping its ecosystem firmly locked down, cracks have started to appear in its famous walled garden, with the newly released iOS 17.4 allowing third-party app stores for the first time. However, access to these is only available to users within the European Union (EU) – and don’t expect to be able to get around the restriction using a VPN.

As spotted by 9to5Mac, Apple has uploaded a new support document that outlines how it will make sure that anyone who wants to access a third-party app store is physically located inside the EU.

First, you must have an Apple ID that's set to an EU member state. As well as that, there’s a geolocation check to ensure that you're physically located in one of those countries. Apple says it doesn’t collect your actual location, only an indicator of whether you're eligible to use third-party app stores or not.

Interestingly, the geolocation aspect of Apple’s restrictions implies that even the best VPN services won’t be able to bypass them. That’s because a VPN can change your IP address to fool a server into believing that you're located in a different country, but a geolocation check happens on the device itself (usually using GPS), and therefore can't be spoofed in the same way.

Apple might use other ways to check your location, and it already has a system in place that does just this. Also as found by 9to5Mac, this system looks up things like your rough location (on a nation level, not your exact location), your Apple ID billing address, the region you are using in the Settings app, and the type of device you’re using.
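The logic 9to5Mac describes — several independent signals all having to agree — can be sketched in a few lines of Python. This is purely illustrative: the function and signal names are hypothetical, and Apple's actual eligibility check is private. The key property it shows is why a VPN doesn't help: a VPN changes your apparent IP address, but not on-device signals like the Settings region, billing address, or coarse device location.

```python
# ISO 3166-1 alpha-2 codes for the 27 EU member states.
EU_COUNTRIES = {
    "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR",
    "DE", "GR", "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL",
    "PL", "PT", "RO", "SK", "SI", "ES", "SE",
}

def eligible_for_alt_stores(apple_id_region, billing_country,
                            settings_region, coarse_location):
    """Hypothetical sketch: grant eligibility only if every signal
    points to an EU member state. Spoofing one signal (e.g. the
    apparent IP via a VPN) is not enough, because the others are
    read from the device itself."""
    signals = [apple_id_region, billing_country,
               settings_region, coarse_location]
    return all(country in EU_COUNTRIES for country in signals)
```

Under this model, a user whose Apple ID, billing address, and Settings region are all French passes the check, while the same user with a US billing address does not — regardless of what a VPN reports.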

The app stores are coming

The App Store on a phone screen

(Image credit: Shutterstock / BigTunaOnline)

Apple says that you will be able to access alternative app stores if you leave the EU for a brief “grace period,” but warns that if you’re “gone for too long, you’ll lose access to some features, including installing new alternative app marketplaces.” Apps you’ve installed will still work, but you won’t be able to update them. The company hasn’t said how long the grace period is.

Alternative app stores have only just been permitted, but one is already available to download. Called the Mobivention App Marketplace, this store is aimed at corporate customers who want an outlet for distributing their own business-focused apps. Other providers, like MacPaw, Epic Games and AltStore, have said they’ll be launching their own app stores soon.

Apple didn’t give a reason for why it's going so far to ensure that only EU citizens can access third-party app stores, but one reason could be to clamp down on the idea spreading to users in other nations. For one thing, Apple has repeatedly said that third-party app stores to which access is being enforced by the EU’s Digital Markets Act (DMA) could be unsafe.

As well as that, they also represent a potential threat to Apple’s revenues – just one look at Apple’s onerous fees for developers who use third-party app stores shows you how worried Apple must be. While the company is being forced to open up in the EU, no other jurisdiction has followed suit, so it seems likely that Apple wants to contain the spread of alternative app stores as much as it can.

If you’re located inside the EU, you’ll be able to try out these new app stores pretty much straight away. If you’re not, all you can do is wait to see if Apple is forced to open up elsewhere.

You might also like

ChatGPT takes the mic as OpenAI unveils the Read Aloud feature for your listening pleasure

Techradar - Wed, 03/06/2024 - 04:56

OpenAI looks like it’s been hard at work, continuing to improve the GPT store and recently sharing demonstrations of one of the other highly sophisticated models in its pipeline, the video-generation tool Sora. That said, it’s not resting on ChatGPT’s previous success either: it's giving the impressive AI chatbot the capability to read its responses out loud. The feature is being rolled out on both the web and mobile versions of the chatbot.

The new feature is called 'Read Aloud', as per an official X (formerly Twitter) post from the generative artificial intelligence (AI) company. It will come in useful for many users, including those with different accessibility needs and people using the chatbot while on the go.

Users can try it for themselves now, according to The Verge, either on the web version of ChatGPT or on mobile (iOS and Android), and they can select from five different voices for ChatGPT to use. The feature is available whether you use the free version, GPT-3.5, or the premium paid version, GPT-4. As for languages, the Read Aloud feature supports 37 languages (for now), and ChatGPT will be able to autodetect the language the conversation is happening in.

If you want to try it on the desktop version of ChatGPT, there should be a speaker icon below the generated text that activates the feature. On the mobile apps, you can tap and hold the text to open the Read Aloud player, where you can play, pause, and rewind the reading of ChatGPT’s response. Bear in mind that the feature is still being rolled out, so not every user in every region will have access just yet.

A step in the right direction for ChatGPT

This isn’t the first voice-related feature ChatGPT has received: OpenAI introduced a voice chat feature in September 2023, which allowed users to make inquiries using voice input instead of typing. Users can keep this setting on, prompting ChatGPT to always respond out loud to their inputs.

The debut of this feature comes at an interesting time, as Anthropic recently introduced similar features to its own generative AI models, including Claude. Anthropic is an OpenAI competitor that has recently seen major investment from Amazon.

Overall, this new feature is great news in my eyes (or ears), primarily for expanding accessibility to ChatGPT, but also because I've had a Read-Aloud plugin for ChatGPT in my browser for a while now. I find it interesting to listen to and analyze ChatGPT’s responses out loud, especially as I’m researching and writing. After all, its responses are designed to be as human-like as possible, and a big part of how we process actual real-life human communication is by speaking and listening to each other. 

Giving ChatGPT a capability like this can help users think about how well it is responding, as it makes use of another of our primary ways of receiving verbal information. Beyond the obvious accessibility benefits for blind or partially sighted users, I think this is a solid move by OpenAI in cementing ChatGPT as the go-to generative AI tool, opening up another avenue for humans to connect with it.

YOU MIGHT ALSO LIKE...

Feeling lost in the concrete jungles of the world? Fear not, Google Maps introduces a new feature to help you find entrances and exits

Techradar - Wed, 03/06/2024 - 04:36

Picture this: you’re using Google Maps to navigate to a place you’ve never been and time is pressing, but you’ve made it! You’ve found the location, but there’s a problem: you don’t know how to get into whatever building you’re trying to access, and panic sets in. Maybe that’s just me, but if you can relate, it looks like we’re getting some good news - Google Maps is testing a feature that shows you exactly where you can enter buildings.

According to Android Police, Google Maps is working on a feature showing users entrance indicator icons for selected buildings. I can immediately see how this could make it easier to find your way in and out of a location. Loading markers like this would require a lot of internet data if done for every suitable building in a given area, especially metropolitan and densely packed areas, but it seems Google has accounted for this; the entrance icons will only become visible when you select a precise location and zoom in closely. 

Google Maps is an immensely popular app for navigation as well as looking up recommendations for various activities, like finding attractions or places to eat. If you’ve ever actually done this in practice, you’ve possibly had a situation like I’ve described above, especially if you’re trying to find your way around a larger attraction or building. Trying to find the correct entrance to an expo center or sports stadium can be a nightmare. Places like these will often have multiple entrances with different accessibility options - such as underground train stations that stretch across several streets.

Google's experimentation should help users manage those parts of their journeys better, starting with only certain users and certain buildings for now, displaying icons that indicate both where you can enter a place and exit it (if there are exit/entrance-only doors, for example). This feature follows the introduction of Google Maps’ recent addition of indicators of the best station exits and entrances for users of public transport.

Google Maps being used to travel across New York

(Image credit: Shutterstock / TY Lim)

The present state of the new feature

Android Police tested the new feature on Google Maps version 11.17.0101 on a Google Pixel 7a. As Google seemingly intended, Google Maps showed entrances for a place only when it was selected and while the user zoomed in on it, showing a white circle with a symbol indicating ‘entry’ on it. That said, Android Police wasn’t able to use the feature on other devices running the latest version of Google Maps for different regions, which indicates that Google Maps is rolling this feature out gradually following limited and measured testing. 

While using the Google Pixel 7a, Android Police tested various types of buildings including hotels, doctors’ offices, supermarkets, hardware stores, cafes, and restaurants in cities that include New York City, Las Vegas, San Francisco, and Berlin. Some places had these new entrance and exit markers and some didn’t, which probably means that Google is still in the process of gathering accurate and up-to-date information on these places, most likely via its StreetView tool. Another issue that came up was that some of the indicated entrances were not in the right place, but teething issues are inevitable and this problem seemed more common for smaller buildings where it’s actually easier to find the entrance once you’re there in person.

The entrances were sometimes marked by a green arrow instead of a white circle, and it’s not clear at this point exactly what each symbol means. Google Maps has a reputation as a very helpful, functional, and often dependable app, so whatever new features are rolled out, Google probably wants to make sure they’re up to a certain standard. I hope it completes the necessary stages of testing and implementing this new feature, and I look forward to using it as soon as I can.

YOU MIGHT ALSO LIKE...
