Feed aggregator
Love and hate: tech pros overwhelmingly like AI agents but view them as a growing security risk
- Nearly half of IT teams don’t fully know what their AI agents are accessing daily
- Enterprises love AI agents, but also fear what they’re doing behind closed digital doors
- AI tools now need governance, audit trails, and control just like human employees
Despite growing enthusiasm for agentic AI across businesses, new research suggests that the rapid expansion of these tools is outpacing efforts to secure them.
A SailPoint survey of 353 IT professionals with enterprise security responsibilities has revealed a complex mix of optimism and anxiety over AI agents.
The survey reports 98% of organizations intend to expand their use of AI agents within the coming year.
AI agent adoption outpaces security readiness
AI agents are being integrated into operations that handle sensitive enterprise data, from customer records and financials to legal documents and supply chain transactions. However, 96% of respondents said they view these very agents as a growing security threat.
One core issue is visibility: only 54% of professionals claim to have full awareness of the data their agents can access - which leaves nearly half of enterprise environments in the dark about how AI agents interact with critical information.
Compounding the problem, 92% of those surveyed agreed that governing AI agents is crucial for security, but just 44% have an actual policy in place.
Furthermore, eight in ten companies say their AI agents have taken actions they weren’t meant to - this includes accessing unauthorized systems (39%), sharing inappropriate data (33%), and downloading sensitive content (32%).
Even more troubling, 23% of respondents admitted their AI agents have been tricked into revealing access credentials, a potential goldmine for malicious actors.
One notable insight is that 72% believe AI agents present greater risks than traditional machine identities.
Part of the reason is that AI agents often require multiple identities to function efficiently, especially when integrated with high-performance AI tools or systems used for development and writing.
Calls for a shift to an identity-first model are growing louder, with SailPoint and others arguing that organizations need to treat AI agents like human users, complete with access controls, accountability mechanisms, and full audit trails.
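The identity-first approach described above can be illustrated with a minimal sketch: an agent gets its own identity with explicitly scoped permissions, and every access attempt, granted or denied, lands in an audit trail. All class and scope names here are hypothetical, invented for the example; this is not any vendor's actual API.

```python
from datetime import datetime, timezone

class AgentIdentity:
    """Illustrative sketch of an AI agent treated like a human user:
    scoped access plus a full audit trail. Names are made up."""

    def __init__(self, name, allowed_scopes):
        self.name = name
        self.allowed_scopes = set(allowed_scopes)
        self.audit_log = []  # every access attempt is recorded, pass or fail

    def access(self, resource, scope):
        granted = scope in self.allowed_scopes
        self.audit_log.append({
            "agent": self.name,
            "resource": resource,
            "scope": scope,
            "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return granted

# A finance agent can read the ledger, but a PII request is denied and logged
agent = AgentIdentity("invoice-bot", allowed_scopes={"financials:read"})
print(agent.access("Q2-ledger", "financials:read"))
print(agent.access("hr-records", "pii:read"))
```

The point of the sketch is that denial alone isn't enough: the audit log is what gives security teams the visibility that, per the survey, nearly half of organizations currently lack.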
AI agents are a relatively new addition to the business space, and it will take time for organizations to fully integrate them into their operations.
“Many organizations are still early in this journey, and growing concerns around data control highlight the need for stronger, more comprehensive identity security strategies,” SailPoint concluded.
You might also like
- These are the best AI website builders around
- Take a look at our pick of the best internet security suites
- Google Drive's new Gemini features include video analysis at last
The Trump-Elon Musk Feud Creates More Problems for Tesla
Already suffering from steep declines in sales and profit, the carmaker could now face the president’s wrath.
Inside OpenAI’s Plan to Embed ChatGPT Into College Students’ Lives
OpenAI, the firm that helped spark chatbot cheating, wants to embed A.I. in every facet of college. First up: 460,000 students at Cal State.
Buyer With Ties to Chinese Communist Party Got V.I.P. Treatment at Trump Crypto Dinner
The warm welcome for a technology executive whose purchases of the president’s digital coin won him a White House tour illustrates inconsistencies in the administration’s views toward visitors from China.
What to Know About the Effects of Ketamine
Elon Musk has said that he used ketamine as a treatment in the past, but he denied reports that he was taking it frequently and recreationally.
Graphene manufacturer Avadain surpasses crowdfunding goals, plans further expansion
The emerging entrepreneur behind this promising venture shares insights on the company's unique advantages and strategic focus in the evolving graphene industry.
Memphis startup's wearable tech aims to revolutionize post-surgical venous clot monitoring
A Memphis-based startup is gearing up to test its innovative medical device at two prominent hospitals, potentially changing how post-surgical blood clots are detected.
UK Court Warns Lawyers Can Be Prosecuted Over A.I. Tools That ‘Hallucinate’ Fake Material
A senior judge said on Friday that lawyers could be prosecuted for presenting material that had been “hallucinated” by artificial intelligence tools.
Google upgrades Gemini 2.5 Pro's already formidable coding abilities
- Google’s Gemini 2.5 Pro is getting an update to improve its coding
- The update fixes previous issues with formatting and coherence
- The model is expected to become Gemini Pro’s first official stable release
Google's rapid rollout of new Gemini models continues apace, and the latest version of Gemini 2.5 Pro brings notable improvements that the company says will keep it in play for a while as the first “long-term stable release.” The upgrade also patches some of the issues that currently frustrate Gemini Pro users.
For now, the model is still in beta, unlike its friskier sibling, Gemini 2.5 Flash. Gemini 2.5 Pro has reportedly taken longer, dealing with regressions in conversation quality that made the model seem somewhat underpowered. Those issues have apparently been resolved, and Google is bragging about the new model's coding capabilities in particular: it has outscored rival models on Aider Polyglot, a popular multi-language benchmark for measuring how well AI models write and edit code.
The updated model also offers developers what Google calls “configurable thinking budgets.” These are essentially a way to fine-tune how much computing power Gemini uses to answer complex queries so that you don't use up all of your credits building an app in one go. It’s already available through Google AI Studio and Vertex AI, and will likely become part of Gemini as a whole in the near future.
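The idea behind a thinking budget can be sketched in a few lines: cap how much reasoning a single request may consume so one complex query can't drain the whole credit pool. This is illustrative logic only; the function name and the quarter-of-remaining heuristic are invented for the example and are not how Google's API works internally.

```python
def thinking_budget(remaining_credits, max_per_request=1024):
    """Illustrative only: pick a per-request reasoning-token budget.

    Spend at most a fixed per-request cap, and never more than a
    quarter of whatever credits remain (both numbers are arbitrary
    choices for this sketch).
    """
    return min(max_per_request, remaining_credits // 4)

print(thinking_budget(10_000))  # plenty of credits: capped at 1024
print(thinking_budget(2_000))   # running low: scaled down to 500
```

In practice, developers set the budget through the model's generation config rather than computing it by hand, but the trade-off is the same: a bigger budget buys deeper reasoning on hard queries at the cost of tokens.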
Gemini Pro power
And it's not just a technical whiz. Gemini Pro has faced complaints of lacking the creative and conversational flair of its fellow Gemini models, failing to impress outside of technical tasks. The writing and formatting could get sloppy, and long-form responses tended to ramble or circle back on themselves. Google says it's fixed that issue, with correct formatting, more nuanced writing, and no trailing off mid-response.
All of those upgrades lead to why Google has declared this version of Gemini Pro to be a long-term, stable model, at least for now. For developers and enterprise users, that kind of certainty is valuable in its own right, just as much as regular upgrades.
The new model will have an impact on Gemini users outside of the office, too. The same improvements to formatting, memory, and contextual understanding will likely be incorporated into the public-facing version of Gemini just to keep things neat. And it fits with Google's strategy to embed Gemini everywhere and encourage everyone to use it for any of their AI needs. Gemini Flash is the default option for those not paying a subscription fee for Gemini. Gemini Nano handles AI for Android devices, but Gemini Pro is intended to be the flagship model, the one that impresses everyone.
Google will definitely try to live up to that vision with the new model, but the competition has hardly gone away. OpenAI, Anthropic, and even Apple are all racing to be on top of the AI model game. Gemini 2.5 Pro proves Google won't be falling behind any time soon, at least now that it's stopped regressing.
You might also like
If Elon Musk and President Trump Divorce, Who Gets Silicon Valley?
The relationship between Mr. Trump and tech industry power brokers was built on money and the promise of deregulation, with Mr. Musk in the middle of it all.
Sam Altman says AI chats should be as private as ‘talking to a lawyer or a doctor’, but OpenAI could soon be forced to keep your ChatGPT conversations forever
- The New York Times is requesting that all ChatGPT conversations be retained as part of its lawsuit against OpenAI and Microsoft
- This would mean that a record of all your ChatGPT conversations would be kept, potentially forever
- OpenAI argues that chats with AI should be a private conversation
Back in December 2023, the New York Times launched a lawsuit against OpenAI and Microsoft, alleging copyright infringement. The New York Times alleges that OpenAI had trained its ChatGPT model, which also powers Microsoft’s Copilot, by “copying and using millions” of its articles without permission.
The lawsuit is still ongoing, and as part of it the New York Times (and the other plaintiffs in the case) have demanded that OpenAI retain consumer ChatGPT and API customer data indefinitely, much to the ire of OpenAI CEO Sam Altman, who took to X to write: “We have been thinking recently about the need for something like ‘AI privilege’; this really accelerates the need to have the conversation. IMO talking to an AI should be like talking to a lawyer or a doctor. I hope society will figure this out soon.”
“Recently the NYT asked a court to force us to not delete any user chats. We think this was an inappropriate request that sets a bad precedent. We are appealing the decision. We will fight any demand that compromises our users' privacy; this is a core principle.” (June 6, 2025)
OpenAI describes the New York Times lawsuit as “baseless”, and in a lengthy post on the OpenAI website titled, ‘How we’re responding to The New York Times’ data demands in order to protect user privacy’, OpenAI lays out its approach to privacy.
Brad Lightcap, COO, OpenAI, says that the demand from the NYT “fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections.”
Private investigations
As more and more people share intimate details of their lives with AI chatbots, which often take on the role of a therapist, I can appreciate the need to keep AI conversations private. However, I can also see the NYT's point of view: if there is evidence that supports its claims against OpenAI, then it needs access to that data, without OpenAI being able to declare it all too private to share.
At the moment, a ChatGPT chat is removed from your account immediately when you delete the conversation, and scheduled for permanent deletion from OpenAI systems within 30 days. The order would mean that even deleted ChatGPT conversations would have to be retained by OpenAI.
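The 30-day rule described above amounts to a simple retention check, sketched below. The function name and logic are invented for illustration; this is not OpenAI's actual implementation, only the policy as described, expressed as code.

```python
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=30)  # the window described in the policy

def purge_due(deleted_at, now):
    """Sketch of the 30-day rule: once a user deletes a chat, it is
    scheduled for permanent removal within 30 days. Illustrative only."""
    return now - deleted_at >= RETENTION_WINDOW

deleted = datetime(2025, 5, 1, tzinfo=timezone.utc)
print(purge_due(deleted, datetime(2025, 5, 20, tzinfo=timezone.utc)))  # 19 days: not yet
print(purge_due(deleted, datetime(2025, 6, 5, tzinfo=timezone.utc)))   # 35 days: purge
```

The court order at issue would effectively suspend exactly this kind of purge step, keeping deleted conversations on file indefinitely.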
As a ChatGPT user myself, I've always appreciated the ability to remove conversations entirely. If OpenAI is forced to comply with this request, it's going to affect pretty much everybody who uses the service on a Free, Plus, Pro, or Team plan (but not Enterprise or Edu account holders).
The order also does not impact API customers who are using Zero Data Retention endpoints under OpenAI’s ZDR amendment.
OpenAI has said it has appealed the order to the District Court Judge and will inform us when it knows more.
You might also likeHe’s a Master of Outrage on X. The Pay Isn’t Great.
An online creator went from a “nobody” to a conspiratorial sensation on X. What he gets in return is less clear.
5 for '25: Five ways to ward off an internet service disruption
A global technology outage disabled computers and caused billions of dollars in losses for companies around the world about this time last year. Are you prepared for when, not if, the next massive service disruption occurs?
The Trump-Musk Fallout + A DOGE Coder Speaks + ChefGPT
“We’re having a broligarchy blowup of the highest order.”
If Elon Musk and Donald Trump Make Up, Don’t Be Surprised
For all the insults that Mr. Musk and Mr. Trump traded on Thursday, don’t be surprised if they make up again days from now. In the meantime, they both benefit.
Apple WWDC 2025: dates, timings, and everything we're expecting from the big software show
It's almost that time of year again: WWDC time. Apple's Worldwide Developer Conference is an annual event, where it reveals to developers and the rest of us what's coming in terms of software updates. That covers iOS, iPadOS, macOS, watchOS, tvOS, and visionOS, so it's always a packed show.
When it comes to official WWDC 2025 news, all we really know is when it's happening. Unofficially, there have been a ton of leaks and rumors hinting at what's to come – and we've collected them all here so you can prepare yourself for the big day.
If Apple sticks to its usual schedule, we'll get beta versions of some of these updates shortly after WWDC 2025 has ended, followed by full launches later in the year. When it comes to iOS 19 (or iOS 26) for example, the software should start rolling out to iPhones in September to coincide with the launch of the iPhone 17.
Cut to the chase
- What is it? Apple's big annual software show
- When is it? Monday, June 9, 2025
This year's WWDC is happening on Monday, June 9, as announced by Apple.
There are meetings and presentations all week, but most of the main announcements will be made by Tim Cook and his colleagues in a keynote speech that kicks off the event: that's scheduled to get underway at 10am PT / 1pm ET / 6pm UK (that's 3am AEST on June 10, for those of you in Australia).
As usual, the keynote will be livestreamed over the web, and here's how to watch it.
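The time-zone conversions above are easy to double-check with Python's standard library, using the stated 10am Pacific start on June 9:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# WWDC 2025 keynote: 10am Pacific time on Monday, June 9
keynote = datetime(2025, 6, 9, 10, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

for zone in ("America/New_York", "Europe/London", "Australia/Sydney"):
    local = keynote.astimezone(ZoneInfo(zone))
    print(f"{zone}: {local.strftime('%H:%M on %b %d')}")
```

Running this confirms 1pm in New York, 6pm in London, and 3am on June 10 in Sydney, matching the times listed above.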
WWDC 2025: what can we expect?
It looks as though WWDC 2025 is going to be particularly busy, based on the rumors and speculation we've come across in the run-up to the event. Bear in mind that none of this is official yet, but here's what we're expecting.
A major software rebrand
We were expecting iOS 19 to follow iOS 18, as you would, but a reliable source says Apple plans to name the next iPhone operating system update iOS 26 – to match the year 2026, even though it's launching in the third quarter of 2025.
Not only that, but the rumor is that every Apple software platform will be renamed to match. This would fix the rather confusing situation we have now, where macOS, tvOS, watchOS, and visionOS are all on different version numbers.
That would be quite a jump in some cases – from visionOS 2 to visionOS 26 – but it would make everything more consistent. It's also going to be interesting to see which devices will be eligible for the upcoming updates.
Revamped interfaces
Another leak that's emerged ahead of time suggests most of Apple's software platforms are going to get a modern visual refresh – one that actually matches the least widely used of those platforms, visionOS.
From what we've heard, it sounds like the new look will be more consistent, more straightforward, and with more use of translucent, glass-like elements. We've actually seen hints of this in the official invite to WWDC 2025.
What's more, the tagline of the event is "sleek peek" – pointing towards something that involves a visual overhaul. It could be the biggest update to the aesthetics of Apple's software and apps since iOS 7, all the way back in 2013.
Big app upgrades
As well as refreshing the underlying operating systems, Apple tends to save its individual app update announcements for WWDC too. This year it's been rumored that Messages will get automatic translation and support for polls, for example.
The same leak predicts animated album art on the lock screen when you're listening to your tunes in Apple Music, as well as the ability to export Notes in markdown format. An overhaul to the CarPlay interface has also been predicted.
Apparently, a dedicated gaming app is on the way for Apple devices too, a central hub where all your games, chats, leaderboards, and other game-related information can live and sync across your various gadgets.
Not much Apple Intelligence
Apple has gone big on AI recently, like just about every other tech company in business – but after several delays to the rollout of Apple Intelligence, it seems we won't get much in the way of new AI announcements at WWDC 2025.
That's according to Mark Gurman at Bloomberg, who is usually reliable when it comes to Apple predictions. Apparently it's going to be pretty quiet on the Siri front, while Apple engineers regroup and make sure the next update is a polished one.
We may still see a few Apple Intelligence tweaks, such as battery optimizations, but don't expect too much in terms of AI – even if there's a possibility that Apple could open up its platforms to more third-party AI voice assistants.
More leaks and rumors
That's not quite the end of the leaks and rumors when it comes to WWDC 2025. Software updates for the Apple AirPods are rumored to be adding features such as camera control, support for more gestures, and a new mic mode.
Then there's the Apple Watch: we won't see new hardware at WWDC 2025, but we suspect Apple may well introduce some new tracking features in watchOS, as well as perhaps a smattering of Apple Intelligence features.
No doubt Apple will have some surprises in store, so join us on June 9 for the full story: we'll be running a live blog alongside speedy updates from Apple, as we hear all about its software plans for the rest of the year.
You might also like
How NASA Would Struggle Without SpaceX if Trump Cancels Musk’s Contracts
If President Trump cancels the contracts for Elon Musk’s private spaceflight company, the federal government would struggle to achieve many goals in orbit and beyond.
Luma Labs' new Modify Video tool can reimagine scenes without reshooting
- Luma Labs' new Modify Video tool for Dream Machine uses AI to alter any video footage without reshoots
- Any characters or environments won't lose their original motion or performances
- Anything from subtle wardrobe tweaks to full magical scene overhauls is feasible
Luma Labs is known for producing AI videos from scratch, but the company has a new feature for its Dream Machine that can utterly transform real video footage in subtle or blatant ways, even if it's just an old home movie.
The new Modify Video feature does for videos something like the best Photoshop tools do for images. It can change a scene's setting, style, even whole characters, all without reshooting, reanimating, or even standing up.
The company boasts that the AI video editing preserves everything that matters to you from the original recording, such as actor movement, framing, timing, and other key details, while altering anything else you want.
The outfit you're wearing, which you've decided wasn't you, is suddenly an entirely different set of clothing. That blanket fort is now a ship sailing a stormy sea, and your friend flailing on the ground is actually an astronaut in space, all without the use of green screens or editing bays.
Luma’s combination of advanced motion and performance capture, AI styling, and what it calls structured presets makes it possible to offer the full range of reimagined videos.
All you need to do is upload a video of up to 10 seconds in length to get started. Then pick from the Adhere, Flex, or Reimagine presets.
Adhere is the most subtle option; it focuses on minimal changes, such as the clothing adjustment below or different textures on furniture. Flex does that but can also adjust the style of the video, the lighting, and other, more obvious details. Reimagine, as the name suggests, can completely remake everything about the video, taking it to another world or remaking people into cartoon animals or sending someone standing on a flat board into a cyberpunk hoverboard race.
Flexible AI video
It all depends not just on prompts, but also on reference images and frame selections from your video, if you choose. As a result, the process is much more user-friendly and flexible.
Although AI video modification is hardly unique to Luma, the company claims it outperforms rivals like Runway and Pika due to its performance fidelity. The altered videos keep an actor’s body language, facial expressions, and lip sync. The final results appear as an organic whole, not just stitched-together bits.
Of course, the Modify Video tools have limitations. Clips are still capped at 10 seconds for now, which keeps wait times manageable. If you want a longer film, though, you'll need to plan how to artistically stitch multiple shots together.
Still, features like the ability to isolate elements within a shot are a big deal. Sometimes you have a performance you're very happy with, but it's supposed to be a different kind of character in a different setting. Well, you can keep the performance intact and swap a garage for the sea and your actor's legs for a fish tail.
Dreams to reality
It is genuinely impressive how quickly and thoroughly the AI tools can rework a bit of footage. These tools aren't just a gimmick; the AI models are aware of performances and timelines in a way that feels closer to human than any I've seen. The AI models don't actually understand pacing, continuity, or structure, but they are very good at mimicking these aspects.
While technical and ethical limitations will prevent Luma Labs from recreating entire films at this point, these tools will be tempting for many amateur or independent video producers. And while I don't see it becoming as widely used as common photo filters, there are some fun ideas in Luma's demos that you might want to try.
You might also like...