Computerworld.com [Hacking News]
Microsoft’s Fara-7B brings AI agents to the PC with on-device automation
Microsoft is pushing agentic AI deeper into the PC with Fara-7B, a compact computer-use agent (CUA) model that can automate complex tasks entirely on a local device.
The experimental release, aimed at gathering feedback, provides enterprises with a preview of how AI agents might run sensitive workflows without sending data to the cloud, while still matching or outperforming larger models like GPT-4o in real UI navigation tasks.
“Unlike traditional chat models that generate text-based responses, Computer Use Agent (CUA) models like Fara-7B leverage computer interfaces, such as a mouse and keyboard, to complete tasks on behalf of users,” Microsoft said in a blog post. “With only 7 billion parameters, Fara-7B achieves state-of-the-art performance within its size class and is competitive with larger, more resource-intensive agentic systems that depend on prompting multiple large models.”
Fara-7B processes screenshots and interprets on-screen elements at the pixel level, enabling it to navigate interfaces even when the underlying code is complex or unavailable.
In internal benchmarks, Fara-7B posted a 73.5 percent success rate on the WebVoyager test, surpassing GPT-4o when both were evaluated as computer-use agents. Microsoft said the model also tends to finish tasks in far fewer steps than earlier 7B-class systems, which could translate to faster and more predictable automation on the desktop.
Microsoft has also built a “Critical Points” safeguard into the model, requiring the agent to pause and request user approval before performing irreversible actions such as sending emails or completing financial transactions.
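Microsoft hasn't published the internal mechanics of "Critical Points," but the idea can be sketched as an approval gate: any action classified as irreversible is held until the user confirms it. All names and the action taxonomy below are hypothetical, purely for illustration.

```python
# Illustrative sketch of a "Critical Points"-style approval gate.
# The action names and API shape are assumptions, not Fara-7B's real interface.

IRREVERSIBLE_ACTIONS = {"send_email", "submit_payment", "delete_file"}

def execute_action(action: str, payload: dict, approve) -> str:
    """Run an agent action, pausing for user approval on irreversible ones.

    `approve` is a callback (e.g., a UI confirmation dialog) that returns
    True only if the user explicitly allows the action to proceed.
    """
    if action in IRREVERSIBLE_ACTIONS and not approve(action, payload):
        return "blocked: user declined"
    return f"executed: {action}"

# A user who declines stops the irreversible step; routine actions pass through.
print(execute_action("send_email", {"to": "a@b.com"}, approve=lambda a, p: False))
print(execute_action("click_link", {}, approve=lambda a, p: False))
```

The design choice worth noting is that the gate sits in the execution path itself, so a misbehaving model cannot skip it by phrasing its plan differently.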
The shift to local models
Analysts note that the move toward compact, local models such as Fara-7B reflects a broader shift in enterprise AI architecture.
Cloud-based systems continue to dominate for large-scale reasoning and organization-wide search. Still, many day-to-day enterprise workflows involve copying data between internal applications on a laptop, where information cannot leave the device.
“Edge-based models solve three big problems with cloud AI: compute cost, data leaving the device, and latency,” said Pareekh Jain, CEO of Pareekh Consulting. “Most enterprise tasks happen across internal apps on a laptop, and a local agent is a much better fit for that.”
Charlie Dai, VP and principal analyst at Forrester, said Fara-7B shows how lightweight, device-resident agents will become more important as organizations accelerate their adoption of agentic AI.
“For enterprises, this signals a gradual decentralization of AI workloads, lowering dependency on hyperscale infrastructure while demanding new strategies for edge governance and model lifecycle management,” Dai added.
The trend also reflects a broader move toward hybrid AI architectures, where local agents handle privacy-sensitive workflows and cloud systems continue to provide scale, according to Tulika Sheel, a senior VP at Kadence International.
By keeping data local and reducing reliance on hyperscale compute, small on-device agents offer a practical way to automate sensitive or repetitive desktop tasks without exposing information to external systems.
Pixel-level agents promise broader compatibility because they can work across many applications without custom integrations, but they also bring operational risks. Jain compared this approach to an AI-enhanced version of robotic process automation, where the agent mimics mouse and keyboard inputs to move data between systems.
“A pixel-only agent can work across many applications without alignment or integration, which is a big advantage,” Jain said. “But if the UI changes, the agent may struggle. It is powerful, but also fragile.”
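The fragility Jain describes can be demonstrated with a toy version of pixel matching: the agent locates a UI element by comparing raw pixel values, so even a one-pixel visual change breaks the lookup. The grids below stand in for a screenshot and a button template; real agents use vision models rather than exact matching, so this is only a minimal sketch of the failure mode.

```python
# Why pixel-only agents are fragile: the element is found by matching raw
# pixels, so any visual change in the UI makes the lookup fail.

def find_element(screen, template):
    """Return (row, col) of the first position where template matches, else None."""
    th, tw = len(template), len(template[0])
    for r in range(len(screen) - th + 1):
        for c in range(len(screen[0]) - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None  # UI changed: the agent has no element to click

screen = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
button = [[1, 1],
          [1, 1]]

print(find_element(screen, button))    # (1, 1): button located, agent can click
print(find_element(screen, [[9, 9]]))  # None: template no longer matches the UI
```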
Dai also said that while pixel-based models offer flexibility across diverse UIs without API integration, their reliability hinges on interface stability and robust vision-to-action mapping.
“In dynamic enterprise environments with frequent UI changes, these agents risk brittleness unless paired with augmented data management, adaptive retraining, and fallback mechanisms; therefore, at this stage, they are more suited for controlled workflows than mission-critical automation,” Dai said.
Performance is only one part of the equation. Enterprises will also need stronger controls before allowing such agents to run unsupervised on internal systems.
“These agents are convenient, but a rogue action could cause damage,” Jain added. “You need strong governance frameworks in place before deploying them at scale.”
Sheel said firms should define clear human-oversight points, such as when “Critical Points” arise, maintain audit trails for every action the agent takes, enforce role-based access controls, and monitor performance and errors continuously. “They should also include a remediation strategy for when the agent makes mistakes or behaves undesirably, and ensure data governance, privacy, and compliance policies are built into the agent’s workflows,” Sheel added.
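One of Sheel's controls, the per-action audit trail, is straightforward to express in code: every agent action is recorded before it runs. The logging schema here is invented for illustration; a production deployment would write to tamper-evident storage rather than an in-memory list.

```python
# Sketch of an audit-trail wrapper: each agent action is logged with a
# timestamp before execution. The schema is illustrative, not a product API.
import json
import time

audit_log = []

def audited(action_name):
    """Decorator that appends a timestamped record for every invocation."""
    def wrap(fn):
        def inner(*args, **kwargs):
            audit_log.append({
                "ts": time.time(),
                "action": action_name,
                "args": json.dumps(kwargs, default=str),
            })
            return fn(*args, **kwargs)
        return inner
    return wrap

@audited("copy_record")
def copy_record(src: str, dst: str) -> bool:
    return True  # stand-in for the real UI-automation step

copy_record(src="CRM", dst="ERP")
print(len(audit_log), audit_log[0]["action"])  # 1 copy_record
```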
Is Microsoft’s ‘Humanist Superintelligence’ vision more than an empty slogan?
There are few things tech companies like more than rolling out marketing-tested slogans that sound like cutting-edge breakthroughs, but turn out to be nothing more than stale, old wine in new bottles. It’s a lot easier to roll out a slogan than do the hard work of creating something new.
So, it’s difficult not to be cynical about Microsoft’s announcement this month that it’s forging a new path in AI — what it calls “Humanist Superintelligence (HSI).” It bears all the earmarks of sloganeering by coupling the AI buzzword “superintelligence” with the society-centered word “humanist.”
The HSI vision was laid out in a blog post by Mustafa Suleyman, Microsoft AI CEO and executive vice president. He was a co-founder and former head of applied AI at DeepMind, the AI company bought by Google for a reported $400 million to $650 million. He then founded Inflection AI before making the move to Microsoft.
Suleyman is clearly a technologist more than he is a sloganeer. Still, the question remains: Is “Humanist Superintelligence” just hype, or is there something groundbreaking in what he’s proposing? To answer that, let’s delve into the plans he laid out in his announcement.
Putting humanity first — and pushing back against AGI
To understand Suleyman’s post, you need to understand the current Holy Grail of most AI researchers and tech executives — AGI, an acronym for Artificial General Intelligence. AGI is the ability of a machine to reason like a human being, on a kind of superhuman scale. A machine that had achieved AGI would be able to work on just about any task, adapt to new situations without needing training, and have the autonomy to learn and take actions without human intervention.
AGI’s backers promise many benefits they believe the technology would bring to humankind — although many also acknowledge that without the proper guardrails, AGI could also become an existential danger to humankind.
Suleyman’s vision directly pushes back against AGI. He writes HSI will “solve real concrete problems and do it in such a way that it remains grounded and controllable. We are not building an ill-defined and ethereal superintelligence; we are building a practical technology explicitly designed only to serve humanity. In doing this, we reject narratives about a race to AGI.”
HSI is not one-size-fits-all like AGI, but instead a series of AI-based technologies, each pointed at solving an important problem and aimed at bettering people’s lives.
In his description, Suleyman takes a swipe at tech execs and researchers who care more about developing new technologies than about how those technologies harm or help people. He writes: “I think we technologists need to do a better job of imagining a future that most people in the world actually want to live in…. Instead of being designed to beat all humans at all tasks and dominate everything, HSI begins rooted in specific societal challenges that improve human well-being.”
All this can sound high-minded and vague, so he provides details on where Microsoft will focus its first HSI work. The company has already begun on what he calls Medical Superintelligence. Next, he says, will be work on designing plentiful, clean, inexpensive energy.
Suleyman claims HSI will be safe from the get-go, in contrast to AGI’s potential dangers. He calls HSI “a subordinate, controllable AI, one that won’t, that can’t open a Pandora’s Box. Contained, value aligned, safe — these are basics, but not enough. HSI keeps humanity in the driving seat, always.”
Is HSI for real or just more hype?
All that sounds impressive. But words are cheap. Is HSI just an elevated example of tech hype?
A look at Suleyman’s past offers some clues. He’s not a Johnny-come-lately to warning about AI’s potential dangers. His book, The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, warns about the evil AI can do unless it is reined in, including building autonomous weapons and bioengineering pathogens. He argues that global regulations are required to stop those and other dangers. In addition, at DeepMind he established an Ethics and Society unit to scrutinize the potentially harmful aspects of AI, and take steps to ameliorate them.
Keep in mind, HSI doesn’t conflict with making big profits. In fact, the opposite is true. So far, general-purpose generative AI (genAI), a forerunner to AGI, hasn’t paid off so well. And there’s some evidence it may never pay off.
A McKinsey report warns: “Nearly eight in 10 companies report using gen AI — yet just as many report no significant bottom-line impact.” An MIT report, The GenAI Divide: State of AI in Business 2025, has found that 95% of genAI pilots in businesses are failing.
Many people believe that the big money in AI isn’t in genAI, but in special-purpose uses such as Suleyman suggests. Gary Marcus, a founder of two AI companies, argues in a New York Times opinion piece, “If the strengths of AI are truly to be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools and instead concentrate on narrow, specialized AI tools engineered for particular problems.”
Suleyman’s vision hasn’t yet bumped up against bottom-line considerations — how much profit can Microsoft wring from it? So, it’s too early to tell whether HSI will prove to be anything more than a grand vision unfulfilled.
However, his goals are worthy ones. I’m hoping the world lets him accomplish them.
Apple at NeurIPS: Why it matters
Apple’s decision to take part in (and co-sponsor) this year’s NeurIPS conference shows how the company is keeping close tabs on future trends in the field, highlights its willingness to cooperate, and shows Apple reaching out to recruit new expertise.
The company’s machine learning and artificial intelligence (AI) teams are deeply involved in the important event. Since Apple is a co-sponsor, its people will be at its booth to talk about the company’s research and will present several research papers at the show.
As described on the company’s Machine Learning website, the papers include research on more efficient image generation, protecting privacy in AI, and cutting-edge work on Large Reasoning Models (LRMs).
NeurIPS is important
NeurIPS is considered to be the most prestigious and influential AI conference in the field. The world’s leading researchers and practitioners converge on the show to discuss their cutting-edge research.
That also means the trends that emerge at the event tend to leak into wider discourse 12 to 24 months later. In 2023 and 2024, for example, conversations tended to converge around responsible AI, scaling, and efficiency — involving both hardware and software advances. Apple was clearly paying attention, and the current M5 processors inside some new Macs offer the kind of efficiency and hardware acceleration required for AI.
In 2024, some of the big conversations related to Edge AI, AI-optimized hardware design, privacy-preservation in AI and the democratization of AI through open-source.
This year’s trends seem to be coalescing around hardware design, energy efficiency, quantum computing, contextual awareness, and data efficiency — coupled with more conversation on use of AI in specialized domains, including health, finance, robotics, and sustainable resource management. Many of those trends are already very much in the wider public conversation, reflecting growing public understanding of AI.
Where is Apple?
Where is Apple in this? Apple plans to introduce its take on contextual AI in Siri next year. Speaking last month, Apple CEO Tim Cook said on this: “We’re also excited for a more personalized Siri. We’re making good progress on it, and as we’ve shared, we expect to release it next year.”
Apple has also been working very hard (and very secretively) to develop its own AI-augmented digital health services, which might also make their debut in the coming year. These will evidently have a preventive health element as evidenced by the decision to move management of Apple Fitness+ under Apple’s vice president of health, Sumbul Desai, earlier this year. (“Our goal is to empower people to take charge of their own health journey,” said Desai in 2023). Apple’s Private Cloud Compute nails the cloud-based AI service option.
It is also true that with its focus on hardware, software, and processor innovation, Apple now offers the best available tools for AI research, including tools for Edge AI. (You can test the ability of your Apple device to deliver AI at the edge today, using an app called Locally AI (for Mac, iPhone, or iPad). Thanks to the power of Apple Silicon, Locally lets you use private, on-device, edge LLMs right now.)
What if the company has been misconstrued?
Examples like these show how much Apple is already doing concerning the evolution of AI. It’s a huge field, of course, with different approaches.
I’d argue that Apple’s strategic approach has been to focus on specific use cases (think agentic AI, machine learning) and on applying the technology in specific domains, rather than on a catch-all LLM system like ChatGPT. The received wisdom is that this strategic decision let the company fall behind, but has it really?
Once you consider the wider industry conversations around applied AI coming out of NeurIPS in recent years, much of what Apple is actually delivering reflects a very strong response to the industry needs discussed and identified there.
If hardware can be seen as the AI fundamentals, then the innate advantages of neural intelligence on Apple Silicon suggest it now has strong foundations in place, particularly around privacy, hardware, and edge AI.
We shall now find out the extent to which Apple, its researchers, and the wider AI industry, can exploit the foundations Apple has laid.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
How has cloud flipped the regular security narrative?
In early 2024, a breach involving Snowflake, Inc. sent a quiet shockwave through boardrooms across industries. Attackers bypassed perimeter defenses entirely; no malware, no exploit kit, no zero-day. They simply walked through an identity gap: weak credentials and excessive permissions.
The attackers pivoted laterally inside multiple customer environments (AT&T, Santander Bank, Ticketmaster, etc.) and exfiltrated large volumes of sensitive data. For many CISOs watching that breach unfold, the lesson was blunt: in the cloud, identity is the new infrastructure – and once it’s compromised, everything that depends on it is suddenly in play.
Some attacks have a cascading effect
One of the many customers impacted by the Snowflake data breach was Ticketmaster, which was using Snowflake systems for marketing and analytics. Hackers used a compromised Snowflake account to access Ticketmaster’s database, resulting in the theft of 1.3 terabytes of data belonging to 560 million individuals and triggering numerous lawsuits from customers.
This breach demonstrated that in cloud ecosystems, third-party data platforms become extensions of your attack surface, and when not protected, they can result in havoc.
Cloud security is a global problem
This is a global pattern. 83% of organizations faced a cloud security breach in the past 18 months, and 25% fear they have recently suffered a breach without yet knowing it. Most cloud security incidents trace back to a combination of misconfigurations, over-privileged identities, or exposed APIs. Increased cloud adoption has created thousands of entry points, each dynamic, ephemeral, and easy to miss.
The rise in attacks is not opportunistic but structural. Cloud environments expand faster than they can be governed. Modern applications are API-driven by design, meaning every service interaction is effectively a mini-perimeter waiting to be tested. Multi-cloud brings architectural complexity that traditional tooling cannot correlate. Security teams are constantly racing business velocity, but adversaries don’t need to outrun the organization; they only need to outrun its controls.
Security-by-design approach
As a result, the old model of “deploy cloud, then secure it” has started to break down. Breaches today don’t occur because CISOs are unaware of the risks; they occur because visibility and enforcement haven’t caught up with speed and fragmentation. Enterprises don’t need another point solution; they need an integrated way to see risk the way an attacker sees it: across posture, identity, runtime behavior, and exposed services.
This is why modern security architectures are consolidating around the cloud-native application protection platform (CNAPP) as the backbone of cloud defense, bringing posture, workload, and identity analytics together instead of expecting teams to stitch insights together manually.
Posture evaluation isn’t just about configuration drift anymore
It’s about anticipating the attack path before it becomes actionable. API defense is no longer a niche extension; it is the new frontline. And Zero Trust, once treated as strategy rhetoric, is now the only rational method of preventing lateral movement after the inevitable compromise of a credential or token.
At the same time, regulatory pressure has quietly reframed cloud governance. Boards and insurers are no longer asking “Are you compliant?” They are asking, “Can you continuously prove it?” Evidence is becoming as critical as control.
Organizations need more than cloud controls
Organizations need to operate security as an assurance layer across CNAPP, posture management, API visibility, Zero Trust enforcement, microsegmentation, and continuous compliance. Where in-house teams struggle with scale and signal-to-noise, a security partner can bring sustained visibility and managed resilience. That turns cloud risk into a controllable variable, and cloud innovation into something security no longer has to slow down.
In 2025, the real question is whether your organization can continuously defend and prove its cloud posture at enterprise scale. The ones who can will accelerate. The ones who can’t will continue to absorb the cost of architectural blind spots. T-Systems helps make sure you are in the first category.
The world is split between AI sloppers and stoppers
In September, I called for everyone to “push back against the AI internet.” My prescription was that users of content websites should ask for tools to block AI, and that content companies should prioritize AI identification and offer blocking options.
This approach to the coming wave of AI-made content should suit everyone. It gives complete access to AI content for anyone who wants it and helps people avoid a world where human-made content is uncommon and hard to find.
If this sounds like technopanic or an overblown claim, think about the fact that as of October, more than 52% of online articles are made by AI, according to one estimate. By next year, the share of AI-made online content could pass 90%.
Some experts even predict that by 2030, up to 99.99% of online content could be AI-generated. Seeing the AI-generated writing on the wall, people are choosing sides.
I’ve taken part in discussions in AI forums on Reddit where many oppose content sites giving users a choice. To me, opposing the option to choose “no AI” is an unreasonable position. Believing that words, pictures, music, and other traditional modalities of human expression exist for one person to connect and share with another is reasonable. Worrying that too much AI content might push creatives away, leaving us with a “culture” of content consumers and no content creators, is valid.
Since I wrote that September piece, many content companies have chosen sides in the ongoing AI content debate. (Yes, there are sides to pick.)
On one side are the technopanicking, doomerist Luddites who are fed up with AI. On the other are the clanking, slopping groksuckers who are excited about the new possibilities with AI content.
Here are the companies that are embracing, rejecting, or offering choice in AI (to use a Goldilocks framing: too hot, too cold, and just right).
Too hot
Meta social networks. Meta’s AI rules and tools on Facebook and Instagram allow easy AI content creation through features like text, image, and video for posts, comments, Stories, and ads. The company requires explicit content labels for AI-generated media. The new “Vibes,” a short-form AI video feed, is heavily promoted on Meta platforms, and even in the Ray-Ban Meta AI glasses app for some reason. It lets users generate, remix, and share AI-made videos. Creators and brands are encouraged to use AI-driven tools and APIs. As a result, Meta social networks are awash in AI-generated content, and the company offers no toggle to turn it off.
YouTube. The popular video site allows AI content; it now makes up 25% to 50% of new uploads, according to unverified estimates. While the platform’s policies require disclosure and demonetize low-quality slop channels, there’s no toggle to turn off AI.
Substack. The newsletter company does not ban or require disclosure of AI-generated content, nor does it ban AI-generated content from monetization.
Others. Reddit, TikTok, Medium, LinkedIn, X, and Snapchat also offer no universal toggle to turn off AI content.
Too cold
diVine. While Meta’s Vibes is 100% AI, diVine has a 100% ban on AI content. The site was recently launched by Twitter co-founder Jack Dorsey and is positioned as a re-launch of Twitter’s old Vine video service.
Medium. The writer’s platform bans all AI content for paywalled content.
Publications. A large number of online publications have outright bans on AI-generated content, including Wired, BBC, Dotdash Meredith, Polygon, and others.
Just right
Spotify. The platform now requires clear labels on AI-created songs, blocks vocal impersonation and deepfakes, and filters spam tracks to protect artists. More than 75 million low-quality uploads have been removed, while new tools show listeners which songs use AI. While the company doesn’t offer a universal toggle, it makes it easy for users to avoid AI.
Pinterest. Pinterest added new controls this year to turn off AI-generated pins.
TikTok. The Chinese-owned social video site launched a “slider” tool in its “Manage Topics” control panel. This lets users reduce AI-generated content in their For You feed, but doesn’t completely block it.
DuckDuckGo and Kagi. These privacy-focused search engines (DuckDuckGo may be better known) have AI toggles for user control of AI content, including toggles to turn off AI-generated images.
There’s no question that, because of AI, we’re living in truly interesting times. And while excitement over AI is perfectly valid, it’s clear that people differ on whether they personally want to see or hear AI content.
That’s why the only reasonable demand by users, and the only reasonable policy for content sites, is to allow AI content but give all users a universal toggle that enables them to opt out temporarily or permanently.
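A universal toggle of the kind described above reduces, at its simplest, to filtering a feed by each item's AI label against a stored user preference. The field names and schema below are hypothetical; no platform exposes exactly this interface.

```python
# Minimal sketch of a per-user "no AI" toggle: posts carry an AI label,
# and the feed is filtered by the user's stored preference.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    ai_generated: bool  # set via required disclosure labels

def personalized_feed(posts, hide_ai: bool):
    """Return the feed with AI-labeled items removed when the user opts out."""
    return [p for p in posts if not (hide_ai and p.ai_generated)]

feed = [Post("Hand-drawn comic", False), Post("AI-remixed video", True)]
print([p.title for p in personalized_feed(feed, hide_ai=True)])   # ['Hand-drawn comic']
print([p.title for p in personalized_feed(feed, hide_ai=False)])  # both items
```

The hard part in practice is not the filter but the label: the toggle is only as trustworthy as the platform's AI-disclosure enforcement.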
Will Apple block Google’s AirDrop integration?
File sharing between smartphones has long been restricted by platform. The surprising news is that Google has figured out how to use AirDrop to exchange files both ways between its own Pixel Android devices, iPhones, and other Apple devices.
“Sharing moments shouldn’t depend on the phone you have. Starting today with the Pixel 10 family, Quick Share now works with AirDrop, making secure file transfers between Android phones and iPhones more seamless,” Google said in a blog post. (Quick Share is the Android equivalent of AirDrop.)
There are some limits. For starters, Apple devices need to have their AirDrop settings set to “Everyone” mode for this to work, and recipients can always refuse to accept the file.
How Google’s Quick Share works
Google says the feature is protected by a “multi-layered security approach to ensure a safe sharing experience from end-to-end, regardless of what platform you’re on.”
That means use of the memory-safe language Rust for the communication channel, plus the built-in platform protections of both Apple and Android devices. Google said it has also put the file-sharing feature through strict security review.
“These overlapping protections on both platforms work in concert with the secure connection to provide comprehensive safety for your data when you share or receive,” Dave Kleidermacher, Google’s vice president for platforms security and privacy, wrote in a post explaining the tool security.
Compatible Android devices (the Pixel 10 family, at present) need to update the Quick Share Extension in the Privacy and Security section of their settings. If the feature is actually secure and manages to proliferate across other Android devices, I think a lot of people — Android and iOS owners alike — will enjoy using it.
Enterprise and regulatory implications
IT managers will likely want to make sure it is possible to disable file-sharing through Quick Share and AirDrop using standard device management tools on both Android and Apple devices. It should be possible on Apple’s systems, as you can already prevent use of AirDrop on managed iPhones. All the same, business entities will likely want to constrain this new opportunity for data exfiltration.
Will Apple put a stop to it? I hope not. Because while I understand how important and complex it is to maintain security and privacy across Apple’s ecosystem, we do exist in a multi-platform world — and there does seem to be plenty of protection in place in how Google has approached it.
The other issue is regulation. Apple is having a horrible time with regulators, particularly in Europe. They seem unwilling or unable to listen to some of the company’s arguments concerning the need to protect the user experience. Is it not possible that Apple and Google could work together to turn this new feature into a “best practice” example of how to create this kind of multi-platform parity without sacrificing security, privacy, or the unique nature of the different operating systems?
If it is possible, it will benefit customers on both platforms, and would give Apple a model it could show regulators to illustrate how compatibility can be done without sacrificing user experience.
Hacker or frenemy?
Apple, of course, might see Google’s new AirDrop trick as a direct attack against the sanctity of its platforms. Just over 20 years ago, Apple accused Real Networks of having the ethics of a hacker when the latter firm figured out how to undermine iTunes Digital Rights Management (DRM).
Might Apple feel the same way about Google’s move to open up AirDrop? We don’t yet know. But the company will need to think hard about how it responds to avoid yet more criticism. It is clearly good for Apple’s users to be able to exchange files with people on other platforms, as long as it does not impact their security.
It is sometimes better to follow the tide, rather than swim against it.
What is an AI PC? The power of artificial intelligence on your desktop
Unlike traditional computers, an artificial intelligence PC, or AI PC, comes with AI capabilities built in by design. AI runs locally, right on the machine, allowing it to essentially learn, adapt, reason and problem-solve without necessarily having to connect to the cloud or even the internet. This greatly increases the performance, efficiency, and security of computing while enhancing the user experience.
How are AI PCs different from traditional PCs?
Traditional PCs run on CPUs and GPUs (though most rely on a GPU integrated into the CPU for everyday tasks), and their essential components include a motherboard, input devices like keyboards and mice, long-term storage, and random-access (short-term) memory (RAM). While they excel at tasks such as everyday web searching, data processing and content streaming, they typically don’t come with many built-in AI features — and they struggle to perform complex AI tasks due to limitations with latency, memory, storage and battery life.
Those traditional PCs typically have integrated GPUs (iGPUs) that are built into the CPU and share system RAM. Going a step beyond this are discrete graphics processing units (dGPUs), which can be found in devices such as those offered by Dell. These separate cards handle graphics-heavy tasks like 4K video rendering and editing, complex 3D modeling, and gaming. dGPUs are more performant than integrated GPUs because they have their own dedicated memory, including video memory (VRAM), and power source.
[ Related: What is a GPU? Inside the processing power behind AI ]
AI PCs, by contrast, come preloaded with AI capabilities so that users can get started with the technology right out of the box. They feature integrated processors, accelerators and software specifically designed to handle complex AI workloads. While they also incorporate GPUs and CPUs, AI PCs typically contain a critical third engine: the neural processing unit (NPU).
5 things you should know about AI PCs
- Local AI processing: AI PCs handle AI tasks on-device with specialized hardware (NPUs) for improved performance, privacy, and lower latency.
- Enhanced productivity: AI PCs boost efficiency and enable new capabilities like improved collaboration, personalized experiences, and advanced content creation.
- Robust security is imperative: AI PCs require a strong security framework, including hardware, data, software, and supply chain considerations.
- The market is growing: The AI PC market is expanding rapidly, with increasing availability, decreasing costs, and a growing software ecosystem.
- Big IT impact: AI PCs will require updates to IT infrastructure and management practices, including device management, application development, network infrastructure, and cost analysis.
NPUs perform parallel computing in a way that simulates the human brain, processing large amounts of data all at once — at trillions of operations per second (TOPS). This allows the machine to perform AI tasks much faster and more efficiently than regular PCs — and locally on the machine itself.
The key components of AI PCs
The generally agreed-upon definition of an AI PC is a PC embedded with an AI chip and algorithms specifically designed to improve the experience of AI workloads across the CPU, GPU and NPU.
All of the major players in the PC market — Microsoft, Apple, Intel, AMD, Dell, HP, Lenovo — are building their own versions of AI PCs. Microsoft has emerged as an early leader in AI PCs; in 2024, the company introduced Copilot+ PCs, high-end laptops with built-in NPUs powered by Qualcomm Snapdragon processors.
The Redmond tech giant has set a generally accepted baseline for what constitutes an AI PC. Required components include the following:
- Purpose-built hardware: An NPU works in tandem with CPUs and GPUs. NPU speed is measured in TOPS, and the machine should be able to handle at least 40 TOPS to support on-device AI workloads.
- System RAM: An AI PC must have at least 16GB of RAM. That’s the minimum; having twice as much (or more) improves performance.
- System storage: AI PCs should have a minimum of 256GB of solid-state drive (SSD) storage — preferably non-volatile memory express (NVMe) — or universal flash storage (UFS).
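The baseline above can be expressed as a simple spec check. This is an illustrative sketch: the `MachineSpec` class and its field names are hypothetical, not any vendor’s API; only the thresholds (40 TOPS, 16GB RAM, 256GB storage) come from the baseline described here.

```python
from dataclasses import dataclass

@dataclass
class MachineSpec:
    npu_tops: float  # rated NPU throughput
    ram_gb: int      # installed system RAM
    ssd_gb: int      # SSD/UFS capacity

def meets_ai_pc_baseline(spec: MachineSpec) -> list[str]:
    """Return a list of shortfalls; an empty list means the baseline is met."""
    issues = []
    if spec.npu_tops < 40:
        issues.append(f"NPU {spec.npu_tops} TOPS is below the 40 TOPS baseline")
    if spec.ram_gb < 16:
        issues.append(f"RAM {spec.ram_gb} GB is below the 16 GB baseline")
    if spec.ssd_gb < 256:
        issues.append(f"Storage {spec.ssd_gb} GB is below the 256 GB baseline")
    return issues

print(meets_ai_pc_baseline(MachineSpec(45, 32, 512)))  # [] -- meets the baseline
print(meets_ai_pc_baseline(MachineSpec(10, 8, 256)))   # NPU and RAM shortfalls
```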
Benefits of AI PCs
AI PCs represent a movement beyond traditional static machines that require constant human input and offer these benefits:
Enhanced productivity and computing that is truly personalized
AI has the capability to learn from what it sees and evolve based on that information; it is also increasingly agentic, meaning it can perform some approved tasks autonomously.
With AI directly integrated into a device and across various workflows, users can automate routine and repetitive tasks — such as drafting emails, scheduling meetings, compiling to-do lists, getting alerts about urgent messages, or sourcing important information from websites and databases.
Beyond that, AI PCs can support advanced content creation and real-time data processing; perform financial analysis; compile reports; enhance collaboration through voice recognition, real-time translation and transcription capabilities; and provide predictive text and writing help. Over time, AI PCs can adapt to individual workflows and eventually anticipate needs and make decisions based on user habits.
As AI agents become ever more intuitive and complex, they can serve as on-device coworkers, answering intricate business questions and helping with corporate strategy and business planning.
Reduced cloud costs, reduced latency
Building, training, deploying and maintaining AI models requires significant resources, and costs can quickly add up in the cloud. Running AI locally can significantly reduce cloud costs. Offline processing can also improve speed and lower latency, as data does not need to be transferred back and forth to the cloud.
Users can perform more complex tasks on-device involving natural language processing (NLP), generative AI (genAI), multimodal AI (for more advanced content generation such as 3D modeling, video, audio) and image and speech recognition.
Enhanced security
Security is top of mind for every enterprise today, and AI PCs can help bolster cybersecurity posture. Local processing means data stays on device (instead of being sent to cloud servers) and users have far more control over what data gets shared.
Further, AI PCs can run threat detection algorithms right on the NPU, allowing them to flag potential issues and respond more quickly. AI PCs can also be continually updated based on the latest threat intel, allowing them to adapt as cyberattackers change tactics.
Longer battery life, energy savings
While some AI workloads have been feasible on regular PCs, they quickly drain the battery because they require so much power. NPUs can help preserve battery life as users run more complex AI algorithms. They are also more sustainable: on-device queries are estimated to use roughly 10 times less energy than running the same work in the cloud.
Important factors when considering AI PCs
Even as they represent the state of the art, AI PCs are not (yet) for every enterprise. There are important factors IT buyers should consider, including the following:
- Higher up-front cost: Because they incorporate specialized hardware (NPUs) and have higher memory and power requirements, AI PCs are generally more expensive than regular PCs (even if they save on cloud costs in the long run).
- Increased technical knowledge: Users well versed in everyday PCs might struggle with built-in AI features at first, requiring more training resources. A higher level of technical savvy is also often required to train AI models and develop applications. Further, genAI is still in its early phases, so enterprise leaders have many concerns about AI misuse (whether unintentional or not).
- Not-yet-proven business use cases for many enterprises beyond nifty gadgets: There has yet to be a “killer app” for AI PCs that makes them a must-have across enterprises. If a business’s primary computing requirements are everyday tasks — think email, web searching, simple data processing — AI PCs may be too much muscle, making the increased cost difficult to justify.
While the question of whether you need an AI PC might be relevant now, experts predict that won’t be the case for much longer. “The debate has moved from speculating which PCs might include AI functionality, to the expectation that most PCs will eventually integrate AI NPU capabilities,” Ranjit Atwal, senior director analyst at Gartner, said last September. “As a result, NPU will become a standard feature for PC vendors.”
“This is the most significant shift in the personal-computing landscape in decades,” said Olivier Blanchard, research director and practice lead for intelligent devices at Futurum. While enterprises are catalyzing the first wave of adoption, “the true transformation will come later in the decade as affordable AI-capable devices reach every user segment worldwide.”
Experts say AI PCs are shifting from niche to mass-market, with global revenues rising from nearly zero in early 2024 to $25 billion by year-end 2025, according to The Futurum Group. By 2030, the market is expected to reach at least $124 billion, and possibly as much as $350 billion.
This growth is being driven by enterprise refresh cycles due to Microsoft’s Windows 10 end-of-support milestone, falling NPU costs, expanding on-device AI capabilities like real-time translation and multimodal productivity, and growing integration across Windows, Office, and original equipment manufacturer (OEM) hardware.
AI PCs – what’s there to think about?
AI PCs represent the next generation of computing, and some experts predict they will soon be the only choice of laptop available to large businesses looking to refresh. However, the market is accelerating rapidly, so what exactly constitutes an AI PC from an architectural standpoint — whether powered by on-device NPUs, local AI agents, or the cloud — will likely continue to evolve.
Ultimately, they are still in their early proving phases, and IT buyers have important considerations to keep in mind when it comes to cost, relevance and necessity.
Related AI articles:
Singin’ the Agentic Windows blues
Once upon a time, when you ran Windows on your desktop, it was your desktop. Oh, the IT department might have called the shots on how much you could do with it, but you could write what you needed to, and it was all kept nicely on your PC or your choice of network drive.
Those days are long gone.
First, Microsoft started replacing standalone applications with cloud-based Software-as-a-Service (SaaS) programs such as Microsoft 365. Today, you have little choice but to run Microsoft 365. In addition, unless you turn it off, every file you make or save ends up in OneDrive. Microsoft has also been pushing companies to say goodbye to Windows on the desktop entirely and move to running Windows in the cloud. That move has been less successful.
Now, there’s a new twist. Microsoft’s been hinting at it for a while, but on Nov. 10, Pavan Davuluri, Microsoft’s president of Windows, tweeted: “Windows is evolving into an agentic OS, connecting devices, cloud, and AI to unlock intelligent productivity and secure work anywhere.”
What does that even mean?
It means — as near as I can decipher — that with Microsoft’s Agent Workspace and Copilot Actions in this latest and greatest version of Windows 11 (OK, that may be an oxymoron, but bear with me), you’ll run AI agents in isolated, secure workspaces. These agents will have their own user accounts, called Agent IDs, separate from the primary user (you). Mind you, to work, these agents must have access to your account’s permissions via the Windows On-Device Registry (ODR) to manage your files, automate routine tasks, adjust settings, and work with your system tools.
These tools, Microsoft argues, will have excellent security, privacy, and transparency features. Each agent’s actions will be logged and easily auditable. Agentic Windows 11 will include features such as Model Context Protocol (MCP) and agent connectors for apps like File Explorer and System Settings.
Microsoft also likes to talk about how it can do a lot of its work using on-device AI processing. (If you have an AI-chip-equipped PC, of course.) However, to really get the most out of AI, you’ll need access to cloud-based large language models (LLMs).
If you drink Microsoft’s Kool-Aid, you’ll see Agentic Windows as the next frontier in desktop computing. The company frames it as a helpmate that will safely automate your repetitive or complex tasks, and lay the groundwork for new, exciting applications by both individuals and enterprises.
Yeah, right.
First, for all that 2025 has been the year of AI Agent hype, as Marina Danilevsky, an IBM Senior Research Scientist, noted: “We haven’t even yet figured out ROI (return on investment) on LLM technology.” Besides, she wrote, “[Agents] tend to be very ineffective because humans are very bad communicators. We still can’t get chat agents to interpret what you want correctly all the time.”
Mind you, IBM has its own dog in this fight, Watson AIOps, so it wants agents to take off. It’s just more realistic about them.
AI agents are still largely hype. As PricewaterhouseCoopers (PwC) pointed out in a recent study, “Reports of full [agentic] adoption often reflect excitement about what agentic capabilities could enable — not evidence of widespread transformation.”
Ya think!?
Mind you, three-quarters of these same executives agreed or strongly agreed that “AI agents will reshape the workplace more than the internet did.” Oh please, I was there when the internet changed everything. You have no clue what you’re talking about. And, then, as now, there was a hype bubble (that ended in the dotcom crash). The NASDAQ then collapsed by 78% and took 15 years to recover.
Sure, some AI agents can do useful work, but is that any reason to embed them in Windows? Or any other operating system? If AI-enabled web browsers, such as ChatGPT Atlas and Perplexity Comet, are too unsafe to be used, why would you think it safer to have them even deeper in your computer?
Even Microsoft admits, “AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs. Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”
This sounds like a barrel of laughs to me.
Let’s get real. This is another Microsoft attempt to monetize AI. When you’ve invested something like $80 billion this year alone in AI, you want to see a return on your investment. I get that. What I don’t get, despite all the CEOs suffering from AI agent FOMO, is why anyone else wants it.
I mean, when I want to use AI — and I do use it — I go through Chrome to Perplexity. There’s no fuss, no muss, and a minimum of security worries. Indeed, as the top post on X responding to Davuluri’s note said, “Stop this nonsense. No one wants this.”
Another person added to this thread, “Nobody wanted ads in their start menu either. Or nonstop telemetry. Or disabled local login.” No one did, and yet here we are.
So, what can you do? I assure you, Microsoft will not be backing down on this. They see it as a cash cow, since to make the most from it, you’ll need to subscribe to their AI services.
One last tweet provides the answer: Windows is “evolving into a product that’s driving people to Mac and Linux.”
I’ve been telling you to switch to Linux for decades now, and I’ve even had nice things to say about macOS at times. Really!
Seriously, though, most users have been locked into the Microsoft tech ecosystem for ages now. But do you really want built-in AI security holes moving forward? To have an AI Big Brother watching your every move?
If you want any kind of control over your desktop, if your company wants control over your desktops rather than Microsoft, it’s now time to start migrating away from Windows to Macs or Linux.
Do higher RAM prices make Apple a better option?
IT purchasers must brace themselves for potential price hikes on new equipment as memory price increases percolate across supply chains. That’s a double-whammy cost crisis, of course, as they must also get set for energy price increases as demand rises to serve electricity-hungry artificial intelligence (AI) farms.
AI also means that even Apple is increasing the amount of memory it puts inside its devices, which is noteworthy. The company has long avoided installing sizable quantities of RAM in its products, preferring to focus on device/software/OS integration and its proprietary unified memory system to deliver performance and efficiency.
What Apple does, others usually follow
The fact that Apple has had to increase RAM capacity in response to the demands of AI is significant because it means PC vendors across the board will need to do so, too. It also means people purchasing PCs will need to ensure that the systems they buy have sufficient memory installed to run AI, as any additional memory needs will add to the overall purchase price.
They’ll also have to account for the additional running cost of increased memory in the computers they deploy. That cost might seem negligible to a lone user, but at a scale of thousands of seats, the cumulative consumption could challenge company sustainability targets and raise energy bills. Those costs scale.
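To see how those costs scale, here is a quick back-of-the-envelope calculation. Every figure in it (extra wattage per machine, seat count, powered-on hours, electricity price) is an assumption chosen purely for illustration:

```python
# All inputs below are assumptions, not measured or quoted figures.
extra_watts = 2.0        # assumed extra draw from added memory, per machine
seats = 5000             # assumed fleet size
hours_per_year = 2000    # assumed powered-on working hours per machine
price_per_kwh = 0.15     # assumed electricity price in USD

extra_kwh = extra_watts * seats * hours_per_year / 1000
print(extra_kwh)                               # 20000.0 kWh per year, fleet-wide
print(round(extra_kwh * price_per_kwh, 2))     # 3000.0 USD per year
```

A couple of watts per seat is invisible on one laptop, but across a large fleet it shows up in both the energy bill and the sustainability ledger.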
Apple’s answer to this is to continue to show that its systems deliver more performance per watt than its competitors. In context, you can also arguably point out that any additional memory it might pack into its products is still relatively parsimonious in comparison to competitors. That’s because its systems are inherently capable of doing more with less, which means you need less to do more. That’s a tautology, but an important one to anyone controlling a budget.
Does this matter?
It looks as if it does. Samsung has signalled a 60% price increase for some kinds of memory, while prices of high-bandwidth memory modules and of the DDR used in most decent computers, including Macs, are also moving higher.
These price increases reflect demand, as AI infrastructure is a greedy, greedy beast and continues to demand more memory, more water, more energy, and more investment capital as the bubble around AI infrastructure rapidly inflates ahead of an inevitable collapse. The result? There isn’t enough memory to go around.
Has it hit hardware prices yet? That’s not yet clear, but some (particularly in the US) are reportedly more exposed to market fluctuation than others. For example, Morgan Stanley recently downgraded most PC stocks to reflect the volatile pricing environment, warning that Dell, HP, Acer, and Asustek might be the most vulnerable to them.
The analysts did not see Apple as under threat, in part because it is able to pre-order components at scale.
That protection won’t last forever; stockpiles get used up. But it matters to enterprise purchasers, who are watching replacement costs continue to rise even as hundreds of millions of actively used PCs can’t be upgraded to Windows 11, and prices might go higher still as additional AI running costs (including added on-device memory demands) come into play.
Curse or opportunity
I think Apple can continue to argue that its once high-cost machines are becoming more affordable all the time, both in terms of bang for the buck and in terms of total cost of ownership over time. The company’s computational performance story matters a great deal in this challenging environment, as does its ability to generate more processing power per watt than others.
As price and tech pressures hit purchasing decisions, Apple’s long held decision to optimize its hardware seems to be delivering a bigger advantage than ever. That advantage is only made more buoyant by Apple Silicon’s built-in power, performance, and support for on-device AI.
Other things still matter
Plus, of course, even if the AI bubble does burst and a pitchfork-wielding population marches on the AI data centers, Apple also makes the best and most secure computers with the most popular operating systems, which is why employees choose its solutions when given the chance.
Those things that mattered so much before ChatGPT arrived still matter just as much today, even for IT purchasing.
Microsoft Ignite 2025 — get the latest news and insights
Microsoft Ignite 2025 runs November 18-21, 2025, in San Francisco, with an optional pre-day on November 17. Can’t make it to the Moscone Center in San Francisco? No problem. It’s a hybrid event, and you can register to attend Ignite (for free) virtually here.
You can expect to learn more about AI, cloud computing, security, productivity tools, and more.
Keynote speakers include Microsoft CEO Satya Nadella and other Microsoft leaders, including Scott Guthrie, executive vice president of the Cloud + AI Group, and Charlie Bell, executive vice president of Microsoft Security.
Here are highlights from the show. Remember to check this page often for more on Microsoft Ignite 2025.
Microsoft Ignite 2025 news and insights
Microsoft now lets customers run agents on Windows 365 cloud PCs
November 21, 2025: Microsoft has unveiled a new type of Windows 365 cloud PC that provides a secure environment for running “computer use” AI agents. Windows 365 for Agents, announced at the company’s Ignite conference this week, is built on the same foundations as existing W365 products, but “optimized” for agentic workloads.
Microsoft has yet to ignite enthusiasm for agentic AI
November 21, 2025: Microsoft’s central message at Ignite has been about preparing the enterprise for agentic AI — but its moves have not been universally well received, and analysts say that implementing agentic AI at scale will demand a deep restructuring of the enterprise, and not just IT systems.
Azure HorizonDB: Microsoft goes big with PostgreSQL
November 20, 2025: This week at Ignite 2025, Microsoft announced the latest member of its PostgreSQL family: Azure HorizonDB. Designed to be a scale-out, high-performance database, it’s intended to be a place for a new generation of PostgreSQL workloads.
Cobalt 200: Microsoft’s next-gen Arm CPU targets lower TCO for cloud workloads
November 20, 2025: Microsoft unveiled the next generation of its Arm-based custom CPUs in the form of Cobalt 200 as part of its ongoing efforts to reduce dependency on x86-based instances and make its data centers more energy-efficient while offering better performance for cloud computing workloads.
Microsoft drops M365 Copilot price for SMBs, upgrades free Copilot Chat
November 19, 2025: As of Dec. 1, 2025, Microsoft 365 Copilot for Business will cost $21 per user, per month for customers with any Microsoft 365 Business plan. That’s down from the current $30 price per month set when the tool debuted in 2023.
Microsoft rolls out Agent 365 ‘control plane’ for AI agents
November 19, 2025: Microsoft Agent 365 is a control plane to help organizations deploy and manage AI agents at scale. Agent 365 is available through Microsoft’s Frontier program for early access to AI technologies. Agent 365 is designed to let users manage an organization’s agents at scale, regardless of where these agents are built or acquired.
Microsoft Fabric IQ adds ‘semantic intelligence’ layer to Fabric
November 19, 2025: Microsoft says Fabric IQ’s ontology will help workers and autonomous agents better understand data in order to make decisions, but analysts fear deployment hurdles and vendor lock-in.
Microsoft touts scalability of its new PostgreSQL-compatible managed database
November 19, 2025: Third time’s the charm? At Ignite 2025, Microsoft is hoping that the scalability of its new Azure HorizonDB will lure new customers where its two existing PostgreSQL-compatible database offerings did not.
Microsoft unveils Agent 365 to help IT manage AI ‘agent sprawl’
November 18, 2025: As businesses begin deploying AI agents in greater numbers, IT teams will need to manage and secure those AI systems as they connect to corporate data. That’s the idea behind Microsoft’s Agent 365 (A365), a new “control plane” that lets customers deploy and govern the use of agents, announced at Ignite 2025.
Microsoft bets on agentic AI for cloud ops, but analysts doubt the pitch
November 18, 2025: Microsoft is betting big on agentic AI to simplify and automate cloud operations and introduced at its annual Ignite conference an agentic mode to Azure Copilot that could surface insights and provide recommendations, but not take any actions.
A look back at Microsoft Ignite 2024 news and insights
Microsoft upgrades Copilot Studio agent builder tools
Nov. 20, 2024: Microsoft unveiled new Copilot Studio features aimed at both expanding the functionality of AI agents created with the application and improving the accuracy of outputs. Customers will be able to connect Copilot Studio agents to third-party apps, and tools for building autonomous agents are now available in a public preview.
Microsoft partners with industry leaders to offer vertical SLMs
Nov. 20, 2024: Teaming up with industry partners such as Bayer and Rockwell Automation, Microsoft is adding pre-trained small language models to its Azure AI catalog aimed at highly specialized use cases.
Microsoft brings automated ‘agents’ to M365 Copilot
Nov. 19, 2024: Microsoft has introduced a new tool in Microsoft 365 Copilot to automate repetitive tasks, part of a drive to make the genAI assistant more useful to users. Copilot Actions features a simple trigger-and-action interface that Microsoft hopes will make the workflow automations accessible to a wide range of workers.
Microsoft extends Entra ID to WSL, WinGet
Nov. 19, 2024: Microsoft has added new security features to Windows Subsystem for Linux (WSL) and the Windows Package Manager (WinGet), including integration with Microsoft Entra ID (formerly Azure Active Directory) for identity-based access control. The goal is to enable IT admins to more effectively manage the deployment and use of these tools in enterprises.
Microsoft looks to genAI, exposure management, and new bug bounties to secure enterprise IT
Nov. 19, 2024: Microsoft announced a host of new security measures at its annual Ignite conference, with the goal of strengthening its existing data protection, endpoint security, and extended threat detection and response capabilities. Notable improvements include the introduction of a dedicated exposure management tool, an upgrade to insider risk management (IRM) tailored to genAI usage, new data loss prevention (DLP) features, and integration of genAI into security operations center (SOC) processes.
Microsoft and Atom Computing claim breakthrough in reliable quantum computing
Nov. 19, 2024: The companies have announced what they claim is a significant step forward in reliable quantum computing, unveiling a commercial quantum machine built with 24 entangled logical qubits. The system, achieved through a combination of Atom Computing’s neutral-atom hardware and Microsoft’s qubit-virtualization technology, aims to address the critical challenge of error detection and correction in quantum computation.
Microsoft adds major upgrades to Power Apps at Ignite
Nov. 19, 2024: The company announced a series of low-code product enhancements, targeted at developers, that ranged from new agent-building capabilities in Power Apps and Power Pages to new AI and governance features in the codeless automation tool Microsoft Power Automate.
Microsoft’s Windows 365 Link is a thin client device for shared workspaces
Nov. 19, 2024: Microsoft will start selling a thin client device that lets workers boot directly to Windows 365 “in seconds,” the company announced on Tuesday.
Microsoft reimagines Fabric with focus on AI
Nov. 19, 2024: The company announced a slate of enhancements to its data analytics platform, including Fabric Databases, which can provision auto-optimizing and auto-scaling AI databases in seconds.
Microsoft rebrands Azure AI Studio to Azure AI Foundry
Nov. 19, 2024: The toolkit for building generative AI applications has been packaged with new updates to form the Azure AI Foundry service.
From MFA mandates to locked-down devices, Microsoft posts a year of SFI milestones at Ignite
Nov. 19, 2024: The company shared a progress report on its Secure Future Initiative (SFI), introduced a year ago, which included significant measures such as enforcing multifactor authentication (MFA) by default for new tenants, isolating close to 100,000 work devices under conditional access policies, and blocking GitHub secrets from exposure.
Previous Microsoft Ignite coverage
Microsoft to launch autonomous AI at Ignite
Oct. 21, 2024: Microsoft will let customers build autonomous AI agents that can be configured to perform complex tasks with little or no input from humans. Microsoft announced that tools to build AI agents in Copilot Studio will be available in a public beta that begins at Ignite on Nov. 19, with pre-built agents rolling out to Dynamics 365 apps in the coming months.
Microsoft Ignite 2023: 11 takeaways for CIOs
Nov. 15, 2023: Microsoft’s 2023 Ignite conference might as well be called AIgnite, with over half of the almost 600 sessions featuring AI in some shape or form. Generative AI (genAI), in particular, is at the heart of many of the product announcements Microsoft is making at the event, including new AI capabilities for wrangling large language models (LLMs) in Azure, new additions to the Copilot range of genAI assistants, new hardware, and a new tool to help developers deploy small language models (SLMs) too.
Microsoft partners with Nvidia, Synopsys for genAI services
Nov. 16, 2023: Microsoft has announced that it is partnering with chipmaker Nvidia and chip-designing software provider Synopsys to provide enterprises with foundry services and a new chip-design assistant. The foundry services from Nvidia will be deployed on Microsoft Azure and will combine three of Nvidia’s elements — its foundation models, its NeMo framework, and Nvidia’s DGX Cloud service.
As Microsoft embraces AI, it says sayonara to the metaverse
Feb. 23, 2023: It wasn’t just Mark Zuckerberg who led the metaverse charge by changing Facebook’s name to Meta. Microsoft hyped it as well, notably when CEO Satya Nadella said, “I can’t overstate how much of a breakthrough this is,” in his keynote speech at Microsoft Ignite in 2021. Now, tech companies are much wiser, they tell us. It’s AI at the heart of the coming transformation. The metaverse may be yesterday’s news, but it’s not yet dead.
Microsoft Ignite in the rear-view mirror: What we learned
Oct. 17, 2022: Microsoft treated its big Ignite event as more of a marketing presentation than a full-fledged conference, offering up a variety of announcements that affect Windows users, as well as large enterprises and their networks. (The show was a hybrid affair, with a small in-person option and online access for those unable to travel.)
Related Microsoft coverage
Microsoft’s AI research VP joins OpenAI amid fight for top AI talent
Oct. 15, 2024: Microsoft’s former vice president of genAI research, Sebastien Bubeck, left the company to join OpenAI, the maker of ChatGPT. Bubeck, a 10-year veteran at Microsoft, played a significant role in driving the company’s genAI strategy with a focus on designing more efficient small language models (SLMs) to rival OpenAI’s GPT systems.
Microsoft brings Copilot AI tools to OneDrive
Oct. 9, 2024: Microsoft’s Copilot is now available in OneDrive, part of a wider revamp of the company’s cloud storage platform. Copilot can now summarize one or more files in OneDrive without needing to open them first; compare the content of selected files across different formats (including Word, PowerPoint, and PDFs); and respond to questions about the contents of files via the chat interface.
Microsoft wants Copilot to be your new AI best friend
Oct. 9, 2024: Microsoft’s Copilot AI chatbot underwent a transformation last week, morphing into a simplified pastel-toned experience that encourages you…to just chat. “Hey Chris, how’s the human world today?” That’s what I heard after I fired up the Copilot app on Windows 11 and clicked the microphone button, complete with a calming wavy background. Yes, this is the type of banter you get with the new Copilot.
Microsoft now lets customers run agents on Windows 365 cloud PCs
Microsoft has unveiled a new type of Windows 365 cloud PC that provides a secure environment for running “computer use” AI agents.
Windows 365 for Agents, announced at the company’s Ignite conference this week, is built on the same foundations as existing W365 products, but “optimized” for agentic workloads.
Windows 365 for Agents lets customers create pools of Windows or Linux cloud PCs that are accessed once a user invokes an AI agent. When the agent has completed its task, the virtual desktop is returned to the pool — ready to be used again by another agent.
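The checkout-and-return lifecycle described above can be sketched as a simple pool. This is purely illustrative: the class, method names, and identifiers are hypothetical, not a published Microsoft API.

```python
# Illustrative sketch: agents acquire a cloud PC from a shared pool and
# return it when their task completes, ready for the next agent.
class CloudPCPool:
    def __init__(self, size: int):
        self.available = [f"cloudpc-{i}" for i in range(size)]

    def acquire(self) -> str:
        """Hand out a free desktop when an agent is invoked."""
        if not self.available:
            raise RuntimeError("no cloud PCs free in the pool")
        return self.available.pop()

    def release(self, pc: str) -> None:
        """Task complete: the desktop returns to the pool."""
        self.available.append(pc)

pool = CloudPCPool(size=2)
pc = pool.acquire()   # an agent is invoked and gets a desktop
# ... the agent runs its task on `pc` ...
pool.release(pc)      # the desktop is ready for another agent
print(len(pool.available))  # 2
```

The design point is that desktops are fungible and short-lived from the agent’s perspective, which is what lets the pool stay small relative to the number of agent invocations.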
The idea is to provide a secure environment to run computer use agents that interact with applications autonomously on behalf of a human user. As with other W365 products, the Windows 365 for Agents virtual desktop is subject to a customer’s enterprise security policies, connected to Microsoft Entra, and managed via Intune.
“Doing this inside of the cloud gives you security, as well as elasticity,” Scott Manchester, vice president for Windows Cloud at Microsoft, said in a briefing with Computerworld. “Everything is isolated in this environment.”
One example is an expense reporting agent. An employee can direct the agent to fill in an expense form. The agent then navigates the expense application that’s installed on the isolated virtual desktop, inputting the relevant data. The employee can observe the agent’s actions and take over if necessary. An auditable record of agent actions is also created.
Microsoft already uses Windows 365 for Agents for agents in its own products, including the Researcher agent for Copilot, a computer use feature in Copilot Studio, and Project Opal, a new agentic tool coming to Microsoft 365 Copilot.
For developers, Windows 365 for Agents enables the creation of “enterprise-ready agents that run on secure, policy-controlled cloud PCs,” said Stefan Kinnestrand, vice president of Windows Commercial marketing at Microsoft. “They don’t have to worry about the infrastructure layer, they can just focus on the agent themselves.”
Windows 365 for Agents shows promise as an approach to managing and securing agents, said Tom Mainelli, IDC group vice president, device and consumer research. “Running agents in Cloud PCs should improve security and control, although it will also add cost and complexity,” he said. “Adoption will likely start slowly until Microsoft shows it can make large-scale agent management practical.”
The tool costs 40 cents per hour, with customers charged for the duration of a computer use task, rounded up to the next full hour. The feature is currently in preview, with a waitlist for access.
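The pricing rule described here (an hourly rate, with a task’s duration rounded up to the next full hour) can be sketched in a few lines of Python; the function name is illustrative, not part of any Microsoft API:

```python
import math

def billed_cost(task_minutes: float, rate_per_hour: float = 0.40) -> float:
    """Cost of a computer use task: duration is rounded up to the next full hour."""
    billed_hours = math.ceil(task_minutes / 60)
    return billed_hours * rate_per_hour

# Under this rule, a 90-minute task bills as 2 full hours
print(billed_cost(90))   # 0.8
print(billed_cost(45))   # a 45-minute task still bills as 1 hour: 0.4
```

Note that a task running 61 minutes costs the same as one running two full hours, which is worth factoring into cost estimates for short, frequent agent tasks.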
Microsoft also announced it will make some of the AI features that are exclusive to Copilot+ PCs available on Windows 365 cloud PCs. These include enhanced Windows search and Click to Do, which lets users interact with text and images displayed on their desktop.
These “AI-enabled Cloud PCs” won’t require the neural processing unit (NPU) chips that are used to run AI features in Copilot+ PCs.
“Since Cloud PCs don’t have NPUs, they’ve missed out on the Copilot features that leverage local AI capabilities,” said Gabe Knuth, principal analyst at Omdia. “This is an attempt to bring those experiences more in-line with each other, at least in terms of productivity and search.”
The AI-enabled Cloud PC is currently in preview.
How IT leaders can build successful AI strategies — the VC view
The AI gold rush these days is littered with abandoned enterprise projects, with humans — not the technology itself — being blamed for high failure rates of AI projects.
Recent data indicates that stagnant AI projects were often the result of poor vision, mismanagement, and a lack of resources.
Eagerness from the top to become “AI-first” companies is also putting pressure on C-suite execs and other IT decision-makers who might not have the budgets, systems, or tools for success.
Though IT leaders will get better at dealing with AI as they gain experience, the learning curve is steep, said Jack Gold, an analyst at J. Gold Associates. “It’s not really all that different from past new technologies and the challenges they posed, such as early databases, the move to web and browser-based apps and more,” he said.
Early-stage venture capital (VC) firms act as validators of AI technologies. Partners are usually as engaged as the founders of AI startups, attending meetings with tech leaders, prototyping, and guiding portfolio companies.
But VCs and CIOs have different risk profiles and priorities when it comes to AI. “When the CIOs are involved, it’s in a very different way…. That CIO is thinking about whether or not they’re going to get fired,” said Julia Moore, managing partner at Breakout Ventures.
With that in mind, Computerworld talked to venture capitalists about how companies could deliver on successful AI projects.
1. Look at how AI will change business
It’s clear now that AI is transforming existing business structures, operational layers, organizational charts, and processes. “As a CIO, if you look at long term, you get better visibility of the outcomes of AI,” said Sandhya Venkatachalam, founder and partner at Axiom Partners.
“Today, a lot of these net new capabilities are taking the form of AI performing the work or producing the outcomes that humans do, versus emulating or automating software tools,” Venkatachalam said.
The shift will inevitably displace legacy systems and processes. She cited customer support as an early area ripe for upheaval.
“Who is going to disrupt Salesforce from an AI perspective?” Venkatachalam said. “Because [at] call centers…, people [used to be] answering calls; [now] AI is answering calls…and you just saved a bunch of money.”
2. Focus on outcomes, not just AI technology
IT leaders should frame AI projects around results, not technology, said Moore. “Founders look at impact as opposed to the technology in a way — is this going to change this particular industry, as opposed to what is the AI technology behind it?” Moore said.
Tech chiefs can focus on high-leverage work that creates value by automating time-consuming tasks, said Brad Harrison, founder and managing partner at Scout Ventures.
“For CIOs…, think big term, prototype, understand, worry less about the technology and worry about the outcomes — and think about big picture,” Harrison said.
3. Think about what you need tomorrow, not today
VCs typically don’t look at what buyers need right now; they look ahead. Similarly, IT leaders should look at how AI can transform their industry in the future.
The real value of AI is in displacing legacy stacks and processes, and short wins or scattered AI initiatives mean nothing, Venkatachalam said.
Adding AI to existing workflows — like building an internal large language model (LLM) — is often a waste. Enterprises are also wasting time building proprietary tools and infrastructures, which duplicates work already commoditized by big research labs, Venkatachalam said.
AI tools change every six months, and the focus should be on big-picture outcomes, not technology. “We don’t fund the 17th AI coding co-pilot, or yet another attempt to change search. Again, all good, useful stuff, completely covered, completely valued, but not the next big thing,” Venkatachalam said.
Axiom Partners’ investments include HR firms such as Circle8 and the fintech company Cannock-EDR.
4. Partner to move faster
Enterprise organizations cannot move at the speed of transformation required by AI. That’s why IT leaders should partner with AI-native startups, which typically move faster. Most companies “are not designed for the speed of transformation that’s happening right now with compute and AI,” Harrison said.
Harrison’s Scout Ventures has invested in companies building AI tools in the defense industry. His annual gatherings connect portfolio firms with enterprise partners such as Lockheed Martin, L3Harris, IBM, and Red Hat.
Enterprise IT leaders also get access to a larger community of founders working on solving AI problems. “They’re getting really, really good at layering AI into solving these different pieces of the value chain in the right way and they’re getting really good efficiency out of that,” Harrison said.
Partnering with AI-native companies saves time and money and affects success rates, especially for first-time implementers, the VCs said.
5. Align your AI strategy to verticals
AI strategies link IT directly to core products, which dictates market survival. IT decision-makers should align AI strategies to their vertical markets.
In some areas, physical AI is considered the next big AI technology after agents. Harrison’s investments are in verticals such as defense and law enforcement, where AI manifests in the physical world.
The defense industry demands real-world accountability, and AI technology can’t be experimental, Harrison said. “Where it meets the physical world is where I think we can have the most impact on humanity,” he said.
Moore’s Breakout Ventures invests in early-stage AI companies building datasets and tools that ultimately affect human health. In markets such as pharma and biotech, the science business is turning into a data business, she said.
“If you look at the life sciences…, you’re dealing with physics, chemistry, biology…, a much more complex data set. And so naturally pharma has to stay ahead of the game, because the competition is all digital, it’s all data,” Moore said.
6. Create an AI-first culture
Perhaps the biggest hurdle isn’t technical, but cultural. Younger “digital natives,” especially Gen-Z workers, view AI tools differently than senior leadership.
“There is a generational difference in how people are connected… digital natives versus digital immigrants,” Harrison said.
IT leaders should step out of the corner office and engage directly with team members and AI projects, which will bring useful insights. “I’m like… use your big brain, take one hour a week and put it towards that project,” Harrison said. “I think a lot of things would be much, much better.”
7. Get your hands dirty
IT leaders need to encourage internal prototyping and experimentation to stay ahead of the fast-moving AI curve.
John Mannes, a partner at Basis Set Ventures, said his team includes machine-learning engineers and data scientists who are brainstorming, prototyping, and building tools.
Mannes said it’s much more fun for CTOs and founders when his team can say, “Yeah, we tried those seven tools for databases, too, and don’t even waste your time with these six because holy hell, right?”
“You’re in the trenches,” Mannes said. “It earns trust and it makes us much smarter as well, in terms of the people we surround ourselves with and how we support them.”
Zoho revamps Zoho One’s UI to focus on work, not apps
Zoho is revamping the user interface for its flagship suite, Zoho One, moving from an app-based model to a unified, context-aware system in which users can easily access any of the 50-plus apps included in the suite.
Of course, the company’s AI assistant, Zia, features prominently, providing contextual intelligence across the suite.
The new UI is now organized into focus areas known as Spaces. The “Personal” Space includes apps specific to the individual, including personal productivity software. “Organization” includes tools for company-wide communication, such as Forums, Town Hall, Ideas, and more. There are also function-specific spaces for areas such as HR, grouped by Department. All of these Spaces can be customized to suit employees’ needs, and are accessed from the top toolbar.
The Spaces toolbar includes a centralized search function, powered by Zia, from which users can access any information across the company that they’re authorized to view, and the new Action Panel allows them to build a view of their day, including scheduled meetings, incomplete tasks, emails, or whatever else they choose, regardless of the app they’re currently using.
Within Spaces, apps can be organized into Boards where users can, for example, track tasks from any of their apps, or access all apps’ notes in a single view.
A new app, Vani, provides an all-in-one, visual-first intelligent virtual space where users can brainstorm, plan, and innovate together.
Like one application
Zoho One also supports integrations with third-party products. For example, said Raju Vegesna, Zoho chief evangelist, if a company uses Gmail, it can be added to a Board alongside the native apps. “The idea here is, instead of users going to the application, the application is coming [to them]; context is the key,” he said. “That’s an experience part where we are trying to make Zoho One look like one application, although at the back end it’s about 50 applications. It’s an ongoing work in progress.”
Zoho says most customer organizations use around 22 of those applications, which include everything required to manage a business, including a CRM, an HR management system, sales and marketing modules, a helpdesk ticket manager, finance and payroll, and security and authentication.
The company boasts more than 75,000 customers globally, ranging from SMBs to enterprises such as telecommunications giant Telus.
Not just for SMBs
Thomas Randall, research lead at Info-Tech Research Group, approves of the new approach. “Zoho One’s new unified experience is a necessary and overdue evolution,” he said. “Historically, Zoho’s breadth of applications has been both a strength and a challenge, often leaving users navigating a sprawling catalogue. The introduction of a cohesive workspace where workflows guide users instead of application silos is a meaningful shift. Zoho’s new approach of tailoring solutions based on user intent reflects a more customer-centric design.”
And although Zoho is more prominent in the small and medium business realm, Randall said that Info-Tech is seeing mid- to large-sized enterprises shortlisting the company, especially for CRM. “Zoho’s large investment in fully owning its data infrastructure and flexible deployments operationally simplifies many organizations’ complex requirements,” he said. “Larger enterprises should also take a serious look at Zoho applications for business operations and CRM. The combination of affordability, unified experience, and expanding AI capabilities makes Zoho One a viable platform for those seeking integration and operational simplicity.”
Agentic AI – Ongoing coverage of its impact on the enterprise
Over the next few years, agentic AI is expected to bring not only rapid technological breakthroughs, but a societal transformation, redefining how we live, work and interact with the world. And this shift is happening quickly. “By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously,” according to research firm Gartner.
Unlike traditional AI, which typically follows preset rules or algorithms, agentic AI adapts to new situations, learns from experiences, and operates independently to pursue goals without human intervention. In short, agentic AI empowers systems to act autonomously, making decisions and executing tasks — even communicating directly with other AI agents — with little or no human involvement.
Agentic AI will enable machines to interact with the physical world with unprecedented intelligence, allowing them to perform complex tasks in dynamic environments, which could be especially useful for industries facing labor shortages or hazardous conditions.
However, the rise of agentic AI also brings security and ethical concerns. Ensuring these autonomous systems operate safely, transparently and responsibly will require governance frameworks and testing.
Follow this page for ongoing agentic AI coverage from Computerworld and Foundry’s other publications.
Agentic AI news and insights
Microsoft drops M365 Copilot price for SMBs, upgrades free Copilot Chat
November 19, 2025: Microsoft announced that it will reduce the price of Microsoft 365 Copilot for small and mid-sized firms beginning next month. Microsoft 365 Copilot for Business will cost $21 per user, per month for customers with any Microsoft 365 Business plan. That’s down from the current $30 monthly price.
Microsoft Fabric IQ adds ‘semantic intelligence’ layer to Fabric
November 19, 2025: Microsoft promises enterprises better understanding of their data for workers and autonomous agents alike, but analysts fear deployment hurdles and vendor lock-in.
Microsoft unveils Agent 365 to help IT manage AI ‘agent sprawl’
November 18, 2025: As businesses begin deploying AI agents in greater numbers, IT teams will need to manage and secure those AI systems as they connect to corporate data. That’s the idea behind Microsoft’s Agent 365 (A365), a new “control plane” that lets customers deploy and govern the use of agents.
From chatbots to colleagues: How agentic AI is redefining enterprise automation
November 17, 2025: A new wave of agentic AI is taking shape: systems that not only converse but also reason, plan, and act within enterprise workflows. These agents are not assistants that talk; they are digital colleagues that think.
The enterprise IT overhaul: Architecting your stack for the agentic AI era
November 10, 2025: For the CIO, the conversation has officially moved past the large language model (LLM). The next critical chapter is agentic AI — autonomous systems capable of reasoning, planning and executing multi-step tasks across your enterprise.
Agentic AI is here. Now, CIOs must orchestrate
October 23, 2025: Agentic AI is about to change how companies create value. Yet, most enterprises aren’t ready. The problem isn’t the technology — it’s the planning and execution. Too many pilots stall out because CIOs haven’t built the AI systems, guardrails and culture to move beyond experiments.
AI agents might smooth some of retail’s worst data problems
October 21, 2025: So many retail challenges hinge on unreliable product data. Can agentic AI clean up that data enough to make a difference? Can it do the same for other verticals?
The impact of agentic AI on SaaS and partner ecosystems
October 16, 2025: The enterprise technology landscape is entering a critical pivot point as agentic AI transforms partner ecosystems from human-mediated, application integration networks into autonomous, self-orchestrating and intelligent ecosystems.
Salesforce updates its agentic AI pitch with Agentforce 360
October 13, 2025: Salesforce announced a new release of Agentforce that, it says, “gives teams the fastest path from AI prototypes to production-scale agents” — although with many of the new release’s features still to come, or yet to enter pilot phases or beta testing, some parts of that path will be much slower than others.
Gemini Enterprise is Google’s new ‘front door’ for agentic AI access at work
October 9, 2025: Google introduced an AI assistant to serve as a platform so users can access and coordinate AI agents that automate work tasks. Gemini Enterprise, which replaces the Agentspace app launched last year, also features new enterprise search functions to help customers tap into data from across an organization’s business apps.
Oracle’s agentic AI push in Fusion Cloud CX offers embedded automation for CX leaders
October 7, 2025: Oracle is adding new pre-built agents to its Advertising and Customer Experience Cloud (Fusion Cloud CX) to help enterprises increase operational efficiency by automating sales, service, and marketing processes.
IBM touts agentic AI orchestration, cryptographic risk controls
October 7, 2025: IBM watsonx Orchestrate offers more than 500 tools and customizable, domain-specific agents from IBM and third-party contributors. Among the additions to watsonx Orchestrate are AgentOps capabilities that offer real-time monitoring and policy-based controls for observability and governance.
How self-learning AI agents will reshape operational workflows
October 6, 2025: Google’s recent whitepaper, “Welcome to the Era of Experience,” signals a shift in the way AI agents are trained. Google hypothesizes that allowing AI agents to learn from their own experience rather than solely from human-generated training data will enable autonomous AI to surpass its current capabilities.
Are your agentic AI projects driving toward success?
October 3, 2025: Anushree Verma, Gartner senior director analyst, says most agentic AI projects today are early-stage experiments or proofs of concept, fueled primarily by hype and often misapplied.
Microsoft unveils framework for building agentic AI apps
October 3, 2025: Microsoft has introduced the Microsoft Agent Framework, an open-source SDK and runtime for building, orchestrating, and deploying AI agents and multi-agent workflows, with full framework support for .NET and Python.
Salesforce Trusted AI Foundation seeks to power the agentic enterprise
October 2, 2025: As Salesforce pushes further into agentic AI, its aim is to evolve Salesforce Platform from an application for building AI to a foundational operating system for enterprise AI ecosystems.
ServiceNow’s AI Experience is an agentic AI UI for the Now Platform
September 30, 2025: ServiceNow today launched the AI Experience (AIx), a contextually aware, multimodal, AI-driven UI for its Now platform. Building on the ServiceNow AI Platform and with a foundation in Now Assist, the company describes it as “a unified, conversational front door to enterprise AI.”
How MCP is making AI agents actually do things in the real world
September 29, 2025: You’ve seen them: Those incredible large language models (LLMs) that can chat, write and even generate code. They’ve revolutionized how we interact with technology, but there’s a new, even more exciting chapter unfolding. Discover how MCP is turning chatbots into doers, and the future of work may never look the same.
Agentic AI in IT security: Where expectations meet reality
September 29, 2025: Agentic AI has shifted from lab demos to real-world SOC deployments. Unlike traditional automation scripts, software agents are designed to act on signals and execute security workflows intelligently, correlating logs, enriching alerts, and even taking first-line containment actions.
Walmart looks to cash in on agentic AI
September 19, 2025: Walmart doesn’t intend to lose its retail crown anytime soon. And, according to US EVP and CTO Hari Vasudev, the $815B company’s artificial intelligence strategy will play a key role in preventing that from happening.
5 steps for deploying agentic AI red teaming
September 17, 2025: As more enterprises deploy agentic AI applications, the potential attack surface increases in complexity and reach. But there is still hope that AI agents can be harnessed for defensive purposes too, including using traditional red teaming and penetration testing techniques but updated for the AI world.
Google unveils payments protocol for AI agents with major financial firms
September 17, 2025: Google has introduced the Agent Payments Protocol (AP2), an open framework developed with more than 60 payments and technology companies to support secure, agent-led transactions across platforms and payment methods.
CrowdStrike bets big on agentic AI with new offerings after $290M Onum buy
September 16, 2025: At its Fal.Con conference, the cybersecurity giant launched its Agentic Security Platform and Agentic Security Workforce, aiming to outpace AI-driven adversaries with real-time intelligence, automation, and a common language for defense.
Adobe makes Agent Orchestrator and AI agents generally available
September 10, 2025: Adobe Experience Platform (AEP) Agent Orchestrator and six new AI agents are designed to build, deliver, and optimize customer experience and marketing campaigns. The company also announced Experience Platform Agent Composer for customizing and configuring AI agents based on brand guidelines and organizational policy.
Rethinking the IT organization for the agentic AI era
September 2, 2025: With the advent of agentic AI, CIOs must be poised to adjust strategic IT priorities, mitigate new security risks, and reskill staff for a new era.
How to build a production-grade agentic AI platform
September 2, 2025: Modular orchestration, fail-safe design, hybrid memory management, and LLM integration with domain knowledge are essential to agentic AI systems that reason, act, and adapt at scale.
Agentic AI: A CISO’s security nightmare in the making?
September 2, 2025: Enterprises will no doubt be using agentic AI for a growing number of workflows and processes, including software development, customer support automation, and more. But what are the cybersecurity risks of agentic AI, and how much more work will it take for them to support their organizations’ agentic AI dreams?
Microsoft researchers develop new tech for video AI agents
September 2, 2025: Microsoft researchers are developing technologies for a new class of video AI agents to explore three-dimensional spaces before making decisions. The technology framework, called MindJourney, uses a range of AI technologies to understand and analyze 3D spaces, reason about the surroundings, and predict movement.
Salesforce AI Research unveils new tools for AI agents
August 27, 2025: Salesforce announced a simulated enterprise environment, benchmark, and account data unification tool that are designed to help customers transform into agentic AI enterprises.
Agentic AI promises a cybersecurity revolution — with asterisks
August 18, 2025: The hottest topic at this year’s Black Hat conference was the meteoric emergence of AI tools for both cyber adversaries and defenders, particularly the use of agentic AI to strengthen cybersecurity programs.
4 thoughts on who should manage AI agents
August 11, 2025: As AI agents proliferate, we need to turn our attention beyond AI agent builder platforms to AI orchestration and AI GRC platforms. That proliferation also raises questions about which groups within the enterprise should manage AI agents and how they should be treated.
How bright are AI agents? Not very, recent reports suggest
July 31, 2025: Security researchers are adding more weight to a truth that infosec pros had already grasped: AI agents are not very bright, and are easily tricked into doing stupid or dangerous things.
Will AI agents eat the SaaS market? Experts are split
July 31, 2025: As hype about AI agents reaches new heights, an emerging theory suggests that the groundbreaking AI tools will kill the SaaS business model. The claim isn’t particularly new, but it is resurfacing, with people like Microsoft CEO Satya Nadella voicing this position.
How agentic AI will change database management
July 28, 2025: Generative AI has already had a profound impact on the world of database management. And now, thanks to AI’s knack for pattern recognition, teams can use generative AI to analyze data sets, detect anomalies, and access invaluable insights with record speed and precision.
As AI agents go mainstream, companies lean into confidential computing for data security
July 21, 2025: Companies need to stop ignoring data security as AI agents take over internal data movement in IT environments, analysts and IT execs warn. To address that issue, some tech players are embracing the concept of “confidential computing.” While it’s existed for years, it’s now finding new life with the rise of genAI.
How agentic AI will transform mobile apps and field operations
July 15, 2025: Agentic AI will usher in new mobile AI experiences. Construction, manufacturing, healthcare, and other industries with significant field operations will benefit from mobile AI agents and the resulting operational agility.
MCP is fueling agentic AI — and introducing new security risks
July 10, 2025: Model Context Protocol (MCP) has caught fire, with several thousand MCP servers now available from a wide range of vendors, enabling AI assistants to connect to their data and services. And with agentic AI increasingly seen as the future of IT, MCP will only grow in use in the enterprise. But innovations like MCP also come with significant security risks.
3 industries where agentic AI is poised to make its mark
July 4, 2025: IT leaders from finance, retail, and healthcare lend insights into what organizations are doing with AI agents today — and where they see the technology taking their organizations and industries in the future.
IFS rolls TheLoops agentic AI into industrial ERP
June 27, 2025: IFS is adding AI agent development and management capabilities to its ERP platform with the acquisition of software startup TheLoops. The acquisition brings TheLoops’ full Agent Development life cycle (ADLC) platform into IFS, enabling enterprises to design, test, deploy, monitor, and fine-tune AI agents with built-in support for versioning, compliance, and performance optimization.
How AI agents and agentic AI differ from each other
June 12, 2025: With agentic AI in its infancy and organizations rushing to adopt AI agents, there seems to be confusion about the difference between “agentic AI” and “AI agents” technologies, but experts say there’s growing understanding that the two are separate, but related, tools.
The future of RPA ties to AI agents
June 10, 2025: RPA is accelerating toward a crossroads, with IT leaders and experts debating its future. Some IT leaders say that more powerful and autonomous AI agents will replace the two-decade-old AI precursor technology, while others predict that AI agents and RPA will work hand in hand.
MCP is enabling agentic AI, but how secure is it?
June 2, 2025: Model Context Protocol (MCP) is becoming the plug-and-play standard for agentic AI apps to pull in data in real time from multiple sources. However, this also makes it more attractive for malicious actors looking to exploit weaknesses in how MCP has been deployed.
The agentic AI assist Stanford University cancer care staff needed
May 30, 2025: At Microsoft Build 2025 earlier this month, Nigam Shah, CDO for Stanford Health Care, discussed agentic AI’s ability to redefine healthcare, especially in oncology, where physicians get overloaded with the administrative tasks of medicine that lead to burnout, he said.
Agentic AI, LLMs and standards big focus of Red Hat Summit
May 26, 2025: Red Hat announced a number of improvements in its core enterprise Linux product, including better security, better support for containers, and better support for edge devices. But the one topic that dominated the conversation was AI.
Putting agentic AI to work in Firebase Studio
May 21, 2025: Putting agentic AI to work in software engineering can be done in a variety of ways. Some agents work independently of the developer’s environment, working essentially like a remote developer. Others work directly within a developer’s own environment. Google’s Firebase Studio is an example of the latter, drawing on Google’s Gemini LLM to help developers prototype and build applications.
Why is Microsoft offering to turn websites into AI apps with NLWeb?
May 20, 2025: NLWeb, short for Natural Language Web, is designed to help enterprises build a natural language interface for their websites using the model of their choice and data to answer user queries about the contents of the website. Microsoft hopes to stake its claim on the agentic web before rivals Google and Amazon do.
Databricks to acquire open-source database startup Neon to build the next wave of AI agents
May 14, 2025: Agentic AI requires a new type of architecture because traditional workflows create gridlock, dragging down speed and performance. To get ahead in this next generation of app building, Databricks announced it will purchase Neon, an open-source serverless Postgres company.
Agentic mesh: The future of enterprise agent ecosystems
May 13, 2025: Nvidia CEO Jensen Huang predicts we’ll soon see “a couple of hundred million digital agents” inside the enterprise. Microsoft CEO Satya Nadella takes it even further: “Agents will replace all software.”
Google to unveil AI agent for developers at I/O, expand Gemini integration
May 13, 2025: Google is expected to unveil a new AI agent aimed at helping software developers manage tasks across the coding lifecycle, including task execution and documentation. The tool has reportedly been demonstrated to employees and select external developers ahead of the company’s annual I/O conference.
Nvidia, ServiceNow engineer open-source model to create AI agents
May 6, 2025: Nvidia and ServiceNow have created an AI model that can help companies create learning AI agents to automate corporate workloads. The open-source Apriel model, available generally in the second quarter on HuggingFace, will help create AI agents that can make decisions around IT, human resources and customer-service functions.
How IT leaders use agentic AI for business workflows
April 30, 2025: Jay Upchurch, CIO at SAS, backs agentic AI to enhance sales, marketing, IT, and HR motions. “Agentic AI can make sales more effective by handling lead scoring, assisting with customer segmentation, and optimizing targeted outreach,” he says.
Microsoft sees AI agents shaking up org charts, eliminating traditional functions
April 28, 2025: As companies increasingly automate work processes using agents, traditional functions such as finance, marketing, and engineering may fall away, giving rise to an ‘agent boss’ era of delegation and orchestration of myriad bots.
Cisco automates AI-driven security across enterprise networks
April 28, 2025: Cisco announced a range of AI-driven security enhancements, including improved threat detection and response capabilities in Cisco XDR and Splunk Security, new AI agents, and integration between Cisco’s AI Defense platform and ServiceNow SecOps.
Hype versus execution in agentic AI
April 25, 2025: Agentic AI promises autonomous systems capable of reasoning, making decisions, and dynamically adapting to changing conditions. The allure lies in machines operating independently, free of human intervention, streamlining processes and enhancing efficiency at unprecedented scales. But David Linthicum writes, don’t be swept up by ambitious promises.
Agents are here — but can you see what they’re doing?
April 23, 2025: As the agentic AI models powering individual agents get smarter, the use cases for agentic AI systems get more ambitious — and the risks posed by these systems increase exponentially.

Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?
April 15, 2025: Agentic AI promises to simplify complex tasks such as crypto trading or managing digital assets by automating decisions, enhancing accessibility, and masking technical complexity.

Agentic AI is both boon and bane for security pros
April 15, 2025: Cybersecurity is at a crossroads with agentic AI. It’s a powerful tool that can create reams of code in the blink of an eye, find and defuse threats, and be used decisively and defensively. This has proved to be a huge force multiplier and productivity boon. But while powerful, agentic AI isn’t dependable, and that is the conundrum.

AI agents vs. agentic AI: What do enterprises want?
April 15, 2025: Now that the AI agent story has morphed into “agentic AI,” it seems to have taken on the same big-cloud-AI flavor that enterprises already rejected. What do they want from AI agents, why is “agentic” thinking wrong, and where is this all headed?

A multicloud experiment in agentic AI: Lessons learned
April 11, 2025: Turns out you really can build a decentralized AI system that operates successfully across multiple public cloud providers. It’s both challenging and costly.

Google adds open source framework for building agents to Vertex AI
April 9, 2025: Google is adding a new open source framework for building agents to its AI and machine learning platform Vertex AI, along with other updates to help deploy and maintain these agents. The open source Agent Development Kit (ADK) will make it possible to build an AI agent in under 100 lines of Python code. Google expects to add support for more languages later this year.

Google’s Agent2Agent open protocol aims to connect disparate agents
April 9, 2025: Google has taken the covers off a new open protocol — Agent2Agent (A2A) — that aims to connect agents across disparate ecosystems. At its annual Cloud Next conference, Google said the A2A protocol will enable enterprises to adopt agents more readily because it bypasses the challenge of agents built on different vendor ecosystems being unable to communicate with each other.
Riverbed bolsters AIOps platform with predictive and agentic AI
April 8, 2025: Riverbed unveiled updates to its AIOps and observability platform that the company says will help IT organizations manage complex distributed infrastructure and data more efficiently. Expanded AI capabilities are aimed at making it easier to manage AIOps and enabling IT organizations to transition from reactive to predictive IT operations.

Microsoft’s newest AI agents can detail how they reason
March 26, 2025: If you’re wondering how AI agents work, Microsoft’s new Copilot AI agents provide real-time answers on how data is being analyzed and sourced to reach results. The Researcher and Analyst agents take a deeper look at data sources such as email, chat or databases within an organization to produce research reports, analyze strategies, or convert raw information into meaningful data.

Microsoft launches AI agents to automate cybersecurity amid rising threats
March 26, 2025: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats. The new tools focus on tasks such as phishing detection, data protection, and identity management.

How AI agents work
March 24, 2025: By leveraging technologies such as machine learning, natural language processing (NLP), and contextual understanding, AI agents can operate independently, even partnering with other agents to perform complex tasks.

5 top business use cases for AI agents
March 19, 2025: AI agents are poised to transform the enterprise, from automating mundane tasks to driving customer service and innovation. But having strong guardrails in place will be key to success.
March 21, 2025: As enterprises look to adopt agents and agentic AI to boost the efficiency of their applications, Nvidia this week introduced a new open-source software library — the AgentIQ toolkit — to help developers connect disparate agents and agent frameworks.

Deloitte unveils agentic AI platform
March 18, 2025: At Nvidia GTC 2025 in San Jose, Deloitte announced Zora AI, a new agentic AI platform that offers a portfolio of AI agents for finance, human capital, supply chain, procurement, sales and marketing, and customer service. The platform draws on Deloitte’s experience from its technology, risk, tax, and audit businesses, and is integrated with all major enterprise software platforms.

The dawn of agentic AI: Are we ready for autonomous technology?
March 15, 2025: Much of the AI work to date has focused on large language models (LLMs), with the goal of giving prompts to get knowledge out of unstructured data. So it’s a question-and-answer process. Agentic AI goes beyond that: you can give it a task that might involve a complex set of steps that can change each time.

How to know a business process is ripe for agentic AI
March 11, 2025: Deloitte predicts that in 2025, 25% of companies that use generative AI will launch agentic AI pilots or proofs of concept, growing to 50% in 2027. The firm says some agentic AI applications, in some industries and for some use cases, could see actual adoption into existing workflows this year.

With new division, AWS bets big on agentic AI automation
March 6, 2025: Amazon Web Services customers can expect to hear a lot more about agentic AI from AWS in the future, with the news that the company is setting up a dedicated unit to promote the technology on its platform.

How agentic AI makes decisions and solves problems
March 6, 2025: GenAI’s latest big step forward has been the arrival of autonomous AI agents. Agentic AI is based on AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals.
CIOs are bullish on AI agents. IT employees? Not so much
Feb. 4, 2025: Most CIOs and CTOs are bullish on agentic AI, believing the emerging technology will soon become essential to their enterprises, but lower-level IT pros who will be tasked with implementing agents have serious doubts.

The next AI wave — agents — should come with warning labels. Is now the right time to invest in them?
Jan. 13, 2025: The next wave of artificial intelligence (AI) adoption is already under way, as AI agents — AI applications that can function independently and execute complex workflows with minimal or limited direct human oversight — are being rolled out across the tech industry.

AI agents are unlike any technology ever
Dec. 1, 2024: The agents are coming, and they represent a fundamental shift in the role artificial intelligence plays in businesses, governments, and our lives.

AI agents are coming to work — here’s what businesses need to know
Nov. 21, 2024: AI agents will soon be everywhere, automating complex business processes and taking care of mundane tasks for workers — at least that’s the claim of various software vendors that are quickly adding intelligent bots to a wide range of work apps.

Agentic AI swarms are headed your way
Nov. 1, 2024: OpenAI launched an experimental framework called Swarm. It’s a “lightweight” system for the development of agentic AI swarms, which are networks of autonomous AI agents able to work together to handle complex tasks without human intervention, according to OpenAI.

Is now the right time to invest in implementing agentic AI?
Oct. 31, 2024: While software vendors say their current agentic AI-based offerings are easy to implement, analysts say that’s far from the truth.
Microsoft drops M365 Copilot price for SMBs, upgrades free Copilot Chat
Microsoft will reduce the price of Microsoft 365 Copilot for small and mid-sized firms beginning next month.
As of Dec. 1, 2025, Microsoft 365 Copilot for Business will cost $21 per user, per month for customers with any Microsoft 365 Business plan. That’s down from the current $30 price per month set when the tool debuted in 2023.
The new Microsoft 365 Copilot Business subscription will be available to organizations with 300 or fewer employees and includes the same features as before. This means access to the AI assistant in apps such as Excel, Teams, and Outlook, as well as Copilot agents and tools such as Notebooks.
“We heard from smaller companies that they wanted a version that would fit their needs and budgets, too,” a Microsoft spokesperson said in a blog post Tuesday. “So we’re making that happen.”
The announcement came during the company’s annual Microsoft Ignite conference this week in San Francisco.
Existing M365 Copilot customers who qualify for the new M365 Copilot Business plan should contact Microsoft, as they will not be automatically moved to the lower price.
Despite strong interest among IT leaders in M365 Copilot, uptake remains at an early stage. Most customers are still in pilot projects or have deployed the tool to a small subset of employees as they grapple with challenges around data governance, user adoption, and uncertain value. The lower price could ease some of those concerns for small and mid-size businesses (SMBs).
“It’s a smart move by Microsoft to price Copilot aggressively at $21 for SMBs, primarily because their smaller budgets mean it softens the ROI measurement headache,” said Mike Leone, practice director, Data, Analytics & AI, at Omdia.
“While big enterprises justified the $30 investment by sinking resources into projects just to prove its value (or frankly, just paid and hoped for the best), the math simplifies dramatically for an SMB.”
Businesses that deploy M365 Copilot often struggle to quantify productivity gains, he said, making it hard to calculate a clear return on investment.
“Like anything, it’s hard to justify paying a premium (though priced competitively) when you can’t prove who’s using it or how much time users are actually saving,” he said. “By lowering the financial hurdle so significantly, SMBs can immediately see the tangible value of regained time and efficiency that validates the purchase.”
Businesses will still need to carefully consider where to allocate M365 Copilot Business licenses, however. SMBs operate on razor-thin margins, he said, and $21 per user is still a significant additional monthly cost. “The question isn’t just, ‘Is it cheaper?’ but ‘Does my bookkeeper or my warehouse manager need this tool?’ I think Microsoft would do well in providing clearer guidance on which roles within an SMB would benefit most,” said Leone.
“So, the price drop is a fantastic opener, but I don’t think it eliminates the need to scrutinize that total monthly overhead before rolling it out company wide.”
The introduction of M365 Copilot Business expands the range of options for accessing Microsoft’s AI assistant. Alongside the two main M365 Copilot subscriptions, businesses can subscribe to Teams Premium ($10 per user/month), which includes “intelligent recap” and other collaboration-focused AI features such as automated notetaking and live translation.
Then there’s Copilot Chat, available at no extra cost to Microsoft 365 customers. Essentially a lite version of M365 Copilot, it features a chat interface (grounded in web data rather than a customer’s own files), limited management controls, and pay-as-you-go access to agents.
In September, Microsoft announced that Copilot Chat will be available inside Office apps such as Word, Excel, and PowerPoint. This will let users ask the AI assistant for help drafting documents or analyzing spreadsheets, for instance.
At Ignite, Microsoft also announced an enhancement to the Copilot Chat integration with Outlook that will be “content-aware” across an entire Outlook inbox, calendar, and meetings, not just individual email threads. It is slated to be available in preview in March 2026.
Agent Mode in Word, Excel, and PowerPoint — announced for paid M365 Copilot users in September — will enable content creation directly from the Copilot Chat interface. The AI assistant asks clarifying questions before producing a draft that users can ask it to iterate on, or they can jump into the app to work on the draft themselves.
Agent Mode in Copilot Chat will also be available in “early 2026.”
More Microsoft Ignite 2025 news:
- Microsoft Fabric IQ adds ‘semantic intelligence’ layer to Fabric
- Microsoft unveils Agent 365 to help IT manage AI ‘agent sprawl’
- Microsoft touts scalability of its new PostgreSQL-compatible managed database
- Microsoft bets on agentic AI for cloud ops, but analysts doubt the pitch
Gartner: IT spending in Europe to increase 11% in 2026
US research firm Gartner predicts that IT spending in Europe will increase by 11% to a total of $1.4 trillion in 2026. Growth is expected to be driven by AI, cloud computing and cybersecurity, despite limited IT budgets and few new hires.
Spending on generative AI is expected to grow by 78%, while cloud investments will increase by 24% as companies increasingly move services to Europe for sovereignty reasons. Data center systems, meanwhile, are growing by almost 19%, with investment in AI-optimized servers expected to reach $46.8 billion. This compares to $67 billion for China and $170 billion for North America.
Meanwhile, Gartner predicts an increase in regional AI lock-in by 2027, when 35% of countries are expected to be tied to region-specific AI platforms, driven by regulation, security, and demands for national control over data.
How Apple tech can deliver your very own private AI answers
Yesterday, we looked at how Macs can provide on-premises AI services for you; today we’re going to speculate a little more. Consider this: millions of people use iPhones to access public generative AI (genAI) tools such as ChatGPT, Gemini, and others. When you use those tools, you’re sharing your data with cloud providers, which isn’t necessarily a good thing.
What if there were another way?
Well, there is another way, one in which your own AI Mac cluster becomes the first port of call for AI functions you can’t do natively on your Apple device. This article describes the installation of a working version of DeepSeek on a Mac for access from a remote iPhone.
In business, this becomes an on-premises AI that can be accessed remotely by authorized endpoints (you, your iPhone, your employees’ devices). The beauty of this arrangement is that whatever data you share or requests you might make are handled only by the devices and software you control.
How it might work
You might be running an open-source Llama large language model (LLM) to analyze your business documents and databases — combined with data (privately) found on the web — to give your field operatives access to up-to-the-minute analysis relevant to them.
In this model, you might have a couple of high-memory Macs (even an M1 Max Mac Studio, which you can get second-hand for around $1,000) securely hosted at your offices, with access managed by your choice of secure remote access solutions and your own endpoint security profiling/MDM tools. You might use Apple’s ML framework, MLX, installing models you choose, or turn to other solutions, including Ollama.
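As a rough sketch of how an authorized device might query such a Mac over the network, here is a minimal Python client for Ollama’s local REST API; the hostname, port, and model name are illustrative assumptions, not part of any specific setup described here.

```python
import json
import urllib.request

# Hypothetical address of the Mac running Ollama on your LAN.
# Ollama listens on port 11434 by default; adjust host and model to taste.
OLLAMA_HOST = "http://mac-studio.local:11434"

def build_generate_request(prompt: str, model: str = "llama3"):
    """Build the URL and JSON body for Ollama's /api/generate endpoint."""
    url = f"{OLLAMA_HOST}/api/generate"
    body = {"model": model, "prompt": prompt, "stream": False}
    return url, body

def query_local_llm(prompt: str, model: str = "llama3") -> str:
    """POST the prompt to the local Ollama server and return its reply text."""
    url, body = build_generate_request(prompt, model)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A call such as `query_local_llm("Summarize today's field reports")` would then be answered entirely by hardware you control, with nothing leaving your network.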
As the useful how-to guide noted above shows, people are already experimenting with usage models like this. When they do, they should find performance seems pretty solid, thanks to the excellence of Apple Silicon chips and their capabilities for efficient memory management. There are some limits, such as how many tokens of thought your Mac can produce, and things slow down as tasks become more complex. But it works pretty well up to a point.
Apple continues to raise that point with ongoing efficiency improvements.
Apple continues to improve its infrastructure
Apple is reportedly about to allow for the creation of ad hoc Mac clusters over Thunderbolt 5, making it much easier to deploy teams of Macs. That means better performance and access to the combined memory of those machines.
That matters. LLMs demand a lot of resources, so the capacity to easily cluster multiple Macs makes it possible to use on-prem solutions for more complex AI questions. (macOS Tahoe 26.2 will also give MLX full access to the neural accelerators hosted on M5 chips, which will deliver immediate and dramatic speed improvements for AI inferencing.)
Developers, meanwhile, are making extensive use of Apple’s Foundation Models to access Apple Intelligence LLMs from within their apps. If you’d like to test the potential of this a little for yourself, you could use an app that supports this, or even explore a project called AFM; it lets you run those models from the command line.
The steady democratization of artificial intelligence continues. It must, if we’re to break the stranglehold of elite ownership of the world’s leading AI models. To a great extent, the power/performance per watt advantages of Apple Silicon are really coming into their own, as well they might, given that AI was part of Apple’s goal when it designed Mac silicon.
Now, imagine how these kinds of private, personal AI deployments could help when using a visionOS device to interrogate the Mac AI cluster you keep safe and sound in your office or home.
That’s just one possible end game in the drive to local on-device, edge AI — and it’s closer to realization than you think.
Trump calls for federal AI standard, warns China will ‘easily catch’ US
President Donald Trump on Tuesday called on Congress to establish a single federal standard for AI regulation, as House Republicans explore attaching preemption language to the National Defense Authorization Act that could override state AI laws nationwide.
In two posts on Truth Social, Trump urged lawmakers to act quickly. “Investment in AI is helping to make the U.S. Economy the ‘HOTTEST’ in the World,” he wrote. “But overregulation by the States is threatening to undermine this Major Growth Engine.”
The president warned that without unified standards, “China will easily catch us in the AI race. Put it in the NDAA, or pass a separate Bill, and nobody will ever be able to compete with America.”
Trump’s post aligns with efforts already underway in Congress. House Majority Leader Steve Scalise is exploring adding AI preemption language to the NDAA, a must-pass defense bill expected to be finalized in December. If successful, it would mark the most significant federal intervention in AI governance to date.
The cost of 50 rulebooks
The frustration behind Trump’s call stems from a regulatory landscape that has grown increasingly complex. State lawmakers introduced nearly 700 AI-related bills across 45 states in 2024, with 113 enacted into law.
Companies operating across state lines now face varying obligations under Colorado’s AI Act (effective June 2026), California’s SB 53 transparency requirements, and Texas’s TRAIGA (effective January 2026), each with different requirements for bias testing, impact assessments, and consumer notification.
“The current patchwork of state-level AI rules has shifted from a background nuisance to a structural drag on execution,” said Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research. “Enterprises building nationwide AI services now operate inside a maze where the same model can be considered high risk in one state, lightly constrained in another, and entirely unregulated at the federal level.”
Teams spend increasing time producing impact assessments, audit trails, and contract variants for different jurisdictions, costs felt most acutely by IT services providers embedding these constraints in multi-tenant platforms. Compounding the problem, a model that is compliant today may require redesign next quarter as new state mandates land.
The ‘woke AI’ trigger
Trump also pointed to ideological concerns as a driver for federal action. “Some States are even trying to embed DEI ideology into AI models, producing ‘Woke AI’ (Remember Black George Washington?),” he wrote, adding that federal standards could “protect children AND prevent censorship.”
The reference points to the February 2024 controversy when Google’s Gemini image generator produced historically inaccurate depictions, including Black Founding Fathers and racially diverse Nazi soldiers. Google paused the tool and CEO Sundar Pichai apologized, calling the outputs “problematic.”
The administration acted on these concerns in July with an executive order barring federal agencies from procuring AI models that fail “truth-seeking” and “ideological neutrality” standards.
The China argument, and its limits
Beyond ideological concerns, Trump framed federal preemption as essential to national competitiveness. His argument echoed industry positions: OpenAI, Google, and Nvidia CEO Jensen Huang have all backed federal preemption, arguing China’s centralized approach gives Beijing an advantage. Anthropic, however, has opposed broad preemption, lobbying against a proposed ten-year moratorium on state AI laws earlier this year.
The geopolitical picture is more complex than the rhetoric suggests, Gogia noted. China benefits from a single national rulebook, but AI leadership depends on capital, compute, talent, and the trustworthiness of systems being deployed.
“The argument that the US will be overtaken by China unless it replaces state rules with one federal AI standard is powerful rhetoric but analytically incomplete,” Gogia said. “The real competition will be won not just through fast deployment but through the ability to show that powerful AI systems are safe, resilient, and aligned with democratic values.”
Uncertain path, clear advice
Whether Congress acts on Trump’s call remains uncertain. Federal preemption faces headwinds: the Senate voted 99-1 in July to strip a moratorium provision from budget legislation. Forty state attorneys general opposed that measure, with California Attorney General Rob Bonta arguing that “states must be able to protect their residents by responding to emerging and evolving AI technology.”
Even if preemption passes, a weak federal standard won’t eliminate compliance challenges. Gogia said enterprises would still hold themselves to higher internal standards if federal rules underplay safety, transparency, and liability.
His advice to CIOs: don’t wait for Washington. The most resilient approach is to adopt the strictest credible regime as a baseline (typically EU AI Act obligations), build modular systems that are adjustable by region, and invest in transparency and documentation as rigorously as in model accuracy.
“CIOs who pause major AI initiatives waiting for a perfect federal framework are exposing their organisations to far greater strategic risk than those who proceed under today’s rules with strong internal governance,” Gogia said.
The White House, OpenAI, and Anthropic did not immediately respond to requests for comment.
Google releases Gemini 3 with new reasoning and automation features
Google has launched Gemini 3 and integrated the new AI model into its search engine immediately, aiming to push advanced AI features into consumer and enterprise products faster as competition in the AI market intensifies.
The release brings new agentic capabilities for coding, workflow automation, and search, raising questions about how quickly businesses can adopt these tools and what impact they may have on existing IT operations.
Google also introduced new agent features, including Gemini Agent and the Antigravity development platform, designed to automate multi-step tasks and support software teams.
Gemini 3 also comes with a Deep Think mode, which, according to Google, “outperforms Gemini 3 Pro’s already impressive performance on Humanity’s Last Exam (41.0% without the use of tools) and GPQA Diamond (93.8%). It also achieves an unprecedented 45.1% on ARC-AGI-2 (with code execution, ARC Prize Verified), demonstrating its ability to solve novel challenges.”
The update introduces a generative UI that can build custom visual layouts in response to prompts, allowing Gemini to present answers in interactive, application-like formats.
Gemini 3 also brings long-context reasoning and improved multimodal support, enabling the model to handle larger documents, richer datasets, and complex multimedia inputs, the company said.
Immediate integration
For enterprise IT leaders, the key question is how quickly Gemini 3’s capabilities will be integrated into real-world workplace applications and whether agentic features like Deep Think and Antigravity can deliver measurable productivity gains without introducing new operational risks.
According to Sanchit Vir Gogia, chief analyst at Greyhound Research, Google’s decision to embed Gemini 3 directly into Search from day one is one of the most consequential shifts in the enterprise AI market this year.
“This is not an AI feature layered on top of Search,” Gogia said. “It is a fundamental rewrite of the global information distribution engine that billions rely on every day. For enterprises, this marks a decisive moment where AI is no longer a separate capability but the default interpreter of user intent, workflow context, and knowledge retrieval.”
By tightly coupling Gemini 3 with Search, Google is converting its most powerful distribution surface into a permanent AI gateway, which reshapes how organizations consume intelligence and structure their digital workplace experience, Gogia said.
Others suggested that the integration could also reshape how enterprises rely on Google’s ecosystem.
“For enterprises, Google Search might become a one-stop shop for all secondary information, whether it is normal search or for generating content,” said Sharath Srinivasamurthy, research vice president at IDC. “Also, the ad business for Google will go through a change as it will consider AI-driven search and prompts to push relevant ads. The search and prompts (now) will start feeding into Google training models, which will eventually make the search and Gemini’s responses better.”
Charlie Dai, VP and principal analyst at Forrester, said that Google’s integration decisions reflect its confidence in the model’s performance and its native multimodal capabilities. It also shows Google’s intent to monetize AI through core products rather than standalone offerings.
“As enterprise search becomes an AI gateway, enterprise CIOs must have a holistic view on the dependencies in the AI stack for long-term observability and governance,” Dai added.
Automating multi-step workflows
While Google is positioning Gemini 3’s agentic capabilities as a step toward hands-free automation, analysts caution that most enterprises are still far from running fully autonomous workflows. Srinivasamurthy said the complexity of real-world processes remains a major barrier.
“Workflows that cut across multiple systems, involve human exceptions, compliance reviews, or high-risk decision points still require careful orchestration and typically human-in-the-loop supervision,” Srinivasamurthy added. “Enterprise adoption is in its early stages, and scaling from pilot or siloed implementations to organization-wide workflows continues to be a significant challenge.”
Dai agreed that while agentic tools like Gemini Agent and Antigravity with increasingly powerful reasoning capabilities will continue to move enterprises closer to workflow automation, safety hinges on robust guardrails and AI readiness of enterprise data.
“CIOs need governance frameworks for identity, data lineage, and action approval, plus continuous monitoring for non-deterministic behavior,” Dai said.
Gogia said that Gemini Agent and Antigravity will unlock meaningful productivity gains, but only once enterprises build the frameworks required to manage autonomous systems responsibly. “The technology may be ready for demonstration, but enterprise governance is still catching up,” Gogia said. “Organizations that scale agentic automation prematurely may expose themselves to operational, regulatory, and reputational risks that outweigh the short-term benefits.”