AI vs. copyright
Last year, I noted that OpenAI’s view on copyright is that it’s fine and dandy to copy, paste, and steal people’s work. OpenAI is far from alone. Anthropic, Google, and Meta all trot out the same tired old arguments: AI must be free to use copyrighted material under the legal doctrine of fair use so that they can deliver top-notch AI programs.
Further, they all claim that if the US government doesn’t let them strip-mine the work of writers, artists, and musicians, someone else will do it instead, and won’t that be awful?
Of course, the AI companies could just, you know, pay people for access to their work instead of stealing it under the cloak of improving AI, but that might slow down their leaders’ frantic dash to catch up with Elon Musk and become the world’s first trillionaire.
Horrors!
In the meantime, the median pay for a full-time writer, according to the Authors Guild, is just over $20,000 a year. Artists? $54,000 annually. And musicians? $50,000. Those numbers are on the high side, by the way: they’re for full-time professionals, and far more people work part-time in these fields than make, or try to make, a living from creative work.
What? You think we’re rich? Please. For every Stephen King, Jeff Koons, or Taylor Swift, there are a thousand people whose names you’ll never know. And, as hard as these folks have it now, AI firms are determined that creative professionals will never see a penny from their work being used as the ore from which the companies will refine billions.
Some people are standing up for their rights. Publishing companies such as the New York Times and Universal Music, as well as nonprofit organizations like the Independent Society of Musicians, are all fighting for creatives to be paid. Publishers, in particular, are not always aligned with writers and musicians, but at least they’re trying to force the AI giants to pay something.
At least part of the US government is also standing up for copyright holders. “Making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries,” the US Copyright Office declared in a recent report.
Personally, I’d use a lot stronger language, but it’s something.
Of course, President Donald Trump immediately fired the head of the Copyright Office. Her days were probably numbered anyway. Earlier, the office had declared that copyright should only be granted to AI-assisted works based on the “centrality of human creativity.”
“Wait, wait,” I hear you saying, “why would that tick off Trump’s AI allies?” Oh, you see, while the AI giants want to use your work for free, they want their own “works” protected.
Remember the Chinese AI company DeepSeek, which scared the pants off OpenAI for a while? OpenAI claimed DeepSeek had “inappropriately distilled” its models. “We take aggressive, proactive countermeasures to protect our technology and will continue working closely with the US government to protect the most capable models being built here,” the company said.
In short, OpenAI wants to have it both ways. The company wants to be free to Hoover up your work, but you can’t take its “creations.”
OpenAI recently spelled out its preferred policy in a fawning letter to Trump’s Office of Science and Technology Policy. In it, OpenAI says, “we must ensure that people have freedom of intelligence, by which we mean the freedom to access and benefit from AGI [artificial general intelligence], protected from both autocratic powers that would take people’s freedoms away, and layers of laws and bureaucracy that would prevent our realizing them.”
For laws and bureaucracy, read copyright and the right of people to be paid for their intellectual work.
As with so many things in US government these days, we won’t be able to depend on government agencies to protect writers, artists, and musicians, with Trump firing any and all who disagree with him. Instead, we must rely on court rulings.
In some cases, such as Thomson Reuters v. ROSS Intelligence, courts have found that wholesale copying of copyrighted material for AI training can constitute infringement, especially when it harms the market for the original works and is not sufficiently transformative. Hopefully, other lawsuits against companies like Meta, OpenAI, and Anthropic will show that their AI outputs are unlawfully competing with original works.
As lawsuits proceed and new regulations are debated, the relationship between AI and copyright law will continue to evolve. If it comes out the right way, AI can still be useful and profitable, even as the AI companies do their damnedest to avoid paying anyone for the work their large language models (LLMs) run on.
If the courts can’t hold the wall for true creativity, we may wind up drowning in pale imitations of it, with each successive wave farther from the real thing.
This potential watering down of creativity is a lot like the erosion of independent thinking that science fiction writer Neal Stephenson noted recently: “I follow conversations among professional educators who all report the same phenomenon, which is that their students use ChatGPT for everything, and in consequence learn nothing. We may end up with at least one generation of people who are like the Eloi in H.G. Wells’s The Time Machine, in that they are mental weaklings utterly dependent on technologies that they don’t understand and that they could never rebuild from scratch were they to break down.”
Consumer rights group: Why a 10-year ban on AI regulation will harm Americans
This week, more than 140 civil rights and consumer protection organizations signed a letter to Congress opposing legislation that would preempt state and local laws governing artificial intelligence (AI) for the next decade.
House Republicans last week added a broad 10-year ban on state and local AI regulations to the Budget Reconciliation Bill that’s currently being debated in the House. The bill would prevent state and local oversight without providing federal alternatives.
This year alone, about two-thirds of US states have proposed or enacted more than 500 laws governing AI technology. If passed, the federal bill would stop those laws from being enforced.
The nonprofit Center for Democracy & Technology (CDT) joined the other organizations in signing the opposition letter, which warns that removing AI protections leaves Americans vulnerable to current and emerging AI risks.
Travis Hall, the CDT’s director for state engagement, answered questions posed by Computerworld to help determine the impact of the House Reconciliation Bill’s moratorium on AI regulations.
Why is regulating AI important, and what are the potential dangers it poses without oversight?
AI is a tool that can be used for significant good, but it can and already has been used for fraud and abuse, as well as in ways that can cause real harm, both intentional and unintentional — as was thoroughly discussed in the House’s own bipartisan AI Task Force Report.
These harms can range from impacting employment opportunities and workers’ rights to threatening accuracy in medical diagnoses or criminal sentencing, and many current laws have gaps and loopholes that leave AI uses in gray areas. Refusing to enact reasonable regulations places AI developers and deployers into a lawless and unaccountable zone, which will ultimately undermine the trust of the public in their continued development and use.
How do you regulate something as potentially ubiquitous as AI?
There are multiple levels at which AI can be regulated. The first is through the application of sectoral laws and regulations, providing specific rules or guidance for particular use cases such as health, education, or public sector use. Regulations in these spaces are often already well established but need to be refined to adapt to the introduction of AI.
The second is that there can be general rules regarding things like transparency and accountability, which incentivize responsible behavior across the AI chain (developers, deployers, users) and can ensure that core values like privacy and security are baked in.
Why do you think the House Republicans have proposed banning states from regulating AI for such a long period of time?
Proponents of the 10-year moratorium have argued that it would prevent a patchwork of regulations that could hinder the development of these technologies, and that Congress is the proper body to put rules in place.
But Congress thus far has refused to establish such a framework, and instead it’s proposing to prevent any protections at any level of government, completely abdicating its responsibility to address the serious harms we know AI can cause.
It is a gift to the largest technology companies at the expense of users — small or large — who increasingly rely on their services, as well as the American public who will be subject to unaccountable and inscrutable systems.
Can you describe some of the state statutes you believe are most important to safeguarding Americans from potential AI harms?
There are a range of statutes that would be overturned, including laws that govern how state and local officials themselves procure and use these technologies.
Red and blue states alike — including Arkansas, Kentucky, and Montana — have passed bills governing the public sector’s AI procurement and use. Several states, including Colorado, Illinois, and Utah, have consumer protection and civil rights laws governing AI or automated decision systems.
This bill undermines states’ ability to enforce longstanding laws that protect their residents or to clarify how they should apply to these new technologies.
Sen. Ted Cruz, R-Texas, warns that a patchwork of state AI laws causes confusion. But should a single federal rule apply equally to rural towns and tech hubs? How can we balance national standards with local needs?
The blanket preemption assumes that all of these communities are best served with no governance of AI or automated decision systems — or, more cynically, that the short-term financial interests of companies that develop and deploy AI tools should take precedence over the civil rights and economic interests of ordinary people.
While there can be a reasoned discussion about what issues need uniform rules across the country and which allow flexibility for state and local officials to set rules (an easy one would be regarding their own procurement of systems), what is being proposed is a blanket ban on state and local rules with no federal regulations in place.
Further, we have not seen, nor are we likely to see, a significant “patchwork” of protections throughout the country. The same arguments were made in the state privacy context as well, yet, with one exception, states have passed identical or nearly identical laws, mostly written by industry. Preempting state laws to avoid a patchwork system that’s unlikely to ever exist is simply bad policy and will cause more needless harm to consumers.
Proponents of the state AI regulation moratorium have compared it to the Internet Tax Freedom Act — the “internet tax moratorium,” which helped the internet flourish in its early days. Why don’t you believe the same could be true for AI?
There are a couple of key differences between the Internet Tax Freedom Act and the proposed moratorium.
First, what was being developed in the 1990s was a unified, connected, global internet. Splintering the internet into silos was (and, to be frank, still is) a real danger to the fundamental feature of the platform that allowed it to thrive. The same is not true for AI systems and models, which are a diverse set of technologies and services regularly customized to respond to particular use cases and needs. Diverse sets of regulatory responsibilities do not threaten AI the way splintering threatened the nascent internet.
Second, removing potential taxation as a means of spurring commerce is wholly different from removing consumer protections. The former encourages participation by lowering prices, while the latter imposes significant costs in the form of fraud, abuse, and real-world harm.
In short, there is a massive difference between stating that an ill-defined suite of technologies is off limits from any type of intervention at the state and local level and trying to help bolster a nascent and global platform through tax incentives.
Jony Ive and OpenAI plan ‘bicycles’ for 21st-century minds
In a move that casts a shadow across Apple’s upcoming Worldwide Developers Conference, OpenAI has announced that it will purchase io, the AI startup founded by acclaimed former Apple designer Sir Jony Ive, who helped create the iMac, iPod, and iPhone.
The deal sees Ive’s hand-picked io team of talented Apple alumni merge with OpenAI. Ive himself stays outside the $6.5 billion deal. He will retain independence at his company, LoveFrom, but will take on “deep design and creative responsibilities across OpenAI and io.”
Toward the human interface for AI
The intention is to design the user interfaces for AI-enabled machines that will define the future of tech. (I hate to say “I told you so.”)
“This is an extraordinary moment,” declares the OpenAI press release announcing the deal. “Computers are now seeing, thinking and understanding. Despite this unprecedented capability, our experience remains shaped by traditional products and interfaces.”
While OpenAI doesn’t quite go so far as to say the move means AI is about to enter its iPhone moment, the company quite clearly believes this to be the case. Ive famously left Apple in 2019, staying on as an advisor for a while before cutting ties with the company completely, just before starting io.
“I have a growing sense that everything I have learned over the last 30 years has led me to this moment,” said Ive.
Apple echoes are everywhere
For a veteran Apple watcher, there are a lot of echoes within the announcement. Even the press release has an Apple-like resonance, headed up by a tasteful picture of Ive with OpenAI CEO Sam Altman. Longtime Apple watchers really should not ignore these echoes.
Ive and his hand-picked team of historically important former Apple design talent, including Evans Hankey and Tang Tan, will take over design and creative at OpenAI to build AI-enabled devices people can use to make things. If that sounds familiar, think back to Apple founder Steve Jobs and his description of computers as “bicycles for the mind.” That sounds like what OpenAI now intends to make.
It isn’t just an intimation of Apple; it’s about muscling into the same innovation space.
“I hope we can bring some of the delight, wonder and creative spirit that I first felt using an Apple Computer 30 years ago,” said Sam Altman. You can watch a short video featuring Altman and Ive discussing their plans here.
A change in the balance
Of course, Apple has its own relationship with OpenAI, but the arrival of its acclaimed former designer at the company will change the balance of power — particularly as Apple itself is struggling with artificial intelligence.
To put the deal into some kind of context, analyst firm Gartner expects worldwide genAI spending to reach a total of $644 billion in 2025, an increase of 76.4% from 2024. This spend includes a huge increase in sales of AI devices, particularly servers and smartphones.
“By 2026, generative design AI will automate 60% of the design effort for new websites and mobile apps,” according to Gartner’s Market Databook, which anticipates that by 2026, over 100 million humans will “engage robo-colleagues (synthetic virtual colleagues) to contribute to enterprise work.”
An analyst perspective
So, what does Gartner think the deal means for OpenAI, Apple, and the future of tech?
I spoke with Chirag Dekate, Gartner VP and analyst for quantum technologies, AI infrastructures, and supercomputing. He thinks the arrangement will put OpenAI in competition with all the big hardware players in tech, and, perhaps more importantly, reflects an evolutionary step in AI, one that ends up with far more intelligent devices that feel natural to use. I reproduce his analysis below, as it’s far too wide in scope to paraphrase.
What does this deal mean for OpenAI?
Dekate: “This marks a next phase of evolution for OpenAI. Market trends, as signaled by Google at its I/O event yesterday, by Meta, and by other innovators, are clear: Leadership in AI is not just about building powerful models anymore; it’s about shaping the entire experience around AI. Bringing Jony Ive on board to design AI-native hardware shows that, like Google, Meta, and peers, OpenAI is serious about creating devices where the tech and the design work hand in hand.
“Until now, OpenAI was reliant on its peers and ecosystems in the cloud to diffuse AI into products and experiences. With this acquisition, OpenAI, rather than relying on others to bring its models to life, is stepping into the driver’s seat. OpenAI wants to craft the physical touchpoints of AI itself, devices that feel intuitive and indispensable in everyday experiences.
“This acquisition is also a strategic move. With this kind of vertical integration, OpenAI is positioning itself to go head-to-head with the likes of Google, Meta, and Tesla, not just on software, but on how we experience AI in the real world.”
What does the deal mean for Apple?
Dekate: “This is an interesting moment for Apple. With Ive, the company’s longtime design visionary, helping build the next generation of AI devices outside of Apple, it could introduce new ways for people to interact with technology, possibly in ways that challenge Apple’s current product thinking. Today’s iPhone experience — and, more broadly speaking, the Apple experience — leaves a lot to be desired. It is expensive, clunky, and feels dated, especially around AI.
“Here the lack of AI nativity within Apple is clear and plain for most Apple users to experience. Android experiences from Samsung and Google Pixel offer more AI-native infusion in a way Apple doesn’t. For Apple users, it means more options on the horizon. If OpenAI and Ive succeed, we could see the emergence of new, thoughtfully designed AI devices that rival Apple’s in experience and aesthetics but are more innovative and ready for the AI-native era in a way Apple’s current products aren’t.
“That said, Apple isn’t standing still. They’re likely to ramp up their own AI integration, maybe even explore new device categories to stay ahead. It’s not a threat to Apple’s ecosystem yet, but it is a reminder that, in an AI-native era, yesterday’s leaders may not always have an advantage if they do not have AI-native cores.”
What’s the bigger picture for the industry?
Dekate: “This collaboration is part of a broader shift: AI is moving out of the digital realm and into the physical world. We’re seeing it with Google’s robotics and XR efforts, Meta’s smart glasses, Tesla’s Optimus, and Nvidia’s AI platforms. OpenAI’s potential move into devices and physical AI is an accelerant.
“The future isn’t just smarter software; it’s intelligent devices that feel natural to use. The industry is heading toward AI-first hardware, designed from the ground up for seamless, human-like interaction. And in that world, design matters more than ever.
“As AI becomes part of how we live and work, the companies that can make that experience intuitive, elegant, and even joyful, like Ive has done in his past, will lead the way.”
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Agentic AI – Ongoing coverage of its impact on the enterprise
Over the next few years, agentic AI is expected to bring not only rapid technological breakthroughs, but a societal transformation, redefining how we live, work and interact with the world. And this shift is happening quickly.
“By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously,” according to research firm Gartner.
Unlike traditional AI, which typically follows preset rules or algorithms, agentic AI adapts to new situations, learns from experiences, and operates independently to pursue goals without human intervention. In short, agentic AI empowers systems to act autonomously, making decisions and executing tasks — even communicating directly with other AI agents — with little or no human involvement.
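To make that definition concrete, here’s a deliberately tiny, purely illustrative Python sketch of the perceive-decide-act loop at the heart of such systems. Every name in it is hypothetical: decide() stands in for an LLM “brain,” and the actions are placeholders for real tool or API calls.

# Illustrative only: a stripped-down agentic loop. decide() stands in for an
# LLM call; act() stands in for real tools (APIs, databases, other agents).

def decide(goal: str, observations: list[str]) -> str:
    """Pick the next action toward the goal, given what has been observed."""
    if "report_written" in observations:
        return "done"
    if "data_ready" in observations:
        return "write_report"
    return "gather_data"

def act(action: str, observations: list[str]) -> None:
    """Execute the chosen action and record the result in the 'environment'."""
    if action == "gather_data":
        observations.append("data_ready")      # e.g., query an API or database
    elif action == "write_report":
        observations.append("report_written")  # e.g., draft output for review

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    observations: list[str] = []
    for _ in range(max_steps):                 # bounded autonomy as a safety rail
        action = decide(goal, observations)
        if action == "done":
            break
        act(action, observations)
    return observations

print(run_agent("summarize quarterly sales"))  # ['data_ready', 'report_written']

Swap decide() for a model call and act() for real tools, and you have the skeleton that most agent frameworks elaborate on.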
One key driver is the growing sophistication of large language models (LLMs), which provide the “brains” for these agents. Agentic AI will enable machines to interact with the physical world with unprecedented intelligence, allowing them to perform complex tasks in dynamic environments, which could be especially useful for industries facing labor shortages or hazardous conditions.
The rise of agentic AI also brings security and ethical concerns. Ensuring these autonomous systems operate safely, transparently, and responsibly will require governance frameworks and testing. Guarding against unintended consequences will also require human vigilance.
Because job displacement is a potential outcome, strategies for retraining and upskilling workers will be needed as the technology necessitates a shift in how people approach work, emphasizing collaboration between humans and intelligent machines.
To stay on top of this evolving technology, follow this page for ongoing agentic AI coverage from Computerworld and Foundry’s other publications.
Agentic AI news and insights
Putting agentic AI to work in Firebase Studio
May 21, 2025: Putting agentic AI to work in software engineering can be done in a variety of ways. Some agents work independently of the developer’s environment, working essentially like a remote developer. Other agents work directly within a developer’s own environment. Google’s Firebase Studio is an example of the latter, drawing on Google’s Gemini LLM to help developers prototype and build applications.
Why is Microsoft offering to turn websites into AI apps with NLWeb?
May 20, 2025: NLWeb, short for Natural Language Web, is designed to help enterprises build a natural language interface for their websites using the model of their choice and data to answer user queries about the contents of the website. Microsoft hopes to stake its claim on the agentic web before rivals Google and Amazon do.
Databricks to acquire open-source database startup Neon to build the next wave of AI agents
May 14, 2025: Agentic AI requires a new type of architecture because traditional workflows create gridlock, dragging down speed and performance. To get ahead in this next generation of app building, Databricks announced it will purchase Neon, an open-source serverless Postgres company.
Agentic mesh: The future of enterprise agent ecosystems
May 13, 2025: Nvidia CEO Jensen Huang predicts we’ll soon see “a couple of hundred million digital agents” inside the enterprise. Microsoft CEO Satya Nadella takes it even further: “Agents will replace all software.”
Google to unveil AI agent for developers at I/O, expand Gemini integration
May 13, 2025: Google is expected to unveil a new AI agent aimed at helping software developers manage tasks across the coding lifecycle, including task execution and documentation. The tool has reportedly been demonstrated to employees and select external developers ahead of the company’s annual I/O conference.
Nvidia, ServiceNow engineer open-source model to create AI agents
May 6, 2025: Nvidia and ServiceNow have created an AI model that can help companies create learning AI agents to automate corporate workloads. The open-source Apriel model, available generally in the second quarter on HuggingFace, will help create AI agents that can make decisions around IT, human resources and customer-service functions.
How IT leaders use agentic AI for business workflows
April 30, 2025: Jay Upchurch, CIO at SAS, backs agentic AI to enhance sales, marketing, IT, and HR motions. “Agentic AI can make sales more effective by handling lead scoring, assisting with customer segmentation, and optimizing targeted outreach,” he says.
Microsoft sees AI agents shaking up org charts, eliminating traditional functions
April 28, 2025: As companies increasingly automate work processes using agents, traditional functions such as finance, marketing, and engineering may fall away, giving rise to an ‘agent boss’ era of delegation and orchestration of myriad bots.
Cisco automates AI-driven security across enterprise networks
April 28, 2025: Cisco announced a range of AI-driven security enhancements, including improved threat detection and response capabilities in Cisco XDR and Splunk Security, new AI agents, and integration between Cisco’s AI Defense platform and ServiceNow SecOps.
Hype versus execution in agentic AI
April 25, 2025: Agentic AI promises autonomous systems capable of reasoning, making decisions, and dynamically adapting to changing conditions. The allure lies in machines operating independently, free of human intervention, streamlining processes and enhancing efficiency at unprecedented scales. But David Linthicum writes, don’t be swept up by ambitious promises.
Agents are here — but can you see what they’re doing?
April 23, 2025: As the agentic AI models powering individual agents get smarter, the use cases for agentic AI systems get more ambitious — and the risks posed by these systems increase exponentially.
Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?
April 15, 2025: Agentic AI promises to simplify complex tasks such as crypto trading or managing digital assets by automating decisions, enhancing accessibility, and masking technical complexity.
Agentic AI is both boon and bane for security pros
April 15, 2025: Cybersecurity is at a crossroads with agentic AI. It’s a powerful tool that can create reams of code in the blink of an eye, find and defuse threats, and be used both decisively and defensively. This has proved to be a huge force multiplier and productivity boon. But while powerful, agentic AI isn’t dependable, and that is the conundrum.
AI agents vs. agentic AI: What do enterprises want?
April 15, 2025: Now that this AI agent story has morphed into “agentic AI,” it seems to have taken on the same big-cloud-AI flavor that enterprises already rejected. What do they want from AI agents, why is “agentic” thinking wrong, and where is this all headed?
A multicloud experiment in agentic AI: Lessons learned
April 11, 2025: Turns out you really can build a decentralized AI system that operates successfully across multiple public cloud providers. It’s both challenging and costly.
Google adds open source framework for building agents to Vertex AI
April 9, 2025: Google is adding a new open source framework for building agents to its AI and machine learning platform Vertex AI, along with other updates to help deploy and maintain these agents. The open source Agent Development Kit (ADK) will make it possible to build an AI agent in under 100 lines of Python code; Google expects to add support for more languages later this year.
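For a sense of how little scaffolding that implies, here is a minimal sketch following the pattern of ADK’s public quickstart. The module path, Agent parameters, and Gemini model ID are drawn from Google’s announcement materials and should be treated as assumptions to verify against the shipping release.

# A sketch in the spirit of Google's Agent Development Kit (ADK) quickstart.
# Assumes `pip install google-adk`; verify module paths, parameter names, and
# model IDs against the version you install.
from google.adk.agents import Agent

def get_weather(city: str) -> dict:
    """A tool the agent can call; ADK reads the signature and docstring."""
    return {"status": "ok", "report": f"It is sunny in {city}."}  # canned demo data

root_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",  # illustrative model ID
    description="Agent that answers questions about the weather.",
    instruction="Use the get_weather tool to answer weather questions.",
    tools=[get_weather],
)

Running it for real would also require Gemini API credentials; the point is how compact the framework’s core abstraction is.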
Google’s Agent2Agent open protocol aims to connect disparate agents
April 9, 2025: Google has taken the covers off a new open protocol — Agent2Agent (A2A) — that aims to connect agents across disparate ecosystems. At its annual Cloud Next conference, Google said the A2A protocol will enable enterprises to adopt agents more readily because it bypasses the problem of agents built on different vendor ecosystems being unable to communicate with each other.
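The protocol’s discovery step is easy to picture: per Google’s announcement, an A2A server advertises an “Agent Card” (JSON metadata describing its name, skills, and endpoint) at a well-known URL. The sketch below assumes that path and an invented host, so check both against the current spec.

# Hypothetical A2A discovery sketch: fetch a remote agent's Agent Card to
# learn what it can do before sending it tasks. The well-known path follows
# the A2A spec as announced; the host is invented for illustration.
import json
import urllib.request

def fetch_agent_card(base_url: str) -> dict:
    with urllib.request.urlopen(f"{base_url}/.well-known/agent.json") as resp:
        return json.load(resp)

# card = fetch_agent_card("https://agents.example.com")  # hypothetical endpoint
# print(card["name"], [skill["name"] for skill in card.get("skills", [])])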
Riverbed bolsters AIOps platform with predictive and agentic AI
April 8, 2025: Riverbed unveiled updates to its AIOps and observability platform that the company says will transform how IT organizations manage complex distributed infrastructure and data more efficiently. Expanded AI capabilities are aimed at making it easier to manage AIOps and enabling IT organizations to transition from reactive to predictive IT operations.
Microsoft’s newest AI agents can detail how they reason
March 26, 2025: If you’re wondering how AI agents work, Microsoft’s new Copilot AI agents provide real-time answers on how data is being analyzed and sourced to reach results. The Researcher and Analyst agents take a deeper look at data sources such as email, chat or databases within an organization to produce research reports, analyze strategies, or convert raw information into meaningful data.
Microsoft launches AI agents to automate cybersecurity amid rising threats
March 26, 2025: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats. The new tools focus on tasks such as phishing detection, data protection, and identity management.
How AI agents work
March 24, 2025: By leveraging technologies such as machine learning, natural language processing (NLP), and contextual understanding, AI agents can operate independently, even partnering with other agents to perform complex tasks.
5 top business use cases for AI agents
March 19, 2025: AI agents are poised to transform the enterprise, from automating mundane tasks to driving customer service and innovation. But having strong guardrails in place will be key to success.
Nvidia introduces AgentIQ toolkit to connect disparate agents
March 21, 2025: As enterprises look to adopt agents and agentic AI to boost the efficiency of their applications, Nvidia this week introduced a new open-source software library — the AgentIQ toolkit — to help developers connect disparate agents and agent frameworks.
Deloitte unveils agentic AI platform
March 18, 2025: At Nvidia GTC 2025 in San Jose, Deloitte announced Zora AI, a new agentic AI platform that offers a portfolio of AI agents for finance, human capital, supply chain, procurement, sales and marketing, and customer service. The platform draws on Deloitte’s experience from its technology, risk, tax, and audit businesses, and is integrated with all major enterprise software platforms.
The dawn of agentic AI: Are we ready for autonomous technology?
March 15, 2025: Much of the prior AI work has focused on large language models (LLMs), with the goal of using prompts to get knowledge out of unstructured data. So it’s a question-and-answer process. Agentic AI goes beyond that: you can give it a task that might involve a complex set of steps that can change each time.
How to know a business process is ripe for agentic AI
March 11, 2025: Deloitte predicts that in 2025, 25% of companies that use generative AI will launch agentic AI pilots or proofs of concept, growing to 50% in 2027. The firm says some agentic AI applications, in some industries and for some use cases, could see actual adoption into existing workflows this year.
With new division, AWS bets big on agentic AI automation
March 6, 2025: Amazon Web Services customers can expect to hear a lot more about agentic AI from AWS in the future, with the news that the company is setting up a dedicated unit to promote the technology on its platform.
How agentic AI makes decisions and solves problems
March 6, 2025: GenAI’s latest big step forward has been the arrival of autonomous AI agents. Agentic AI is based on AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals.
CIOs are bullish on AI agents. IT employees? Not so much
Feb. 4, 2025: Most CIOs and CTOs are bullish on agentic AI, believing the emerging technology will soon become essential to their enterprises, but lower-level IT pros who will be tasked with implementing agents have serious doubts.
The next AI wave — agents — should come with warning labels. Is now the right time to invest in them?
Jan. 13, 2025: The next wave of artificial intelligence (AI) adoption is already under way, as AI agents — AI applications that can function independently and execute complex workflows with minimal or limited direct human oversight — are being rolled out across the tech industry.
AI agents are unlike any technology ever
Dec. 1, 2024: The agents are coming, and they represent a fundamental shift in the role artificial intelligence plays in businesses, governments, and our lives.
AI agents are coming to work — here’s what businesses need to know
Nov. 21, 2024: AI agents will soon be everywhere, automating complex business processes and taking care of mundane tasks for workers — at least that’s the claim of various software vendors that are quickly adding intelligent bots to a wide range of work apps.
Agentic AI swarms are headed your way
Nov. 1, 2024: OpenAI launched an experimental framework called Swarm. It’s a “lightweight” system for the development of agentic AI swarms, which are networks of autonomous AI agents able to work together to handle complex tasks without human intervention, according to OpenAI.
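For a flavor of what Swarm code looks like, here is a handoff example adapted from the README of OpenAI’s experimental repository; it assumes the swarm package installed from GitHub and an OpenAI API key in the environment.

# Adapted from the handoff example in OpenAI's experimental Swarm repo.
# Assumes `pip install git+https://github.com/openai/swarm.git` and an
# OPENAI_API_KEY set in the environment.
from swarm import Swarm, Agent

client = Swarm()

english_agent = Agent(name="English Agent", instructions="You only speak English.")
spanish_agent = Agent(name="Spanish Agent", instructions="You only speak Spanish.")

def transfer_to_spanish_agent():
    """Hand the conversation to the Spanish-speaking agent."""
    return spanish_agent

english_agent.functions.append(transfer_to_spanish_agent)

response = client.run(
    agent=english_agent,
    messages=[{"role": "user", "content": "Hola. ¿Cómo estás?"}],
)
print(response.messages[-1]["content"])

A handoff function that simply returns another Agent is the entire coordination mechanism, which is what makes the framework “lightweight.”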
Is now the right time to invest in implementing agentic AI?
Oct. 31, 2024: While software vendors say their current agentic AI-based offerings are easy to implement, analysts say that’s far from the truth.