Who’s the winner in the new Microsoft-OpenAI deal?
It feels like the world’s longest and most public divorce: In late April, Microsoft and OpenAI once again renegotiated the slow-motion breakup that has been playing out between the two over the last several years.
At first glance, it looks like a win-win. In the broadest terms, OpenAI gets more freedom to set its own course — it can sell its models to Microsoft competitors such as Amazon and Google, for example — while Microsoft gets a better revenue deal and first rights to the newest OpenAI technologies into the next decade.
But in truth, one company got a better deal than the other. Who came out ahead? To figure that out, we first need to look at the most important details of the new agreement.
A new deal after a lot of rancor
Keep in mind that this new agreement didn’t arise from thin air. It’s a direct result of Microsoft’s threats in March to sue OpenAI after OpenAI inked a $50 billion deal with Amazon that makes the latter the only third-party cloud provider for OpenAI’s enterprise platform for building and running AI agents.
After the Amazon-OpenAI contract was signed, Microsoft claimed it violated its exclusive cloud agreement with OpenAI. A Microsoft source told the Financial Times, “We know our contract. We will sue them if they breach it. If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them.”
That led to negotiations, and ultimately the pact between Microsoft and OpenAI that loosens the bonds between the two companies, making it easier for them to go their own ways. It also significantly changes the financial relationship between them.
What OpenAI got
The deal gave OpenAI what it desperately wanted — a fair amount of independence from Microsoft. The biggest plus for OpenAI is that it can now sell its AI models through companies other than Microsoft, including on Google Cloud and Amazon Web Services. (Until now the models were only available on Microsoft Azure.)
With that new freedom, OpenAI can more easily chart its own course rather than have Microsoft determine it.
OpenAI also gets something vital for its expected IPO — an eventual limit on the amount of money it has to pay to Microsoft. OpenAI now pays 20% of its revenue to Microsoft. Under the new terms, OpenAI will continue to pay until 2030, but the total amount of that payment will be capped. The companies haven’t disclosed what that cap is.
The cap is vital for OpenAI, because investors will be more likely to buy OpenAI stock if the company’s long-term profitability isn’t weighed down by payments to Microsoft.
What Microsoft got
Microsoft gets a great deal, too. Even though OpenAI can now sell to Microsoft rivals, it remains OpenAI’s primary cloud partner; OpenAI products have to ship on Azure before they’re available from competitors. That gives Microsoft a considerable “first mover” advantage, because its customers will get OpenAI’s latest products before Amazon and Google’s customers will.
The deal also extends Microsoft’s stranglehold on OpenAI intellectual property through 2032. Microsoft has been spending big on its own AI development, so by the time the exclusive arrangement ends, Microsoft will likely no longer need it.
The deal will also do a lot to fatten Microsoft’s bottom line. It no longer has to pay OpenAI royalties for reselling OpenAI products on Azure. Instead, Microsoft now keeps all the revenue for itself. And, as outlined above, Microsoft still gets 20% of OpenAI’s revenue until the cap is reached.
There’s one final hidden benefit: The new at-a-distance relationship between Microsoft and OpenAI makes it less likely Microsoft could be prosecuted under anti-trust laws in the US or overseas. The US Federal Trade Commission has already looked into the relationship several times, and issued a warning about potential antitrust violations.
Then-FTC chair Lina Khan last year warned, “The FTC’s report sheds light on how partnerships by big tech firms can create lock-in, deprive start-ups of key AI inputs, and reveal sensitive information that can undermine fair competition.”
So who’s the real winner?
Microsoft comes out on top. It no longer has to pay royalties to OpenAI, retains first-mover rights to the latest OpenAI technology, keeps exclusive rights to the AI firm’s intellectual property through 2032, and gets 20% of OpenAI revenues until a cap is reached. In addition, the company is unlikely to be investigated for antitrust violations. Beyond that, it’s still a big stockholder in OpenAI, so it will share in OpenAI’s success.
OpenAI certainly gets benefits as well — but they’re not nearly as significant as Microsoft’s. It’s yet one more example of how Microsoft has used its relationship with OpenAI to jump-start its own AI capabilities and feather its nest for the future.
AI is ready to take over Python programming, but not much else
Tests of how well 19 large language models (LLMs) complete complicated multi-step tasks have shown that they are both error-prone and, in many cases, unreliable.
The findings are contained in a preprint paper, LLMs Corrupt Your Documents When You Delegate, written by Microsoft researchers Philippe Laban, Tobias Schnabel and Jennifer Neville based on a benchmark they created called DELEGATE-52 that allowed them to simulate workflows that might be part of a knowledge worker’s tasks. The paper is currently under review.
They said that the benchmark contains 310 work environments across 52 professional domains including coding, crystallography, genealogy and music sheet notation. Each environment consists of real documents totaling around 15K tokens in length, and five to 10 complex editing tasks that a user might ask an LLM to perform.
And, they stated in the paper’s abstract: “Our analysis shows that current LLMs are unreliable delegates: they introduce sparse but severe errors that silently corrupt documents, compounding over long interaction.”
Those mistakes are significant, they said. “The findings show that current LLMs introduce substantial errors when editing work documents, with frontier models (Gemini 3.1 Pro, Claude 4.6 Opus, and GPT 5.4) losing an average 25% of document content over 20 delegated interactions, and an average degradation across all models of 50%.”
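Degradation figures like these can be approximated with a simple content-retention metric. The sketch below is my own illustration, not the paper’s evaluator: it measures what fraction of an original document’s words survive, in order, into the edited version.

```python
import difflib

def content_retention(original: str, edited: str) -> float:
    """Fraction of the original document's words still present, in order,
    after editing. A crude stand-in for domain-specific evaluators."""
    matcher = difflib.SequenceMatcher(None, original.split(), edited.split())
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / max(len(original.split()), 1)

doc = "alpha beta gamma delta epsilon zeta eta theta"
after_edits = "alpha beta delta epsilon theta"  # some content silently dropped
print(round(content_retention(doc, after_edits), 3))  # 0.625
```

A real harness would compare semantic units (clauses, cells, code statements) rather than raw words, but the principle — score each delegated edit against the pre-edit artifact — is the same.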
Benchmark exercise receives a thumbs up
Brian Jackson, principal research director at Info-Tech Research Group, found the findings very interesting. “Putting a list of LLMs to the test across different work domains yields a lot of useful insights,” he said. “I think this type of benchmark exercise could be helpful to enterprise developers who are looking to leverage agentic AI to automate specific workflows and understand the limits of what can be achieved.”
However, he said, “what we shouldn’t conclude from this is that, because these foundation models caused document degradation after 20 edits, they can’t be used to automate work in a certain field. It just means they can’t do all of the work as they are currently constructed.”
But, Jackson stated, “in an enterprise environment where having an accurate output is crucial, you wouldn’t take that approach. You would design the automation flow with stronger guardrails in place to prevent errors. This could be done by using multiple agents that play different roles, such as one that makes the edits and another that checks for errors and makes corrections.”
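Jackson’s editor-plus-checker idea can be sketched as a control loop. The functions below are toy stand-ins for LLM calls, and all names are illustrative rather than drawn from any agent framework:

```python
def guarded_edit(document: str, edit_agent, check_agent, max_retries: int = 3) -> str:
    """Run an editing agent, then a checking agent; retry if the edit is rejected.

    edit_agent and check_agent stand in for LLM calls; this illustrates only
    the control flow of a two-agent guardrail, not a production system.
    """
    for _ in range(max_retries):
        candidate = edit_agent(document)
        if check_agent(document, candidate):
            return candidate
    raise RuntimeError("edit rejected by checker after retries")

# Toy stand-ins: the editor marks a task done; the checker rejects any
# candidate that silently dropped a line of the original document.
def mark_done(doc: str) -> str:
    return doc.replace("TODO", "DONE")

def no_lines_lost(original: str, edited: str) -> bool:
    return len(edited.splitlines()) == len(original.splitlines())

doc = "line one\nTODO: finish section two\nline three"
print(guarded_edit(doc, mark_done, no_lines_lost))
```

In practice the checker would itself be a model or a deterministic validator, and a rejection would feed the failure reason back into the next editing attempt.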
Sanchit Vir Gogia, chief analyst at Greyhound Research, said, “the Microsoft paper should be read as a serious warning about delegated AI, not as a claim that enterprise AI has failed. That distinction matters. The paper is still a preprint, so it deserves careful handling, but its central question is exactly the one CIOs should be asking: can AI preserve the integrity of complex work over repeated delegation?”
The study, he said, is stronger than what he described as “the usual AI benchmark theatre,” because it tests work products, not just looking at clever one-off answers. “It uses reversible editing tasks, domain-specific evaluators, and a round-trip method to see whether a document returns intact after repeated edits. In too many cases, it does not.”
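The round-trip method Gogia describes can be rendered directly: apply an edit, apply its intended inverse, and flag any drift from the original as silent corruption. This is a minimal sketch of the idea, not the paper’s actual harness:

```python
def round_trip_intact(document: str, edit, inverse_edit) -> bool:
    """After an edit and its inverse, the document should be identical to
    the original; any difference signals silent corruption."""
    return inverse_edit(edit(document)) == document

doc = "Revenue: 100\nCosts: 40\n"

# A clean, reversible edit pair passes the round trip.
raise_revenue = lambda d: d.replace("Revenue: 100", "Revenue: 120")
lower_revenue = lambda d: d.replace("Revenue: 120", "Revenue: 100")
print(round_trip_intact(doc, raise_revenue, lower_revenue))  # True

# An edit that also mangles unrelated content fails it.
sloppy_edit = lambda d: d.replace("Revenue: 100", "Revenue: 120").replace("Costs", "Cost")
print(round_trip_intact(doc, sloppy_edit, lower_revenue))  # False
```

The appeal of the round trip is that it needs no ground-truth answer key: the original document is its own oracle.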
That is the point, explained Gogia. “This is not merely about hallucinations. It is about artefact integrity.”
AI is ‘not yet trustworthy enough’
He added that the headline finding is “uncomfortable: even the strongest models corrupt about a quarter of document content by the end of long workflows, while average degradation across all tested models reaches roughly 50%. The paper also finds that performance varies sharply by domain. Python is the only domain where most models are ‘ready,’ and the best model reaches that threshold in only 11 of 52 domains.”
AI is not failing because it cannot write, said Gogia, it is failing because it cannot yet preserve.
The study, he pointed out, “is especially useful because it shows how errors accumulate. Bigger documents worsen outcomes. Longer interaction worsens outcomes. Distractor files worsen outcomes. Short tests flatter the system, while longer workflows expose it. That maps rather neatly to the enterprise world, where work is messy, files are stale, context is noisy and the most important documents are rarely the simplest ones.”
The honest conclusion, he said, “is not that AI should be kept out of enterprise workflows. It is that delegated AI is not yet trustworthy enough to be left alone with consequential artefacts.”
When AI edits an important document such as a contract, a ledger, a policy, a codebase, a board paper, or a compliance record, Gogia warned, the enterprise still owns the damage.
Mitigation approaches
To prevent that damage, Jackson suggested, enterprises can do additional training and fine-tuning of models to be better adapted to their specific workflows: “These foundation models are very good at doing a lot of different tasks, but less good at doing one specific task very well. So, enterprises that want to achieve that may need to improve the models themselves by training on their own data.”
For example, “[the Microsoft paper] points out one multi-agent setup that led to more degradation instead of less, so the method to detect degradation must be well-designed to be effective,” he said. “Another approach that some enterprise platforms have introduced is a way to deterministically verify the output for accuracy using mathematical verification. So, knowing what domains prove more difficult for a single LLM to automate is useful, as developers can plan to add more verification steps to the process.”
He said, “depending on the model, for example, if it’s totally open source or if it’s proprietary, you can have more flexibility in terms of how much you can customize it. So, an enterprise developer might look at these results, pick the LLM best at automating their desired domain, and then send it in for additional training to master the process.”
People do not disappear
According to Gogia, the paper also shows something more precise than ‘AI still needs people.’ “It shows that AI changes the human layer from production to supervision, validation, and accountability. That is a rather different operating model from the one being sold in many boardroom conversations.”
People, he said, “do not disappear. Their work moves. This is the uncomfortable part for enterprises chasing headcount reduction. The people best placed to catch AI errors are often the same people organizations are hoping to replace, reduce, or redeploy. Remove too much domain expertise from the workflow, and the enterprise also removes the people who know when the AI has quietly damaged the work.”
Expertise becomes more valuable, not less, said Gogia: “The paper reinforces this because stronger models do not merely delete content. They often corrupt it. Weaker models are easier to catch when they visibly drop material. Frontier models are more awkward because the content remains present but becomes wrong, distorted, or subtly altered. That requires knowledgeable review, not casual inspection.”
This article originally appeared on CIO.com.
WWDC: From NeXTStep for Apple to Apple’s next step for AI
As Apple heads toward next month’s Worldwide Developer Conference (WWDC), cast your mind back almost 30 years. That’s when something happened that arguably put events in motion that led to Apple becoming the company it is today. That was when Apple co-founder Steve Jobs returned to the top job at WWDC 1997 — the first such event after Apple acquired NeXT.
The big debt to NS
It took until 2000 to fully realize what the NeXT purchase meant; that’s when the Mac OS X Public Beta was released. The operating system has seen many twists and turns since then, but the NeXTStep OS acquisition forms the basis on which the Apple software ecosystem has been built. Mac, iPhone, iPad – even Apple Watch and Vision Pro – all share elements of it.
You can see its traces each time you use an application that calls a macOS API carrying the NS (‘NeXTStep’) prefix. That means you’re using NeXT when you work in SwiftUI, use Apple’s core frameworks, or write code that runs across different platforms in the current ecosystem. Despite the many names for Apple’s platforms, they all have a little NeXT in common.
Apple critically needed a new, modern operating system at the time. The company had fallen into the doldrums with its classic Mac OS, and competitors had forged ahead, at least in market share. Michael Dell, Time Magazine, and almost everyone else expected the company to collapse. NeXT was the salvation, Jobs the icon, and history the prize.
The next challenge now
Today’s Apple faces a fresh existential challenge, and while much of it feels media-driven, the company does need to introduce an intelligence layer around and upon its platforms, alongside the tools developers need to exploit AI within their applications.
Apple knows this, too, which is why it already offers Apple Intelligence APIs for developers to use in their apps. The company also knows developers need a way to market those software ideas and get them into the hands of end users; that’s what the App Store provides.
When Apple wove NeXT into its operating system, it somehow managed to provide developers with better tools, modern, enduring foundations, frameworks and everything else needed to build an ecosystem that extends across multiple product families at a range of prices and technological advancement — from the $499 MacBook Neo to the $3,499 Vision Pro. You can build applications for any or all of these platforms using components Apple provides, along with what you bring yourself. To a great extent, all of this potential was unlocked by the acquisition of NeXTStep and its use in OS X at the turn of the century.
Telling stories
No doubt, developers are eager to discover the extent to which Apple has managed to join the circle of AI development on its platforms. They surely hope for powerful new APIs to enhance their products with a new intelligence layer, even while Apple itself needs to offer developers the same thing to keep them loyal to its platforms.
If you squint just a little bit, the same challenges that haunted Apple in the late ‘90s echo again today. Apple wants to reinvent itself for AI without sacrificing all the benefits of its existing ecosystem. It wants to do so while making sure its developer community buys into its chosen direction. To help achieve this, it can lean heavily into its inherent hardware advantage: Not only can its products run the apps developers build, but they are also fantastic platforms to build on in the first place. All the same, it needs to convince them with a narrative that resonates, which means that while WWDC in 1997 was all about NeXTStep, WWDC 2026 is all about which steps Apple takes next.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.