Computerworld.com
Mozilla appoints new CEO, unveils new AI focus
Vowing to make Mozilla the “world’s most trusted software company,” the company’s board of directors announced Tuesday that it has appointed Anthony Enzor-DeMeo as its new CEO, with a mandate to achieve that lofty goal.
Enzor-DeMeo, former GM of Firefox, wrote in a blog that becoming trusted “is not a slogan, but a direction involving three strategies, key among them being that privacy, data use, and AI must be clear and understandable. Controls must be simple. AI should always be a choice — something people can easily turn off.”
In addition, he wrote, the firm’s business model must align with trust, and Firefox “will evolve into a modern AI browser.”
The new CEO replaces interim CEO Laura Chambers, who is returning to the Mozilla board. In a release, Enzor-DeMeo stated, “the browser is AI’s next battleground. It’s where people live their online lives and where the next era’s questions of trust, data use, and transparency will be decided.”
Describing Mozilla’s strategy announcement as an “interesting development,” Sanchit Vir Gogia, the chief analyst at Greyhound Research, said that the debate around browser-based AI has been framed far too narrowly and often presented as a choice between two extremes.
On one hand, he said, “Chrome and Edge are racing ahead, turning the browser into an always-on AI surface optimized for consumer productivity, cloud integration, and ecosystem scale.”
On the other, Gogia pointed out, “Mozilla is deliberately slowing things down, keeping AI optional, bounded, and subordinate to user and enterprise consent. Enterprises recognize the logic in both positions. But in practice, they are choosing a third path.”
The core issue, he added, “is not whether AI belongs in the browser. It already does. The issue is what happens when the browser stops being a passive interface and becomes an active participant inside the enterprise trust boundary. Once AI is embedded at the browser layer, it can read across tabs, infer user intent, summarise internal systems, and, in some cases, act autonomously.”
At that point, said Gogia, “the browser is no longer just a tool. It is an actor. And that is where enterprise governance begins to fracture.”
He predicted that the next phase of this market will be defined not by who ships the most AI features, but by who solves browser level accountability first. “The browser has become too powerful to be treated as a commodity endpoint,” he said. “Enterprises are no longer asking which browser is best. They are asking which browser belongs where.”
Mozilla, said Gogia, “has helped surface the risk. Big Tech has accelerated it. Island browsers are where enterprises are quietly resolving it. Whoever manages to combine intelligence, control, and trust at the access layer will not just win CIO confidence, they will define what safe, enterprise grade AI actually looks like in the real world.”
Brian Jackson, principal research director at Info-Tech Research Group, said, “while we don’t yet know Mozilla’s AI strategy or how it will go about delivering it to users, we do know that it has an entirely different set of incentives than its competitors. Google and Microsoft are heavily invested in AI and therefore want to increase the number of users and engagement in these products.”
Mozilla, he said, doesn’t have those incentives to push users towards AI consumption, so it can continue to focus on its core mission of a privacy-first browser that prioritizes trust.
Many users, Jackson pointed out, “will like the personalized services and time-saving productivity features that come from an AI-first browser experience. But others may wonder if the data they are exposing to these AI models, and by proxy big tech giants, is worth more than having an AI agent suggest how to write an email reply for you.”
One perspective, he said, “would say that Mozilla is just behind its competitors in terms of building out an AI-enhanced web experience for its users, and that may end up costing it market share. Another point of view is that it’s not rushing to extract as much user data as it can to feed an AI algorithm as part of a competition to improve an AI algorithm.”
Jackson predicted, “if users start to feel like Chrome or Edge is pushing AI too aggressively, or worse yet, they are creeped out when AI rehashes their personal information and presents recommendations around it, they might actually look for alternatives like Mozilla.”
In addition, Gogia added, enterprises “are under real pressure to extract productivity gains from AI. They will not permanently trade capability for comfort. If Chrome and Edge succeed in stabilizing governance, improving auditability, and avoiding a major enterprise failure, tolerance for AI-first defaults will rise. Mozilla’s challenge is execution. It must prove that governance first does not mean capability last.”
Apple in enterprise — industry execs on what works, and what they want in ’26
With Apple Silicon its current crown jewel, Apple has continued to rapidly build its presence in enterprise computing throughout 2025, generating significant market share gains as companies accelerate Apple deployments across their fleets.
What’s driven Apple’s progress this year — and what should we expect from the company in the year ahead? To find out, I spoke to execs from a range of companies in the space: Fleet, Hexnode, Iru, Jamf, JumpCloud, MacStadium, SAP, and a newer entrant in the Apple enterprise scene, MacPaw.
For the most part, everyone I spoke with agreed that Apple Silicon, and the profound power and performance advantages of the current Mac fleet, has been the biggest thing to celebrate. They also suggested that what Apple does in artificial intelligence (AI) may be the biggest inflection point for the coming year.
What Apple says
Speaking in November, Apple’s Jeremy Butcher, who handles business product marketing, told me: “It’s so great to see the momentum [around Macs in the enterprise]. As you know, it’s very intentional.” Butcher also stressed how the company considers what business users need and works to introduce those features as they make sense.
“We’re seeing tremendous momentum around Mac in the enterprise,” said Apple’s Colleen Novielli, who focuses on MacBook product marketing. “We’re seeing this amazing spectrum of adoption across the Mac range.”
With that in mind, it makes sense to speak to people leading the charge in the enterprise, including Apple’s device management partners, major deployment partners, and others.
SAP: The mass deployment story
Martin Lang heads up enterprise mobility at SAP. He has led the company to deploy tens of thousands of Macs, iPhones, and iPads worldwide. For him, Apple in 2025 was all about scale. “At SAP, Macs now account for about 50% of our workforce, more than 54,000 devices. I didn’t quite think this was possible just a few years back,” he said.
That deployment has translated into “verifiably fewer support cases and longer productive workdays,” he told me. “People trust the devices to serve them well for years, and that’s true for both entry-level and high-performance machines.”
Apple made significant improvements to its mobile device management (MDM) systems in 2025, and these changes extended to visionOS devices. SAP has about 100 Vision Pro units deployed across the company; most are now managed to the same compliance standard as other devices, making them viable for use with corporate data.
“One thing I’d consider ‘bad’ is that Apple still struggles with enterprise-scale logistics,” said Lang. He noted that iPhone 17 has been severely backordered since launch, meaning some SAP staffers have waited months for a new device, and speculated that distribution and supply chain challenges might have contributed to the delays.
Lang also wants to see Apple tout its enterprise success stories. “Enterprise customers did amazing things with Apple in 2025, however many of those stories stay hidden,” he said. “Enterprises want concrete, peer-driven examples, not just platform announcements…. I think Apple could push people more to share stories.”
Looking forward, Lang shared his hope that enterprise users will learn that iPhones have a much wider set of use cases than just collaboration and time management. “In our personal lives, we fully leverage mobile-specific capabilities like push notifications, biometrics, location awareness, [and] offline intelligence. However, in enterprises, mobile devices are still mostly just used for collaboration purposes,” he said.
He also noted how SAP is using Badges in Apple Wallet to provide door entry access across the company. “This is an example of how these tools can be used for so much more in business.”
Jamf: Now we have the hardware, here comes the AI
Michael Covington, vice president of portfolio strategy at Apple device management and security vendor Jamf, called 2025 an “incredible year” for Apple in the enterprise. Jamf continues to give IT teams ever more power to configure, secure, and support their fleets, but it all starts with the hardware.
“This year’s release of the MacBook Air with the M4 processor may have been the quiet highlight for many large organizations that have been waiting for the right price and performance boost before making Apple’s renowned end-user experience part of the standard issue tech,” he said.
Covington has also seen more businesses begin to deploy Macs. “Over the course of the year, we watched as organizations from across a broad set of sectors stopped treating the Mac as an ‘exception’ and embraced it as the device that is driving growth,” he said.
In part, this is because the MacBook Air now delivers the kind of performance you’d once look to a pro machine to achieve, all at a cost that makes it easy and attractive for mass deployment. “Couple these hardware advancements with Apple’s investment in expanding management and security hooks, and you’ve got a recipe for success in the enterprise.”
AI is the next opportunity for Apple, Covington said. “It’s no secret that Apple waited for generative AI technology to mature before introducing its own Apple Intelligence suite to the market.”
In part, this reflects the company’s deep commitment to user privacy, which makes AI development challenging, but “also presents a huge opportunity to differentiate how AI is presented to end users for work.
“We are excited to see how Apple continues to enable their devices to seamlessly fit into enterprise IT in the year ahead,” Covington said. “As AI becomes a more integrated component of the end user experience, Apple is uniquely positioned to unlock a new wave of productivity, while also ensuring users feel safe and secure — whether engaging with a work application or personal data.”
MacStadium: Power and consistency
MacStadium CTO Chris Chapman saw lots of great moves from Apple across the year. In hardware, the M4 Air increased RAM and added powerful AI processing at a sub-$1,000 price. “This opened the floodgates for the TCO and value discussion around Apple as a preferred device in business,” he said.
Once you have the hardware, how do you manage it? Declarative Device Management was a “missing enterprise capability” that has now been realized. “DDM opens the door for Apple to be considered an enterprise platform that IT teams can use to manage Apple devices as business-owned assets,” Chapman said.
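To make the idea concrete: under DDM, the management server hands the device standalone JSON “declarations” that the device applies and reports on autonomously, rather than waiting for the server to push each command. The sketch below is a simplified, illustrative declaration; the identifier and payload values are hypothetical examples, not taken from any deployment described in this article.

```json
{
  "Type": "com.apple.configuration.passcode.settings",
  "Identifier": "com.example.passcode-policy-v1",
  "Payload": {
    "MinimumLength": 8,
    "RequireAlphanumericPasscode": true
  }
}
```

Because the policy travels as a self-describing document, the device can enforce it and report status even when offline, which is part of why admins treat DDM as the missing piece for managing Macs as business-owned assets.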
He also welcomed the ’26 series of operating system upgrades. “While we love creative names like Tahoe and Sierra, IT departments are tasked with consistency, repeatability, and stability. Apple finally adopted consistent, standard versioning with OS26, iOS26, etc. Now, a fleet of devices can be tracked by a linear-based version across form factors. This is a step toward IT standardization that has long been missing, giving IT administrators a much-needed capability.”
Chapman, like others, is watching Apple’s unfolding AI strategy. Apple now has “some of the best and most powerful hardware to run local AI models with very low energy consumption,” he said, but lacks its own compelling solutions for enterprise AI.
“Apple’s individual assistant features are lagging compared to other AI platforms like ChatGPT or Gemini,” he said. “It’s also very focused on performing tasks for the individual, but not as capable of learning or knowing your business or corporate data. Many IT departments blocked or disabled Siri because of visibility and management concerns. For the enterprise, this is a miss and somewhat confusing compared to Microsoft’s Copilot. Apple is still formulating its broader AI play, but the vague approach taken this year was lackluster for the enterprise.”
Those challenges may not last, Chapman says. “Apple is restructuring its AI team, and there is talk of a partnership with Google. Moving Apple into a strategic position to be leveraged for AI in business is an intriguing and powerful direction.”
Still, Apple’s visionOS work continues to seek use cases, Chapman said. “How Vision Pro can be an effective enterprise device and used at scale is unclear,” he said. “To be fair, I don’t think anyone has nailed the use case or technology yet, but Apple seemed uncharacteristically farther away from the field in form and function.” He does expect new form factor AI devices to appear and suspects that these, combined with visionOS, will open new opportunities for business uses.
The same may be true, he said, about any new folding devices released next year, which could “open new use cases and targets for application developers.”
Fleet Device Management: Amazing hardware
Mike McNeil, Fleet Device Management CEO, was impressed by Apple’s move to open enterprise opportunities with the introduction of MDM migration tools this year.
“Apple nailed it with openness in 2025, and I expect to see more of the same in 2026,” he said. “The push to make it easier to migrate MDMs was a major signal, even if it wasn’t quite as easy as we thought it would be at first. I see a lot of customers still using Fleet’s original custom migration tool, because it’s a bit more of a paved road. For example, one customer migrating 40,000 Macs to Fleet ‘the new way’ experienced an outage from Apple Business Manager midway through the migration, and that was tough. But I appreciate the energy Apple is investing here, even if some parts are rough around the edges.”
McNeil also told me his own personal upgrade story. “I finally upgraded from my old Intel Mac to a 15-in. M4 MacBook Air last week, and…holy cow, what a performance improvement. The fact that I’ve used a 2019 MacBook Pro through the entirety of my time building this company is a testament to the long-term value of an investment in Apple hardware, but also, the new stuff is amazing.”
Hexnode: We can get much more from Apple
I spoke once again with Hexnode CEO Apu Pavithran. He sees the conversation around Apple in the enterprise changing. Where before it might have been characterized by searching for reasons to deploy Apple’s kit, it is now all about maximizing the benefits of equipment business users already have.
“This is big in and of itself and Apple’s surely happy the fundamentals continue to move in their favor: happier employees, better retention, lower support costs,” he said. “As AI capabilities mature and management tools deepen, Apple’s privacy-first approach becomes a competitive differentiator. I expect the momentum we’ve seen to accelerate as the business case strengthens.”
Pavithran saw a lot to celebrate during the year: Mac sales are accelerating, user satisfaction is high, and Apple can continue to show a “positive feedback loop between workplace performance and subsequent tech investment.”
Apple Silicon delivers the best possible power, performance, and reliability. “I’m consistently impressed with Apple’s hardware — it’s never been more reliable than right now,” said Pavithran. “Failures are so low on the list of IT problems with five-year device lifecycles becoming standard. Again, the improved total cost of ownership, sustainability benefits, and resale value only strengthen the company’s business case.”
Pavithran, too, sees the Apple hardware story forming strong foundations for the company’s upcoming AI story. “From my perspective, Apple’s done a terrific job embracing this AI moment in line with the privacy requirements of big business. The one-two punch of stronger hardware and on-device data processing makes it easier for security-conscious companies to say yes. Unlike cloud-dependent competitors, Apple’s privacy-first approach goes a long way toward alleviating data concerns about AI. They’re threading the needle between market evolution and compliant, careful onboarding.”
As for device management, Pavithran sees Apple’s willingness to continuously refine its approach as proof that it is listening to enterprise IT. “For example, it’s promising to see more granular restrictions on Apple Intelligence being released after we discussed it earlier this year. As usual, this shows the company cares about what enterprise users want and adapts its solutions accordingly.”
Challenges continue, of course – regulatory oversight, particularly in Europe, will likely make IT harder in the region, while spatial computing continues to seek real use cases. “We hope for additional shared device management features: Return to Service, Shared iPad, and Authenticated Guest Mode are available, but currently lack depth. Admins should be given extra room when it comes to isolating sessions, user sandboxing, and pre-staging apps based on the next user’s role,” he said.
Iru: another Mac story
Weldon Dodd, distinguished engineer at Iru (the company formerly known as Kandji), also sees Apple Silicon as a triumph. “[The] Mac has never been in a stronger hardware position where it now simply dominates the price, performance, and battery life balance for most use cases.”
Dodd also noted that while we wait for Apple to introduce the next evolution of its approach to AI, its existing hardware ecosystem is ready for action. “While it doesn’t enter into the equation for enterprise AI model training with specialized server hardware from vendors like Nvidia, Apple’s chips have been shown to be very capable at running AI workloads on the endpoint,” he said.
And while 2026 may be an inflection point where the advances slow a bit, “I expect Apple to maintain its lead on hardware performance through the year.”
Enterprise deployment is part device and part device management. Like most such firms in the Apple space, Iru makes use of the systems Apple creates; this year’s big upgrade was around Platform SSO, which Apple improved at WWDC, Dodd said.
“Platform SSO continues its siren song luring Mac admins towards an integration with cloud identity that still presents a bit of friction for admins to fully implement. PSSO improved in a few important ways this year in macOS 26 Tahoe, where the two big features are authentication with Automated Device Enrolment during Setup Assistant for initial account creation with IdP credentials and being able to use PSSO sign in at the File Vault unlock screen. This launched with support from a single IdP vendor, but another has joined. And it would be great to see more support from other vendors in 2026, as well as further improvements from Apple to make this into a truly seamless marriage between macOS and cloud identity.”
Dodd also sees a second wave of change coming for AI. He believes we all became more aware of the limitations of genAI during the year, which means IT admins will now focus on learning how to use Model Context Protocol with agentic AI to pull together disparate systems. “There’s real potential for a new kind of integration layer in enterprise IT that will allow for real insights to be developed by bringing data together from what have been separate tools,” he said.
JumpCloud: Reality must catch up
Joel Rennich, senior vice president for product management at JumpCloud, welcomes the improvements in DDM and Platform SSO, but warns: “It will take some time for MDM vendors and Identity Providers (IdPs) to actually support this.”
“Apple mostly kept improving on the security and identity threads that they’ve been pulling at for the last few years. There weren’t any major new changes for vendors to have to pivot to, or new flows to support,” he said.
But he warned that few vendors “support the full scope” of the changes that Apple has instituted in recent years. “Since much of the Apple improvements over the last few years are not in any way industry standard, this has become very hit or miss.”
Rennich warned that the biggest challenge for Apple in the enterprise is Apple itself. “The aspects that make Apple great in the consumer space are many times inherently at odds with what enterprises are looking for, and in most cases Apple refuses to compromise on aspects like user privacy and experience,” he said. While he doesn’t expect Apple to change its stance, he expects business users to continue to request more controls and management tools.
MacPaw: A new wave for enterprise IT
Ukrainian developer MacPaw recently introduced CleanMyMac Business to the Jamf Marketplace. Dan Jaenicke, MacPaw’s director of B2B product strategy, also sees Apple’s enterprise success as powered by Apple Silicon. “The hardware continues to outperform competitors, and Macs are lasting longer than ever. That longevity is invaluable for IT teams, allowing them to focus on productivity and strategic initiatives instead of constantly replacing devices,” he said.
However, the AI story must evolve, he said, pointing to MacPaw data that shows almost 60% of Mac admins already use AI at work.
“The spotlight in 2026 will be on Apple’s progress in enterprise AI. IT leaders are looking for tools that make workflows smarter and more secure. With key developer and executive departures on the horizon, the entire community will be closely watching to see whether Apple can maintain momentum, lead in AI adoption, and continue to balance hardware and software innovations,” he said.
Microsoft Copilot can boost your writing in Word, Outlook, and OneNote — here’s how
One of the most enticing uses for generative AI is to help you write. Anyone can get writing help from Microsoft’s Copilot genAI tool via the free Copilot web or mobile app. But Copilot becomes especially useful when it’s integrated with various Microsoft 365 apps.
As you compose, edit, or view a document in Word, for example, you can summon Copilot to assist you in several ways: It can generate rough drafts, polish or change the tone of your writing, and summarize long passages of text. Copilot can also help you compose or summarize emails in Outlook and help you rewrite or summarize notes in OneNote.
In this article:
- Who can use Copilot in Microsoft 365 apps
- Have Copilot generate a rough draft
- Ask Copilot for suggestions to improve your writing
- Have Copilot rewrite text
- Have Copilot summarize long documents, notes, emails, or threads
If you have a Microsoft 365 Personal, Family, or Premium subscription, Copilot access in Word, OneNote, Outlook, and other Microsoft 365 apps is built into your plan. But Microsoft 365 business and enterprise subscriptions do not include Copilot integration in the apps. Your company has to purchase an additional Microsoft 365 Copilot plan, which costs $30 per user per month when paid annually. (Microsoft does offer plans that bundle the M365 apps and M365 Copilot, but the costs are the same.)
This guide goes over how to use Copilot in Word, Outlook, and OneNote to help you compose, revise, and summarize text. I’ll demonstrate using Copilot with an individual Microsoft 365 Premium account, but most of the steps and user interfaces are similar under a Microsoft 365 business plan. I’ll also note additional features that are available only under the business versions of Copilot and Microsoft 365.
Note: Microsoft 365 apps aren’t completely consistent on different platforms, so you might see a somewhat different interface for a feature than is shown here. What’s more, some features are available in the web apps but not the desktop apps, and vice versa — and if you have a work or school Microsoft 365 account, your administrator may allow some Copilot tools but not others.
Have Copilot generate a rough draft
Copilot can help you compose text drafts in Word, Outlook, and OneNote. You use Copilot through a toolbar or pane that appears within the body of your document, email draft, or note, or via an entry box that appears above a blank document in Word. Copilot is also available from a sidebar that opens along the right of these apps.
Using the Copilot toolbar or pane
In Word: When you start a new, blank document, you’ll see three example prompts above it. Clicking one of these will trigger Copilot to generate text as described in that prompt.
Below these prompts is a text entry box that says, “What do you want Copilot to draft?” That’s where you type in a prompt that describes what you want Copilot to write. (More on prompt writing in a moment.)
When you start a new document in Word, you’ll see some prompt suggestions and the Copilot toolbar above the document.
Howard Wen / Foundry
At the right end of the text entry box you can optionally click the paperclip icon (Reference your content) and select a document in your OneDrive, SharePoint, or on your PC. Copilot will base its output on the document, including content, writing style, and formatting. (Business users can select up to three files for Copilot to reference.) You can also type a / (forward slash) inside the text entry box to select a document for reference.
If your Word document already has text in it, place the cursor where you want to insert new text generated by Copilot. Click the pen icon that appears in the left margin.
In a document that already has text, place your cursor in the file and click the pen icon in the left margin.
This will open the Copilot toolbar, where you can type your prompt into the text entry box and optionally use the paperclip icon to upload a document for Copilot to reference.
When you click the pen icon in the left margin, the Copilot toolbar appears in the body of your document.
On the dropdown below the toolbar, there are three selections:
- Click Keep writing if you want Copilot to generate more text based on the context of the rest of the document.
- I’ll describe Writing suggestions in detail later in this guide.
- Chat with Copilot will open the Copilot sidebar, also described later in this guide.
In Outlook: With the cursor in the message body of a new or draft email, click the Copilot icon that appears in the left margin. Or you can click the down arrow to the right of the Copilot button at the right end of the ribbon toolbar. On the dropdown menu that opens, click Draft.
Select Draft from the Copilot dropdown, then type your prompt into the Copilot toolbar.
The Copilot toolbar opens in the body of your email draft. Type your prompt inside the text entry box or choose one of the example prompts in the dropdown menu below the toolbar to have Copilot generate text as described in the prompt.
In OneNote: With the cursor on the page of a note (blank or with information already on it), click the Copilot icon that appears to the left of the cursor.
On the dropdown menu that opens, click Take notes with Copilot. A pane will open in which you can type a prompt to Copilot. (If you click the Copilot icon while on a blank page, you’ll be taken to this pane immediately.) Type your prompt inside the text entry box, then click Generate or press the Enter key. The second button, Inspire me, will enter suggested prompts based on the context of your other notes.
Enter your prompt or click Inspire me to see suggested prompts.
Crafting your prompts
Prompts are sentences you enter to instruct Copilot (or other AI assistants) how to compose the text that you want it to create. Your prompt should minimally include the subject and a few specifics about the writing you want it to generate.
To get started, describe the kind of text you want Copilot to generate and add a detail or two about it. These prompts can be simple or a little more complex. For example:
- Create a brief business pitch for a new vegan restaurant that will be located in downtown Atlanta, Georgia.
- Write an opening paragraph describing my interest in a technical support job opening at Microsoft.
- Write a few sentences that inquire if there are any job openings in technical support at Microsoft.
- Compose a polite follow-up with the recipient about a video call we had last week.
The more specifics you include in your prompt, the more likely you are to get good results. For instance, if you have notes that contain specific data points that you want to include in the generated text, copy and paste those notes into your prompt (or upload a document in Word as described earlier in the story). If you have an outline for the topics you want to cover in the draft, paste that in as well.
But frankly, there are no hard rules about writing prompts — just use your imagination and see how Copilot responds. It may not generate results that you like (if it generates any at all). But keep experimenting with the descriptions in your prompts until you coax Copilot to produce a useful response.
Once you’ve entered your prompt, click the right arrow (Generate) at the right end of the entry box or press Enter on your keyboard and wait for Copilot to work its magic.
The results are in – actions you can take
When Copilot has generated a draft, it appears in the document, email, or note with a toolbar below it.
In Word, use the toolbar below the generated draft to keep, retry, discard, or refine the text.
In Word and OneNote: You can use the toolbar to perform the following functions:
- Click the Keep it button to keep the newly minted words in your document or note. You can then edit the generated text as you see fit.
- Click the Regenerate button (two circular arrows) if you’re not satisfied with the result and want Copilot to generate a whole new one.
- Click the Discard button (a trashcan) to discard the result.
- Refine the result by typing more prompts in the text entry box (e.g., “add more details,” “make this sound more professional,” or “make it shorter”) and clicking the arrow. Copilot will generate an updated result using your additional commands and descriptions.
- Click the pencil icon above the toolbar so that you can edit the prompt you wrote, or enter an entirely new prompt, in the text entry box. The current results that Copilot generated will be discarded, and it’ll generate another set of text based on your revised or new prompt.
- Optionally click the thumbs up or down icon in the upper-right corner of the toolbar to rate the quality of the result that Copilot generated. Presumably, this helps train the AI to produce better results in the future.
In Outlook: Using the options in the dropdown menu below the toolbar, you can have Copilot change the length of the generated text (by selecting Make it shorter or Make it longer) or the tone of the text (by moving the pointer over Change Tone and selecting Direct, Casual, Formal, or Like a Poem).
Copilot-generated text in Outlook, with options for taking action on it.
Important: All AI-generated content can contain errors or outright fabrications, known as hallucinations. When you insert text that Copilot has generated into a document or email, be sure to fact-check it carefully. (See our tips for curbing hallucinations in Copilot.)
AI-generated content also tends to be generic and a bit boring, so you’ll likely want to edit it to inject your own personality or writing style.
Customize email draft instructions in Outlook

Outlook offers an additional way to make Copilot’s email drafts sound more like you: give it custom instructions for composing messages. Click the down arrow to the right of the Copilot button that’s at the right end of the ribbon toolbar. On the menu that opens, select Settings.
On the Settings panel that opens over the page, click Draft Instructions. Then on the right side of the panel, under “Custom Instructions,” click on the switch for Use custom instructions when drafting email. Type in your custom instructions, including specifics like length, tone of voice, your customary greeting and closing, and so on. Then click the X at the upper-right corner of the panel to close it. You can further adjust these instructions at any time.
You can specify custom instructions for Copilot to use when generating email drafts.
Howard Wen / Foundry
Using the Copilot sidebar

In Word and OneNote, click the Copilot button toward the right end of the ribbon toolbar on the Home tab. In Outlook, start a new email message and click the Copilot button at the upper right.
This will open the Copilot sidebar to the right. Type your prompt inside the text entry box. Optionally, you can click the + to search for and select a document in OneDrive or SharePoint, or on your PC, to use as a reference. This works the same as the aforementioned “Reference your content” function in the Copilot toolbar in Word. You can also type / (forward slash) to activate this function.
When you are done entering your prompt and adding a document for reference, click the arrow button or press Enter on your keyboard. Copilot will generate text and display it inside the sidebar.
Generated text in the Copilot sidebar in Word (left). If you scroll down in the sidebar, you’ll see icons for inserting the text or copying it to your clipboard (right).
Howard Wen / Foundry
In Word, you can click + in the row of icons that appears below the generated text to add the text to your document. (This option isn’t available in Outlook or OneNote.) In all three apps, you can click the Copy button to copy the writing to your PC clipboard. You can then paste it into your document, email, note, or elsewhere.
Or you can refine Copilot’s results. In the sidebar below the generated text you’ll see some suggested prompts, such as “Make it more specific to our industry” or “Expand into a full section.” You can select one of these and/or type additional prompts into the entry box.
Ask Copilot for suggestions to improve your writing

If you’d rather compose emails and documents yourself but would like some suggestions for improvement, there’s a nifty Copilot feature in Outlook to assist you. Called “Coaching,” it critiques an email draft and offers recommendations for making it stronger. You can then make changes yourself or request that Copilot do so.
Word has a similar feature called “Writing suggestions” that uses Copilot to suggest ways to improve your writing, and you can choose to apply them to your document.
Outlook: Get coaching on an email draft

After you’ve written an email draft, click the down arrow to the right of the Copilot button at the right end of the ribbon toolbar. On the menu that appears, select Coaching.
Or, in your email draft, click the Copilot icon in the left margin to open its toolbar. From the dropdown below the toolbar, select Get coaching.
Copilot will review your draft and offer specific suggestions for improving it in terms of tone, reader sentiment, and clarity. At the bottom of this report, you can click Apply all suggestions, which will trigger Copilot to rewrite your email draft according to its suggestions, or click Dismiss to close the report with no changes made to your email draft.
Copilot can critique your email draft and offer suggestions for improvement.
Howard Wen / Foundry
Word: Get writing suggestions for a document

Click the pencil icon in the left margin of your document. Or, if you want Copilot to evaluate a specific section of the document, highlight the text you want critiqued and then click the pencil icon.
On the dropdown below the Copilot toolbar, click Writing suggestions. Copilot will analyze your writing. A panel will open that displays one or more suggestions. You can read through them by clicking the left and right arrows on the top of this panel. Each suggestion has a blue checkmark that you can uncheck if you want to disregard the suggestion.
If you want to apply any of the checked suggestions to your writing, click the Apply selected suggestions button.
Copilot can offer suggestions for improving a document in Word.
Howard Wen / Foundry
Have Copilot rewrite text

You can have Copilot rewrite passages of text in a Word document, an email, or a OneNote page. This can be useful if you feel that the text could use a little more detail, or if a paragraph sounds too wordy. Microsoft says Copilot’s rewriting ability works best at under 3,000 words.
In all three web apps, you can use the Copilot sidebar for rewriting. In Word, you can also use the “Rewrite with Copilot” panel, and OneNote has a similar rewriting tool.
Using the “Rewrite with Copilot” panel in Word

Highlight the passage of text that you want Copilot to rewrite, then click the pencil icon that appears in the margin to the left of the text that you highlighted. Alternatively, you can right-click on your highlighted text, and on the menu that opens, select Draft with Copilot.
On the dropdown that opens, you can select Auto rewrite to prompt Copilot to rewrite the passage wholesale, or you can choose one of the other items on this dropdown to have Copilot rewrite the text in a specific way: Fix spelling and grammar, Structure and refine, Make shorter, or Make formal. (Writing suggestions will have Copilot offer targeted suggestions for improving your writing, as covered in the previous section of this story.)
Copilot offers several approaches for rewriting your document.
Howard Wen / Foundry
After you make a selection from the dropdown, the “Rewrite with Copilot” panel appears below your highlighted text. Copilot will generate and present up to three rewritten versions in the panel. Click the right and left arrows at the top of the panel to cycle through these rewrites and review them.
Reviewing Copilot’s suggested rewrites for the highlighted text.
Howard Wen / Foundry
Below the rewritten text, you can click the following buttons:
- Replace will replace the original text that you highlighted with the currently visible rewritten version.
- Insert below will insert the rewritten version below the original text you highlighted (so that you can decide later if you want to keep it).
- The Regenerate button (two circular arrows) will generate another result.
- In the Word web app, there’s a text entry box where you can refine the result by typing more prompts.
Note: Users with Copilot and M365 business subscriptions can also have Copilot rewrite messages in Teams. This feature works similarly to the Rewrite with Copilot panel in Word.
Using the Copilot icon in OneNote

The OneNote Windows app has its own built-in rewriting tool. To use it, click the top bar of a text field on a page, then click the Copilot icon to the left of the text field and, on the menu that opens, select Rewrite this.
Select Rewrite this from the Copilot menu.
Howard Wen / Foundry
This action will trigger Copilot to rewrite everything inside the text field. The rewritten text then appears at the top of the text field, above the original.
The rewritten text appears in the text field above the original text.
Howard Wen / Foundry
Using the Copilot sidebar in Word, Outlook, or OneNote

You can use the Copilot sidebar for rewriting in Word’s Windows and web apps, and in the Outlook and OneNote web apps — though it’s less convenient in Outlook and OneNote.
On the Home tab in the ribbon toolbar, click the Copilot button to open the Copilot sidebar to the right.
In Word: To have Copilot rewrite the whole document, type rewrite in the text entry box. To have it rewrite a specific paragraph, supply the paragraph number or select the paragraph you want rewritten. You can also describe how you want the text to be rewritten, such as rewrite the first paragraph to be shorter or rewrite paragraph 3 to sound more professional.
In Outlook or OneNote: Here you can’t simply select the text you want rewritten; you have to paste the text into Copilot’s text entry box and tell the AI how you want it rewritten.
Copilot’s rewritten text appears in the sidebar. In Word, you can click + in the row of icons that appear below the generated text to add it to your document. It will be added in the spot where the cursor is on your document. In all three apps, you can use the Copy icon to copy the rewritten text to your clipboard, and then paste it where you like.
A rewritten paragraph in the Copilot sidebar.
Howard Wen / Foundry
If you want to adjust Copilot’s rewriting result, you can click one of the suggested prompts that appear in the sidebar below the generated text — or you can type more prompts in the text entry box.
Having to copy and paste text to and from the sidebar in Outlook and OneNote is a bit of a hassle. For rewriting tasks in those apps, it’s simpler to use Outlook’s Coaching feature or OneNote’s “Rewrite this” tool via the Copilot icon.
Have Copilot summarize long documents, notes, emails, or threads

You can have Copilot generate a brief summary of a long document in Word or a page in OneNote. Microsoft says Copilot can summarize up to 1.5 million words. In Outlook, Copilot can summarize a long email and, even more useful, the conversation within an entire email thread.
Using the Copilot summary panel in Word

When you open a document that already contains text in the Word web app, Copilot automatically generates a summary of it in a small panel above your document; click View more to expand the panel so that you can view the entire summary.
Click View more (top) to expand the summary panel and see the full summary (bottom).
Howard Wen / Foundry
Throughout the summary, you may see citation numbers that refer to passages of text within the original document. Moving the pointer over one of these numbers will pop open a snippet of the cited text in a small panel. Clicking a number will jump your view of the document in the main window to the cited text in it.
Hover over a citation number to see a snippet of the cited text.
Howard Wen / Foundry
Using the Copilot sidebar in Word

With the document opened in Word, highlight the text that you want summarized. If you want a summary of the entire document, skip this step.
Click the Copilot button on the Home tab of the ribbon toolbar to open the Copilot sidebar. Inside the text entry box, type summarize and click the arrow button.
Copilot will generate a summary and display it inside the sidebar.
Copilot’s summary of a long document in the sidebar.
Howard Wen / Foundry
Below the summary, there’s the familiar + icon you can click to add the generated text to your document and the Copy button to copy the summary to your PC clipboard. Below that you may see suggested prompts that you can click to revise the summary.
Using the Copilot icon or sidebar in OneNote

Click the top bar of a text field on a page. Click the Copilot icon to the left of the text field and, on the menu that opens, select Summarize this. This action will trigger Copilot to summarize everything inside the text field. The summary then appears at the top of the text field.
In OneNote, Copilot’s summary appears at the top of the text field.
Howard Wen / Foundry
To summarize an entire notebook, open the Copilot sidebar, type summarize in the text entry box, and click the arrow button. Copilot will generate a summary and display it inside the sidebar, along with the usual Copy button and suggested prompts for refining the output.
Summarizing emails and threads in Outlook

Open the email or conversation that you want to summarize. Click Summary by Copilot or Summarize at the top of the email thread. Copilot will generate a summary of the email or thread.
A Copilot-generated summary of an email.
Howard Wen / Foundry
This summary will be posted at the top of the email or thread. Thread summaries may include citations that Copilot used in generating the summary. Clicking a citation (denoted by a number) will scroll down the thread to the cited email for you to view.
This Copilot-generated summary of an email thread includes citations you can click to go to the source email.
Howard Wen / Foundry
Getting a summary when sharing a Word doc (business plans only)

If you have Copilot with a Microsoft 365 business plan, you can use Copilot to generate a summary of a Word document when you share it with your co-workers. This summary is inserted as a passage of text inside the message that your co-workers receive inviting them to collaborate on the document.
Note: This feature works with the web version of Word, not the desktop apps.
With the document open in Word, click the Share button toward the upper right. On the Share panel that opens, click the Copilot icon inside the lower right of the “Add a message” composition box. The AI will generate and insert the summary. You can edit the summary before you send out the invite.
This article was initially published in August 2024 and updated in December 2025.
Related:
Nvidia bets on open infrastructure for the agentic AI era with Nemotron 3
AI agents must be able to cooperate, coordinate, and execute across large contexts and long time periods, and this, says Nvidia, demands a new type of infrastructure, one that is open.
The company says it has the answer with its new Nemotron 3 family of open models.
Developers and engineers can use the new models to create domain-specific AI agents or applications without having to build a foundation model from scratch. Nvidia is also releasing most of its training data and its reinforcement learning (RL) libraries for use by anyone looking to build AI agents.
“This is Nvidia’s response to DeepSeek disrupting the AI market,” said Wyatt Mayham of Northwest AI Consulting. “They’re offering a ‘business-ready’ open alternative with enterprise support and hardware optimization.”
Introducing Nemotron 3 Nano, Super, and Ultra

Nemotron 3 features what Nvidia calls a “breakthrough hybrid latent mixture-of-experts (MoE) architecture.” The model comes in three sizes:
- Nano: The smallest and most “compute-cost-efficient,” intended for targeted, highly efficient tasks like quick information retrieval, software debugging, content summarization, and AI assistant workflows. The 30-billion-parameter model activates 3 billion parameters at a time for speed and has a 1-million-token context window, allowing it to remember and connect information over multi-step tasks.
- Super: An advanced, high-accuracy reasoning model with roughly 100 billion parameters, up to 10 billion of which are active per token. It is intended for applications that require many collaborating agents to tackle complex tasks, such as deep research and strategy planning, with low latency.
- Ultra: A large reasoning engine intended for complex AI applications. It has 500 billion parameters, with up to 50 billion active per token.
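The sparsity pattern across the three tiers is easy to see in a minimal sketch. Only the total and active parameter counts below come from the figures quoted above; the comparison itself is illustrative:

```python
# Total vs. active parameters for the three Nemotron 3 tiers,
# in billions, per the counts quoted in the article.
MODELS = {
    "Nano":  {"total_b": 30,  "active_b": 3},
    "Super": {"total_b": 100, "active_b": 10},
    "Ultra": {"total_b": 500, "active_b": 50},
}

for name, m in MODELS.items():
    # Fraction of weights that participate in any single token's forward pass
    ratio = m["active_b"] / m["total_b"]
    print(f"{name:5s}: {m['total_b']}B total, "
          f"{m['active_b']}B active per token ({ratio:.0%})")
```

Each tier activates roughly a tenth of its weights per token, which is the MoE property that keeps per-token inference cost well below that of a dense model of the same total size.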
Nemotron 3 Nano is now available on Hugging Face and through other inference service providers and enterprise AI and data infrastructure platforms. It will soon be made available on AWS via Amazon Bedrock and will be supported on Google Cloud, CoreWeave, Microsoft Foundry, and other public infrastructures. It is also offered as a pre-built Nvidia NIM microservice.
Nemotron 3 Super and Ultra are expected to be available in the first half of 2026.
Positioned as an infrastructure layer

The strategic positioning here is fundamentally different from that of the API providers, experts note.
“Nvidia isn’t trying to compete with OpenAI or Anthropic’s hosted services — they’re positioning themselves as the infrastructure layer for enterprises that want to build and own their own AI agents,” said Mayham.
Brian Jackson, principal research director at Info-Tech Research Group, agreed that the Nemotron models aren’t intended as a ready-baked product. “They are more like a meal kit that a developer can start working with,” he said, “and make desired modifications along the way to get the exact flavor they want.”
Hybrid architecture enhances performance

So far, Nemotron 3 seems to be exhibiting impressive gains in efficiency and performance; according to third-party benchmarking company Artificial Analysis, Nano is the most efficient model in its size class and leads in accuracy.
Nvidia says Nano’s hybrid Mamba-Transformer MoE architecture, which integrates three architectures into a single backbone, supports this efficiency. Mamba layers offer efficient sequence modeling, transformer layers provide precision reasoning, and MoE routing gives scalable compute efficiency. The company says this design delivers 4X higher token throughput than Nemotron 2 Nano while reducing reasoning-token generation by up to 60%.
“Throughput is the critical metric for agentic AI,” said Mayham. “When you’re orchestrating dozens of concurrent agents, inference costs scale dramatically. Higher throughput means lower cost per token and more responsive real-time agent behavior.”
The 60% reduction in reasoning-token generation addresses the “verbosity problem,” where chain-of-thought (CoT) models generate excessive internal reasoning before producing useful output, he noted. “For developers building multi-agent systems, this translates directly to lower latency and reduced compute costs.”
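Those two claims compound: fewer reasoning tokens at a higher rate. A back-of-the-envelope sketch makes the point; the baseline figures here (2,000 tokens at 50 tokens/sec) are invented for illustration, while the 4X and 60% multipliers are the ones quoted above:

```python
# Hypothetical baseline: a task emitting 2,000 reasoning tokens
# at 50 tokens/sec on the previous-generation model (assumed numbers).
base_tokens = 2000
base_tps = 50
base_latency = base_tokens / base_tps      # 40 seconds

# Claimed improvements: 60% fewer reasoning tokens, 4X throughput.
new_tokens = base_tokens * (1 - 0.60)      # 800 tokens
new_tps = base_tps * 4                     # 200 tokens/sec
new_latency = new_tokens / new_tps         # 4 seconds

print(f"latency: {base_latency:.0f}s -> {new_latency:.0f}s "
      f"({base_latency / new_latency:.0f}x faster)")
```

Under these assumptions the per-task latency drops tenfold, which is why the two improvements matter more together than either headline number suggests for multi-agent pipelines.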
The upcoming Nemotron 3 Super, Nvidia says, excels at applications that require many collaborating agents to achieve complex tasks with low latency, while Nemotron 3 Ultra will serve as an advanced reasoning engine for AI workflows that demand deep research and strategic planning.
Mayham explained that these as-yet-unreleased models feature latent MoE, which projects tokens into a smaller latent dimension before expert routing, “theoretically” enabling 4X more experts at the same inference cost because it reduces communication overhead between GPUs.
The hybrid architecture behind Nemotron 3 that combines Mamba-2 layers, sparse transformers, and MoE routing is “genuinely novel in its combination,” Mayham said, although each technique exists individually elsewhere.
Ultimately, Nemotron pricing is “attractive,” he said; open weights are free to download and run locally. Third-party API pricing on DeepInfra starts at $0.06/million input tokens for Nemotron 3 Nano, which is “significantly cheaper” than GPT-4o, he noted.
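At volume, that per-token gap adds up quickly. A quick sketch of the monthly difference: the $0.06/million figure is the DeepInfra price quoted above, while the closed-model price and the one-billion-token monthly volume are stand-in assumptions, since API pricing changes frequently:

```python
# Monthly input-token cost at a hypothetical 1 billion tokens/month.
tokens_per_month = 1_000_000_000

nemotron_per_m = 0.06  # $/million input tokens, per the DeepInfra figure above
closed_per_m = 2.50    # stand-in price for a closed frontier model (assumption)

def monthly_cost(price_per_million: float) -> float:
    # Cost = (tokens / 1M) * price per million tokens
    return tokens_per_month / 1_000_000 * price_per_million

print(f"Nemotron 3 Nano: ${monthly_cost(nemotron_per_m):,.0f}/month")
print(f"Closed model:    ${monthly_cost(closed_per_m):,.0f}/month")
```

Even under these rough assumptions, the open-weights route is more than an order of magnitude cheaper per input token, before counting the cost of running the hardware yourself.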
Differentiator is openness

To underscore its commitment to open source, Nvidia is revealing some of Nemotron 3’s inner workings, releasing a dataset with real-world telemetry for safety evaluations along with 3 trillion tokens of Nemotron 3’s pretraining, post-training, and RL datasets.
In addition, Nvidia is open-sourcing its NeMo Gym and NeMo RL libraries, which provide Nemotron 3’s training environments and post-training foundation, and NeMo Evaluator, to help builders validate model safety and performance. All are now available on GitHub and Hugging Face. Of these, Mayham noted, NeMo Gym might be the most “strategically significant” piece of this release.
Pre-training teaches models to predict tokens, not to complete domain-specific tasks, and traditional RL from human feedback (RLHF) doesn’t scale for complex agentic behaviors, Mayham explained. NeMo Gym enables RL with verifiable rewards — essentially computational verification of task completion rather than subjective human ratings. That is, did the code pass tests? Is the math correct? Were the tools called properly?
This gives developers building domain-specific agents the infrastructure to train models on their own workflows without having to understand the full RL training loop.
“The idea is that NeMo Gym will speed up the setup and execution of RL jobs for models,” explained Jason Andersen, VP and principal analyst with Moor Insights & Strategy. “The important distinction is NeMo Gym decouples the RL environment from the training itself, so it can easily set up and create multiple training instances (or ‘gyms’).”
Mayham called this “unprecedented openness” the real differentiator of the Nemotron 3 release. “No major competitor offers that level of completeness,” he said. “For enterprises, this means full control over customization, on-premises deployment, and cost optimization that closed providers simply can’t match.”
But there is a tradeoff in capability, Mayham pointed out: Claude and GPT-4o still outperform Nemotron 3 on specialized tasks like coding benchmarks. However, Nemotron 3 seems to be targeting a different buyer: enterprises that need deployment flexibility and don’t want vendor lock-in.
“The value proposition for enterprises isn’t raw capability, it’s the combination of open weights, training data, deployment flexibility, and Nvidia ecosystem integration that closed providers can’t match,” he said.
This article originally appeared on InfoWorld.
The UK wants your iPhone to check your age
In yet another demonstration of its addiction to surveillance, the UK government now wants your smartphones to stop you from looking at images containing nudes unless you pass an age verification check. The Financial Times reports the UK will soon “encourage” Apple and Google to build nudity-detection algorithms into their operating systems by default in an attempt to tackle violence against women and girls.
At first glance, this seems like a good idea, as it might protect people against various forms of abuse and could help in the battle against child pornography.
Central to the proposal is the idea that operating system vendors, rather than individual apps, will become responsible for age verification. Apple already offers Communication Safety tools within Parental Controls. These detect nude photos and videos in apps such as Messages or FaceTime.
Unintended consequences? They’re already happening

However, just like the UK’s many other surveillance-centered decisions, this one is subject to unintended consequences:
- First, it seems highly probable the algorithms required to police people’s devices will deliver false positives, likely including fine art portraits. It’s important to note that instances of this have already happened in response to the UK’s poorly-crafted and badly implemented Online Safety Act (OSA): one social media post of a painting by Francisco de Goya was restricted for UK users, reports the BBC.
- Second, in the event the OS does detect a false positive, what happens next? Does the law imply perfectly innocent culture vultures will end up having to explain themselves to the authorities for daring to look at art?
- The third and biggest negative consequence is the same as it has always been: once you have smartphone operating systems working to analyze the content on your devices for one thing, what is to stop those systems working to identify other forms of content on the device?
With the UK moving in this direction, what is to stop more authoritarian governments from instructing operating system developers to monitor and prevent distribution of other forms of communications and content? It’s enough to drive the entire population to use VPNs.
Protecting the innocent, or criminalizing debate?

None of these arguments is new. All have been raised in response to UK government overreach before; sadly, the current political class don’t seem to want to listen to those pleas.
That’s not a good sign for a country that may well have already forced Apple to open up encrypted data to investigation despite strong resistance. The decision to force Apple and others to place backdoors in their devices was not discussed at the election, while the arrogant lack of transparency around that decision is cause for concern in any so-called democracy.
It’s also true that the accompanying Online Safety Act (OSA) is already being applied in ways that creep far beyond its original stated intention. An excellent joint briefing from the Electronic Frontier Foundation, Open Rights Group, Big Brother Watch, and Index on Censorship describes a plethora of ways in which the OSA is unfairly restricting people’s day-to-day activities.
They tell us that the OSA may yet force the Wikimedia Foundation to withdraw Wikipedia from the UK. They also say the way the OSA is crafted means freedom of expression is being curtailed, even while the process of age approval has been farmed out to third-party companies that are themselves under little oversight. We have already seen people’s private data leaked.
Road to nowhere

To some degree, the UK government’s latest attack on online freedoms is to be expected. It is par for the course for an administration that is both enslaved to and displays little understanding of technology, is resistant to considering its social impact, and has little regard for the importance of free expression in legitimizing democratic dialogue. It’s a government that combines legislative incompetence with authoritarian overreach, with little understanding of how badly these laws could be abused by even more politically abusive entities tomorrow. At the very least, the current raft of proposals needs work.
Emerging cyber threats: How businesses can bolster their defenses
Enterprises operating in a rapidly digitizing world must also have a robust understanding of how cyber threats are evolving.
AI deepfakes are already becoming too convincing to be easily spotted by common sense approaches. Malicious actors are using AI to find vulnerabilities and to make their attacks harder to detect. And AI systems themselves pose security risks. Research by Foundry shows that security and privacy are the most pressing ethical issues around generative AI deployments.
Down the road, quantum computing promises immense power and capabilities for businesses, but it will also be used by adversaries, especially to break encryption.
And further out, technologies still in the labs, such as DNA-based data storage, cybernetics and bio-hacking present their own challenges to security and data protection.
These are just some of the ways future technologies put enterprise security at risk.
shutterstock/Gorgev
Over the horizon

According to Martin Krumböck, CTO for cybersecurity at T-Systems, security teams can form a clearer view of emerging threats by dividing them into three timescales, or “horizons.” “There’s always something changing in security,” he says.
Classical infrastructure security is in the “here and now”, and an immediate priority. And too many enterprises still have gaps in cloud security and are not yet ready for AI.
“We are seeing very quick business adoption of AI,” Krumböck explains. “At the same time, people are ignoring the risks. But the risks are already here.”
Deepfakes, used for CEO and CFO fraud, are one example. “In the past, we could mitigate that with good training,” Krumböck says. “Now, the deepfakes are getting so good that all that training is thrown out of the window.”
Other AI threats include attacks on training data for large language models (LLMs), prompt injections and direct attacks on models themselves. “But it is not at the forefront of thinking yet,” he warns.
CISOs and CSOs, then, need to be aware of the risks of AI. But they need to juggle this with monitoring longer-term threats.
“Further over the horizon, there are issues that will become important in security,” says Krumböck. “The shift to post-quantum cryptography isn’t about responding to a threat today, but about preparing for tomorrow. Particularly against long-term risks like ‘harvest-now, decrypt-later’ attacks.” Threats to blockchain technology are another medium-term risk.
It’s worth at least being aware of longer-term risks posed by emerging disciplines like DNA-based computing technology, where the DNA molecules themselves perform computational processes.
“DNA storage becomes a huge information security risk because it’s so small and can be easily implanted somewhere or used to smuggle data out,” says Krumböck. “It sounds like sci-fi right now, but it might become a reality.”
Back to the future

Clearly, security and IT leaders need to plan for emerging threats and inform their boards.
One trusted method is to test new technologies through small trials. This helps gauge the organization’s risk appetite, alongside the benefits of innovation.
Few enterprises, though, can employ dedicated teams of security researchers and futurists to assess far-off risks. But organizations can work with their security partners, leveraging their expertise and scale to look over the horizon.
Deutsche Telekom, together with T-Systems, has that scale as one of the largest enterprises in its sector. “That, in itself, puts a huge target on our backs, and we need to defend our own telecommunications network day in, day out, and protect our end customers,” Krumböck explains.
This allows T-Systems to invest in forward-looking security research and, crucially, translate that intelligence into information and advice that boards can understand and act on.
GPU pricing, a bellwether for AI costs, could help IT leaders at budget time
AI is now as much a utility as any other ongoing business cost, and IT leaders setting out their AI budgets for 2026 need to consider the costs of the underlying resources — the GPUs in modern data centers that are unlocking AI’s potential.
In the three years since ChatGPT arrived, the push for ever more — and better — generative AI tools has continued at a rapid clip. That growth has come at a cost, however: spiraling AI budgets amid low GPU availability and limited energy capacity to run those data centers.
Efforts are now underway to reduce the cost of using GPUs, and the attendant cost of using genAI tools, with smaller data centers, billing tools, software tools, and alternative hardware leading the charge.
Traditional AI budgeting is heavily reliant on GPU pricing, hours, and instance rates. GPU instances are “eye-wateringly expensive” at $30+ per hour for high-end configurations on-demand, said Corey Quinn, chief cloud economist at Duckbill, which provides cost analysis tools for cloud providers.
“For serious AI workloads, GPU costs often become the dominant line item, which is why you’re seeing companies scramble for reserved capacity and spot instances,” he said, adding that AI billing through cloud services “is a mess.”
IT leaders can’t commit to fixed computing resources because of the unpredictability of AI workloads. Hyperscalers muddy the waters further with managed GPU services, AI credits, and committed-use discounts.
Then there are “shadow costs everyone forgets — data transfer, storage for training data, and the engineering time to make any of it work,” Quinn said.
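Those line items can be folded into a rough budget model. In the sketch below, only the $30+/hour on-demand figure comes from the quote above; the discount rates, the 730 hours/month convention, and the shadow-cost estimate are all illustrative assumptions:

```python
# Rough monthly estimate for a single high-end GPU instance, illustrating
# why pricing model and "shadow costs" dominate the AI budget line.
HOURS_PER_MONTH = 730  # common billing convention (assumption)

rates = {               # $/hour for the same instance class
    "on_demand": 30.0,  # per the "$30+ per hour" figure quoted above
    "reserved":  18.0,  # typical committed-use discount (assumption)
    "spot":       9.0,  # interruptible capacity (assumption)
}
shadow_costs = 4000     # $/month: data transfer, storage, eng time (assumption)

for name, rate in rates.items():
    compute = rate * HOURS_PER_MONTH
    total = compute + shadow_costs
    print(f"{name:9s}: ${compute:>8,.0f} compute "
          f"+ ${shadow_costs:,} shadow = ${total:,.0f}")
```

Even with these made-up numbers, the point holds: the pricing model chosen for the GPU hours swings the monthly bill by a factor of three, while the shadow costs stay fixed regardless.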
At the same time, smaller cloud providers — also called neoclouds — are getting their hands on more GPUs and making them available to IT users. Those companies include CoreWeave, Lambda Labs, and Together AI.
“They’re picking up meaningful market share by focusing exclusively on GPU workloads and often undercutting hyperscaler pricing by 30% to 50%,” Quinn said.
Neoclouds focus more on discounted GPUs within a smaller geographic footprint, something some companies can live with, Quinn said.
IT leaders don’t need the latest and shiniest GPUs from Nvidia or AMD for their AI workloads, said Laurent Gil, cofounder of Cast AI. Older generations of GPUs perform equally well on certain AI workloads, and IT leaders need to know where to find them to save money.
“AWS spot pricing for Nvidia’s A100 and H100 has decreased by 80% between last year and this year — just not everywhere,” Gil said.
Cast AI offers the necessary software tools and AI agents to move workloads to cheaper GPUs across cloud providers and regions. “Our agents do what a human does once a month, except they do it every second,” Gil said.
Cast AI’s tools also optimize for CPUs, which consume far less power than GPUs. (Energy consumption is becoming a major bottleneck for AI workloads, Gil said.)
Some companies are also looking to make pricing and GPU availability more transparent.
One startup, Internet Backyard, lets data center providers offer real-time quotes, billing, payments, and reconciliation for GPU capacity. The white-label software is embedded in data center providers’ systems.
“From the tenant side of the data center, we have the tenant portal, where you can see real-time GPU pricing and energy matching with your actual consumption,” said Mai Trinh, CEO of Internet Backyard.
The startup isn’t yet collaborating with hyperscalers; for now, it focuses more on emerging data centers that need to standardize billing, quoting, and payment processing. “When we talk to people who build a data center, they tell us that everything’s happening on Excel — there’s no real-time pricing there,” Trinh said.
Since AI is related to performance, the company is exploring a performance-based pricing model rather than GPU-specific pricing. “It is extremely important for us to base pricing on performance, because that’s what you’re really paying for,” Trinh said. “You’re not paying for someone else’s depreciating asset.”
The startup’s backers include Jay Adelson, a co-founder of Equinix, one of the world’s largest data center companies.
Energy is also an important driver in GPU pricing. GPU demand for AI computing is overwhelming grids, which have power ceilings, and pushing up utility prices.
U.S. data centers could account for 12% of total energy consumption by 2030, according to a 2024 McKinsey study. Meanwhile, electricity prices are soaring amid the data center frenzy. Multiple groups last week sent a letter to the US Congress requesting a moratorium on building data centers.
The energy requirements of the future data centers planned by the largest AI providers are not sustainable, said Peng Zou, CEO of PowerLattice. “High-density AI clusters are forcing CIOs to rethink their infrastructure roadmap and economics,” Zou said.
PowerLattice makes technology that helps modern chips become more power efficient. It is among a slew of AI-era chip technologies designed to eke more compute out of systems while reducing power consumption.
“The reliability and uptime of AI and GPU servers are critical, and these are things CIOs care deeply about,” Zou said.
The biggest AI mistake: Pretending guardrails will ever protect you
The fact that the guardrails from all the major AI players can be easily bypassed is hardly news. The mostly unaddressed problem is what enterprise IT leaders need to do about it.
Once IT decision-makers accept that guardrails don’t consistently protect anything, the presumptions they make about AI projects are mostly rendered moot. Other techniques to protect data must be implemented.
The reports of guardrail bypasses are becoming legion: poetry disables protections, as does leveraging chat history, inserting invisible characters, and using hexadecimal formats and emojis. Beyond those, patience and playing the long game, among other tactics, can wreak havoc, affecting just about every generative AI (genAI) and agentic model.
The risks are hardly limited to what attackers can accomplish. The models themselves have shown a willingness to disregard their own protections when the models see them as an impediment to accomplishing an objective, as Anthropic has confirmed.
If we extend the road analogy that gives guardrails their name, AI “guardrails” are not guardrails in the physical, concrete-barrier sense. They are not even strong deterrents, in the speed-bump sense. They are more akin to a single broken yellow line: a weak suggestion with no enforcement, or even serious discouragement.
If I may borrow a line from popular social media video blogger Ryan George in his writer-vs.-producer movie pitches series, an attacker wanting to get around today’s guardrails will find it “super easy, barely an inconvenience.” It’s as if homeowners protect their homes by placing “Do Not Enter” signs on all their doors, then keep the windows open and the doors unlocked.
So, what should an AI project look like once we accept that guardrails won’t force a model or agent to do what it’s told?
IT has a few options. First, wall off either the model/agent or the data you want to protect.
“Stop granting AI systems permissions you wouldn’t grant humans without oversight,” said Yvette Schmitter, CEO of the Fusion Collective consulting firm. “Implement the same audit points, approval workflows, and accountability structures for algorithmic decisions that you require for human decisions. Knowing guardrails can’t be relied on means designing systems where failure is visible. You wouldn’t let a hallucinating employee make 10,000 consequential decisions per hour with no supervision. Stop letting your AI systems do exactly that.”
Gary Longsine, CEO at IllumineX, agreed. He argued that the same defenses enterprises use to block employees from unauthorized data access need to now be deployed to genAI and AI agents. “The only real thing that you can do is secure everything that exists outside of the LLM,” Longsine said.
Taken to its extreme, that might mean keeping a genAI model in an isolated environment, feeding it only the data you want it to access. It’s not exactly air-gapped servers, but it’s close. The model can’t be tricked into revealing data it can’t access.
Capital One toyed with something similar; it created genAI systems for auto dealerships, but gave the large language model (LLM) it used access only to public data. The company also pushed open-source models and avoided hyperscalers, which addressed another guardrail issue: when agents are actively managed by a third-party firm in a cloud environment, your rules won’t necessarily be obeyed. Taking back control might mean literally doing that.
Longsine said some companies could cooperate to build their own data center, but that effort would be ambitious and costly. (Longsine put the price tag at $2 billion, but it could easily cost far more — and it might not even meaningfully address the problem.)
Let’s say five enterprises built a data center that only those five could access. Who would set the rules? And how much would any one of those companies trust the other four, especially when management changes? The companies might wind up replacing a hyperscaler with a much smaller makeshift hyperscaler, and still have the same control problems.
Here’s the painful part: there are many genAI proofs of concept out there today that simply won’t work if management stops believing in guardrails. At the board level, it seems, the Tinkerbell strategy remains alive and well: guardrails will work, the thinking goes, if only investors clap their hands loudly enough.
Consider an AI deployment allowing employees to access HR information. It will only tell any employee or manager the information they should be able to access. But those apps — and countless others just like them — take the easy coding approach; they grant the model access to all HR data and rely on guardrails to enforce proper access. That won’t work with AI.
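The alternative to the easy coding approach is to enforce access at the data layer, before the model ever sees anything. Here is a minimal sketch of that pattern; the HR records, names, and authorization rule are all hypothetical, and a real system would query an actual HR database and identity provider.

```python
# Data-layer access control: the model never receives records the
# requesting employee is not entitled to, so no prompt can make it
# reveal them. All names and figures here are hypothetical.

HR_RECORDS = {
    "alice": {"salary": 95_000, "manager": "carol"},
    "bob":   {"salary": 88_000, "manager": "carol"},
}

def authorized_records(requester: str) -> dict:
    """Return only the records this requester may see: their own,
    plus direct reports if they are a manager."""
    visible = {}
    for name, rec in HR_RECORDS.items():
        if name == requester or rec["manager"] == requester:
            visible[name] = rec
    return visible

def build_prompt(requester: str, question: str) -> str:
    """Scope the model's context before the call, instead of handing
    it the whole HR database and trusting guardrails to filter output."""
    context = authorized_records(requester)
    return f"Context (authorized for {requester}): {context}\nQuestion: {question}"

print(build_prompt("carol", "What is my team's average salary?"))
```

The design choice is the whole point: even if every guardrail fails, the worst a crafted prompt can extract is data the requester was already entitled to see.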
I’m not saying guardrails will never work. On the contrary, my observations suggest they do — about 70% to 80% of the time. In some better-designed rollouts, that figure might hit 90%.
But that’s the ceiling. And when it comes to protecting data access — especially potential exfiltration to anyone who asks the right prompt — 90% won’t suffice. And IT leaders who sign off on projects hoping that will do are in for a very uncomfortable 2026.
OpenAI launches GPT-5.2 as it battles Google’s Gemini 3 for AI model supremacy
OpenAI has released GPT-5.2, claiming significant gains in the AI model’s ability to complete real-world business tasks to an “expert level” compared to GPT-5.1, released in November.
The new model, available in Instant, Thinking, and Pro performance tiers, offers major improvements across a range of benchmarks, the company said.
Using OpenAI’s GDPval benchmark, which compares the model’s ability to complete 44 different business tasks to the same standards as human experts, GPT-5.2 matched or exceeded the human experts in 70.9% of tests across the Instant (basic), Thinking (deeper reasoning), and Pro (research-grade) tiers, compared to GPT-5.1’s 38.8%.
To illustrate these advances, OpenAI said that GPT-5.2 Thinking could fully format a workforce planning spreadsheet, while on GPT-5.1, the equivalent output assembled the same spreadsheet correctly, but in a more basic state that lacked formatting.
“We designed GPT‑5.2 to unlock even more economic value for people; it’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects,” said OpenAI.
GPT-5.2 also showed a mixture of gains across other important benchmarks, including ARC-AGI-1/ARC-AGI-2 (general problem solving), and SWE-Bench Pro/SWE-Bench Verified (real-world software tasks).
“For everyday professional use, this translates into a model that can more reliably debug production code, implement feature requests, refactor large codebases, and ship fixes end-to-end with less manual intervention,” the company said.
GPT-5.2 has begun rolling out to ChatGPT users, starting with the paid plans. Subscription pricing is unchanged. For API access, GPT-5.2 is priced at $1.75 per one million input tokens, and $14 per one million output tokens, with a 90% discount on cached inputs. Despite this being more expensive than GPT-5.1, OpenAI claimed the model’s greater efficiency meant that “the cost of attaining a given level of quality ended up less expensive due to GPT‑5.2’s greater token efficiency.”
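The published API rates make the per-request arithmetic easy to check. The sketch below uses the stated prices ($1.75 per million input tokens, $14 per million output tokens, 90% off cached inputs); the token counts in the example are made up for illustration.

```python
# Cost of a GPT-5.2 API request at the published rates.
# Token counts in the example call are hypothetical.

INPUT_RATE = 1.75 / 1_000_000    # dollars per input token
OUTPUT_RATE = 14.00 / 1_000_000  # dollars per output token
CACHE_DISCOUNT = 0.90            # cached inputs cost 10% of the normal rate

def request_cost(input_tokens: int, output_tokens: int,
                 cached_input_tokens: int = 0) -> float:
    """Dollar cost of one request; cached inputs are billed at 10%."""
    fresh = input_tokens - cached_input_tokens
    cost = (fresh * INPUT_RATE
            + cached_input_tokens * INPUT_RATE * (1 - CACHE_DISCOUNT)
            + output_tokens * OUTPUT_RATE)
    return round(cost, 6)

# 100k input tokens (half of them cache hits) and 20k output tokens:
print(request_cost(100_000, 20_000, cached_input_tokens=50_000))  # 0.37625
```

This also shows why OpenAI’s token-efficiency claim matters: output tokens cost eight times as much as input tokens, so a model that reaches the same answer with fewer output tokens can be cheaper despite higher per-token rates.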
Code red

For OpenAI, the appearance of the new version so soon after the last one represents an important acceleration in its GPT-5 model development. In early December, CEO Sam Altman sent a ‘code red’ emergency memo to OpenAI employees warning that without rapid development of GPT-5, the company risked falling behind Google’s increasingly capable Gemini 3 model.
Since then, things appear to have stabilized, with Altman telling CNBC this week that Gemini’s advances had been less significant than first feared, and that the code red state would end by January. However, a noticeable omission from the web announcement was any comparison between GPT-5.2’s performance and that of Gemini 3. Reportedly, a separate press briefing offered only a limited comparison.
Maria Sukhareva, a principal AI analyst at Siemens, questioned OpenAI’s use of benchmarks more generally. “It [GPT-5.2] claims to beat GDPVal, but this is a benchmark developed by OpenAI for OpenAI. Technically there are no obstacles for OpenAI to fine-tune their model for those 44 tasks, while completely failing on everything else,” she pointed out.
“Essentially, the numbers reported by GPT-5.2 are meaningless where one cannot see what data they trained the model on. GPT-5.2 suffers from all the same problems as previous models,” she argued. Sukhareva’s deeper dive on GPT-5.2 benchmarking can be found on her Substack.
Rachid ‘Rush’ Wehbi, CEO of e-commerce platform Sell The Trend, has tested GPT-5.2 under real-world conditions. “GPT-5.2 is doing a lot better when it comes to keeping its train of thought going for longer periods and not falling apart when you throw some layered context at it. For companies, that’s way more important than making a tiny bit of an improvement on some potentially inconsequential benchmark,” he said.
“Benchmarks are fine for showing you’ve made some sort of progress, but they don’t tell you if your model is going to actually hold up in the real world. GPT-5.2 is a step forward, but enterprise AI is still a work in progress.”
According to Bob Hutchins, founder of AI literacy company Human Voice Media, “most enterprise frustration with AI up until now is from the last 20% — the formatting, the constraints, the handoffs. GPT-5.2 shows progress there.” His advice for enterprises was, “ignore the launch noise and run a disciplined trial. GPT-5.2 is a meaningful step. It does not close the gap between promise and practice, it narrows it.”
For example, benchmarking with agentic AI company Vectara’s Hallucination Evaluation Model found that, while GPT-5.2 has improved on that front, it still lags some competitors.
“OpenAI still has some way to go in improving hallucination performance,” commented Ofer Mendelevitch, Vectara head of developer relations. “GPT-5.2-low-thinking is best in the GPT family so far, ranking 33rd on our leaderboard with an 8.4% hallucination rate. However, ChatGPT 5.2 notably trails DeepSeek V3.2, which ranks 23rd with a hallucination rate of 6.3%. For comparative purposes, Gemini 3’s grounded hallucination rate in our testing was 13.6%, with Grok 4.1 coming in at 17.8%.”
This article originally appeared on InfoWorld.
More OpenAI news:

Trump directs Justice Department to challenge state AI laws
US President Donald Trump signed an executive order Thursday directing the Justice Department to challenge state artificial intelligence laws the administration says threaten US competitiveness.
In his order, Trump takes issue with state laws “requiring entities to embed ideological bias within models” — although the example he gave of embedding bias was “a new Colorado law banning ‘algorithmic discrimination’” that, he said, could “force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”
The order establishes an AI Litigation Task Force within the Justice Department to challenge state laws on grounds they are “unconstitutional, pre-empted, or otherwise unlawful.” The order directed the Commerce Department to publish an evaluation of state AI laws that conflict with national policy priorities and to withhold Broadband Equity Access and Deployment funding from states with such laws.
The US Federal Trade Commission will be required to “explain the circumstances under which State laws that require alterations to the truthful outputs of AI models are pre-empted by the Federal Trade Commission Act’s prohibition on engaging in unfair and deceptive acts,” while the Federal Communications Commission will consider adopting “a Federal reporting and disclosure standard for AI models that pre-empts conflicting State laws,” according to the text of the executive order.
The order also calls on the administration’s Special Advisor for AI and Crypto to recommend legislation to establish a “uniform Federal policy framework for AI that pre-empts State AI laws that conflict with the policy set forth in this order,” although it will still allow states to set their own laws on AI child safety protections, data center infrastructure, and procurement of AI services.
The order targets a growing patchwork of state regulations. States introduced nearly 700 AI bills in 2024, with 113 enacted. Colorado’s AI Act takes effect in June 2026, requiring developers to disclose information about high-risk AI systems and conduct impact assessments. California’s SB 53 requires transparency about AI use in employment decisions. Texas’s TRAIGA, effective January 2026, sets disclosure requirements for generative AI in contracts.
Compliance costs remain for EU sales

The order aims to simplify domestic compliance, but US companies selling AI products and services in European markets will still need to comply with the EU AI Act, which applies to any AI system used by people in EU member states.
“A deregulated US posture can make American AI firms faster at home, but it does not make them freer abroad,” said Sanchit Vir Gogia, chief analyst at Greyhound Research.
The EU regulation, which entered into force in August 2024, classifies AI systems by risk level and requires conformity assessments, transparency obligations, and human oversight for high-risk applications including hiring tools, credit scoring systems, and law enforcement applications.
That could be an obstacle for products conceived under a different set of laws, said Enza Iannopollo, vice president and principal analyst at Forrester: “These requirements cannot always be added as an afterthought. Many AI systems need safety, integrity, and ethical safeguards built in by design.”
And, said Gogia, “The EU AI Act is not just a legal framework, it is a procurement filter.” Enterprise procurement teams, internal audit functions, and insurers impose requirements similar to EU standards regardless of local regulation, he said. Multinationals standardize internal AI governance to avoid building separate compliance systems for different markets.
Enterprises face increased liability exposure

The order’s approach of pushing to remove state laws it considers pre-empted by an existing federal law on “unfair and deceptive acts,” and drafting legislation that would, if passed, pre-empt other state AI laws, leaves enterprises navigating increased legal uncertainty, according to analysts.
Without regulation establishing common standards, companies must provide separate assurances to each customer and business partner, increasing transaction costs and legal uncertainty, Iannopollo said.
“Using AI to power products, services, or business models creates liability across multiple fronts: employment law, consumer protection, privacy, and sector-specific regimes such as banking or critical infrastructure. When AI systems lack built-in safeguards, those liabilities become harder and costlier to manage,” she said.
Technology trade group NetChoice sees the order helping ensure America leads in AI innovation. “Startups and small businesses will greatly struggle to create and compete with a 50-state patchwork of red tape,” said Patrick Hedger, NetChoice director of policy, in a written statement.
Brad Carson, president of Americans for Responsible Innovation, a political action committee lobbying for a thoughtful governance framework for rapidly advancing technologies, said in a written statement that the order “directly attacks the state-passed safeguards that we’ve seen vocal public support for over the past year, all without any replacement at the federal level.”
The order follows Trump’s July 2025 AI Action Plan, which called for removing regulatory barriers to AI development. Trump revoked President Biden’s October 2023 AI executive order in January 2025.
Microsoft’s Patch Tuesday updates: Keeping up with the latest fixes
Long before Taco Tuesday became part of the pop-culture vernacular, Tuesdays were synonymous with security — and for anyone in the tech world, they still are. Patch Tuesday, as you most likely know, refers to the day each month when Microsoft releases security updates and patches for its software products — everything from Windows to Office to SQL Server, developer tools to browsers.
The practice, which happens on the second Tuesday of the month, was initiated to streamline the patch distribution process and make it easier for users and IT system administrators to manage updates. Like tacos, Patch Tuesday is here to stay.
In a blog post celebrating the 20th anniversary of Patch Tuesday, the Microsoft Security Response Center wrote: “The concept of Patch Tuesday was conceived and implemented in 2003. Before this unified approach, our security updates were sporadic, posing significant challenges for IT professionals and organizations in deploying critical patches in a timely manner.”
Patch Tuesday will continue to be an “important part of our strategy to keep users secure,” Microsoft said, adding that it’s now an important part of the cybersecurity industry. As a case in point, Adobe, among others, follows a similar patch cadence.
Patch Tuesday coverage has also long been a staple of Computerworld’s commitment to provide critical information to the IT industry. That’s why we’ve gathered together this collection of recent patches, a rolling list we’ll keep updated each month.
In case you missed a recent Patch Tuesday announcement, here are the latest six months of updates.
Ho ho ho! December’s Patch Tuesday delivers three zero-days

The December Patch Tuesday update addresses three zero-days (CVE-2025-64671, CVE-2025-54100, and CVE-2025-62221) but includes surprisingly few total patches (just 57). Notably, Microsoft has not published any critical updates for the Windows platform this month. That said, given the zero-days, we recommend a “Patch Now” release schedule for Windows and Microsoft Office. More info on Microsoft Security updates for December 2025.
Be thankful: November’s Patch Tuesday has just one zero-day

This November Patch Tuesday release offers a much-reduced set of updates, with just 63 Microsoft patches and (only) one zero-day (CVE-2025-62215) affecting the Windows desktop platform. Windows desktops this month require a “Patch Now” plan, and while the severity of these security vulnerabilities is less than it was in October, the testing requirements are still extensive. More info on Microsoft Security updates for November 2025.
For October’s Patch Tuesday, a scary number of fixes

Microsoft this week released 175 updates affecting Windows, Office, and .NET, including server-based updates for Microsoft SQL Server and Exchange Server. There are also four zero-day fixes (CVE-2025-24052, CVE-2025-24990, CVE-2025-2884, and CVE-2025-59230), leading to a “Patch Now” recommendation for Windows.
General support for Windows 10 ended Oct. 14, with Microsoft advising: “At this point technical assistance, feature updates and security updates are no longer provided. If you have devices running Windows 10, we recommend upgrading them to Windows 11.” More info on Microsoft Security updates for October 2025.
For September, Patch Tuesday means fixes for Windows, Office and SQL Server

Microsoft released 86 patches this week with updates for Office, Windows, and SQL Server. But there were no zero-days, so there’s no “Patch Now” recommendation from the Readiness team this month. This is an incredible sign of success for the Microsoft update group. To reinforce this fact, we have patches for Microsoft’s browser platform that have (perhaps for the first time) been rated at a much lower “moderate” security rating (as opposed to critical or important). More info on Microsoft Security updates for September 2025.
For August, a ‘complex’ Patch Tuesday with 111 updates

Microsoft’s August Patch Tuesday release offers a rather complex set of updates, with 111 fixes affecting Windows, Office, SQL Server and Exchange Server — and several “Patch Now” recommendations.
Publicly disclosed vulnerabilities in Windows Kerberos (CVE-2025-53779) and Microsoft SQL Server (CVE-2025-49719) require immediate attention. In addition, a CISA directive about a severe Microsoft Exchange vulnerability (CVE-2025-53786) also requires immediate attention for government systems. And Office is on the “Patch Now” update calendar due to a “preview pane” vulnerability (CVE-2025-53740). More info on Microsoft Security updates for August 2025.
For July, a ‘big, broad’ Patch Tuesday release

With 133 patches in its Patch Tuesday update this month, Microsoft delivered a big, broad and important release that requires a Patch Now plan for Windows, Microsoft Office and SQL Server. A zero-day (CVE-2025-49719) in SQL Server requires urgent action, as do Git extensions to Microsoft Visual Studio. More info on Microsoft Security updates for July 2025.
June Patch Tuesday: 68 fixes — and two zero-day flaws

Microsoft offered up a fairly light Patch Tuesday release for June, with 68 patches to Microsoft Windows and Microsoft Office. There were no updates for Exchange or SQL Server and just two minor patches for Microsoft Edge. But two zero-day vulnerabilities (CVE-2025-33073 and CVE-2025-33053) mean IT admins need to get busy with quick patching plans. More info on Microsoft Security updates for June 2025.
Ho ho ho! December’s Patch Tuesday delivers three zero-days
The December Patch Tuesday update from Microsoft addresses three zero-days (CVE-2025-64671, CVE-2025-54100, and CVE-2025-62221) but includes surprisingly few total patches (just 57). In addition to the unusually low patch count, Microsoft has not published any critical updates for the Windows platform this month. That said, given the zero-days, we recommend a “Patch Now” release schedule for Windows and Microsoft Office. There are no updates for developer tools this month, and only a minor patch for Microsoft Exchange Server.
To help navigate these changes, the team from Readiness has provided a helpful infographic detailing the risks of deploying updates to each platform. (Information about other recent Patch Tuesday releases is available here.)
Known issues

Microsoft has published a longer-than-usual list of known issues for December. Focusing on the actionable issues affecting later (non-ESU) versions, we believe the following deserve attention from enterprise engineers:
- After installing KB5070892 or later updates, Windows Server Update Services (WSUS) does not display synchronization error details within its error reporting. This functionality is temporarily removed to address the Remote Code Execution Vulnerability, CVE-2025-59287.
- A very small number of users may notice that the password icon on the Windows login screen is not visible. This has been an issue since the August 2025 update. Microsoft has published a Known Issue Rollback (KIR) for Pro and Home users. Enterprise deployments should use an updated group policy to reset the icon image.
Microsoft had released an out-of-band update (KB5070881) for Windows Server 2025, which was briefly offered to all Windows Server 2025 machines, regardless of Hotpatch enrollment.
Machines that installed KB5070881 will temporarily stop receiving Hotpatch updates and will instead receive security updates that require a restart. This issue is expected to be resolved in the next baseline release in January 2026.
Major revisions and mitigations

There have been several updates and revisions to previous Microsoft patches this December. Most of them relate to Chromium updates (see the Browser section below). However, these two revisions may require further reading and remedial action:
- CVE-2024-30098: Windows Cryptographic Services Security Feature Bypass Vulnerability. Though this update revision is referenced as a documentation update by Microsoft, a previous release incorrectly identified the managed key provider. This could have led to smart-card authentication failures. If you have experienced this kind of issue since October, Microsoft has published a knowledge note (KB5073121) on how to detect and resolve these kinds of issues.
- CVE-2025-60710: Host Process for Windows Tasks Elevation of Privilege Vulnerability. This patch revision affects all supported versions of Windows. Before you update, Microsoft suggests that you disable the Recall feature. Only enable this feature once you have patched your system with this latest update.
Microsoft Secure Boot certificates used by most Windows devices are set to expire, starting in June 2026. This might affect the ability of certain personal and business devices to boot securely if not updated in time. There is plenty of time — you have been warned.
Each month, the team at Readiness analyzes the latest Patch Tuesday updates from Microsoft and provides detailed, actionable testing guidance. This guidance is based on assessing a large application portfolio and a comprehensive analysis of the Microsoft patches and their potential impact on Windows platforms and application deployments.
For this December 2025 release cycle from Microsoft, we have grouped the critical updates and required testing efforts into different functional areas.
Cloud files and sync providers

Organizations using OneDrive, SharePoint sync, or third-party cloud storage providers should validate sync-root connectivity and file hydration workflows. Testing should cover sync-root connection and disconnection scenarios, including hydration/dehydration, client restarts, client upgrades, unexpected client crashes, account unlink/relink flows, and multi-user scenarios.
Windows Sandbox and virtualization

The kernel and storage virtualization components received updates affecting Windows Sandbox functionality. Organizations using Sandbox for application testing or isolated browsing should install and enable Windows Sandbox, configure folder mappings via configuration files, and validate that mapped folders are accessible, with basic file operations (create, modify, delete) functioning correctly.
Start Menu User Tiles

The Start Menu’s User Tiles UI received updates this month. Testing should validate UI rendering (correct display, alignment, profile images), functionality (click actions, hover states, keyboard navigation), dynamic updates (profile changes reflecting immediately), error handling (missing or corrupted profile data), and performance (no lag or crashes during user switching).
December 2025’s release is stability-focused with no high-risk components. Testing effort should center on cloud file synchronization workflows for OneDrive/SharePoint users, Windows Sandbox folder mapping for virtualization environments, and Start Menu User Tiles for organizations with multi-user workstations. This lighter release provides an opportunity to complete patching before year-end corporate change freezes.
Updates by product family

Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings:
- Browsers (Microsoft Edge)
- Microsoft Windows (both desktop and server)
- Microsoft Office
- Microsoft Exchange and SQL Server
- Microsoft Developer Tools (Visual Studio and .NET)
- Adobe (if you get this far)
Microsoft has released a single update to Microsoft Edge (CVE-2025-62223) and a further 13 Chromium-based updates with this December release. One of the “interesting” things this month is that Microsoft has released a patch for Microsoft Edge on the Apple Mac platform. We may have to start including Mac in our testing regime if Microsoft keeps this up. Please add these low-profile browser changes to your standard release calendar.
Microsoft Windows

We should start this section with an important announcement: there are no critical-rated patches for Windows this December. This is an incredible achievement for Microsoft.
The following product areas have been updated with 38 patches rated important for this December 2025 patch cycle:
- Windows Cloud Files Mini Filter, VSP, Brokering and Windows Resilient File System (ReFS)
- Win32k, DWM and DirectX Graphics Kernel
- Windows Common Log File System
- Windows Remote Access Connection Manager
- Windows Routing and Remote Access Service (RRAS)
- Windows Installer and PowerShell
- Microsoft Hyper-V
- Windows Shell and Camera codecs
Unfortunately, we have three zero-days, known through reported exploitation and public disclosure (CVE-2025-64671, CVE-2025-54100, and CVE-2025-62221), affecting GitHub, PowerShell, and the Windows mini-driver, respectively. Add these updates to your Windows “Patch Now” release schedule (yes, even though they are not rated as critical by Microsoft).
Microsoft Office

The real focus of this month’s testing should be Microsoft Office, with Microsoft releasing four critical-rated updates and a further 12 patches for the productivity suite. This month’s critical updates affect Microsoft Word, Excel, and SharePoint with remote code execution vulnerabilities. Add these Microsoft Office updates to your “Patch Now” schedule.
Microsoft Exchange and SQL Server

Microsoft has released two updates (CVE-2025-64667 and CVE-2025-64666) to Exchange Server this month, both rated as important by Microsoft and requiring a server reboot.
Add these updates to your standard server update schedule.
Developer tools
Microsoft has not published any updates to the .NET or Visual Studio platforms this month. Enjoy the respite.
Adobe (and third-party updates)
It’s back! Adobe Reader has returned to form this month (APSB25-119) with a series of critical updates to the PDF generator of choice. We have been watching recent, rapid updates to Reader this month, hoping that we don’t have any more before the commonly adopted enterprise change control lock-down next Friday.
The Readiness team hopes that next week is not too rushed with last-minute changes and that everyone gets a much-deserved break over the holiday period.
How to make Apple’s App Store Awards great again
Apple announces its App Store Awards each year during the final weeks of the year. Ostensibly, these reward developers for building apps and games that push creative boundaries on Apple devices including the iPhone, iPad, Mac, Watch, TV, and Vision Pro.
Selected by the App Store editors, the prize-winning apps should be seen as cutting-edge products that shine light on the emerging future of app design on Apple’s platforms. To reflect this, Apple introduced a new ‘Cultural Impact’ category this year.
At the crossroads of design and technology
That’s the idea behind the awards, and to some extent this is realized by the collection of apps Apple puts together each year. However, some of the leading lights in Apple analysis are growing a little less impressed by the awards and the selected apps. I suspect this reflects the need to innovate these innovation awards. To figure out how to do that, it makes sense to consider where the awards began.
It was humans, specifically human curation. You see, Apple has understood for years that people want human guides, rather than guidance from intelligent machines and bots. Nowhere is this more apparent than in its Apple Music service, which has human curators to help guide your music discovery on the service. (Having human guidance matters a lot when you have half the smartphone population of some countries using the service.)
Designed for humans
It’s the same at the App Store, where humans manage the process and the store itself. Those humans are, allegedly, the same editors who nominate and select the winners for the App Store Awards. The principle should be that the editors pick the apps that most deserve praise for pushing app design boundaries.
Looking at this year’s awards, Apple has clearly made a few decisions in the background pertaining to how it chooses the apps. This year’s winning iPhone app, Tiimo, is remarkable in that it attempts to be a to-do app for neurodivergent people, which is laudable. What is less remarkable is its lack of a native Mac app. (Users are directed to the web app for desktop interactions.) Apple commentator John Gruber seems quite critical of the choice. I’m less so, but it does strike me that if accessibility is to be seen as a differentiator within the awards, then it would be even better served by giving it a dedicated category.
I’d much rather thousands of app developers were competing to win the coveted Apple Accessibility App of the Year award than that the need for such apps — and admirable goal of serving that need — became an unspoken subtext to the more general iPhone App award. If we are going to shine a light on the bricks on the road to positive change, then let’s elevate the importance of the illumination.
Wisdom of the Terminator
I’ve also come across some App Store Award critics who say that winning apps don’t always do such a great job of following Apple’s own human interface guidelines, or argue that rather than showing us what’s great in the world of app design, the awards actually show us the kind of apps Apple wants to promote.
Everyone is a critic, so when I come across a problem, I can’t help but consider the wisdom of the Terminator. In his self-help book, Arnold Schwarzenegger, the former governor of California and star of the Terminator movie franchise, tells us that if you complain about something, then you should also “come to the table with a potential solution.” If you don’t, then the problem isn’t very big. It’s an approach I use when managing community projects. I use it because it works.
So, what is the solution for the App Store Awards? How can the company breathe life and build credibility into something that seems to have become a bit formulaic? Apple is obviously trying to do something about this — hence the new Cultural Impact category — but it does seem to me that the missing piece in the App Store Awards equation is the people using the apps.
Credibility is earned
With that in mind, perhaps one thing that could improve the awards, boost their credibility, and also make them into something developers compete to receive is to widen the selection and judging panel.
It needs to be a panel rather than a popular vote, as with hundreds of millions of app users, a popular vote would almost certainly favor the most widely known developers, or the ones who paid for PR and marketing services to win votes. (Elections get won by money, as we all know to our cost.) That means it makes more sense to build an awards panel, partly based on users and partly based on credible names from digital design, development, and research.
While Apple’s editors would inevitably contribute to the shortlist, panelists should also be able to raise apps they come across for consideration — and there would need to be guardrails that prevent nepotistic choices. To avoid gaming the results, the identities of the panelists should be confidential until the winners are announced and liable to change each year.
This process, while more complicated, would also be more credible, helping transform the App Store Awards into something even Apple’s worst critics take seriously. Which, given the creativity that is still visible at the App Store, they probably should.
Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe. Also now on Mastodon.
Will Google get smart glasses right this time?
Silicon Valley is abuzz with chatter about Google’s upcoming AI glasses. The trigger was a big announcement on The Android Show on December 8.
The company announced that its first AI glasses will be developed in collaboration with partners like Warby Parker, Samsung, and Gentle Monster, and should launch next year.
Google is planning two categories of smart glasses: AI-powered audio glasses and XR (extended reality) glasses with displays.
(These products should not be confused with Project Aura, resulting from Google’s partnership with XREAL. Aura glasses are tethered XR glasses with a 70-degree field of view, optical see-through displays, and support for Android XR apps and hand-tracking.)
Google’s approach mirrors Meta’s. That company currently offers its Ray-Ban Meta glasses with no display and Meta Ray-Ban Display glasses that do have a display.
Both companies are working to release two-screen AI display glasses by the end of 2027. The binocular glasses will be able to show stereoscopic 3D images and offer a larger virtual display compared to the monocular version.
Like Meta Ray-Ban Display glasses, the Google display glasses will offer a single “screen” in the right lens, which will enable visual information like YouTube Music controls, Google Maps turn-by-turn navigation, and Uber status updates, according to Google.
Also like Meta’s glasses, the right temple has a touchpad for controlling the glasses’ features, and voice commands processed by Gemini Live will also control the features and offer up information.
Google’s AI glasses require connections to Android phones. And we can assume that Apple’s unannounced AI glasses will depend on iPhones. It makes sense to look at this category of device in the initial years as peripheral devices to smartphones. They depend entirely on the smartphone’s cellular and Wi-Fi connectivity, location services and hardware, notifications, phone calls and messaging, podcast and other media apps, social network apps, and so on.
All of Google’s glasses will run on the Android XR operating system, which debuted on the Samsung Galaxy XR headset in October.
Crucially, Google’s glasses will be based on the company’s Gemini AI model, which is currently a far better model than Meta AI. Gemini could prove to be Google’s biggest advantage, along with deep contextual knowledge of people who use Gmail, Google Photos, Google Docs, Tasks, Notes, and other Google products.
Google also has industry-leading services that could make Google’s glasses better: Google Translate and Google Maps, for example. At the announcement, Google demonstrated a real-time translation feature available either through on-screen captions or via audio translation through the speakers. As a user of Ray-Ban Meta’s Live Translate feature, I can tell you that captions are far better, because the audio translations often play when you or the other person are talking, so you understand even less than without the translation.
Lessons from Google Glass
Google Glass was first shown to the public in April 2012, with the Explorer Edition officially launching in 2013, making it one of the first consumer smart glasses to bring a wearable computer into eyewear form. Google terminated the consumer version in January 2015.
I was an early Google Glass user. Yes, I was a glasshole.
Google Glass was way ahead of its time, but looked pretty wild. It had a small, prism-like display positioned above the right eye that showed digital information in the user’s field of view, a novel feature at the time.
You could control Google Glass with voice commands like “OK Glass” to start actions, making it one of the first widely available voice-activated wearable computers. You could also take pictures by winking your eye. Or, you could take photos and record video with a button press, then instantly share them over email or social media.
It offered real-time turn-by-turn navigation through Google Maps, with audio cues and visual directions in the display.
It had a touchpad on the side of the frame for scrolling and selecting options.
The device connected to smartphones via Bluetooth to access the internet, using the phone’s data connection. It synced with Google services like Gmail, Calendar, and Search, allowing hands-free access to messages, appointments, and web queries.
In other words, Google Glass worked much like today’s AI glasses, but without the AI, despite shipping 13 years ago.
A consensus emerged that Google Glass failed. And a huge number of people hated it.
The big question now is: Will Google apply the lessons learned from Google Glass? Here’s what I believe those lessons are:
1. Don’t let them look like electronics products. Google Glass looked very weird, with a big boom hovering over the right eye. They could be worn with or without lenses. But either way, they looked dorky, and the fact that they sat on the face over the eyes meant that whomever you were conversing with couldn’t take you seriously while you were wearing them.
Google’s upcoming AI glasses should look like ordinary glasses. For the record, there’s something akin to an “uncanny valley” with AI glasses. In my opinion, Ray-Ban Meta glasses are on the acceptable side of that divide, and Meta Ray-Ban Display glasses are on the unacceptable side. It’s a fine line.
2. Don’t make others feel like they’re being watched and photographed. The main complaint about Google Glass, and the reason for the epithet “glasshole,” was that many people hated having a camera pointed at them, unsure about whether or not they were being recorded by Google Glass wearers.
Ray-Ban Meta glasses address this uncertainty by notifying others with a light when the camera is on. It’s not clear that this is good enough to satisfy the growing opposition to cameras in glasses.
3. Don’t make it too expensive. Google Glass cost $1,500 (over $2,000 if adjusted for inflation), which priced most of the public out of the product and left them feeling excluded.
4. Don’t forget the killer app. Every platform needs a “killer app” to succeed — the one feature that compels people to buy it. (I spelled out the need for this kind of killer app for wearables in 2014.) Google Glass didn’t have one, other than possibly the camera. In fact, the majority of use was just taking pictures.
It’s likely that Google believes Gemini is that killer app for its new glasses, but I don’t think it is. Between now and ship time, Google needs some super compelling app that sets its glasses apart from what by then will be a crowded market that likely includes Apple.
Predicting Google’s prospects
It’s tough to say whether Google’s glasses are likely to succeed in the market. They probably won’t be the cheapest or most fashionable, nor will they garner a reputation for protecting the privacy of both users and non-users. They won’t be available to iPhone users. Those are Google’s disadvantages.
But Google’s high-quality AI, its access to search, and the fact that so many people run their lives and work on Google products could give the company access to the information and personal data that could make Google’s AI glasses the best product on the market for a billion people.
As a former Google Glass user and defender of the project, including and especially in this space back in the day, I have to say that I’m rooting for Google to succeed at long last.
6 recent Google Chrome features you probably forget to use
Sometimes, the best tech features are the ones you don’t even actively think about — they’re just there when you need ’em, quietly working on your behalf, without any fanfare or intensive effort required.
Sometimes, though, those same sorts of scintillating slivers can have the unintended effect of being so seamlessly integrated into an app or process that you completely forget they’re there and never get into the habit of actually using ’em. Or, worse yet, maybe you never even notice their arrival at all.
With Google’s Chrome desktop browser, we’ve seen so many features pop up over so many months that it’s all too easy to have that happen. I was reminded of that fact when I randomly rediscovered a recent feature in Chrome on my computer the other day and realized I’d never fully explored it when it first showed up in the browser many weeks back.
That prompted me to poke around some more and remember a bunch of other interesting features that similarly came into the Chrome compound somewhere along the way and then got promptly forgotten by my mushy middle-aged man-brain.
So while we typically focus our noggins here on the Android-oriented side of Google’s Chrome creature and the many new options constantly coming into that kingdom, today, we’re gonna pivot and turn our attention to the desktop domain — ’cause if you’re using Chrome on Android (especially for Very Important Business Purposes), there’s a decent chance you’re using it on a computer at least some of the time, too.
Here, without further ado, are six recent Chrome features you’ve probably forgotten.*
* Assuming your aging brain is as mushy as mine.**
** If it isn’t, I apologize. And I’m incredibly impressed by you.***
*** What were we talking about, again?
[Get fresh Googley goodness directly in your inbox with my free Android Intelligence newsletter. Three new things to try every Friday — minimal brainpower required.]
Google Chrome feature #1: A new split view
Up first is the feature that prompted this entire exploration — and, for full disclosure, it’s something I very much rolled my eyes at in amusement back when I first saw it.
It’s a little somethin’ Google’s calling split view, for tabs, and it sounds quite silly on the surface.
The idea is this: When you’re looking at a website on your computer, you might want to look at a second website alongside it. So instead of simply opening up two tabs or even opening two windows and positioning them alongside each other, you can now initiate an Android-reminiscent screen split and see any two tabs side by side within the same single Chrome window.
You can see why I rolled my eyes, right? It sounds so pointless and redundant. Why bother with something like this when there are already so many other simple ways to accomplish something similar?
That’s certainly what I thought. But then, the other day, I stumbled back onto the split tab view setup for the first time since my initial encounter, and I thought, “Huh — you know, I might as well at least try it.”
And I’ve gotta tell ya: It is a surprisingly helpful new productivity booster to have.
Here’s how it works: Anytime you’re viewing a tab, you can either right-click on its title (in the tab bar at the tippity-top of the Chrome window) and look for “Add tab to new split view” in the menu that comes up there or right-click on any link you see within the tab and find the equivalent option there.
Either way you do it, you’ll end up with a screen that looks a little somethin’ like this:
Chrome’s tab split view — who knew?JR Raphael, Foundry
You can then easily view both tabs and move back and forth between ’em without having to mess with messy multiple-window layouts or isolated environments. You can even change the exact ratio of the split by clicking and dragging the divider between the two areas — and you can manage it further by clicking the split-view icon that appears to the left of your address bar and offers up options for separating your two views, reversing their order, or closing either side.
For me, the value has been in areas like writing a document or taking notes whilst viewing a related web page, drafting an email whilst referencing a document, and other such tasks where having two things side by side is an enticing advantage.
Best of all? The feature’s already there and waiting in any Chrome desktop habitat, no matter what kind of computer you’re using. All you’ve gotta do is remember to use it.
Google Chrome feature #2: Instant analysis
Another recently added Chrome desktop option waiting to be remembered is the Android-inspired ability to dive deeper into anything you encounter on this wide, wily ol’ web of ours and inspect it thoroughly with the excellent Google Lens tool.
Google Lens is something we typically talk about in the context of Android. But when Android’s Lens-connected Circle to Search system started gaining steam last year, Google had the idea to bring a version of that same smartness over to the desktop side for us to enjoy in that environment as well.
So here’s how to find it, in case you’d also forgotten:
- Right-click anywhere in any Chrome tab you’ve got open.
- In the menu that pops up, find and click the option to “Search with Google Lens.”
- Then use your mouse or trackpad to select any area of your screen that you want to investigate or learn more about.
Once you do, Lens will spring into action and show you more info about whatever you’ve selected — whether it’s an image, some text, text within an image, you name it.
Chrome’s desktop Lens option is like Circle to Search for your computer.JR Raphael, Foundry
You can click on any of the results as well as copy the text (even if it was inside an image and previously not something that could be copied), translate it, save it as a new image — all sorts of interesting possibilities.
The powers are all there. It’s up to you to embrace ’em.
Google Chrome feature #3: Instant interaction
If a more chat-driven interaction is what you’re after, Google’s Gemini AI bot is also at the ready within the Chrome desktop browser — and in the right sort of scenario, it could actually be useful.
You might, for instance, want to ask Gemini about something you see on a page — maybe a particular laptop model that you’re curious to learn the cost of or a sprawlingly long scientific article that you want to summarize and see translated into plain English.
Whatever the case may be, the easiest way to call Gemini for page-specific questioning is to again right-click the tab’s title at the top of the Chrome window — and this time, look for the option to “Share tab with Gemini” in that context menu.
That’ll beam the page into Chrome’s built-in Gemini portal, where you can then ask away to your heart’s content.
See? Gemini’s Chrome presence can be helpful at times.JR Raphael, Foundry
Just remember: As with any current-day large-language model system, you can’t always believe what Gemini tells you. But, if nothing else, it can be a helpful starting point for a deeper dive into some specific topic and a way to kick off your own more intricate and fact-grounded probe.
Google Chrome feature #4: Easier reading
Goog almighty, the modern web sure can be an eyesore to look at. (Insert awkward eye darting here.)
Believe it or not, though, Chrome actually has a fantastic way to improve your web-wide reading experience. It eliminates annoying ads, over-the-top pop-ups, and unfortunate font and color choices, too. And, as a welcome bonus, it doesn’t even leave you feeling guilty that you’re depriving your favorite publisher and the people who work for it of the critical ad revenue that helps them keep publishing.
My friend, meet — or maybe just reacquaint yourself with — Chrome’s remarkable reading mode.
Anytime you’re viewing an article of any sort, simply right-click anywhere within the window and find the option to “Open in reading mode.”
And hey — how ’bout that?!
Before and after. ‘Nuff said.JR Raphael, Foundry
Right there, alongside the regular version of the page, is a cleaned-up, distilled-down version that you can actually read without wanting to gouge your eyes out. (Insert additional awkward eye darting here.)
You can even customize everything about the article’s appearance in the controls at the top of the reading mode area — changing its font style, font size, color theme, even line height and line spacing (if you really wanna get wild).
And since the page loads alongside the original, the site still gets your view and the credit for all of its ads displaying — giving you a clear conscience to complement your non-cringe-inducing read.
Win-win, I’d say.
Google Chrome feature #5: A reading companion
While we’re looking at that reading mode option, we also need to take note of an inconspicuous set of icons resting within its upper border.
See that little play button and the three options alongside it?
The other side of Chrome’s reading mode.JR Raphael, Foundry
Yup — those are the ones.
Clicking the play button will cause Chrome to read the text from the reading mode window out loud to you, which can be a handy way to ingest info when you’re also ingesting your lunch. (Mmm…lunch.) The buttons next to it will let you change the speed of the reading and the specific voice used, among other adjustments.
And for an extra easily-overlooked addition, note, too, that you can also highlight specific segments of text within the reading mode area and then click the play button. That’ll cause Chrome to read only those exact segments aloud — an interesting way to share specific snippets with a room of colleagues, koalas, or maybe even koala colleagues, depending on your current workplace situation.
Google Chrome feature #6: Tab torque
As a certified lifelong tab hoarder, you’d think I’d remember to use this next Chrome feature. But somehow, I never do.
It’s a super-simple way to switch tabs using only your keyboard — and to find the exact tab you want to toggle to, no matter which window it’s within or how buried on your desktop it might be.
Just hit Ctrl-Shift-A (or Cmd-Shift-A, if you’re one of those highfalutin Mac-owning marmosets). No matter where you are in Chrome or what else you’re working on, your browser will pop up a handy little window with all your open and recently closed tabs.
You can then either use your keyboard’s arrows to move to the one you want or just start typing the title of the page you’re looking for — and, as Chrome narrows down the list to match, hit Enter when the right one is highlighted.
The time-saving tab search switcher.JR Raphael, Foundry
Yes, please — and thank you.
A bonus feature: Instant device beaming
This last feature isn’t technically part of our same collection, ’cause it isn’t especially recent at all. But it’s one of those things I think a lot of people forget (or never even realize) is possible — and it’s so forkin’ useful, I’d be remiss not to mention it as part of this conversation.
So here ’tis: As long as you’re signed into the same Google account within Chrome on your various devices, the Chrome desktop browser has a supremely handy system for beaming any page you’re viewing on your computer directly over to your favorite Android phone or tablet.
It’s a swift ‘n’ simple way to send something you opened at work onto your mobile device so you remember to look at it later — or maybe just leisurely read through it on your lunch. (Mmm…lunch.)
With any page you’re viewing, click Chrome’s three-dot main menu icon — in the browser’s upper-right corner — then hover over “Cast, save, and share” and select “Send to your devices.”
Now, this is how wireless sharing should happen.JR Raphael, Foundry
It’s about as out of the way and buried as can be, but man alive, is it a treasure you’ll embrace and appreciate once you get yourself in the habit of relying on it.
And that’s something that, with enough training and practice, even the mushiest old mammal brain can be conditioned to do.
Related reading: 9 Google Chrome features you really should be using
Got Android? Get my Android Intelligence newsletter for three fresh tips each Friday, straight from me to you.
OpenAI: Latest news and insights
OpenAI is an artificial intelligence organization composed of the non-profit OpenAI, Inc. and several for-profit subsidiaries. The company is perhaps best known for its ChatGPT chatbot, which launched in 2022, kicking off a period of massive disruption in the tech industry and beyond.
A complicated and increasingly contentious relationship with Microsoft, ongoing legal issues over copyright infringement, and frequent product announcements keep OpenAI in the news. Follow this page and never miss a beat.
Latest OpenAI news and analysis:

OpenAI launches GPT-5.2 as it battles Google’s Gemini 3 for AI model supremacy
December 12, 2025: OpenAI has released GPT-5.2, claiming significant gains in the AI model’s ability to complete real-world business tasks to an “expert level” compared to GPT-5.1, released in November. The new model offers major improvements across a range of benchmarks, the company said.
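When a vendor ships a new model version like this, a common enterprise practice is to pin an explicit model ID rather than rely on a “latest” alias, so upgrades from GPT-5.1 to GPT-5.2 happen deliberately after testing. The sketch below is illustrative only: the model names come from this article, but the payload shape merely mirrors common chat-completion APIs and is an assumption, not a documented OpenAI spec.

```python
# Illustrative sketch of pinning a model version in an API request payload.
# Field names ("model", "messages", "role", "content") are assumptions
# modeled on typical chat-completion request shapes.

def build_request(prompt: str, model: str = "gpt-5.1") -> dict:
    """Assemble a chat request pinned to a specific model version."""
    return {
        "model": model,  # an explicit version, never an implicit "latest"
        "messages": [{"role": "user", "content": prompt}],
    }

# Opting in to the newer model is an explicit, reviewable change.
req = build_request("Summarize this quarter's results", model="gpt-5.2")
print(req["model"])
```

Keeping the model ID in one place like this makes the eventual 5.1-to-5.2 migration a one-line, testable diff rather than a silent behavior change.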
What does OpenAI’s ‘Code Red’ warning mean for Microsoft?
December 10, 2025: OpenAI founder and CEO Sam Altman sent out a memo to OpenAI employees declaring a “Code Red” emergency and focusing all company efforts on improving ChatGPT. The reason? Google’s newly released Gemini 3 model beat the pants off GPT-5.1.
OpenAI to acquire AI training tracker Neptune
December 3, 2025: OpenAI has agreed to acquire Neptune, a startup specializing in tools for tracking AI training. Neptune promptly announced it is withdrawing its products from the market.
OpenAI admits data breach after analytics partner hit by phishing attack
November 27, 2025: OpenAI suffered a significant data breach after hackers broke into the systems of its analytics partner Mixpanel and successfully stole customer profile information for its API portal, the companies have said in coordinated statements.
OpenAI rolls out GPT-5.1 to refine ChatGPT with adaptive reasoning and personalization
November 13, 2025: OpenAI has introduced GPT-5.1, an update to its GPT-5 model, aiming to deliver faster responses, improved reasoning, and more flexible conversational controls as the company works to refine its ChatGPT experience for both consumer and enterprise users.
OpenAI spends even more money it doesn’t have
November 3, 2025: OpenAI’s overdraft continued its upward trajectory today when the company signed a multi-year $38 billion contract with AWS to have it run its AI workloads. The latest spending spree adds to the incremental $250 billion of Azure services it pledged to buy last week, and, of course, to the commitment it has made towards building Stargate data centers with Oracle.
OpenAI seeks to automate ‘computer use’ for Macs in the enterprise
October 24, 2025: While AI bots have begun mastering tasks in browsers and on Windows, Mac-using enterprises have largely been overlooked, until now. OpenAI aims to change that with its acquisition of generative AI interface maker Software Applications Incorporated.
Enterprises should not install OpenAI’s new Atlas browser, analysts warn
October 24, 2025: Companies that might be eyeing OpenAI’s new ChatGPT Atlas browser should not rush to use it because of potential security risks, analysts said this week. The browser was unveiled on Tuesday after it had been teased for months as a work in progress. It is currently available for macOS only.
Has OpenAI shown us a future for Safari?
October 23, 2025: Has OpenAI shown us the future of Safari? In one way it has, because its new Atlas browser shows these generative AI (genAI)-based apps are no longer just windows to the web — they’re becoming intelligent copilots for our digital lives.
OpenAI–Broadcom alliance signals a shift to open infrastructure for AI
October 14, 2025: OpenAI has partnered with Broadcom to co-develop and deploy its first in-house AI processors. The move could reshape data center networking dynamics and chip supply strategies as the ChatGPT maker races to secure more computing power for AI workloads.
OpenAI Codex rivals Claude Code
October 13, 2025: The OpenAI Codex gives software developers a first-rate coding agent in their terminal and their IDE, along with the capability to delegate background tasks to agents in the cloud.
OpenAI Codex adds SDK, admin tools, Slack integration
October 10, 2025: Codex is now generally available. Since being launched as a research preview in May, Codex, OpenAI’s AI-powered software engineering agent that can work on tasks in parallel, has added Slack integration, an SDK, and admin tools.
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
September 18, 2025: OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering.
OpenAI, Microsoft discuss shape of future relationship
September 12, 2025: Microsoft and OpenAI are in talks about the future of their partnership, they said in a joint statement, without providing details. Separately, OpenAI said it wants to go ahead with its previously announced plan to turn its for-profit business into a public benefit corporation, in which its nonprofit organization would own a $100 billion stake.
What Oracle’s $300B OpenAI deal means for enterprise cloud strategy
September 11, 2025: A single $300 billion contract has seemingly transformed Oracle from a traditional ERP and database vendor into a cloud computing powerhouse. The company has signed a five-year computing power commitment with OpenAI, contributing to a reported 359% surge in future contract revenue this quarter.
OpenAI acquires Statsig to speed up generative AI-based product launches
September 3, 2025: OpenAI is acquiring Statsig, a Washington-based product development platform startup, for $1.1 billion to speed up its generative AI-based product launches and accelerate iteration cycles of existing products such as Codex and ChatGPT.
OpenAI drops GPT-5: smarter, sharper, and built for the real world
August 7, 2025: More than two years after GPT-4’s release, OpenAI has unveiled GPT-5, boasting sharper reasoning, multimodal input, better math skills, and cleaner task execution, according to the company.
OpenAI challenges rivals with Apache-licensed GPT-OSS models
August 6, 2025: OpenAI has released its first open-weight language models since GPT-2, marking a significant strategic shift as the company seeks to expand enterprise adoption through more flexible deployment options and reduced operational costs. The two new models — gpt-oss-120b and gpt-oss-20b — deliver what OpenAI describes as competitive performance while running efficiently on consumer-grade hardware.
Google snatches Windsurf execs in a $2.4B deal, derailing OpenAI’s biggest acquisition yet
July 14, 2025: Google has recruited CEO Varun Mohan and co-founder Douglas Chen of AI coding startup Windsurf in a $2.4 billion talent acquisition deal, just two months after Windsurf agreed to be acquired by OpenAI for $3 billion. Mohan, Chen, and select research and development staff will join Google’s DeepMind AI division.
OpenAI and Perplexity enter browser wars to take on Chrome
July 10, 2025: Google Chrome’s dominance in the browser market is facing new threats as OpenAI and Nvidia-backed Perplexity unveil AI-powered browsers aimed at reshaping how users interact with the web. Comet is a new web browser with built-in AI search capabilities, Perplexity said.
Microsoft brings OpenAI-powered Deep Research to Azure AI Foundry agents
July 8, 2025: Microsoft added OpenAI-developed Deep Research capability to its Azure AI Foundry Agent service. The move is designed to let developers use Deep Research API and SDK to embed, extend, and orchestrate Deep Research-as-a-service across data and existing systems.
Oracle to power OpenAI’s AGI ambitions with 4.5GW expansion
July 3, 2025: OpenAI has signed a significant compute leasing deal with Oracle, under which it will access 4.5 gigawatts (GW) of data center power, marking one of the largest single leasing arrangements in the industry.
OpenAI tests Google TPUs amid rising inference cost concerns
July 1, 2025: OpenAI has begun testing Google’s Tensor Processing Units (TPUs), a move that — though not signaling an imminent switch — has raised eyebrows among industry analysts concerned about the escalating costs of AI inference and its effects.
Microsoft/OpenAI AGI argument unlikely to impact enterprise IT
June 26, 2025: The contract between the two AI giants has an exit clause once AGI is achieved. The problem: It is impossible to prove when that happens. Either way, IT execs at Macy’s and Bank of America doubt it will matter.
OpenAI productivity suite could change the way users create documents
June 26, 2025: OpenAI’s planned productivity suite could dismantle traditional habits of how users create and consume documents in the same way the company changed browsing and search habits.
o3-pro may be OpenAI’s most advanced commercial offering, but GPT-4o bests it
June 24, 2025: In a head-to-head comparison of the two models, researchers found that o3-pro is far less performant, reliable, and secure, and does an unnecessary amount of reasoning. Notably, o3-pro consumed 7.3x more output tokens, cost 14x more to run, and failed in 5.6x more test cases than GPT-4o.
Microsoft and OpenAI: Will they opt for the nuclear option?
June 24, 2025: The fight between Microsoft and OpenAI over what Microsoft should get for its $13 billion investment in the AI company has gone from nasty to downright toxic, with each of the companies considering strategies against the other that can only be described as their nuclear options.
OpenAI walks away from Scale AI — triggering industry-wide rethink of data partnerships
June 19, 2025: OpenAI has ended its long-standing partnership with Scale AI, the company that powered some of the most complex data-labeling tasks behind frontier models such as GPT-4.
OpenAI’s o3 price plunge changes everything for vibe coders
June 18, 2025: o3 used to be too slow and too expensive for daily coding — no longer. The latency is now bearable, the price is sane, and the chain-of-thought pays off.
Sam Altman: Meta tried to lure OpenAI employees with billion-dollar salaries
June 18, 2025: After reports suggested Meta has tried to poach employees from OpenAI and Google DeepMind by offering huge compensation packages, OpenAI CEO Sam Altman weighed in, saying those reports are true.
OpenAI-Microsoft tensions escalate over control and contracts
June 17, 2025: The relationship between OpenAI and Microsoft is under growing strain amid extended talks over OpenAI’s restructuring, with OpenAI reportedly considering antitrust action over Microsoft’s influence in the partnership.
OpenAI’s MCP move tempts IT to trust genAI more than it should
June 16, 2025: OpenAI late last month announced changes to make it much easier to give its genAI models full access to any software using Model Context Protocol (MCP). Here’s why that’s a bad idea.
OpenAI launches o3-pro, slashes o3 price by 80% in bid to widen AI lead
June 11, 2025: OpenAI has unveiled its most advanced AI model to date, the o3-pro, which surpasses competitors on key benchmarks and replaces the o1-pro. The o3-pro is now available for ChatGPT Pro and Team users, as well as through the developer API, with access for enterprise and education sectors beginning next week.
What Microsoft hopes to get from its breakup with OpenAI
June 11, 2025: The once-tight bond between Microsoft and OpenAI has been fraying for well over a year — and it’s getting worse. What the two companies want from each other now is very different from when Microsoft made its original $13 billion investment.
Oracle to spend $40B on Nvidia chips for OpenAI data center in Texas
May 26, 2025: Oracle is reportedly spending about $40 billion on Nvidia’s high-performance computer chips to power OpenAI’s new data center in Texas, marking a pivotal shift in the AI infrastructure landscape that has significant implications for enterprise IT strategies.
OpenAI’s Skynet moment: Models defy human commands, actively resist orders to shut down
May 30, 2025: OpenAI’s most advanced AI models are showing a disturbing new behavior: they are refusing to obey direct human commands to shut down, actively sabotaging the very mechanisms designed to turn them off.
Jony Ive and OpenAI plan ‘bicycles’ for 21st-century minds
May 21, 2025: OpenAI has announced that it will purchase io, the AI startup founded by acclaimed former Apple designer Sir Jony Ive, who helped create the iMac, iPod, and iPhone.
OpenAI launches Codex AI agent to tackle multi-step coding tasks
May 19, 2025: OpenAI’s most advanced AI coding agent, Codex, will bring parallel task automation to developers — but analysts caution that speed without scrutiny invites “silent failures.”
Cisco taps OpenAI’s Codex for AI-driven network coding
May 16, 2025: Cisco is working with OpenAI and its newly released Codex software engineering agent to give network engineers access to better tools for writing, testing and building code.
OpenAI’s IPO aspirations prompt rethink of Microsoft alliance
May 12, 2025: Microsoft and OpenAI are renegotiating their multibillion-dollar partnership deal to better align with each company’s evolving goals in the artificial intelligence race.
OpenAI hires Instacart CEO Fidji Simo to oversee customer-facing apps
May 8, 2025: The hire indicates that OpenAI’s roadmap will involve more structured, productized offerings rather than just API access.
OpenAI offers help promoting AI outside the US, but analysts question why countries would accept
May 7, 2025: OpenAI, acting as part of the US government-led Stargate AI project, rolled out a program called OpenAI for Countries. The idea is for Stargate to help other countries create their own genAI environments, including data centers and genAI models.
OpenAI reaffirms nonprofit control, scales back governance changes
May 6, 2025: OpenAI has scrapped plans to reduce its nonprofit parent’s oversight and will keep its existing governance structure intact, a move that limits CEO Sam Altman’s influence and responds to mounting external pressure.
OpenAI to acquire AI coding tool Windsurf for $3B
May 6, 2025: The acquisition comes just months after Windsurf explored funding at this same valuation from investors, highlighting the premium being placed on specialized AI coding capabilities, according to reports.
Former OpenAI employees urge regulators to halt company’s for-profit shift
April 23, 2025: A broad coalition of AI experts, economists, legal scholars, and former OpenAI employees is urging state regulators to keep OpenAI’s nonprofit foundation in control of the company.
OpenAI’s new models can ‘think with pictures’
April 17, 2025: OpenAI has released o3 and o4-mini, two reasoning AI models designed to be extra good at programming, math, and science and that can use images to “think,” according to Engadget. This means that users can upload sketches or diagrams, for example, and even if they are of low quality, o3 and o4-mini will understand what is meant.
OpenAI GPT-4.1 models promise improved coding and instruction following
April 15, 2025: The GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano models, available only via the API, will provide better performance than GPT-4o and GPT-4o mini at a lower price, OpenAI said.
OpenAI slammed for putting speed over safety
April 11, 2025: According to a Financial Times report, the ChatGPT maker is now assigning staff and third-party groups only a few days to assess the risks and performance of its latest large language models (LLMs), compared with the several months they were given earlier.
OpenAI fears irreparable harm from Musk, files countersuit
April 10, 2025: OpenAI has filed a countersuit against Elon Musk, accusing the billionaire of a sustained campaign to damage the company and urging a US federal court to block further actions it described as unlawful and disruptive. The legal filing, submitted in a California district court, marks the latest escalation in a dispute between Musk and the AI startup he helped establish in 2015.
Senators probe Google-Anthropic, Microsoft-OpenAI deals over antitrust concerns
April 9, 2025: Democratic Senators Elizabeth Warren and Ron Wyden have launched a formal inquiry into partnerships between tech giants Google and Microsoft, and AI startups, demanding detailed information about arrangements they fear may be circumventing antitrust scrutiny while consolidating power in the rapidly evolving AI market.
Anthropic’s and OpenAI’s new AI education initiatives offer hope for enterprise knowledge retention
April 4, 2025: Two of the biggest names in artificial intelligence are independently developing new AI tools that encourage learning, at a time when the technology has been criticized for dumbing down smart users in the enterprise and discouraging critical thinking. While the new initiatives from OpenAI and Anthropic are aimed at transforming how AI is used in higher education, the opportunities they open up extend beyond universities.
Amazon, OpenAI, and China’s Zhipu unveil new AI tools amid intensifying competition
April 1, 2025: A wave of new AI products is hitting the market, signaling a shift toward more autonomous, task-completing systems that could reshape how businesses and consumers interact with digital services: Amazon has unveiled Nova Act, an AI agent designed to operate a web browser much like a human user; OpenAI said it will release an open-weight language model; and China’s Zhipu AI introduced a free AI assistant aimed at strengthening its position in the domestic market and competing with Western tech giants.
OpenAI, Google AI data centers are under stress after new genAI model launches
March 28, 2025: New generative AI models introduced by Google and OpenAI have put the companies’ data centers under stress — and both companies are trying to catch up to demand. OpenAI’s CEO Sam Altman tweeted that his company was temporarily restricting the use of GPUs after overwhelming demand for its image generation service on ChatGPT.
Microsoft abandons data center projects as OpenAI considers its own, hinting at a market shift
March 26, 2025: OpenAI has privately discussed building and operating its first data center to house storage, which is essential for developing sophisticated AI models. Microsoft, on the other hand, has pulled back on its buildouts, canceling data center projects in the US and Europe.
OpenAI calls for US to centralize AI regulation
March 13, 2025: OpenAI executives think the federal government should regulate artificial intelligence in the US, taking precedence over often more restrictive state regulations.
New tools from OpenAI help companies create their own AI agents
March 12, 2025: OpenAI launched Responses, a new API intended to eventually replace Assistants. The big draw? Responses provides a number of new tools that companies and organizations can use to create their own AI agents.
Microsoft is developing its own AI models to compete with OpenAI
March 10, 2025: Reports suggest Microsoft has decided to seriously challenge DeepSeek and OpenAI by developing its own set of reasoning AI models called Microsoft AI (MAI). If successful, Microsoft would eventually not have to use its partner OpenAI’s o1 models in Copilot.
Microsoft-OpenAI investigation closed by UK regulators
March 5, 2025: The UK’s Competition and Markets Authority (CMA) spent a great deal of time deciding whether it should investigate Microsoft’s investment in OpenAI as a potential merger situation, but in the end, decided to open and close the investigation within 24 hours.
OpenAI revamps AI roadmap, merging models for a leaner future
February 13, 2025: OpenAI will integrate “o3” into GPT-5 instead of releasing it separately, streamlining adoption while signaling a shift toward fewer, more controlled AI models amid rising competition and cost pressures.
Musk’s $97B offer to buy OpenAI rejected as leadership stands firm
February 11, 2025: In a message to staff, Altman said the board has no intention of considering Musk’s offer, stating that the proposal does not align with OpenAI’s mission.
OpenAI launches deep research agent for multi-step research tasks
February 3, 2025: Hot on the heels of its launch of the o3-mini model, OpenAI announced another component for ChatGPT that allows the generative AI tool to do more in-depth research. “Deep research is built for people who do intensive knowledge work in areas like finance, science, policy, and engineering and need thorough, precise, and reliable research,” OpenAI said in a blog post announcing the new capability.
OpenAI unleashes o3-mini reasoning model
January 31, 2025: OpenAI released the latest model in its reasoning series, o3-mini, both in ChatGPT and its application programming interface (API). It had been in preview since December 2024.
Indian media houses rally against OpenAI over copyright dispute
January 27, 2025: The legal heat on OpenAI in India intensified as digital news outlets owned by billionaires Gautam Adani and Mukesh Ambani joined an ongoing lawsuit against the ChatGPT creator. They were joined by some of the largest news publishers in India, including the Indian Express and Hindustan Times, and members of the Digital News Publishers Association (DNPA), which includes major players like Zee News, India Today, and The Hindu.
Altman now says OpenAI has not yet developed AGI
January 20, 2025: Confusion over whether OpenAI’s o3-mini has reached the major milestone of artificial general intelligence (AGI) or not deepened following a post on X by CEO Sam Altman that completely contradicts what he said two weeks earlier in an interview with Bloomberg.
Microsoft sues overseas threat actor group over abuse of OpenAI service
January 13, 2025: Microsoft has filed suit against 10 unnamed people (“Does”), who are apparently operating overseas, for misuse of its Azure OpenAI platform, asking the Eastern District of Virginia federal court for damages and injunctive relief.
With o3 having reached AGI, OpenAI turns its sights toward superintelligence
January 6, 2025: OpenAI CEO Sam Altman has reinvigorated discussion of artificial general intelligence (AGI), boldly claiming that his company’s newest model has reached that milestone.
Now US government agencies can use OpenAI’s ChatGPT too
January 28, 2025: OpenAI has rolled out ChatGPT Gov, a version of its flagship frontier model specifically tailored to US government agencies. The platform has many of the same capabilities as OpenAI’s other enterprise products, including access to GPT-4o and the ability to build custom GPTs — and it also features a much higher level of security than ChatGPT Enterprise.
OpenAI debuts AI agent Operator to transform web task automation
January 24, 2025: OpenAI has unveiled “Operator,” a new AI agent designed to perform web-based tasks, offering potential productivity enhancements for enterprises. The tool enables interaction with on-screen elements, positioning it as a solution for automating routine processes in business workflows amid growing competition in the generative AI space.
OpenAI opposes data deletion demand in India citing US legal constraints
January 23, 2025: OpenAI has informed the Delhi High Court that any directive requiring it to delete training data used for ChatGPT would conflict with its legal obligations under US law. The statement came in response to a copyright lawsuit filed by the Reuters-backed Indian news agency ANI, marking a pivotal development in one of the first major AI-related legal battles in India.
OpenAI, SoftBank, Oracle lead $500B Project Stargate to ramp up AI infra in the US
January 22, 2025: Several large technology firms including OpenAI, SoftBank, Oracle, Nvidia, and MGX have partnered to set up a new company in the US to ramp up AI infrastructure in the country.
OpenAI is losing money on its pricey ChatGPT Pro subscription
January 7, 2025: OpenAI CEO Sam Altman, in a post on X, says the AI company is currently losing money on its ChatGPT Pro subscription. “People are using it much more than we expected,” he wrote.
Fine-tuning Azure OpenAI models in Azure AI Foundry
January 2, 2025: Microsoft Azure’s new AI toolkit makes it easy to customize OpenAI large language models for your applications.
OpenAI still hasn’t released tools to deny data collection
January 2, 2025: OpenAI has failed to release the promised tool to let users opt out of or customize data collection, which the company said would be available by 2025, according to TechCrunch.
Cloudflare has blocked 416 billion requests from AI bots in the last six months
Cloudflare CEO Matthew Prince says in a new interview with Wired that the company has blocked more than 416 billion AI bot requests since July 1, 2025. It’s all part of the company’s initiative to help customers stop AI bots from scraping their content without payment.
According to Prince, AI represents a platform shift that will change the entire internet business model, and Cloudflare wants to protect an open ecosystem where both small and large players can operate on a level playing field.
He particularly criticizes Google for merging its search and AI crawlers. This means that anyone who blocks AI training also disappears from Google’s search index. Prince says Google is abusing its dominant position here.
“It’s almost like a Marvel movie — the hero of the last film becomes the villain of the next one,” Prince told Wired. “Google is the problem here. It is the company that is keeping us from going forward on the internet, and until we force them — or hopefully convince them — that they should play by the same rules as everyone else and split their crawlers up between search and AI, I think we’re going to have a hard time completely locking all the content down.”
Prince hopes that industry pressure and possible future regulation will lead to new and fairer business models where AI companies pay for licensed content instead of scraping it for free.
At Apple, identity resilience supports future security
At Apple, maintaining the highest possible security and privacy across its platforms starts with the standards it supports.
The company, known for its tight integration between hardware and software, deliberately takes choices that help it promote those priorities — even at the cost of integration with third-party software and services.
It is also true to say that Apple continues to improve and iterate its enterprise offerings. “It’s so great to see the momentum [around Macs in the enterprise],” Jeremy Butcher, who handles business product marketing at Apple, told me last month. “As you know, it’s very intentional.”
Understand the future, look to the past
Apple’s decision to remove kernel extension (kext) support back in 2020 is a perfect illustration. Way back then, the company decided kexts were a potential security problem and gave warning of its intention to remove support from macOS. Some developers complained at the removal — and then we later had the huge CrowdStrike disaster on Windows, which effectively justified Apple’s decision.
While the complaints about Apple’s decision were certainly loud, and while it is true that the removal of kext support gave some developers problems, the result was improved security across Apple’s ecosystem.
Apple thinks the same way when it comes to identity management across its platforms. That’s because it knows identity is critical to evolving endpoint security architecture, and to build that, you must secure your platforms on the strongest available foundations — particularly for products that support its enterprise presence.
Identity, it’s the answer, don’t you see?
At WWDC 2025, Apple improved Platform Single Sign On, bringing authentication with PSSO into Setup Assistant during Automated Device Enrollment. This hit the market with macOS 26 only a few weeks ago. However, to enjoy this implementation, identity providers must adopt a small set of modern frameworks, such as OAuth or OIDC. The idea behind this is that for Identity Providers (IdPs, much used in enterprise security) to deliver optimized platform support across Macs, they must support the latest frameworks.
That means they can’t rely on custom stacks, as Apple can’t necessarily ensure their security, which means they must support Apple’s Extensible SSO frameworks to deliver seamless sign-on.
The principle is that if you want to deploy the best possible Apple user experiences, you must align with the company’s decisions around supported frameworks.
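For context on what those frameworks involve in practice: OIDC layers identity on top of OAuth 2.0’s authorization-code flow, in which a client directs the user to the IdP’s authorization endpoint and later exchanges the returned code for tokens. A minimal sketch of how such an authorization request is constructed (the issuer URL, client ID, and redirect URI below are hypothetical placeholders, not values from any real IdP):

```python
from urllib.parse import urlencode

def build_oidc_auth_url(issuer: str, client_id: str,
                        redirect_uri: str, state: str) -> str:
    """Build an OIDC authorization-code request URL."""
    params = {
        "response_type": "code",     # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",   # the "openid" scope marks this as OIDC
        "state": state,              # CSRF token, echoed back by the IdP
    }
    return f"{issuer}/authorize?{urlencode(params)}"

# Hypothetical values, for illustration only
url = build_oidc_auth_url(
    issuer="https://idp.example.com",
    client_id="mac-sso-client",
    redirect_uri="https://app.example.com/callback",
    state="xyz123",
)
print(url)
```

The point of Apple’s requirement is that flows like this are standardized and auditable end to end, which a proprietary IdP stack is not.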
Transition is an opportunity
This may all sound a little unfair, but Apple’s focus isn’t on being fair to IdPs, it’s about delivering consistently secure experiences for its users, and, as CrowdStrike showed, strong security cannot exist without strong foundations.
This can be tough news for some businesses, particularly those still enduring the slow but inevitable migration away from their legacy platforms. During that transition it is inevitable some companies will seek a middle ground between Apple’s expression of SSO and the needs of their legacy platforms, even if Apple offers better experiences. Given the company’s track record for making solid security decisions, it seems to me likely to only be a matter of time until IdPs that don’t currently support Apple’s chosen APIs will end up doing so.
Where we are today on that journey is an opportunity, of course. Many in the Apple-focused enterprise MDM space will reach out to companies at this point in their transition with compromise solutions that give some of what they need in terms of Apple SSO while also handling noncompliant tech.
That’s just good business. It’s a profit center for them and can also be seen as a positive reflection of the vibrancy of Apple’s wider enterprise ecosystem and its ability to shape itself to meet ever-emerging enterprise needs.
Apple’s flexible ecosystem
There’s always going to be money to be made bridging the gap between the central Apple experience and third-party platforms, software, and services. That’s OK, of course, as it means the core Apple experience is maintained, and users like you and me can continue to rely on the platform delivering the best possible security experience.
Apple’s decision to limit identity provider support to a small set of modern frameworks is causing consternation. But, eventually, most of those IdPs will dig a little deeper in their development investment and build solutions Apple can accept — it is important to note that the current macOS that supports recent changes in Apple’s implementation has only been available for a matter of weeks. While we wait for them to catch up, there are plenty of Apple partners ready to help bridge the gap.
US state attorneys general ask AI giants to fix ‘delusional’ outputs
After disturbing mental health incidents involving AI chatbots, state attorneys general sent a letter to major AI companies, warning them to fix “delusional outputs” or risk legal action, TechCrunch reported Wednesday.
The letter, signed by 42 attorneys general from US states and territories, asked Microsoft, OpenAI, Google, Anthropic, and others to implement new safeguards to protect users.
It called for safeguards including new incident reporting procedures to notify users when chatbots produce harmful outputs, and transparent audits by third parties of large language models for signs of delusional or sycophantic ideations.
Those third parties could include academics and civil society groups, and should be allowed to “evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company,” the letter said.
Android 16 Upgrade Report Card: Upgrade winter
2025’s been a weird year when it comes to Android upgrades.
For ages now, Google’s given us each major annual Android upgrade in the latter half of the year — anywhere from late summer to early fall. And device-makers have then sent that software out to their users in the weeks (and, at times, months) that follow. It’s been a largely reliable and predictable cadence, at least as far as Google’s core initial release is concerned.
This year, the Android team decided to shake things up considerably. 2025’s Android 16 update landed in early June, seemingly with the goal of giving manufacturers more time to build the software into their preholiday device shipments. Our Googley gurus also promised us a second operating system update sometime toward the end of the year, though that turned out to be more of a feature drop than a full-fledged numbered Android version.
But even so: Having the major annual Android launch of the year land in June instead of September (compared to Android 15 and most other recent releases) is a pretty significant shift. And so I was quite curious to see how it’d affect upgrade delivery timing, particularly when most Android device-makers outside of Google are anything but consistently commendable in that department.
The answer, it seems, is that the more things change, the more they stay the same.
Now that we’re six full months past the launch of Android 16, it’s time to step back and look at who’s making upgrades a priority and who’s treating ’em as an afterthought. Only you can decide how much this info matters to you (hint: It oughta matter — a lot! — particularly if you care about business-important areas like privacy and security). But whether you find post-sales software support to be a top priority or an irrelevant afterthought, as always, you deserve to be armed with all the data that empowers you to make your own fully informed future buying decisions.
Here’s what the cold, hard data shows us about which Android device-makers are making the grade and which are failing to support their highest-paying customers properly once the wallets have been put away.
Google
- Length of time for upgrade to reach current flagships: 0 days (50/50 points)
- Length of time for upgrade to reach previous-gen flagships: 0 days (25/25 points)
- Length of time for upgrade to reach two-cycles-back flagships: 0 days (15/15 points)
- Communication: Good (10/10 points)
This first section of our Android upgrade analysis is the easiest to anticipate — since with extraordinarily rare exception, Google gets each and every new Android software update out to all of its still-supported Pixel devices more or less instantly after the software’s release.
And since that support window now stretches out to a full seven years for all Google-made devices — including even the more budget-minded Pixel “a” models — there’s generally little question as to when any reasonably current Pixel will see newly released software.
With Android 16, like the majority of Android releases, the answer turned out to be “more or less instantly.” Android 16 started rolling out to all still-supported Pixels within mere moments of its release.
And while Google’s usual “rolling out in waves” asterisk always applies to a certain degree — with some Pixel owners not receiving the software on that very first day — Android 16 made its way to all supported Pixel devices within a reasonable amount of time and without the need for any extra communication beyond the company’s initial announcement. (For the purposes of this analysis, it’s the start of a rollout — to a flagship phone model in the US — that counts, as you can read about in more detail at the bottom of this page.)
The fact that Google treats all of its phones as equals is significant. Most people and businesses don’t buy new phones every single year, and most casual Android observers look only at the headline-making first rollout a manufacturer announces — which, outside of Google, tends to affect only its most recent top-tier phone. With Pixels, the data shows time and time again that even previous-gen devices and devices from multiple years back (along with those lower-priced “a” models) are all handled with the same priority. And as virtually every Upgrade Report Card reminds us, that’s not the experience you’ll find with any other kind of Android handset — far from it.
For the standard caveat here: Sure, we could argue that Google has a unique advantage in that it’s both the manufacturer of the devices and the maker of the software — but guess what? That’s part of the Pixel package. And as a person purchasing a phone, the only thing that ultimately matters is the experience you receive.
As usual, the results tell you all there is to know: Google’s phones are without a doubt the most reliable way to receive ongoing updates in a dependably timely manner and ensure you’re always using the most up-to-date, optimally secure software available on Android. It’s the only company that makes an explicit guarantee about that as a part of its devices’ purchasing package, and — as we’re about to be reminded further — it’s the only one that consistently delivers on such a standard each and every year.
Samsung
- Length of time for upgrade to reach current flagships: 103 days (40/50 points)
- Length of time for upgrade to reach previous-gen flagships: 109 days (19/25 points)
- Length of time for upgrade to reach two-cycles-back flagships: 123 days (11/15 points)
- Communication: Fair (5/10 points)
While Google’s calling card with Android upgrades is its consistency, Samsung’s is the exact opposite — although, in a sense, the company is quite consistent (at ping-ponging between mediocre and appallingly poor Android upgrade delivery performance, with the occasional decent result sprinkled in).
Following last year’s appallingly poor run (a 0% “F” grade — yikes!) and the previous year’s peak (an 81% “B–” score), Samsung’s back into middling terrain this time with a decidedly ho-hum 76% “C.”
For context, that’s almost back down to its 2023 Android 13 performance, where it received a 73% “C” score. And just to further illustrate the ping-ponging pattern I mentioned, the year before that, with Android 12, Samsung came in with an 83% “B.” And the year before that, with Android 11, it was a 68% “D+” (ouch!).
As I’ve said before, in spite of a curious consensus among many tech writers that Samsung is somehow absolutely killing it when it comes to upgrades, the company just can’t be counted on to deliver current smartphone software in a reliably timely manner. That’s true sometimes even for its current-gen flagships, though those usually see updates within a quarter of a year or so (and I say “usually” because that wasn’t quite the case this year). The problem often gets more extreme with the company’s older devices, where the spread from first-gen to second and third is often far more stretched out than it should be.
Samsung did improve in one key area this year, and that’s communication. The company typically keeps its customers completely in the dark about its progress along the way and offers no meaningful communication about what’s happening and when a rollout might begin. With Android 16, for the first time in a long time, Samsung actually put out an official list of exactly which devices would be upgraded and when — sort of.
The list included only a small handful of specific callouts, and it wasn’t released until mid-September — nearly a hundred days after Android 16’s arrival and at the same time as the company’s first rollout. But, well, it was at least something, and that’s more than we can say for its previous recent efforts in that area.
So for now, it’s mostly more of the same from Samsung when it comes to upgrade reliability. If past trends are any indication, we’ll probably see another jump and then another drop in performance in the cycles ahead. But hey, who knows? Maybe Sammy will surprise us and actually get its act together for more than a single year’s cycle sometime.
For 2025, at least, we can certainly say that it did better than most other non-Google Android phone-makers — but unfortunately, as we’re about to see, the standard from this point onward isn’t exactly tough to beat.
OnePlus
[Chart: JR Raphael, Foundry]
- Length of time for upgrade to reach current flagships: 156 days (33/50 points)
- Length of time for upgrade to reach previous-gen flagships: Still waiting (0/25 points)
- Length of time for upgrade to reach two-cycles-back flagships: Still waiting (0/15 points)
- Communication: Poor (0/10 points)
OnePlus, like Samsung, is consistently inconsistent when it comes to Android upgrade reliability — though lately, the company seems to have more misses than hits.
With Android 16 this year, OnePlus completely dropped the ball. It took a whopping 156 days — creeping up close to six months! — to get the software out to its current-gen flagship. And as of this writing, owners of its previous-gen and two-cycles-back flagship phones in the US are still waiting to see their software (software that, remember, is already outdated and from June of this year) arrive.
All considered, the best we could say is that OnePlus provides mediocre support on average, with the occasional pleasant surprise around its most recent flagship product — and, with rare exception, it tends to do embarrassingly poorly with its previous-gen flagships.
As usual, what adds insult to injury is the fact that OnePlus is absolutely awful about communicating with its customers. Wade through the official OnePlus forum, and you’ll find pages upon pages of comments from frustrated phone-owners who are either desperate for any shred of info about when their top-of-the-line device will see its increasingly dated software update or are pulling their hair out because of problems with the rollouts that have begun.
All in all, it’s just not a remarkable result — though (insert bemused sigh and/or hiccup here) it could always be worse.
Motorola
[Chart: JR Raphael, Foundry]
- Length of time for upgrade to reach current flagships: Still waiting (0/50 points)
- Length of time for upgrade to reach previous-gen flagships: Still waiting (0/25 points)
- Length of time for upgrade to reach two-cycles-back flagships: Still waiting (0/15 points)
- Communication: Poor (1/10 points)
If there’s one Android phone-maker that’s actually as consistent as Google with its upgrade delivery performance, it’s Motorola.
But hold the phone: This isn’t a happy comparison. Unlike Google, Motorola is consistently careless with supporting its highest-paying customers following a purchase, and it rarely gets a single software update out within six months of a new Android version’s release.
Last year was a rare exception, though still not enough to bring the company up above its standard “F”-level score. This year, we’re right back into typical Moto terrain, with nary an effort and not a single update seen on any of the company’s US flagships as of this writing.
The message, as we’ve seen year after year after year in this arena, is clear: If you buy a Moto phone, you’re gonna be waiting a good long while to get current software, if you ever get it — and you’re gonna be waiting in the dark, too, with no meaningful communication from the company about what’s going on or when you can expect to see any progress.
Wait — what about everyone else?
Does this list seem shorter than you were expecting? Alas, this is our current Android hardware reality, at least here in the States at this moment.
One-time Android regular LG is no longer around, as the company bowed out of the phone-making game entirely in 2021. And early Android veteran HTC has been off the grid since 2021’s Report Card, given the fact that it’s barely even putting out new phones anymore — certainly not flagship-level devices. If the company ever comes back around and attempts to get in the game again at any point, I’ll eagerly add it back into the list.
And then there’s Sony — a company a random reader will ask me about on occasion but that just doesn’t make sense to include in this list right now. Sony has never had much of a meaningful presence in the US smartphone market (which is a shame, really — but that’s another story for another time), and in recent years, its role in the US mobile market has dropped from “barely anything” to “virtually nothing.”
And let’s not forget about Nothing, the hype-loving small-scale phone-maker from OnePlus founder Carl Pei. Nothing has been doing (ahem) virtually nothing in terms of providing software support to its paying customers, though it has at least made a bit of a token effort this year — with an early July announcement that the software would be available sometime in the third quarter of the year, then a partial rollout beginning closer to the end of this current fourth quarter and stopping prematurely due to problems a handful of days later.
Suffice it to say, the company’s score wouldn’t be spectacular if it were significant enough to include in this breakdown.
In detail: How these grades were calculated
This Android Upgrade Report Card follows the same grading system used with last year’s analysis — which features precise and clearly defined standards designed to weigh performance for both current and previous-generation flagship phones along with a company’s communication efforts, all in a consistent and completely objective manner.
Each manufacturer’s overall grade is based on the following formula, with final scores being rounded up or down to the nearest full integer:
- 50% of grade: Length of time for upgrade to reach current flagship phone(s)
- 25% of grade: Length of time for upgrade to reach most immediate previous-gen flagship phone(s)
- 15% of grade: Length of time for upgrade to reach two-cycles-back previous-gen flagship phone(s)
- 10% of grade: Overall communication with customers throughout the upgrade process
Notably, 2023’s Android 13 analysis marked the first time the formula was expanded to account for flagship phones that are two generations back in addition to the most recent previous-gen models. With the de facto standard support window stretching to a minimum of three years, it made sense to take a broader view and see how different device-makers are actually doing when it comes to supporting those older models — as a promise of support alone only means so much. How long it actually takes for those phones to receive updates is equally important. And the scores here now reflect that, extending further into a phone’s lifespan.
Upgrade timing often varies wildly from one country or carrier to the next, so in order to create a consistent standard for scoring, I’ve focused this analysis on when Android 16 first reached a flagship model that’s readily available in the US — either a carrier-connected model or an unlocked version of the phone, if such a product is sold by the manufacturer and readily available to US customers — in a public, official, and not opt-in-beta-oriented over-the-air rollout.
(To be clear, I’m not counting being able to import an international version of a phone from eBay or from some random seller on Amazon as being “readily available to US customers.” For the purposes of creating a reasonable and consistent standard for this analysis, a phone has to be sold in the US in some official capacity in order to be considered a “US model” of a device.)
By looking at the time to Android 16’s first appearance (via an over-the-air rollout) on a device in the US, we’re measuring how quickly a typical US device-owner could realistically get the software in a normal situation. And since we’re looking at the first appearance, in any unlocked or carrier-connected phone, we’re eliminating any carrier-specific delays from the equation and focusing purely on the soonest possible window you could receive an update from any given manufacturer in this country. We’re also eliminating the PR-focused silliness of a manufacturer rushing to roll out a small-scale upgrade in somewhere like Lithuania just so they can put out a press release touting that they were “FIRST,” when the practical implication of such a rollout is basically just a rounding error.
I chose to focus on the US specifically because that’s where this publication (and this person writing this right now — hi!) is based, but this same analysis could be done using any country as its basis, of course, and the results would vary accordingly.
All measurements start from the day Android 16 was released into the Android Open Source Project: June 10, 2025, which is when the final raw OS code officially became available to manufacturers.
The following scale determined each manufacturer’s subscores for upgrade timeliness:
- 1-14 days to first US rollout = A+ (100)
- 15-30 days to first US rollout = A (96)
- 31-45 days to first US rollout = A– (92)
- 46-60 days to first US rollout = B+ (89)
- 61-75 days to first US rollout = B (86)
- 76-90 days to first US rollout = B– (82)
- 91-105 days to first US rollout = C+ (79)
- 106-120 days to first US rollout = C (76)
- 121-135 days to first US rollout = C– (72)
- 136-150 days to first US rollout = D+ (69)
- 151-165 days to first US rollout = D (66)
- 166-180 days to first US rollout = D– (62)
- More than 180 days to first US rollout (and thus no upgrade activity within the six-month window) = F (0)
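For anyone curious about the mechanics, the weighted formula and the day-range scale above can be sketched in a few lines of code. This is a minimal illustration, not an official tool: the function name and the round-half-to-even integer rounding are my assumptions, while the category weights, the days-to-percentage scale, and the sample day counts all come straight from the scores listed earlier.

```python
# Sketch of the Report Card scoring: map days-to-first-US-rollout onto the
# published letter-grade scale, then weight by category. The scale pairs each
# 15-day window's upper bound with its percentage score from the article.
SCALE = [
    (14, 100), (30, 96), (45, 92), (60, 89), (75, 86), (90, 82),
    (105, 79), (120, 76), (135, 72), (150, 69), (165, 66), (180, 62),
]

def timeliness_points(days, weight):
    """Return the points (out of `weight`) for a given rollout delay.

    `days=None` means the update never arrived within the six-month
    window (or the device was abandoned), which scores zero.
    """
    if days is None or days > 180:
        return 0
    for upper_bound, percent in SCALE:
        if days <= upper_bound:
            return round(weight * percent / 100)
    return 0

# Samsung's three timeliness subscores, using the day counts reported above:
samsung = [
    timeliness_points(103, 50),  # current flagship -> 40/50
    timeliness_points(109, 25),  # previous-gen flagship -> 19/25
    timeliness_points(123, 15),  # two-cycles-back flagship -> 11/15
]
```

Running the same function on OnePlus’s 156-day current-flagship delay yields 33 of 50 points, matching its subscore above; its still-waiting older flagships score zero.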
There’s just one asterisk: If a manufacturer outright abandons any US-relevant models of a device, its score defaults to zero for that specific category. Within that category (be it current or previous-gen flagship), such behavior is an indication that the manufacturer in question could not be trusted to honor its commitment and provide an upgrade. This adjustment allows the score to better reflect that reality. No such adjustments were made this year, though there have been instances where it’s happened in the past (hello, Moto — again!).
Last but not least, this analysis focuses on manufacturers selling flagship phones that are relevant and in some way significant to the US market and/or the Android enthusiast community. That, as I alluded to above, is why a company like Sony is no longer part of the primary analysis — and why companies like Xiaomi and Huawei are not presently part of this picture, despite their relevance in other parts of the world. Considering the performance of players in a market such as China would certainly be interesting, but it’d be a completely different and totally separate analysis, and it’s beyond the scope of what we’re considering in this one report.
Aside from the companies included here, most players are either still relatively insignificant in the US market or have focused their efforts more on the budget realm in the States so far — and thus don’t make sense, at least as of now, to include in this specific-sample-oriented and flagship-focused breakdown.
Don’t let yourself miss a thing: Sign up for my free Android Intelligence newsletter to get next-level knowledge delivered directly to your inbox.