Computerworld.com [Hacking News]
Apple has to climb the mountain
Apple has a lot of challenges these days. Would Steve Jobs really be handling these problems better than current leaders?
The problems, some long-term and others short-term, include (but are not confined to):
- Chinese consumers turning to domestic brands in response to the US trade war.
- US customers feeling the impact of tariffs and anticipated increases in product prices.
- Regulators in every nation seemingly intent on chipping away at the services empire Apple built from thin air.
- Apple’s recently disclosed failure to deliver promised Apple Intelligence features on schedule.
- Supply chain problems, partly in response to trade wars and partly exposed during Covid, when single-source supply chains collapsed overnight.
- Declining consumer trust in technology.
These challenges are in addition to the tasks Apple has always had to manage — maintaining hardware and software quality, developing new products and services that surprise and delight customers, building consumer engagement, and inventing the best hardware in the world. A look at the recently introduced Mac Studio and M4 MacBook Air shows the company still has the ability to do that. Each is the best computer in the world in its class.
Challenges everywhere
But the central problem Apple has is mirrored in its own actions.
You see, reports claim the company’s marketing teams insisted on promoting Apple Intelligence and its much-vaunted contextual understanding of users, even though the feature wasn’t ready. They not only insisted on it, but they also went large on pushing it, helping build just the right environment to create a crisis of belief when it was revealed the company would be unable to make the grade. (Subsequent reports suggest the feature is already working, but just not consistently enough; perhaps Apple should introduce it as a public beta to show how far it’s come.)
What problem does this mirror?
Just as Apple’s own teams focused on a service that wasn’t ready, the rest of us out here continue to seek solace in impossible dreams. We live in a world of confusion in which populists, snake oil salesmen, and fake thought leaders thrive. Lack of belief, combined with a search for easy answers, means we choose whatever answers seem easiest. That’s what happened with Apple Intelligence — so great was the need to seem to occupy space in AI, the company chose to market a feature it hadn’t yet got working.
It took an easy road, rather than a hard one, and in doing so reflected the muddy waters of our times.
That’s not how things were when Jobs introduced the iMac, iPod, or iPhone. Back then, we thought tech would help us, social media hadn’t yet been weaponized against wider public good, and many still wanted to believe global governments would meet the goals of Agenda 21, rather than using 1984 as an instruction manual. Conflict hadn’t yet exposed the deep rifts underlying the fragile global consensus, and Apple under Jobs spoke a language of hope and optimism that reflected a more optimistic zeitgeist.
Apple today can’t cling to that past.
A new language for a new time
That aspect of the brand no longer seems to match the existence so many of its customers experience. And it’s arguable whether senior management, ensconced in the Silicon Valley bubble, is exposed enough to identify a product design and marketing language that resonates in our new, highly complex, polarized, conflicted reality. Apple has done extraordinarily well as the ultimate aspirational brand, and enthusiasm for its products will remain among those who can reasonably afford them. But declining sales mean declining profits, and in a world set up to mirror Wall Street’s irrational belief that perpetual growth is possible on a finite planet, decline is unacceptable.
That’s true even for the most successful company in human history.
That’s a lot of pressure for Apple’s top brass to handle. Plus, of course, in every case, the answers they have available to them appear to be least-worst responses, rather than good ones. Adding additional complexity, the challenges are themselves intertwined as societies everywhere undergo significant structural change, as political forces of various hues attempt to hold things together with false narratives of a history that never really happened.
Just how can the future look better tomorrow when it’s based on a past that never existed?
The journey
All the same, the more complex things become, the harder we work just to stand still. And with myriad connected challenges, it’s not at all certain even Steve Jobs would be able to visualize an easy way through. The simple answer is to keep hope alive, but the uncomfortable truth is that, just as it did with the iMac, Apple’s biggest challenge now is to find a consumer product truly emblematic of its time, something that speaks to us of who we are, what we need, and where we are going.
In that light, perhaps the failure of the launch of Apple Intelligence really reflects the time we’re in. We can see the mountain but can’t yet make it to the top.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Cisco’s AI agents for Webex aim to improve customer service
Cisco is adding new features to its Webex collaboration tool as it expands its adoption of agentic AI.
The latest tools include an AI Agent and an updated AI Assistant for the company’s Webex Contact Center, a collaborative tool that helps companies handle customer service calls. The AI tools are designed to bolster customer service experiences.
The announcements came on the opening day of the Enterprise Connect show in Orlando, FL.
The Webex AI Agent, slated to be available at the end of this month, should make customer service calls smoother by using AI alongside human agents. The goal is to reduce wait times and use intelligent ways to resolve issues.
According to Cisco, the Agent will allow companies to tackle complex real-time customer service queries by handling more dynamic conversations. The tool can also run scripted agents with preconfigured responses, Cisco said.
One use case highlighted by Cisco, for example, could help airline customers change flights in real time by querying timing preferences, providing a range of flight options, and completing the call by making the booking. The agent uses AI technology to connect corporate information systems to customer queries.
The company also added new Cisco AI Assistant features to its Webex Contact Centers. That tool is an assistant for customer service agents that can make recommendations for answering customer queries.
The agent, originally rolled out in February, uses a number of tools to understand customer intent and then provides appropriate recommendations. The goal is to help human agents provide better responses.
For example, one new tool can allow accurate transcription of calls, making it easier to understand speakers with accents or unusual speech patterns. It can also provide context for complex discussions, along with real-time recommendations on actions or responses, Cisco said.
Some of the previously added tools can provide summaries on dropped calls or interactions with virtual agents before calls are transferred to human agents. Still other tools can measure customer satisfaction or pull information from past calls and topics to improve customer service experiences.
Cisco also announced it has integrated Apple’s AirPlay on Cisco Devices for Microsoft Teams Rooms, which enables “instant wireless content sharing from iPhone, iPad or Mac to Cisco Devices.”
Google Workspace: 7 great ways to use the Gemini AI sidebar
Google’s generative AI chatbot, named Gemini, is available as a web app. But you can also access most of Gemini’s AI tools in a side panel as you’re using Google Workspace apps including Google Docs, Sheets, Slides, Drive, and Gmail. Using the Gemini sidebar within a Workspace app is much more convenient and contextual than copying and pasting between browser tabs.
When you’re in one of these five main Workspace apps, open the Gemini side panel by clicking the nova star icon that’s next to your user profile icon at the upper-right corner. From this side panel, you can instruct Gemini to generate new content (such as text, tables, or slides), make changes to your current document or other file, or analyze its content to provide context.
The Gemini side panel in Google Docs.
Howard Wen / Foundry
Keep in mind that, like all genAI tools, Gemini can make mistakes, so it’s always advisable to check the output carefully for errors. What’s more, the copy it writes is often generic and flat, but it can be a useful starting point for you to refine and add color to.
As you’ll see, Gemini’s real strengths lie in taking away drudge work such as summarizing a long document or extracting a key piece of information from a sea of emails. This guide explains how to use these AI capabilities from the Gemini side panel in the Google Workspace apps.
Who can use Gemini in Google Workspace apps
The Gemini AI tools, including this side panel, are now included with paid Google Workspace plans from the Business Standard plan and up. If you have a regular personal Google account, you can subscribe to Google One AI Premium for access to these tools. Or, at no cost, you can sign up for Google Workspace Labs with your Google account to try out Gemini in the Workspace apps.
Get started: Use a suggested prompt or type in your own
In the Gemini side panel, you’ll usually see suggested prompts — these are action links you can click that instruct the AI to do something, such as creating an outline.
Many of these suggested prompts will be the same, or similarly phrased, across the five Workspace apps that you can use the Gemini side panel in. But they can also vary from app to app and sometimes depend on whether you start with a blank document or open an existing one.
Suggestions in the Gemini side panel for Gmail, Sheets, and Slides.
Howard Wen / Foundry
In Docs, for instance, Gemini might prompt you to brainstorm a list of ideas, whereas in Slides it might prompt you to create a slide to pitch an idea. In some cases, prompts might be customized for you based on the emails in your Gmail account or files in your Google Drive. (If you want to see additional suggested prompts, click More suggestions in the panel.)
To choose a prompt, click it, and it will appear in the entry box at the bottom of the window, where you can customize it for your needs. When it’s worded the way you want, press Enter.
Or you can skip the suggested prompts: just type your own prompt inside the entry box and press Enter.
Next steps: Insert, copy, retry, or refine
After you enter your prompt, Gemini generates a result, which appears in the side panel. Like all genAI tools, Gemini sometimes makes errors, so you’ll want to carefully check over its output.
If you’re happy with the result, you can click the arrow icon on the toolbar below it to insert the generated text, image, or other content into your document, email draft, slide, or spreadsheet. Or you can click the Copy button to copy it to your PC clipboard.
Gemini’s generated result appears in the side panel.
Howard Wen / Foundry
If you want to see what the result will look like in your document before inserting it, click the vertical three-dot icon (More options) and select Preview. If you wish, you can send Google feedback on the result by clicking the thumbs up or thumbs down icon on the toolbar.
Right above this toolbar, you may see a link labeled “Sources.” When clicked, this action will list the sources (your documents, emails, presentations, spreadsheets, etc.) that Gemini used to generate the content.
If you’re not happy with Gemini’s generated result, you can instruct it to generate a fresh result from your original prompt by selecting More options > Retry or More options > Retry with Google Search in the toolbar.
Below the toolbar you might see one or more additional prompt suggestions. Click a suggestion, and Gemini will generate a second result that builds on the first one.
Another option is to instruct Gemini to refine the current result rather than starting over from scratch. To do so, just type how you want the result refined into the entry box and press Enter.
A few additional tips for using the Gemini side panel:
- To erase your previous prompts and results from the panel, click the three-dot icon (More options) at the top of the Gemini side panel and click Clear history.
- To make the side panel larger so you can see the results better, click the Expand button immediately to the right of the More options button at the top of the panel.
- To close the side panel, click the X in its upper right corner.
Now that you know how to use the side panel, let’s look at some specific use cases. There are too many possible actions to cover in this article, and Google adds new capabilities all the time. But here are some common examples of what you can prompt Gemini to do.
1. Summarize documents and emails
Gemini can quickly summarize your emails and files stored in Google Drive in a variety of ways.
In Docs, Sheets, and Slides: Open a document, spreadsheet, or presentation, then open the Gemini sidebar. When you do, Gemini automatically generates a summary of the file, provided that it has enough text or data content. The summary appears near the top of the Gemini panel; you may need to click the three-dot icon (View more) to see the whole thing.
Gemini can instantly summarize the contents of a document.
Howard Wen / Foundry
In Drive: Select a document or other file, then click Summarize this file on the toolbar above the file listing. This will open the Gemini side panel, and a summary for the contents of the file will be generated if possible.
Gemini can summarize documents stored in Google Drive.
Howard Wen / Foundry
You can summarize all the contents of a folder in Google Drive in the same way. Navigate to the folder in Drive, select it, and click Summarize this folder on the toolbar above the listing.
Another method to summarize files in Drive is to open the Gemini sidebar first, then write a prompt in the entry box telling it what you want it to summarize. Type summarize @ and start typing the document’s filename. A small menu of suggested files will open, and you can select the one you want.
If you can’t remember the filename, you can try describing the document or file, and Gemini may be able to determine which file you’re referring to. You can also instruct Gemini to summarize more than one document or file together.
Examples:
- Summarize the meeting notes from the July meeting
- Summarize @Office Budget Winter and @Office Budget Annual
In Gmail: Open an email or email thread first. Then, in the Gemini side panel, you can click a suggested prompt to generate a summary of the email or thread.
You can also prompt Gemini to summarize multiple emails, even if they’re not in a single thread. To do so, open the Gemini sidebar and describe what you want it to summarize in the entry box.
- Give me a summary of the emails that Saulo sent me regarding our upcoming meeting
- Summarize the emails I sent to Mona over the last week
Gemini can also summarize multiple emails based on a description you provide.
Howard Wen / Foundry
2. Extract specific info from files or emails
Instead of asking Gemini to summarize an entire document or email, try asking it questions about your files in Drive or about your emails in Gmail to extract specific information.
- What are the key points in @Business Plan: IT Consulting for Restaurant Management?
- How much is allocated for new technology purchases @Office Budget Annual?
- How much has [business name] charged me this year?
Ask Gemini for specific information from your emails and files.
Howard Wen / Foundry
3. Create a table
Gemini can generate a table template with headings, placeholder text, and even formulas in its cells. After you insert it into your document, email draft, slide, or spreadsheet, you fill out the table with your own data.
Gemini works best at designing tables for project management, so try describing a table that has dropdowns, lists, task lists, or to-dos. It can also generate a data table that you can insert into a spreadsheet and then create a chart from it.
- Make me a table depicting 12 months with 3 categories per month
- Create a table with dropdowns with selections that include Greek, Japanese, Italian for a business luncheon
A generated table template in Sheets.
Howard Wen / Foundry
When you create a table in Sheets, it will appear as an overlay on the spreadsheet. Click the Insert button below it to insert it in the spreadsheet.
Note: In Sheets, when you launch a new, blank spreadsheet, the Tables side panel will automatically open on the right. To switch to the Gemini side panel, either click the “Help me create a table” button in the Tables side panel or just click the nova star icon at the top of the screen.
If the spreadsheet you open already has data in the cells, and they’re not organized in a formal way, try clicking one of the Create a table suggested prompts. Gemini will generate a table with this data arranged in a neater fashion.
Finally, it’s worth noting that Sheets has another table template tool called Help Me Organize that we’ve covered previously. See “How to use Gemini AI to make templates in Google Sheets” for details on how to use it, along with ideas for crafting successful table prompts that would work in the Gemini sidebar as well.
4. Create an image
Describe the kind of image that you want Gemini to generate. In Slides, you can prompt Gemini to generate an image based on the content of the slide that you’re currently viewing in the main window.
- Create a cartoon of a giraffe reading a menu while seated at a restaurant table
- Create an image based on the current slide
In Slides, Gemini can create images related to the current slide’s content.
Howard Wen / Foundry
5. Write a text draft
You can use the Gemini side panel to generate a first draft for an email or document. In Docs or Gmail, open the side panel and type a prompt describing the kind of text that you want Gemini to generate.
- Create an opening paragraph pitching my IT consulting service for restaurant owners
- Create an email to send to Eric Jones in which I suggest we catch up at the tech conference in San Diego next week
Using email as an example, from the main page of your Gmail account, open the Gemini side panel and enter your prompt. If you like Gemini’s generated draft, click the Insert button, and a composition window for a new email will open with the generated text inserted into it. You can add or make changes before sending it out, or leave it in your Drafts folder to work on later.
A new email draft generated by Gemini.
Howard Wen / Foundry
The Gemini side panel will also present suggested prompts that you can click to guide you through refining or rephrasing your document or email draft.
All that said, a better way to generate and redo text is to use the Help Me Write tool (which is also powered by Gemini). See our story “How to use Gemini AI to write (and rewrite) in Google Docs and Gmail” for full instructions on using Help Me Write.
6. Create a formula in Sheets
Don’t know most of the formulas that you can use in Sheets, or want some ideas? Describe to Gemini the kind of calculation you want, and the AI will try to generate something workable. You can then insert the formula into a cell on your spreadsheet.
- Formula for the sum of Column B divided by 10 and then multiplied by 1.67
- Formula to calculate compound interest at 3.5% over 3 years
In Sheets, Gemini can help you create a formula.
Howard Wen / Foundry
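To see what the two example prompts actually compute, here’s a rough Python sketch of both calculations. The Sheets-style formulas shown in the comments are plausible outputs for these prompts, not guaranteed ones — Gemini may phrase its formulas differently, and the sample data is invented for illustration.

```python
# Rough sketch of the two example calculations; the Sheets formulas in the
# comments are hypothetical, not guaranteed Gemini output.
column_b = [120.0, 80.0, 200.0]  # sample data standing in for Column B

# "Sum of Column B divided by 10 and then multiplied by 1.67"
# A Sheets formula for this might look like: =SUM(B:B)/10*1.67
scaled_sum = sum(column_b) / 10 * 1.67

# "Compound interest at 3.5% over 3 years" on a principal in cell A1
# A Sheets formula for this might look like: =A1*(1+0.035)^3-A1
principal = 1000.0
interest = principal * (1 + 0.035) ** 3 - principal

print(round(scaled_sum, 2))  # 66.8
print(round(interest, 2))    # 108.72
```

Checking the generated formula against a quick calculation like this is a good habit, since Gemini can produce formulas that look plausible but compute the wrong thing.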
7. Create a slide in Slides
In Slides, describe a slide that you want Gemini to generate.
You can even ask Gemini to create a slide based on the content of a document, spreadsheet, or other file in your Google Drive.
- Create a slide that introduces an annual business budget report
- Create a slide using @Office Budget Winter
Gemini can create a slide based on a document or other file.
Howard Wen / Foundry
This is just a sampling of what you can ask Gemini to do in Google Workspace apps. Once you start experimenting, you’ll likely find numerous ways it can help you in your work.
Governments won’t like this: encrypted messaging between Android and iOS devices coming, says GSMA
Imagine a world of the near future where Android and Apple iOS users can message one another with the certainty that their communication is secured against eavesdropping by end-to-end encryption (E2EE).
And it would not only be for one-to-one chats, but across large groups of employees and users, something that is impossible to guarantee today without resorting to standalone apps such as WhatsApp.
These capabilities might soon be a reality, thanks to a technical specification released this week, the GSM Association’s RCS Universal Profile version 3.0.
In development since 2007 as a replacement for SMS, Rich Communication Services (RCS) already allows a range of features including read receipts, typing indicators, and media sharing. But E2EE security, a much more complex technical feat, has always proved elusive.
Thanks to some IETF-backed magic inside RCS 3.0 called the Messaging Layer Security (MLS) protocol, that is about to change. Specifications may come and go, but history suggests that the addition of security to a spec is always a significant moment when people start to feel more positive about its adoption; at least that’s what the GSMA is hoping.
This is especially true for businesses, which value two features above all: absolute certainty about messaging security, and the ability for employees to communicate in large groups. RCS 3.0 with MLS delivers on both fronts, said GSMA technical director, Tom Van Pelt.
“[This ensures] that messages and other content such as files remain confidential and secure as they travel between clients,” he said.
“RCS will be the first large-scale messaging service to support interoperable E2EE between client implementations from different providers. Together with other unique security features such as SIM-based authentication, E2EE will provide RCS users with the highest level of privacy and security for stronger protection from scams, fraud, and other security and privacy threats,” said Van Pelt.
RCS fragmentation
RCS 3.0’s big feature is interoperability, which makes it easier for different apps to implement the same features consistently. Today, while RCS is widely implemented by OS platforms, mobile networks, and device makers, each does it in its own way. This has led to fragmentation, hindering uptake.
The result is that if you want to send a secure RCS message between Android devices, you need to use Google’s own Messages app at both ends; it implements E2EE using the well-worn Signal protocol. Similarly, Apple adopted RCS in iMessage last year, but with a proprietary implementation of E2EE.
In short, it’s a confusing jumble. This is one reason why alternatives such as WhatsApp and Signal, both of which also use the Signal protocol, have become so popular; you get E2EE out of the box without compatibility worries, and they allow groups of up to 1,024 members.
Having a single protocol, MLS, covering E2EE changes the story. Now RCS with MLS can offer a range of advanced features including large groups, which are critical for businesses which need many-to-many communication. Right now, if even one user in a group is using an RCS app without compatible E2EE, the security of the whole group chat can be compromised. MLS gives every app maker one IETF standard to aim for.
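The group-size point can be illustrated with a toy comparison. The Python sketch below only counts per-message encryption operations; it is a loose illustration of the fan-out problem, not an implementation of MLS (which uses a ratchet tree, specified in IETF RFC 9420, to keep group key updates at roughly O(log n)):

```python
# Loose illustration of group E2EE costs -- not real MLS.
def pairwise_fanout(members: int) -> int:
    """With per-recipient sessions (Signal-style sender fan-out),
    each message is encrypted once for every other member."""
    return members - 1

def shared_group_secret(members: int) -> int:
    """With a shared group secret, as in MLS, each message is
    encrypted once; membership changes cost O(log n) key updates
    via the ratchet tree rather than O(n) pairwise renegotiations."""
    return 1

# At a maximum group size of 1,024 members:
print(pairwise_fanout(1024))      # 1023 encryptions per message
print(shared_group_secret(1024))  # 1
```

That single shared, efficiently updatable group secret is what makes large interoperable groups practical across apps from different providers.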
The WhatsApp effect
Google has said it plans to adopt MLS inside Messages, which means replacing the proven Signal protocol, which struggles to handle larger groups. That will take time, during which it will probably support one with a fallback to the other. Apple, too, has said it is committed to MLS.
“We will add support for end-to-end encrypted RCS messages to iOS, iPadOS, macOS, and watchOS in future software updates,” said Apple spokesperson Shane Bauer, in support of the GSMA.
As the two biggest platform apps, these names are important. However, one that’s not on the RCS list yet is WhatsApp, an app for both Android and Apple that, with three billion users, operates in a parallel world to RCS-enabled apps.
WhatsApp is in no hurry to adopt MLS. For parent Meta, the real prize is to turn WhatsApp into a secure business communications platform that dominates the messaging space across multiple types of engagement. Despite that, it will eventually have to adopt MLS in some form, not least to comply with the EU’s Digital Markets Act, which mandates greater app interoperability.
“It’s questionable if and when WhatsApp and Signal are going to support this protocol, as both have already implemented end-to-end encryption within each respective ecosystem,” commented Arne Möhle, CEO of secure email provider Tuta Mail.
“As an encrypted email service, we can also say that interoperability is a challenge,” he added. “It comes with complications such as spam and phishing attempts, an issue that WhatsApp has had to fight hard against. This will get even worse once the app starts allowing people to chat with their friends on other platforms as well.”
But E2EE is only today’s privacy issue. Soon, he predicted, messaging platforms will need to evolve to counter the ability of quantum computers to undermine the security of public key encryption.
“The GSMA protocol needs to be updated with quantum-resistant encryption keys,” said Möhle.
Ironically, a major uncertainty is E2EE itself. This is now being probed by the UK government, which has decided to use Apple as its test case in a campaign to introduce backdoors into the encryption used in iCloud services. So far, Apple is resisting, choosing to disable security rather than allow surveillance. Talks are reportedly ongoing.
E2EE, which stores keys on devices rather than centrally, isn’t part of this effort, but might come under fire if the UK government reheats its controversial idea of client-side scanning (scanning messages before they are encrypted on-device).
Microsoft’s Patch Tuesday updates: Keeping up with the latest fixes
Long before Taco Tuesday became part of the pop-culture vernacular, Tuesdays were synonymous with security — and for anyone in the tech world, they still are. Patch Tuesday, as you most likely know, refers to the day each month when Microsoft releases security updates and patches for its software products — everything from Windows to Office to SQL Server, developer tools to browsers.
The practice, which happens on the second Tuesday of the month, was initiated to streamline the patch distribution process and make it easier for users and IT system administrators to manage updates. Like tacos, Patch Tuesday is here to stay.
In a blog post celebrating the 20th anniversary of Patch Tuesday, the Microsoft Security Response Center wrote: “The concept of Patch Tuesday was conceived and implemented in 2003. Before this unified approach, our security updates were sporadic, posing significant challenges for IT professionals and organizations in deploying critical patches in a timely manner.”
Patch Tuesday will continue to be an “important part of our strategy to keep users secure,” Microsoft said, adding that it’s now an important part of the cybersecurity industry. As a case in point, Adobe, among others, follows a similar patch cadence.
Patch Tuesday coverage has also long been a staple of Computerworld’s commitment to provide critical information to the IT industry. That’s why we’ve gathered together this collection of recent patches, a rolling list we’ll keep updated each month.
In case you missed a recent Patch Tuesday announcement, here are the latest six months of updates.
For March’s Patch Tuesday, 57 fixes — and 7 zero-days
For so few patches from Microsoft this month (57), we have seven zero-days to manage (with a “Patch Now” recommendation for Windows) and standard release schedules for Microsoft Office, Microsoft browsers (Edge) and Visual Studio. Adobe is back with a critical update for Reader, too — but it’s not been paired (at least for now) with a Microsoft patch. More info on Microsoft Security updates for March 2025.
For February’s Patch Tuesday, Microsoft rolls out 63 updates
Microsoft released 63 patches for Windows, Microsoft Office, and developer platforms in this week’s Patch Tuesday update. The February release was a relatively light update, but it comes with significant testing requirements for networking and remote desktop environments. Two zero-day Windows patches (CVE-2025-21391 and CVE-2025-21418) have been reported as exploited and another Windows update (CVE-2025-21377) has been publicly disclosed — meaning IT admins get a “Patch Now” recommendation for this month’s Windows updates. More info on Microsoft Security updates for February 2025.
2025’s first Patch Tuesday: 159 patches, including several zero-day fixes
Microsoft began the new year with a hefty patch release for January, addressing eight zero-days with 159 patches for Windows, Microsoft Office and Visual Studio. Both Windows and Microsoft Office have “Patch Now” recommendations (with no browser or Exchange patches) for January. Microsoft also released a significant servicing stack update (SSU) that changes how desktop and server platforms are updated, requiring additional testing on how MSI Installer, MSIX and AppX packages are installed, updated, and uninstalled. More info on Microsoft Security updates for January 2025.
For December’s Patch Tuesday, 74 updates and a zero-day fix for Windows
Microsoft released 74 updates with this Patch Tuesday update, patching Windows, Office and Edge — but none for Microsoft Exchange Server or SQL Server. One zero-day (CVE-2024-49138) affecting how Windows desktops handle error logs requires a “Patch Now” warning, but the Office, Visual Studio and Edge patches can be added to your standard release schedule. There are also several revisions this month that require attention before deployment. More info on Microsoft Security updates for December 2024.
November: This Patch Tuesday release includes 3 Windows zero-day fixes
Microsoft’s November Patch Tuesday update addresses 89 vulnerabilities in Windows, SQL Server, .NET and Microsoft Office — and three zero-day vulnerabilities in Windows that mean a “Patch Now” recommendation for Windows platforms. Unusually, there are a significant number of patch “re-releases” that might also require IT admin attention. More info on Microsoft Security updates for November 2024.
October: A haunting Patch Tuesday: 117 updates (and 5 zero-day flaws)
This month’s Patch Tuesday delivers a large set of patches from Microsoft that fix 117 flaws, including five zero-day vulnerabilities. Though there are patches affecting Windows, SQL Server, Microsoft Excel and Visual Studio, only the Windows updates require a “Patch Now” schedule — and they’ll need a significant amount of testing because they cover a lot of features: networking, kernel and core GDI components, and Microsoft Hyper-V. Printing should be a core focus for enterprise testing, and the SQL Server updates will require a focus on internally developed applications. More info on Microsoft Security updates for October 2024.
For March’s Patch Tuesday, 57 fixes — and 7 zero-days
For so few patches from Microsoft this month (57), we have seven zero-days to manage (with a “Patch Now” recommendation for Windows) and standard release schedules for Microsoft Office, Microsoft browsers (Edge) and Visual Studio.
Adobe is back with a critical update for Reader, but it’s not been paired (at least for now) with a Microsoft patch.
To navigate what’s changed, the team from Readiness has crafted this useful infographic detailing the risks of deploying these updates to each platform. (And here’s a look at the last six months of Patch Tuesday releases.)
Known issues
Microsoft is still dealing with reported gaming issues (Roblox) and has two new known issues for this release cycle, including:
- Windows 11: After installing the March update, USB-connected dual-mode printers supporting both USB Print and IPP Over USB may print random text, network commands, and unusual characters, often starting with “POST /ipp/print HTTP/1.1.” This issue can be mitigated using Known Issue Rollback (KIR).
- Windows 10: After installing Windows updates from Jan. 14, 2025 or later, the Windows Event Viewer might log an error related to SgrmBroker.exe as Event 7023, though this does not trigger any visible notifications. This error occurs because the System Guard Runtime Monitor Broker Service, originally part of Microsoft Defender and no longer in use, conflicts with the update during initialization. According to Microsoft, this reported issue does not impact system performance, functionality, or security, as the service is already disabled in other supported Windows versions.
Following previous reports of Citrix-related update issues, devices with Citrix Session Recording Agent (SRA) version 2411 could (still) be unable to complete the installation of the January 2025 Windows security update, causing the system to revert to previous updates after a restart. Affected devices might initially download and apply the update, but an error message stating “Something didn’t go as planned” appears during installation. This issue is expected to affect only a limited number of organizations, as version 2411 of SRA is newly released, and home users are not affected. Don’t count on this issue being fixed soon, folks.
Major revisions and mitigations
Microsoft has not released or documented any mitigations or workarounds for the current set of updates. As of now, the following Chromium patches have been revised and re-released:
- CVE-2025-1920: Type Confusion in V8 (Chromium)
- CVE-2025-2135: Type Confusion in V8 (Chromium)
- CVE-2025-2136: Use After Free in Inspector (Chromium)
- CVE-2025-2137: Out of Bounds Read in V8 (Chromium)
- CVE-2025-24201: Out of Bounds Write in GPU on Mac (Chromium)
Microsoft is retiring several products this month:
- Microsoft SQL Server 2019, which ended mainstream support on Feb. 28.
- Microsoft Skype, which will be terminated (with prejudice) in May.
- Windows Remote Desktop, which will be replaced next month with the Windows App. (Note: there are still some missing features and several known issues reported in this release.)
Over the next few weeks, several Microsoft products are scheduled to reach their end of life (EOL) and will no longer receive security updates, non-security updates, or technical support, including:
- April 2, 2025: Dynamics 365 Business Central on-premises (2023 release wave 2, version 23.x).
- April 8, 2025: Dynamics GP 2015/Dynamics GP 2015 R2.
- April 9, 2025: Microsoft Configuration Manager, Version 2309.
Each month, the Readiness team analyzes the latest Patch Tuesday updates and provides detailed, actionable testing guidance based on a large application portfolio and a comprehensive analysis of the patches and their potential impact on Windows and application deployments.
For this release cycle, there are no reported functional changes. However, feature level testing will still be required, especially for system drivers and core libraries. Due to these low-level system (kernel) changes, a full reboot/restart test will be required for all Windows UI elements including Explorer, desktop shell and Internet Explorer.
We have grouped the critical updates and required testing efforts into different functional areas, including:
File System components
- Common Log File System: Test by creating a BLF and multiple container files, appending logs using `ReserveAndAppendLog`, and then deleting the containers.
- Core System drivers (ntfs.sys, exfat.sys & fastfat.sys): Test mounting, dismounting, and performing file operations on ExFAT volumes.
- If using a Routing and Remote Access Service (RRAS) server, test `netsh` scenarios to confirm commands work as expected.
- FAX: Validate TAPI initialization, shutdown, and key functions like `lineInitialize` and `lineMakeCall`. Stress test for stability and error handling.
- Focus on storage subsystem tests, including operations on virtual/physical disks and storage enclosures.
- Test how Search Connector files interact with various network paths (UNC, SMB, and file system paths).
- Validate all camera-related scenarios.
- Verify audio/video recording with internal and external devices.
- Test apps like Teams and Camera that use virtual features (for example, Phone Link, Windows Studio Effects).
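Tests like the create/append/delete pattern above lend themselves to scripting. Here is a minimal, hypothetical Python sketch of that pattern; the file names, sizes, and record format are illustrative and not part of Microsoft’s or Readiness’s actual guidance:

```python
import os
import tempfile

def file_ops_smoke_test(container_count=3, appends=10):
    """Create log containers, append records, then delete them.

    Mirrors the create/append/delete pattern used to exercise
    file-system updates; any exception indicates a regression.
    Returns the container sizes observed before deletion.
    """
    with tempfile.TemporaryDirectory() as root:
        containers = []
        for i in range(container_count):
            path = os.path.join(root, f"container{i}.blf")
            with open(path, "wb") as f:
                f.write(b"\x00" * 512)  # reserve an initial block
            containers.append(path)
        # Append simulated log records to each container.
        for path in containers:
            with open(path, "ab") as f:
                for n in range(appends):
                    f.write(f"record {n}\n".encode())
        sizes = [os.path.getsize(p) for p in containers]
        # Delete the containers, as the guidance suggests.
        for path in containers:
            os.remove(path)
        return sizes

sizes = file_ops_smoke_test()
```

A real test pass would use the actual CLFS APIs on a patched Windows machine; this sketch only shows the shape of the workflow.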
Affected Versions for this update cycle include the following Windows desktop and server builds:
- Windows 11 24H2, 23H2, 22H2, Windows 10 1607, Windows 10 RTM.
- Windows Server 23H2, Azure Stack OS 22H2, Windows Server 2022
Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings:
- Browsers (Microsoft IE and Edge)
- Microsoft Windows (both desktop and server)
- Microsoft Office
- Microsoft Exchange and SQL Server
- Microsoft Developer Tools (Visual Studio and .NET)
- Adobe (if you get this far)
Browsers
Microsoft released 10 low-profile (no rating) updates to its Chromium-based Edge browser. These changes can be added to your standard release calendar.
Microsoft Windows
The following Windows product areas have been updated with five critical patches and 32 others rated important for this month’s cycle:
- CVE-2025-24035: Windows Remote Desktop Services Remote Code Execution Vulnerability
- CVE-2025-24064: Windows Domain Name Service Remote Code Execution Vulnerability
- CVE-2025-24084: Windows Subsystem for Linux (WSL2) Kernel Remote Code Execution Vulnerability
- CVE-2025-26645: Remote Desktop Client Remote Code Execution Vulnerability
Unfortunately, three of this month’s Windows updates (including CVE-2025-24984) have been reported as exploited. Add these Windows updates to your “Patch Now” schedule.
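One simple way to operationalize the “Patch Now” advice is to drive scheduling directly from the exploitation and disclosure flags. A hypothetical sketch follows; the record format is made up for illustration and is not a feed format Microsoft publishes:

```python
def triage(cves):
    """Split a month's CVE records into 'Patch Now' and 'standard' queues.

    Any record flagged as exploited or publicly disclosed goes to the
    front of the queue, matching the 'Patch Now' recommendation.
    """
    patch_now, standard = [], []
    for cve in cves:
        if cve.get("exploited") or cve.get("disclosed"):
            patch_now.append(cve["id"])
        else:
            standard.append(cve["id"])
    return patch_now, standard

# Illustrative records; exploitation status comes from your advisory feed.
march = [
    {"id": "CVE-2025-24035", "exploited": False},
    {"id": "CVE-2025-24984", "exploited": True},
    {"id": "CVE-2025-24057", "exploited": False},
]
urgent, routine = triage(march)
```

In practice the flags would be populated from Microsoft’s Security Update Guide data rather than hard-coded.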
Microsoft Office
Microsoft released a single critical update (CVE-2025-24057) and 10 patches rated important for the Office platform. All of the important updates affect Microsoft Word, Excel and Access with no reports of disclosures or exploitation. Add these Microsoft Office updates to your standard release calendar.
Microsoft Exchange and SQL Server
There were no updates for either Microsoft Exchange or SQL Server this March update cycle.
Developer tools
Microsoft released five patches, all rated important, that affect Microsoft Visual Studio and ASP.NET. Add these updates to your standard developer release schedule.
Adobe (and third-party updates)
This month, Adobe released a security update (APSB25-14) for Acrobat and Reader for Windows and macOS that addresses six critical and three important vulnerabilities. Successful exploitation could lead to arbitrary code execution and memory leak. Adobe is not aware of any exploits in the wild for any of the issues. For some reason this update was not included in this Microsoft patch cycle. Maybe that’s as it should be.
Study: AI chatbots usually cite incorrect sources
Popular AI services are not very good at locating the correct original source, according to a new study from Columbia Journalism Review’s Tow Center for Digital Journalism.
In the study, researchers selected 10 articles from 20 different publishers and then manually selected quotes from them to use in their queries. After each chatbot got the quotes, it was asked to identify the corresponding article’s title, original publisher, and publication date.
The researchers deliberately chose quotes that would give the correct original source if typed into the Google search engine.
In total, eight different AI chatbots were tested and, on average, they produced the wrong source 60% of the time. Perplexity performed best — and still got the citation wrong 37% of the time. The worst performer was Grok 3, which was wrong 94% of the time.
The researchers note that while most of the AI tools produced incorrect answers, they still presented them with great confidence. This was particularly true for paid versions of the AI chatbots. The researchers also found evidence that the AI chatbots’ web spiders often ignored publishers’ paywalls they were supposed to respect.
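The study’s scoring approach is straightforward to reproduce in outline: compare each chatbot answer against the known article metadata and compute an error rate. A minimal sketch follows; the field names and sample data are invented for illustration and are not the Tow Center’s actual code or dataset:

```python
def citation_error_rate(answers, ground_truth):
    """Fraction of answers where title, publisher, or date is wrong.

    An answer counts as an error if any of the three fields fails
    to match the known metadata for that quote.
    """
    wrong = 0
    for quote_id, answer in answers.items():
        truth = ground_truth[quote_id]
        if any(answer.get(k) != truth[k] for k in ("title", "publisher", "date")):
            wrong += 1
    return wrong / len(answers)

# Illustrative data: one correct citation, one with a wrong publisher.
truth = {
    "q1": {"title": "A", "publisher": "P1", "date": "2024-01-02"},
    "q2": {"title": "B", "publisher": "P2", "date": "2024-03-04"},
}
answers = {
    "q1": {"title": "A", "publisher": "P1", "date": "2024-01-02"},
    "q2": {"title": "B", "publisher": "Wrong", "date": "2024-03-04"},
}
rate = citation_error_rate(answers, truth)
```

Applied to the study’s numbers, an average rate of 0.60 across eight chatbots corresponds to the reported 60% failure figure.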
IDC: 80% of companies plan to buy AI PCs this year
AI PCs could solve key issues organizations face when using cloud and data center AI instances, including cost, security, and privacy concerns, according to a new study by IDC Research.
Nearly all organizations are already using or planning to use cloud-based AI platforms. At the same time, many of those projects have been stunted for various reasons, according to the study, which was sponsored by AMD.
The percentage of AI PCs in use is expected to grow from just 5% in 2023 to 94% by 2028, IDC said. The research firm surveyed 670 IT decision-makers from large companies in the US, UK, France, Germany, and Japan to explore their views on AI PCs. The November survey found that 97% of respondents plan to deploy AI to more employees in the future.
“This reflects a broader trend toward democratizing AI capabilities, ensuring that teams across functions and levels can benefit from its transformative potential,” Tom Mainelli, IDC’s group vice president for device and consumer research, said in the report. “As AI tools become more accessible and tailored to specific job functions, they will further enhance productivity, collaboration, and innovation across industries.”
The report builds off the AMD 2023 Commercial Survey and shows that IT decision-makers remain bullish on AI’s benefits for their organizations, even as they face key challenges impacting widespread adoption.
When looking at the IDC report compared to the AMD 2023 Commercial Survey, the new data found that:
- Security risks (32%) remain a top barrier for decision makers adopting cloud-based AI tools and platforms, down from 67% two years ago.
- IT decision makers are more optimistic about AI PCs boosting productivity (76%) than AI in general; 67% felt that way in 2023.
- Most of those surveyed (82%) see AI PCs as a positive for employees and expect to invest in new hardware before the end of the year
Cost has been a major drag on AI projects. For smaller organizations, rolling out a single in-house instance of generative AI (genAI) can cost from $50,000 to $500,000. For larger enterprises, the costs quickly soar into the millions of dollars. At the same time, using a cloud provider brings privacy and security risks, as organizations have to rely on third-party providers.
By 2030, companies are expected to spend $42 billion a year on genAI projects such as chatbots, research, marketing, and summarization tools. And while the technology has been heralded as a boon to productivity, nailing down a return on investment (ROI) in genAI is elusive.
Because of those ROI challenges, nearly one-in-three genAI projects will be scrapped, according to research.
Seventy-four percent of those surveyed by IDC expect AI PCs to improve total cost of ownership, as they will natively offer the technology’s efficiencies. Companies are also confident they’ll soon measure the benefits of AI PC deployment, with 87% saying they’re ready to track ROI — and over half are willing to pay a 10% premium for PCs with NPUs offering more than 40 tera operations per second (TOPS).
AI PCs are modern systems with specialized NPUs that accelerate AI processing at the edge, combining powerful CPUs and GPUs for low latency, enhanced privacy, and reduced cloud costs. Though still emerging, the category is quickly gaining traction across various price points. Microsoft and partners market the higher-end systems as Copilot+ PCs, featuring AI-driven OS tools such as live captions, improved search, and Windows Studio Effects.
Organizations are turning to genAI tools on endpoint devices because security remains a top concern for IT leaders.
The top three features of AI PCs that survey respondents found most compelling are personalized employee experiences (77%), improved data privacy (75%), and enhanced security risk prevention (74%).
AI PCs address privacy and compliance challenges by running AI workloads locally, reducing the need for cloud connectivity and lowering the risk of data breaches. In sectors such as finance and healthcare, they process sensitive data on-site, ensuring compliance with regulations like HIPAA.
As independent software vendors (ISVs) integrate local AI features and companies upgrade to Windows 11, AI PCs are becoming more common, with 60% of companies planning to replace Windows 10 systems and 73% speeding up PC refresh plans.
AI PCs can also streamline IT troubleshooting, boost security, and automate tasks. In marketing, for example, they handle data-driven campaigns and optimize engagement. In operations, they predict demand and adjust inventory for better efficiency, according to IDC.
For more than a year, smaller, more adept genAI models have been migrating to endpoint devices such as smartphones, laptops, and IoT hardware. Notably, Apple, Samsung, and other smartphone and silicon manufacturers have been rolling out AI capabilities on their hardware, fundamentally changing the way users interact with edge devices.
Though AI PCs can boost productivity, organizations should collaborate with hardware and silicon vendors to understand how the technology aligns with business goals. “This helps identify AI PC solutions that address specific challenges and deliver value,” IDC said.
There are two key opportunities, the research firm said. First, companies should engage with ISVs to stay informed about AI-driven software features, enabling strategic AI PC deployments. Second, working with hardware partners to understand roadmaps helps optimize deployment across datacenters and edge environments, balancing performance, cost, and scalability, IDC said.
“By aligning strategies with technology roadmaps, businesses can unlock AI PCs’ full potential and ensure long-term success,” IDC said.
The most important decision in tech is being made today, but you won’t be told about it.
The most important decision in global technology is being made by a single UK judge in a small room, a decision happening in near-total privacy with no transparency at all.
What’s at stake is the use of data encryption, personal privacy, and the huge risk of being forced to install backdoors into tech products. If this sounds like something that could once have happened behind the Iron Curtain, think again: This world-impacting decision is all part of what seems to be a plan to turn the nation into a surveillance society.
“For your protection.”
If you’ve been following along, you already know what’s at stake.
Welcome to spook Britain
The UK is demanding that Apple open up its systems for surveillance. Apple is opposed to this demand and has already withdrawn one of the services it offered the UK as a result. Today, the company will appeal the demand of the UK Home Office in a top secret court. The public won’t be able to attend that hearing, won’t be able to comment on the case, will not be told the results — and Apple is forbidden from discussing it.
It’s a real case of authoritarian overreach on steroids.
The fact that it is happening at all will embolden governments globally to demand Apple, Google, and others install their own back doors, reducing digital security and privacy — one leaked backdoor exploit at a time. Eventually, it will threaten digital commerce.
Being secret, we don’t know if other companies are facing the same demand, but it’s reasonable to assume that if Apple is facing such stress, then Google will be facing the same thing. We just won’t be told.
Outside of public scrutiny, the UK government is making a decision that threatens serious negative consequences across most parts of life. US Director of National Intelligence Tulsi Gabbard last month called the matter a “clear and egregious violation of Americans’ privacy and civil liberties.”
The special relationship
In the shadowy halls and byzantine pathways infested by those who have acquired power, discussions are taking place. Only last night, a cross-party group wrote a furious letter to the UK government demanding that today’s decision-making process be done in public. Signatories included US Sens. Ron Wyden (D-OR) and Alex Padilla (D-CA), and Reps. Andy Biggs (R-AZ), Warren Davidson (R-OH), and Zoe Lofgren (D-CA).
“We write to request the Investigatory Powers Tribunal (IPT) remove the cloak of secrecy related to motives given to American technology companies by the United Kingdom which infringes on free speech and privacy, undermines important United States Congress and UK parliamentary oversight, harms national security, and ultimately, undermines the special relationship between the United States and the United Kingdom,” they warned.
The demand the UK is making does, of course, undermine that relationship, as Apple will be obliged to offer up the personal data of any of its users in the world.
Liberty and Privacy International have filed a legal complaint with the Investigatory Powers Tribunal (IPT) demanding the case be heard in public, wrote the Financial Times. Caroline Wilson Palow, legal director of Privacy International, argued: “The UK’s use of a secret order to undermine security for people worldwide is unacceptable and disproportionate.”
Too little, too late
There has been some consultation, albeit at the 11th hour. Bloomberg reports that UK officials are rushing around attempting to win support for its plans. Pointing to the UK’s non-existent constitution, these discussions lean deeply into tradition and expectation of privacy and balanced use of these powers. To my mind, these promises are tantamount to purchasing a vehicle from a stranger down at your local bar; there’s no trust without guarantee, and I see no guarantee in what has been promised.
The UK side has, in typical myopic fashion, argued that criticism of the attempt is “misinformed.” If that’s true, then the UK has itself to blame for this attempted digital smash-and-grab against global privacy without any significant oversight at all.
Officials argue that they don’t want blanket access and will only request data concerning the most serious crimes. That’s not really the point, of course – the point is that there are no safe back doors; vulnerabilities — even government designed ones — will be identified and abused.
One back door is also one too many, as when one government gains such access, all governments will demand the same thing.
We will never know if we are safe
What makes this act of self-harm worse is that the world won’t be told of the decision, no matter which way it goes. The UK won’t say anything, and Apple is not permitted to say anything.
That means ordinary people like me and you will never know if our digital lives remain private and secure. But governments and intelligence agencies will know, which means, inevitably, that attempts to find and exploit whatever UK-mandated backdoors are put in place will intensify. Why would any other government not attempt to exploit these holes?
Most of us won’t know anything, until the eventual and inevitable day these backdoors are weaponized and used in a vast global attack.
Far from making the world safer, this deluded demand leaves the world open to an attack that makes the CrowdStrike debacle look like a rehearsal. The bottom line? Because we won’t know how this judgement goes, we need never feel safe online again — all thanks to a decision taken in top secret by one person.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
The return-to-office Catch 22
If you haven’t read Joseph Heller’s masterpiece, Catch-22, I highly recommend you do. The Catch-22 in the novel is this: “A combat pilot was crazy by definition (he would have to be crazy to fly combat missions) and since army regulations stipulated that insanity was justification for grounding, a pilot could avoid flight duty by simply asking, but if he asked, he was demonstrating his sanity (anyone who wanted to get out of combat must be sane) and had to keep flying.”
In modern business, it goes like this: Companies or government agencies insist workers return to the office, but since they have no offices to return to, they must work from home. But since working from home is forbidden, they have to return to the office. Lather, rinse, repeat.
Stupid, right? Yes, but that doesn’t stop increasingly obtuse managers from being obsessed with forcing workers to work in an office even when there’s no power, internet access, or sometimes even space, for them to do so.
I’m not making this up.
Millions of federal workers have been ordered to return to offices across the US, only for many of them to find that those workplaces aren’t ready. Issues have included a lack of desks, Wi-Fi, electricity, and hazardous conditions such as exposed wires and malfunctioning lighting. And, then, of course, there are the plugged sinks and dirty bathrooms — because no one hired janitors to clean up newly opened government offices. (As far as we know, employees haven’t had to bring in their own toilet paper… yet.)
This change came about because US President Donald J. Trump issued an executive order soon after re-taking office demanding the government “take all necessary steps to terminate remote work arrangements and require employees to return to work in-person at their respective duty stations on a full-time basis.”
Why? It’s so he can “drain the swamp” and “usher a Golden Age for America by reforming and improving the government bureaucracy to work for the American people.” Trump’s orders further suggested remote work was a hindrance to effective management and oversight, with the belief that in-person work would improve supervision and control over employees.
I’ve said before that the real motivation for many of these efforts to get people back into cubicles is incompetent micromanagers demanding literal oversight of workers. Trump and company are perfect examples.
Trump, for example, pulled a number out of a hat by lying that “only 6% of employees currently work in person.” Even the lick-spittle, Republican-dominated House of Representatives Oversight Committee was closer to reality: it had just reported five days before that “43% of the workforce were still teleworking as of September 2023.”
Of course, since then Trump’s regime has announced, via Elon Musk’s “Department of Government Efficiency” (DOGE), that it would terminate the leases of a quarter of the government’s real estate. That’s in addition to the “non-core” government buildings the government might sell off, including federal courthouses, the headquarters of the Justice Department in Washington DC and the American Red Cross.
So, where will the remaining workers work? Good question. Like so many things DOGE is doing, there are no real answers. There are just unfounded, unproven claims that it will save money. In the meantime, government employee morale continues to circle the drain.
Some businesses, though, are also forcing employees back to non-existent offices, and they do want productive workers. Take Amazon, for example, which has been trying to shove employees back into the office five days a week for months. There’s one little fly in the soup: Amazon doesn’t have enough office space for them. Oops.
The company has had to delay its return-to-office mandate for some locations, meaning employees in cities like Atlanta, Dallas, Houston, Nashville, New York, and Phoenix might not return until as late as May 2025. (I’ll be surprised if they can all fit in by then.)
This lack of operational planning has gotten so bad that Amazon is overpaying for additional office space, with such deals as leasing serious square footage via WeWork in Manhattan to accommodate employees.
AT&T is running into similar problems with its phased return to a five-day-a-week plan. According to a Business Insider report, some employees have been told they’ll only have 70% to 80% of the workstations they need. Boy, I’m sure that the crew will be able to work better now that they’re back in the office!
Before AT&T implemented its back-to-the-workplace demands, CEO John Stankey said in May 2023 that if employees “want to be a part of building a great culture and environment, they’ll come along on these adjustments.” If this is a great culture, I’d hate to think what he believes is a lousy one.
If you’ve been reading my stories for any time, you know I’m a big believer in working from home. But, come on, people, if you feel that everyone working in the office is the right way, don’t you think you should do a better job of at least providing the spaces and resources your people need to be successful?
I mean, this is management 101. It’s also that rarest of qualities: common sense.
Intel under Tan: What enterprise IT buyers need to know
Intel’s appointment of semiconductor veteran Lip-Bu Tan as CEO marks a critical moment for the company and its enterprise customers.
With rising competition from AMD, Arm-based chips, and RISC-V alternatives, Intel faces mounting pressure to defend its x86 dominance.
While many enterprises still depend on Intel for data center workloads, AI acceleration, and PC deployments, the landscape is shifting. AMD continues to erode Intel’s x86 market share, Arm is expanding in data centers, and Nvidia has surged ahead in AI.
Windows 10 Insider Previews: A guide to the builds
Microsoft never sleeps. In addition to its steady releases of major and minor updates to the current version of Windows 10, the company frequently rolls out public preview builds to members of its Windows Insider Program, allowing them to test out — and even help shape — upcoming features.
Although Windows Insiders can choose to receive Windows 11 preview builds in one of four channels — the Canary, Dev, Beta, or Release Preview Channel — Microsoft currently offers Windows 10 Insider previews in the Beta and Release Preview Channels only.
The Release Preview Channel typically doesn’t see action until shortly before a new feature update is rolled out; it’s meant for final testing of an upcoming release and is best for those who want the most stable builds. The Beta Channel previews features that are a little further out.
Below you’ll find information about recent Windows 10 preview builds. For each build, we’ve included the date of its release, which Insider channel it was released to, a summary of what’s in the build, and a link to Microsoft’s announcement about it.
Note: If you’re looking for information about updates being rolled out to all Windows 10 users, not previews for Windows Insiders, see “Windows 10: A guide to the updates.”
Releases for Windows 10 version 22H2
Windows 10 Build 19045.5674 (KB5053643)
Release date: March 13, 2025
Released to: Release Preview Channel
This build fixes a variety of bugs, including one in which thumbnails in File Explorer crashed and caused white pages to appear instead of the actual thumbnail.
(Get more info about Build 19045.5674.)
Windows 10 Build 19045.5552 (KB5052077)
Release date: February 13, 2025
Released to: Release Preview Channel
This build fixes a variety of bugs, including one in which Open Secure Shell (OpenSSH) refused to start, stopping SSH connections.
(Get more info about Build 19045.5552.)
Windows 10 22H2 Build 19045.5435 (KB5050081)
Release date: January 17, 2025
Released to: Release Preview Channel
This update introduces a new calendar and the new Outlook app. It also fixes a variety of bugs, including one that depleted virtual memory, causing some apps to fail, and another in which the Capture Service and Snipping Tool stopped responding when you pressed Windows key + Shift + S several times while Narrator was on.
(Get more info about Build 19045.5435.)
Windows 10 22H2 Build 19045.5194 (KB5046714)
Release date: November 14, 2024
Released to: Beta Channel and Release Preview Channel
For Windows Insiders in the Beta Channel, the recommended section of the Start menu will show some Microsoft Store apps from a small set of curated developers. If you want to turn this off, go to Settings > Personalization > Start. Turn off the toggle for Show suggestions occasionally in Start. Note that this feature is being rolled out gradually.
Windows Insiders in the Beta and Release Preview Channels get several bug fixes, including one for a bug in which dragging and dropping files from a cloud files provider folder might have resulted in a move instead of a copy.
(Get more info about Build 19045.5194.)
Windows 10 22H2 Build 19045.5070 (KB5045594)
Release date: October 14, 2024
Released to: Beta and Release Preview Channels
In this build, those in the Beta Channel who have chosen to get features as soon as they are rolled out get new top cards that highlight key hardware specifications of their devices.
Insiders in both the Beta and Release Preview Channels get a new account manager on the Start menu. The new design makes it easy to view your account and access account settings. Those in the Beta and Release Preview Channels also get fixes for a variety of bugs, including one in which a scanner driver failed to install when you used a USB cable to connect to a multifunction printer.
(Get more info about Windows 10 22H2 Build 19045.5070.)
Windows 10 22H2 19045.4955 (KB5043131)
Release date: September 16, 2024
Released to: Beta Channel and Release Preview Channel
This build fixes several bugs, including one in which playback of some media could have stopped when you used certain surround sound technology, and another in which Windows Server stopped responding when you used apps like File Explorer and the taskbar.
(Get more info about Windows 10 22H2 Build 19045.4955.)
Windows 10 22H2 19045.4842 (KB5041582)
Release date: August 22, 2024
Released to: Beta Channel and Release Preview Channel
This build fixes several bugs, including one in which a memory leak sometimes occurred when you closed a window while a combo box had input focus, and another in which some Bluetooth apps stopped responding because of a memory leak in a device.
(Get more info about Windows 10 22H2 19045.4842.)
Windows 10 22H2 Build 19045.4713 (KB5040525)
Release date: July 11, 2024
Released to: Beta Channel and Release Preview Channel
In this build, Insiders in the Beta Channel get a fix that makes the search box appear on secondary monitors when the setting for search on the taskbar is set to “Search box.”
Insiders in the Beta Channel and Release Preview Channel get fixes for a variety of bugs, including one in which the TCP send code often caused a system to stop responding during routine tasks, such as file transfers, leading to an extended send loop.
(Get more info about Windows 10 22H2 19045.4713.)
Windows 10 22H2 Build 19045.4593
Release date: June 13, 2024
Released to: Beta Channel and Release Preview Channel
In this build, Insiders in the Beta Channel get bug fixes for Windows Backup. Insiders in both the Beta and Release Preview Channels get a new feature for mobile device management in which when you enroll a device, the MDM client sends more details about the device. The MDM service uses those details to identify the device model and the company that made it.
Insiders in the Beta Channel and Release Preview Channel also get a variety of bug fixes, including for a bug that could have stopped systems from resuming from hibernation after BitLocker was turned on.
(Get more info about Windows 10 22H2 19045.4593.)
Windows 10 22H2 Build 19045.4472 (KB5037849)
Release date: May 20, 2024
Released to: Release Preview Channel
This build fixes a variety of bugs, including one in which TWAIN drivers stopped responding when you used them in a virtual environment, and another in which the Windows Presentation Foundation (WPF) app stopped responding.
(Get more info about Windows 10 22H2 19045.4472.)
Windows 10 22H2 Build 19045.4353 (KB5036979)
Release date: April 15, 2024
Released to: Release Preview Channel
This build introduces account-related notifications for Microsoft accounts in Settings > Home. A Microsoft account connects Windows to your Microsoft apps. This feature displays notifications across the Start menu and Settings. You can manage your Settings notifications in Settings > Privacy & security > General.
A wide variety of bugs have been fixed, including one in which you might have gotten the stop error “0x9f DRIVER_POWER_STATE_FAILURE” when your device resumed from Modern Standby, and another in which the Windows Local Administrator Password Solution (LAPS) Post Authentication Actions (PAA) did not happen at the end of the grace period; instead, they occurred at restart.
(Get more info about Windows 10 22H2 Build 19045.4353.)
Windows 10 22H2 Build 19045.4233 (KB5035941)
Release date: March 14, 2024
Released to: Release Preview Channel
This build adds Windows Spotlight, which displays new images as your desktop wallpaper. If you want to know more about an image, click or tap the Learn More button, which takes you to Bing. To turn on this feature, go to Settings > Personalization > Background > Personalize your background and choose Windows spotlight. The update also adds sports, traffic, and finance content to the lock screen. To turn it on, go to Settings > Personalization > Lock screen. Note that these two features will roll out to users gradually.
In addition, in Windows Hello for Business IT admins can now use mobile device management (MDM) to turn off the prompt that appears when users sign in to an Entra-joined machine. To do it, turn on the “DisablePostLogonProvisioning” policy setting. After a user signs in, provisioning is off for Windows 10 and Windows 11 devices.
A wide variety of bugs have been fixed, including one in which some applications that depend on a COM+ component stopped responding. Also fixed was a deadlock issue in CloudAP that occurred when different users signed in and signed out at the same time on virtual machines.
(Get more info about Windows 10 22H2 Build 19045.4233.)
Windows 10 22H2 Build 19045.4116 (KB5034843)
Release date: February 15, 2024
Released to: Release Preview Channel
In this build, using Windows share, you can now directly share URLs to apps like WhatsApp, Gmail, Facebook, and LinkedIn. Sharing to X (formerly Twitter) is coming soon.
The build fixes several bugs, including one in which you weren’t able to use Windows Hello for Business to authenticate to Microsoft Entra ID on certain apps when using Web Access Management (WAM).
(Get more info about Windows 10 22H2 Build 19045.4116.)
Windows 10 22H2 Build 19045.3992 (KB5034203)
Release date: January 11, 2024
Released to: Release Preview Channel
This update adds eye control system settings. You can back up these settings from your previous device while you set up a new one. Those settings then install automatically on the new device so you can use them when you reach the desktop.
The build fixes a wide variety of bugs, including one in which an MDM service such as Microsoft Intune might not get the right data from BitLocker data-only encryption, and another in which some single-function printers are installed as scanners.
(Get more info about Windows 10 22H2 Build 19045.3992 (KB5034203).)
DOGE may be using an algorithm to fire federal workers
In the past month and a half, the Trump Administration has drastically reduced the federal government workforce.
The cuts alone have generated concern and anger among workers and those who rely on US government services. Adding to the angst: a new concern that government employees could be fired by an algorithm, as engineers modify a legacy reduction-in-force (RIF) software program to assist in their efforts, according to Abigail Kunkler, a law fellow with the nonprofit Electronic Privacy Information Center (EPIC).
Kunkler referenced a February article by Wired citing unnamed sources who told it the unofficial Department of Government Efficiency (DOGE) was retooling AutoRIF software to assist in deciding which employees to lay off. (Wired’s sources reported that most layoffs to that point had been determined manually.)
The day after the article was published, the US Office of Personnel Management ordered agencies to submit RIF plans and file them with the Office of Management and Budget (OMB).
While not an actual federal department, DOGE is a government entity created by President Donald J. Trump with the self-stated mission of reducing waste, fraud, and abuse. To date, DOGE’s efforts have affected 18 federal agencies with layoffs or buyouts. While the exact number of federal job cuts in 2025 remains unclear, reports estimate there have been roughly 222,000 layoffs so far, with more expected as agencies implement budget cuts.
Driven by the government cuts, US layoffs surged 245% in February, according to Reuters.
“It is not clear how AutoRIF has been modified or whether AI is involved in the RIF mandate (through AutoRIF or independently),” Kunkler wrote in a blog post. “However, fears of AI-driven mass-firings of federal workers are not unfounded. Elon Musk and the Trump Administration have made no secret of their affection for the dodgy technology and their intentions to use it to make budget cuts. And, in fact, they have already tried adding AI to workforce decisions.”
Proponents of automated decision-making software claim it improves efficiency and reduces risks of mismanagement and discrimination. However, its use raises concerns about bias, surveillance, and lack of transparency, Kunkler said. The tools often perpetuate bias due to flawed information, such as incomplete or discriminatory historical data, and can lead to arbitrary or discriminatory decisions, potentially violating workers’ rights and laws like Title VII of the Civil Rights Act of 1964.
The creep of worker data collection, surveillance, rating systems, and automated decision-making is called “algorithmic management.” DOGE’s attempts to use a large language model (LLM) to cull “unnecessary” workers is a form of algorithmic management and automated decision-making, Kunkler said.
AutoRIF, developed by the Department of Defense more than 20 years ago, helps agencies manage workforce reductions. Wired reported that DOGE operatives have been editing its code, with updates made recently through a repository in the Office of Personnel Management’s GitHub, managed by Musk associates after Trump took office. However, a review of that GitHub site showed no public repositories.
Efforts to get comment from the White House, DOGE, the DOGE Caucus, and the Office of Personnel Management were unsuccessful.
“Federal employers using automated decision-making tools sharply reduces transparency for workers and their representatives,” Kunkler said. “There is often no insight into how the tool works, what data it is being fed, or how it is weighing different data in its analysis. The logic behind a given decision is not accessible to the worker and, in the government context, it is near impossible to know how or whether the tool is adhering to the statutory and regulatory requirements a federal employment tool would need to follow.”
Mozilla: Update Firefox immediately
Expired certificates have recently caused a lot of chaos, including for Chromecast users. With that in mind, Mozilla is now urging all Firefox users to immediately update the browser to the latest version.
The reason: an older certificate expires on Friday, which means users who have not updated for a long time could be in trouble. According to Bleeping Computer, this means warnings about stolen passwords and malicious websites can no longer be displayed.
Firefox isn’t the only browser affected; others that use the same certificate include Tor, Librewolf and Waterfox.
OpenAI calls for US to centralize AI regulation
OpenAI executives think the federal government should regulate artificial intelligence in the US, taking precedence over often more restrictive state regulations.
In its contribution to a government consultation on AI regulation filed Thursday, the company also pointed to AI regulatory efforts in China as a threat to US developers — but then suggested that the US should embrace a similar model of AI vendors and government cooperation.
It suggested the government open up to voluntary partnerships with the private sector, neutralizing any advantage China might gain from US AI companies having to comply with “overly burdensome state laws.” In its 15-page filing it advised the government to “create a sandbox for American start-ups and provide participating companies with liability protections including preemption from state-based regulations that focus on frontier model security.”
OpenAI also asked for the government to “provide American AI companies with the tools and classified threat intelligence to mitigate national security risks.”
Act of Congress
Buried in a footnote on page 6 was the acknowledgment that the White House can’t legally make these changes without legislative support: “Federal preemption over existing or prospective state laws will require an act of Congress.”
The filing also said that OpenAI wants to have the freedom to train on information, despite any legal protections that forbid it. “The federal government can both secure Americans’ freedom to learn from AI, and avoid forfeiting our AI lead to the PRC by preserving American AI models’ ability to learn from copyrighted material.”
The filing was a response to the federal Office of Science and Technology Policy’s request for comments on developing a federal AI Action Plan. It was one of more than 300 comments initially received.
Forrester Senior Analyst Alla Valente said OpenAI’s statement was “telling everyone what they want to hear. It was absolutely leaning into what the White House wants to hear: If you are a developer, we want you to innovate to your heart’s content and to get rid of all of those pesky state regulations.”
Other analysts agreed, suggesting that state AI laws are likely to be more rigorous, especially about protecting their citizens’ rights, privacy and security.
From an enterprise IT perspective, the longshot proposal could be attractive — not necessarily from a weaker compliance perspective, but from having fewer rules to follow. More critically, the process would reduce having to follow rules that contradict each other.
Many enterprise IT executives work for multinationals. Even if their operations are solely in the US, they likely have operations and partners in other countries. That means that enterprises already have to deal with many AI compliance rules — especially in Australia, the EU, the UK, Canada and Japan — some of which are contradictory.
Valente said that despite OpenAI’s statement stressing simplicity, “It could create more complexity. Simplicity would be global cooperation. Enterprise IT leaders are asking, ‘What does this mean for the customers I already have in regions with very different requirements?’”
Reece Hayden, lead AI analyst with ABI Research, applauded that OpenAI also talked about supporting infrastructure issues such as modernizing the US energy grids. “Those kinds of points have been overshadowed” by efficiency and security details.
Another analyst, IDC research VP Dave Schubmehl, was also skeptical that the OpenAI proposal would go very far.
“I think AI regulation at the state level is probably a foregone conclusion,” Schubmehl said, “but this is a valiant attempt by OpenAI to centralize regulation.”
Forget Apple Intelligence, for the enterprise there’s webAI
Shrouded behind the mists of Apple Intelligence, one of the more thought-provoking implementations of artificial intelligence for real-world business use doesn’t come from Apple; it comes from webAI.
The rise of webAI
webAI is a crouching tiger, but not so hidden that it didn’t get a mention when new Macs were introduced last week. The company’s AI-powered Companion assistant showcases what might happen when enterprises get hold of private, secure AI that runs locally. webAI claims Companion can be customized to handle almost any enterprise workload. (I’ve seen it trained on proprietary enterprise data running on the new Mac Studio; it felt like a glimpse into the future of on-device enterprise AI.)
On-device, on-prem, on the ball
What sets webAI apart is that all the action takes place on the device. Remarkably, this intelligence doesn’t need a Mac Studio; it can also run efficiently on an M4 MacBook Air. That means you can quite literally use AI to handle complex tasks on your Mac from anywhere you happen to be, without worrying that your valuable enterprise data will be purloined on its journey to the cloud or intercepted by rogue intelligence agencies, such as the UK.
A new era for enterprise AI?
This development suggests that businesses can deploy enterprise-grade AI without the risks associated with cloud connectivity. It also highlights the growing performance gap between Apple’s Macs and competing devices.
While the Apple Intelligence debacle is a serious blow to Apple’s credibility, the potential to use webAI and a Mac to deliver powerful business solutions indicates there’s plenty of life in the platform. Apple demonstrated this in action by running a 22-billion-parameter webAI Companion model locally on an M4 MacBook Air during its launch. That suggests enterprises might not need to rely on cloud AI services, with all the associated risks of using them.
“Privacy is the founding principle of webAI, it’s in our DNA,” said webAI founder and CEO David Stout. “We are pioneering private AI that knows your business and runs on your devices, inside your walls, completely under your control.”
Tomorrow belongs to…?
As Apple continues to focus on both privacy and security on its hardware, the value of its solutions for enterprise users increases. With a robust Apple Silicon processor improvement roadmap ahead, there’s little doubt that the capabilities of the AI models your Macs can run locally will improve.
If, and it’s a big if, Apple makes its Private Cloud Compute service available to third-party developers such as webAI, it could even enable enterprise development teams to deploy incredibly private AI systems, providing multiple benefits including enterprise-class personalized AI, Apple’s privacy and security, and the convenience of end-to-end encrypted cloud services.
Faster and further
The AI models we build today will run even more swiftly on the M5, M6, and M7 systems we already know Apple has planned. (Well, we don’t know that precisely, but we can easily surmise based on the current path.) That’s a future for AI deployment in business that takes you into 2035 and beyond — all on the device, private, secure, and at significantly lower cost than the computer clusters enterprise AI has traditionally required.
That you can run this on portable systems that cost $1,000 also means something, particularly as Windows 10 support nears an end. I predict reduced infrastructure costs, improved security and compliance, and an ROI that could make your finance teams weep with joy. And removing Windows from the equation may cheer some security professionals.
Whatever next?
For enterprise users thinking about AI deployment, webAI appears to promise that migrating to a Mac might help companies actually realize the much-hyped benefits of AI at a pretty low cost, all protected within existing security and device management frameworks.
Final thoughts? I’m disappointed that Apple failed to keep its own promises with Apple Intelligence, and I do think there will be repercussions (and not just in the stock price). But when it comes to focused AI implementations that meet the needs of business and exploit the computational power of Apple’s hardware, I’ll be watching what happens with webAI.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
DeepSeek — Latest news and insights
DeepSeek, founded in 2023 by Liang Wenfeng, a Chinese entrepreneur, engineer and former hedge fund manager, is generating a lot of buzz — and for good reason. Here are five things that make it stand out (as well as a listing of the latest news and analysis about DeepSeek).
5 things you need to know about DeepSeek
- More accessibility and efficiency: DeepSeek is designed to be less expensive to train and use than many competing large language models (LLMs). Its architecture allows for high performance with fewer computational resources, which can mean faster response times and lower energy consumption.
- Open-source availability and rapid development: DeepSeek is under active development, with new models and features released regularly. Models are often available for public download (on Hugging Face, for instance), which encourages collaboration and customization.
- Advanced capabilities (reasoning and multimodal learning): Models like DeepSeek-R1 are designed with a focus on advanced reasoning, aiming to go beyond simple text generation. DeepSeek is also expanding into multimodal learning, handling diverse input types such as images, audio, and text for a more comprehensive understanding.
- Limitations (bias and context): Like all LLMs, DeepSeek is susceptible to biases in its training data. Some biases may be intentional for content moderation purposes, which raises important ethical questions. While efficient, DeepSeek may have limitations in handling extremely long texts or complex conversations.
- Architecture and performance: DeepSeek uses a “mixture of experts” architecture, employing specialized submodels for different tasks, which enhances efficiency and can reduce training data needs. DeepSeek has demonstrated competitive performance, comparable to established models on certain tasks, especially mathematics and coding.
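The “mixture of experts” idea above can be sketched in a few lines: a small gating network scores the experts for each token, only the top-k experts actually run, and their outputs are blended. This is a toy illustration with invented sizes and random weights, not DeepSeek’s actual architecture:

```python
# Toy mixture-of-experts routing. All sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2

# Each "expert" is a tiny feed-forward layer (one weight matrix here).
experts = [rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.1  # gating network

def moe_forward(x):
    """Route token vector x to the top-k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]        # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only k of the n experts run per token: the source of the efficiency win.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Because only two of the eight experts execute per token, compute per token stays roughly constant even as the total parameter count grows — which is how such models keep training and inference costs down.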
March 13, 2025: The release of DeepSeek roiled the world of generative AI last month, leaving engineers and developers wondering how the company achieved what it did, and how they might take advantage of it in their own technology stacks.
DeepSeek claims 545% cost-profit ratio, challenging AI industry economics
March 4, 2025: In a GitHub post, DeepSeek estimated its daily inference cost for V3 and R1 models at $87,072, assuming a $2 per hour rental for Nvidia’s H800 chips. Theoretical daily revenue was pegged at $562,027, implying a 545% cost-profit ratio for DeepSeek with over $200 million in potential annual revenue.
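The reported ratio works out as (revenue − cost) / cost; a quick check from the figures above:

```python
# Verify DeepSeek's reported figures from the GitHub post.
daily_cost = 87_072       # estimated daily inference cost (H800 rental at $2/hr)
daily_revenue = 562_027   # theoretical daily revenue

ratio = (daily_revenue - daily_cost) / daily_cost
print(f"{ratio:.0%}")                                   # 545%
print(f"${daily_revenue * 365 / 1e6:.0f}M potential annual revenue")  # $205M
```

The annualized figure of roughly $205 million matches the “over $200 million” claim.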
DeepSeek offers steep discounts, escalating AI price war
February 26, 2025: Chinese AI firm DeepSeek unveiled a significant price reduction for developers using its AI models, a move that could intensify competition among both domestic and global rivals.
3 reasons Microsoft needn’t fear DeepSeek
February 12, 2025: Microsoft may have the most to lose from DeepSeek’s arrival. It’s invested billions of dollars in AI already, and has said this year alone it will invest another $80 billion. Given that DeepSeek said it built its newest chatbot so cheaply, is Microsoft throwing billions of dollars away? Can it compete with a company that can build genAI at such a low cost?
AI chatbot war breaks out with DeepSeek debut, and the winner is … you
February 5, 2025: DeepSeek’s chatbot hit Apple’s App Store and Google Play Store and downloads almost immediately exceeded those of OpenAI’s ChatGPT. In short order, the DeepSeek-R1 AI model changed the genAI market, erasing $600 billion in market capitalization — the largest intraday decline in history — before markets started to recover.
Hackers impersonate DeepSeek to distribute malware
February 4, 2025: To make things worse for DeepSeek, hackers were found flooding the Python Package Index (PyPI) repository with fake DeepSeek packages carrying malicious payloads. According to a discovery made by Positive Expert Security Center (PT ESC), a campaign was seen using this trick to dupe unsuspecting developers, ML engineers, and AI enthusiasts looking to integrate DeepSeek into their projects.
How would a potential ban on DeepSeek impact enterprises?
February 4, 2025: European regulators joined Microsoft, OpenAI, and the US government in efforts to determine if DeepSeek infringed on any copyrighted data from any US technology vendor. The investigations could potentially lead to a ban on DeepSeek in the US and EU, impacting millions of dollars that enterprises are already pouring into deploying DeepSeek AI models.
The DeepSeek lesson — success without relying on Nvidia GPUs
Feb. 3, 2025: During the past two weeks, DeepSeek unraveled Silicon Valley’s comfortable narrative about generative AI (genAI) by introducing dramatically more efficient ways to scale large language models (LLMs). Without billions in venture capital to spend on Nvidia GPUs, DeepSeek had to be more resourceful and learned how to “activate only the most relevant portions of their model.”
Nvidia unveils preview of DeepSeek-R1 NIM microservice
Jan. 31, 2025: Nvidia stock plummeted after Chinese AI developer DeepSeek unveiled its DeepSeek-R1 LLM. Last week, the chipmaker turned around and announced the DeepSeek-R1 model is available as a preview NIM on build.nvidia.com. Nvidia’s inference microservice is a set of containers and tools to help developers deploy and manage genAI models across clouds, data centers, and workstations.
Italy blocks DeepSeek due to unclear data protection
Jan. 31, 2025: Italy’s data protection authority Garante has decided to block Chinese AI model DeepSeek in the country. The decision comes after the Chinese companies providing the chatbot service failed to provide the authority with sufficient information about how users’ personal data is used.
How DeepSeek changes the genAI equation for CIOs
Jan. 30, 2025: The new genAI model’s explosion on the scene is likely to amp up competition in the market, drive innovation, reduce costs and make genAI initiatives more affordable. It’s also a metaphor for increasing disruption. Maybe it’s time for CIOs to reassess their AI strategies.
DeepSeek leaks 1 million sensitive records in a major data breach
Jan. 30, 2025: A New York-based cybersecurity firm, Wiz, has uncovered a critical security lapse at DeepSeek, a rising Chinese AI startup, revealing a cache of sensitive data openly accessible on the internet. According to Wiz, the exposed data included over a million lines of log entries, digital software keys, backend details, and user chat history from DeepSeek’s AI assistant.
Microsoft first raises doubts about DeepSeek and then adds it to its cloud
Jan. 30, 2025: Despite initiating a probe into the Chinese AI startup, Microsoft added DeepSeek’s latest reasoning model R1 to its model catalog on Azure AI Foundry and GitHub.
How DeepSeek will upend the AI industry — and open it to competition
Jan. 30, 2025: DeepSeek is more than China’s ChatGPT. It’s a major step forward for global AI by making model building cheaper, faster, and more accessible, according to Forrester Research. While LLMs aren’t the only route to advanced AI, DeepSeek should be “celebrated as a milestone for AI progress,” the research firm said.
DeepSeek triggers shock waves for AI giants, but the disruption won’t last
Jan. 28, 2025: DeepSeek’s open-source AI model’s impact lies in matching US models’ performance at a fraction of the cost by using compute and memory resources more efficiently. But industry analysts believe investor reaction to DeepSeek’s impact on US tech firms is being exaggerated.
DeepSeek hit by cyberattack and outage amid breakthrough success
Jan. 28, 2025: Chinese AI startup DeepSeek was hit by a cyberattack, according to the company, prompting it to restrict user registrations and manage website outages as demand for its AI assistant soared. According to the company’s status page, DeepSeek has been investigating the issue since late evening Beijing time on Monday.
What enterprises need to know about DeepSeek’s game-changing R1 AI model
Jan. 27, 2025: Two years ago, OpenAI’s ChatGPT launched a new wave of AI disruption that left the tech industry reassessing its future. Now, within the space of a week, a small Chinese startup called DeepSeek has pulled off a similar coup, this time at OpenAI’s expense.
iPhone users turn on to DeepSeek AI
Jan. 27, 2025: As if from nowhere, OpenAI competitor DeepSeek has risen to the top of the iPhone App Store chart, overtaking OpenAI’s ChatGPT. It’s the latest in a growing line of genAI services and seems to offer some significant advantages, not least its relatively lower development and production costs.
Chinese AI startup DeepSeek unveils open-source model to rival OpenAI o1
Jan. 23, 2025: Chinese AI developer DeepSeek has unveiled an open-source version of its reasoning model, DeepSeek-R1, featuring 671 billion parameters and claiming performance superior to OpenAI’s o1 on key benchmarks. “DeepSeek-R1 achieves a score of 79.8% Pass@1 on AIME 2024, slightly surpassing OpenAI-o1-1217,” the company said in a technical paper. “On MATH-500, it attains an impressive score of 97.3%, performing on par with OpenAI-o1-1217 and significantly outperforming other models.”
How AI-enabled ‘bossware’ is being used to track and evaluate your work
Employee monitoring software, also called “bossware” and “tattleware,” is increasingly being used to track and manage employees remotely via a business network or by using desktop software.
And now, bossware vendors are injecting artificial intelligence (AI) tools into their products, shifting the employee monitoring software from basic tracking to something more granular that can offer deeper, more actionable insights — and even play a role in layoffs.
In a survey last year, online privacy and security provider ExpressVPN said it found 61% of companies are using AI-powered analytics to track and evaluate employee performance.
Employee monitoring tools can increase efficiency with features such as facial recognition, predictive analytics, and real-time feedback for workers, allowing them to better prioritize tasks and even prevent burnout. When AI is added, the software can be used to track activity patterns, flag unusual behavior, and analyze communication for signs of stress or dissatisfaction, according to analysts and industry experts. It also generates productivity reports, classifies activities, and detects policy violations.
“In fact, we’re now seeing employers track physical spaces with tools, including video surveillance (69%) and badge-based entry/exit tracking (58%), as companies demand employees return to the office,” said Lauren Hendry Parsons, ExpressVPN’s privacy advocate.
Overall, remote employee monitoring is now at an all-time high. Depending on the software being used, AI-infused bossware can perform:
- Activity Tracking & Behavior Analysis: AI flags unusual behavior such as excessive time on non-work tasks or changes in typing patterns.
- Sentiment Analysis: Communication is monitored for signs of stress or dissatisfaction.
- Automated Report Generation: AI compiles data into productivity reports with insights and recommendations.
- Data Categorization: Activities are classified as productive or not to help managers focus on key areas.
- Facial Recognition & Biometric Monitoring: Attendance and engagement are tracked through AI-driven facial recognition.
- Automatic Policy Violation Detection: Policy breaches like accessing inappropriate sites are flagged.
- Automated Scheduling & Task Allocation: AI optimizes task assignments based on employee strengths.
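As a rough illustration of what the “sentiment analysis” item above might involve, here is a deliberately simplified, hypothetical lexicon-based scorer; real monitoring products use proprietary models far more sophisticated (and more opaque) than this, and the word list here is invented:

```python
# Hypothetical sketch of lexicon-based "stress" scoring of workplace messages.
# The lexicon and threshold are invented for illustration only.
STRESS_TERMS = {"overwhelmed", "burned", "exhausted", "deadline", "quit"}

def stress_score(message: str) -> float:
    """Return the fraction of words in a message that match the stress lexicon."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in STRESS_TERMS)
    return hits / len(words)

print(stress_score("I am overwhelmed by this deadline"))  # 2 of 6 words -> ~0.33
print(stress_score("all good here"))                      # 0.0
```

Even this toy version shows why critics worry: a crude pattern match over private messages produces a single number a manager may treat as meaningful, while the employee never sees how it was computed.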
Some AI productivity tools track data such as hours online, emails sent, and other activities to give employees a score on their work. How managers interpret that score varies; some might rely on it at face value, while others, especially in office settings, might depend more on their own judgment and qualitative insights about an employee.
Other concerns arise when companies use the scores to make employment decisions, such as layoffs, without considering the full context behind the data, according to Pegah Moradi, a workplace automation researcher and PhD candidate at Cornell University.
Companies can use extensive data on employees to make decisions, creating an imbalance of power since employees don’t have access to the same data about themselves. With the rise of remote work post-COVID, “attention tracking has become common,” where employees are logged out or flagged for being idle too long, Moradi said.
The role of AI in monitoring
While managers have always used metrics to assess employee performance, AI tools can now consolidate those metrics into a single score that’s harder to interpret. This trend is growing, partly due to the availability of large language models (LLMs), according to Moradi.
LLMs are often used in predicting employee behaviors, including the risk of quitting, unionizing, or other actions, Moradi said. However, their role is mostly in analyzing personal communications, such as emails or messages. That can be tricky, because interpreting messages across different people can lead to incorrect inferences about someone’s job performance.
“If an algorithm causes someone to be laid off, legal recourse for bias or other issues with the decision-making process is unclear, and it raises important questions about accountability in algorithmic decisions,” she said.
The problem, Moradi explained, is that while AI can make bossware more efficient and insightful, the data being collected by LLMs is obfuscated. “So, knowing the way that these decisions [like layoffs] are made are obscured by these, like, black boxes,” Moradi said.
Technology worker rights organizations argue remote employee monitoring produces more negative results than positive. The Electronic Frontier Foundation (EFF), which originated the term “bossware,” has denounced employee monitoring software as a violation of privacy. The Center for Democracy and Technology (CDT) has denounced bossware as a threat to the safety and health of employees.
Matt Scherer, CDT’s senior policy counsel for Workers’ Rights and Technology, said there is considerable anecdotal evidence that the use of these tools has increased over the past 10 years as data has become more valuable — particularly in the years since COVID-19 led to an increase in remote work.
Bossware is also unhealthy for workers because it discourages breaks, enforces a faster work pace, and reduces downtime; those can combine to increase the risk of physical injuries from job strain and mental health issues, according to Scherer.
“There also appears to be an increase in the number of companies using systems in ways that threaten workers’ legal rights, such as by disrupting the right to organize,” he said. “But to me, the most troubling thing is that we just plain don’t know how common these surveillance systems are, which employers are using them, or which workers are being affected.”
Hudson Hongo, a spokesman for the digital rights advocacy group Electronic Frontier Foundation, argued that most bossware is punitive and meant to penalize workers. He agreed it can jeopardize employee health and place workers’ privacy and security at risk.
“Workers have legal and contractual rights that protect them against privacy violations, wrongful termination, and other unjust treatment — including actions guided by automated decision-making (ADM) systems,” Hongo said. “While ADM vendors may promise employers increased efficiency or more objective decision-making, these systems are frequently faulty, have repeatedly demonstrated bias, and may not be aware of relevant local and state laws and regulations.”
Many tech vendors add generative AI (genAI) to performance management systems, promising time savings and more objective, data-driven evaluations, according to Gartner Research. However, adoption is limited, with 52% of human resources leaders reporting no interest in AI for performance management. HR leaders are often hesitant to adopt the technology due to its novelty and unresolved compliance concerns, according to Gartner HR Director Analyst Laura Gardiner.
But by 2027, 30% of organizations will provide targeted training for managers on how to contextualize AI-generated performance feedback, increasing managers’ skill in the responsible use of genAI, according to Gardiner.
Is regulation the answer?
Several states, including California, Illinois, New Jersey, New York, and Vermont, have proposed laws regulating automated tools in hiring, firing, and compensation, according to the Center for Labor and a Just Economy at Harvard Law School.
A 2023 Massachusetts bill sought to regulate automated decision-making and protect worker data from surveillance, and included a private right of action for workers. Under such proposals, workers would be able to appeal or correct decisions made by automated systems. Most of these measures require impact assessments conducted by an independent third party, and they call for employers to provide timely access to assessment results, including relevant information, when AI or automated systems affect employment.
The Massachusetts proposal has not yet been enacted into law. In fact, Scherer said, there are no laws in the United States that place hard limits on what types of surveillance employers can conduct on workers — or on what types of data they can collect. “In the workplace, violations of employee privacy do not usually raise the specter of lawsuits, which is one of the reasons stronger regulation is needed as worker surveillance tools become more common,” Scherer said.
Not all bossware is for employee performance monitoring. There’s a security rationale, too, because employees — whether acting intentionally or not — are a top cause of data breaches. According to a Verizon study, 82% of breaches resulted from employee errors or insecure acts, and up to 68% involved non-malicious human error, such as inadvertent actions or falling for social engineering scams.
To prevent data breaches, IT organizations use employee monitoring software to track illegal activities, protect confidential information, and watch for insider threats.
A matter of trust
When approached with care, employee surveillance can improve operations without compromising dignity or trust. The key is recognizing that it’s not just about technology, but the balance of power between employers and employees, said ExpressVPN’s Parsons.
“When monitoring shifts from a tool for productivity to an invasive form of spying, it creates distrust, stifles creativity, and breeds resentment,” she argued. “Employers need to reflect on whether their monitoring systems may unintentionally damage morale.”
For example, instead of using real-time monitoring or biometric tracking, employers could focus on measuring outcomes. That would give workers more autonomy, fostering a positive and productive work environment, Parsons said.
“There are ways in which those kinds of tools can be used to ensure fairness, to ensure equal treatment, to ensure inclusivity,” said David Brodeur-Johnson, employee experience research lead at Forrester.
Imagine, for example, that a large organization gathers data on employee sentiment, tone, and interactions in an anonymous, aggregated way to track overall mood over time. Or, after announcing a major corporate change that could create uncertainty, anxiety, or mistrust, business leaders can analyze the data to see how employees are reacting — what they’re worried about or where confusion exists, Brodeur-Johnson said.
“This insight helps leadership adjust messaging, clarify priorities, and offer more support,” he said. “However, there’s a risk this data could also be used in unethical ways.”
Several prominent companies, Brodeur-Johnson said, use AI-enhanced tools to evaluate employee performance and time engaged in work, including ActivTrak, Microsoft Viva, DeskTime, and Sapience Analytics. Others include Veriato (owned by Awareness Technologies), Time Doctor, Snapsoft, Hubstaff, and Teramind.
Fighting employee-monitoring ‘myths’
Awareness Technologies CEO Elizabeth Harz said there’s a lot of “misinformation” and “myths” about the employee monitoring software industry, most of it rooted in fear of new technology. Her company’s Veriato employee monitoring software is no different from the sales-tracking platforms that companies like Salesforce.com offer, she said — it just spans a greater number of business use cases.
Companies are responsible for protecting their most valuable assets — people and data — regardless of hybrid or remote work, she said. Monitoring software ensures they’re also operating in safe work environments, protected from harassment, and that customer data is safeguarded to prevent misuse.
“These responsibilities have existed for decades, but now technology offers modern tools to manage them more effectively, just like how sales-tracking tools like Salesforce became standard,” Harz said. “I could defend any type of automation that’s happened in the last 20 or 30 years, and we could see the benefits that have come out of it. And this area is no different. I think in five years, it will be extremely commonplace.”
Businesses use Veriato’s software to monitor employee actions on company-issued devices, including smartphones, tablets, laptops, and desktops; the practice is known as user activity monitoring (UAM). UAM helps with two main goals: improving productivity and managing insider risk, she said.
For example, if an employee is copying a customer’s Social Security number or other personally identifiable information and pasting it into a Word document, it will be flagged by Veriato and sent to a manager in order to stop the activity in real time. “The employee may not realize that they’re compromising a customer’s data. They’re just trying to work faster,” Harz said.
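The real-time flagging Harz describes amounts to pattern matching on text as it moves through the system. A minimal sketch of the idea, assuming a simple regex check for SSN-like strings (the pattern and function names here are hypothetical illustrations, not Veriato’s actual implementation):

```python
import re

# Hypothetical example: flag clipboard text that looks like a US Social
# Security number (NNN-NN-NNNN) so a manager can be alerted in real time.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def flag_pii(clipboard_text: str) -> bool:
    """Return True if the text appears to contain an SSN-like pattern."""
    return bool(SSN_PATTERN.search(clipboard_text))

print(flag_pii("Customer SSN: 123-45-6789"))  # True — flagged for review
print(flag_pii("Order total: $123.45"))       # False — no SSN-like pattern
```

Real products would combine many such detectors (credit card numbers, names, account IDs) with context about where the data is being pasted.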
The software can also be used to track an employee’s hours at work, which can be used to create efficiencies. For example, if one employee works 38 hours a week and another works 80 — and both accomplish the same amount of work — the software’s data can be used to find ways to reduce wasted effort.
The data can also be used to decide who should be laid off, which Harz said isn’t any different than if a manager noticed poor performance over time. “And so if someone gets an alert that says a lawyer is spending half their time on Gmail, and somebody digs in there, they can see forensic-level screen grabs and show what that employee was doing. And they can share that with the employee,” she said. “If someone is not working 80% of the day, they’re watching cat videos on YouTube, they probably should go find a different job.”
Exactly what’s being watched by employee monitoring software and services depends on the platform and its privacy features. Some automatically remove personally identifiable information before analysis, ensuring ethical data use. For example, Microsoft focuses on protecting identity and analyzing only aggregate data. Trust and safeguards are built in to prevent misuse, Brodeur-Johnson said.
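Removing identifiers before analysis, as described above, can be sketched in a few lines: redact obvious identifiers from the raw text, then keep only aggregate numbers. This is a hedged illustration of the general approach, not any vendor’s actual pipeline; the regex and sample data are invented for the example:

```python
import re
from statistics import mean

# Hypothetical sketch: strip email addresses before analysis so that only
# redacted text and aggregate scores leave this processing step.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def redact(text: str) -> str:
    """Replace email addresses with a placeholder before analysis."""
    return EMAIL.sub("[REDACTED]", text)

# Invented sample data: (message, sentiment score) pairs.
messages = [
    ("alice@example.com says the new policy is confusing", -1),
    ("bob@example.com likes the flexible hours", 1),
]

cleaned = [redact(text) for text, _ in messages]
avg_sentiment = mean(score for _, score in messages)

print(cleaned[0])      # "[REDACTED] says the new policy is confusing"
print(avg_sentiment)   # 0 — only the aggregate is retained
```

Production systems go further, removing names and other quasi-identifiers and enforcing minimum group sizes before reporting aggregates, but the principle is the same: identity is discarded before analysis.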
Ironically, the increasing use of AI-enhanced monitoring software has also given rise to ways to game the software, according to Moradi. “You’re seeing the rise of these sorts of systems to like…a mouse jiggler [that] jiggles your mouse every so often,” she said. “I have a friend who always has a YouTube video playing on her computer because that shows she’s online.
“I find it kind of interesting there are all these little ways that people are resisting this sort of tracking in order to reclaim their own … autonomy,” Moradi said.
Microsoft faces FTC antitrust probe over AI and licensing practices
The US Federal Trade Commission (FTC) is reportedly pressing ahead with an antitrust investigation into Microsoft, a move that could reshape competition in AI and productivity software.
As part of the probe, the FTC has issued a civil investigative demand requiring Microsoft to disclose extensive data on its AI operations, including the cost of acquiring data and training models dating back to 2016, Bloomberg News reported.
Regulators are also seeking information on Microsoft’s data centers, challenges in securing sufficient computing power to meet customer demand, and its software licensing practices.
Launched last year under former FTC Chair Lina Khan, the investigation is also examining Microsoft’s decision to cut funding for its in-house AI projects after partnering with OpenAI — a move that could be viewed as limiting competition in the rapidly growing AI sector.
Khan authorized the inquiry before leaving office, with Andrew Ferguson assuming the chairmanship following President Donald Trump’s inauguration in January.
Implications for the industry
The probe into Microsoft could offer new insights into the company’s reliance on OpenAI’s models and its level of influence over the AI startup.
The findings may have far-reaching consequences for competition and the balance of power in the AI sector, including how open Microsoft is to integrating other models into its OS and software for Copilot, said Neil Shah, co-founder of Counterpoint Research. “It’s the ‘search’ wars all over again. This could have multiple implications, ranging from the integration of third-party models and the flexibility to power Copilot, which may raise optimization and security concerns for enterprises, to the pricing of Copilot and its integrated services,” he said.
Beyond AI integration, the investigation could reshape competitive dynamics in cloud computing and enterprise software.
Changes to Microsoft’s licensing terms or business practices may alter pricing strategies and introduce regulatory hurdles for cloud and AI providers.
“The FTC’s antitrust investigation into Microsoft’s cloud services and AI partnerships could definitely lead to regulatory interventions affecting licensing terms, pricing models, and competition in the enterprise AI and cloud services market,” said Mukesh Ranjan, vice president at Everest Group. “Enterprises relying on Microsoft may face disruptions or increased costs as licensing agreements are revised.”
Heightened regulatory scrutiny of Microsoft’s partnership with OpenAI could also fuel greater competition in the AI sector, benefiting rival service providers.
“We are already seeing significant innovation in this space post the release of DeepSeek,” Ranjan added. “For example, Google recently unveiled Gemma 3, an advanced AI model designed to operate efficiently on a single GPU, which it claims to be much better than OpenAI offerings.”
Effects in the enterprise
Analysts advise adopting a wait-and-see approach regarding the potential effects of the antitrust probe.
“The biggest potential impact would be reducing OpenAI’s role as the default AI provider for Microsoft products,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “Microsoft has already begun diversifying, and its partnership with OpenAI is not as tight as it was a year ago. Recent shifts toward agentic AI and open-source models have made ChatGPT less competitive as an enterprise tool, pushing Microsoft to develop independent AI solutions.”
If the investigation results in sanctions, businesses will need to monitor which AI model providers gain broader access to Microsoft’s ecosystem or whether the company will be forced to develop its own models.
“But cynically, this move appears to be an attempt to slow down OpenAI in the short term, as the AI market remains highly dynamic, with no company guaranteed to maintain its leadership over the next three to five years,” Park added.
Notably, while the probe began earlier, its timing now is particularly significant as it unfolds while OpenAI prepares to play a pivotal role in the Stargate initiative, with Elon Musk’s Grok emerging as a potential competitor.
Microsoft has not responded to requests for comment.
Google reportedly looking at smart glasses — again
After abandoning its Google Glass smart glasses in 2023, Google seems ready to take another kick at the can with what will reportedly be a $115 million acquisition of Canadian startup AdHawk Microsystems.
AdHawk has developed camera-free, MEMS-based eye-tracking technology that offers an accurate, low-latency view of where the user is looking, using considerably less power than camera-based systems. The company designs and produces all of the system’s components, “from silicon to cloud” — the chips and micro optics, the hardware, with reference designs, and the algorithms and software necessary to make it all work.
Bloomberg reported on the potential acquisition Tuesday, citing “people with knowledge of the matter” who had asked not to be identified. “The agreement is on track to be completed this week, but it’s still possible that the talks could fall apart at this late stage because the deal hasn’t been signed,” Bloomberg reported.
Neither Google nor AdHawk responded to a request for comment.
Waterloo, Ontario-based AdHawk was founded in 2017 and received funding from the venture arms of tech giants including Intel, HP, Samsung, and Sony Group. It offers several products as well as its components, and touts its MindLink and MindLink Air glasses as a way for researchers and neurologists to examine the eye-brain connection.
The technology is also a vital component of both augmented reality (AR) and virtual reality (VR) headsets. In December 2024, Google introduced Android XR, an operating system that will work on Samsung’s upcoming Moohan headset and other devices. It includes frameworks to let developers embed eye tracking in their software.
“The broader smart glasses market is heating up,” noted Julie Geller, principal research director at Info-Tech Research Group. “But let’s be real—consumer adoption of AR wearables has been slow. Companies are still refining the use cases, and eye-tracking could be the missing piece that finally makes these devices feel seamless—whether for navigation, content interaction, or even real-time ad targeting without intrusive gestures.”
“Adhawk is one of the few independent third-party solutions,” said Anshel Sag, principal analyst at Moor Insights & Strategy, “but it hasn’t been as prominent in the market as some of the others that have been acquired.”
Geller found the deal interesting on another front.
“The AdHawk acquisition deal is particularly interesting since it employs MEMS (Micro-Electro-Mechanical Systems) technology to track eye movement without relying on traditional cameras,” she said. “This allows for a more efficient and precise system, requiring far less data while delivering higher sampling rates.”
Another noteworthy facet of the potential acquisition is the price, Geller said. “AdHawk, a Canadian company behind some of the most advanced eye-tracking technology, is being acquired at what seems like a modest valuation, given its potential impact.”
“The structure of the deal also tells us something,” she added. “Google is reportedly acquiring AdHawk for $115 million, with $15 million tied to future performance. A structure like this—where only about 13% of the deal value is contingent—suggests Google has strong confidence in the company’s technology but wants to ensure its long-term viability in the market. What stands out is that, despite the relatively low price for a company pioneering high-efficiency eye-tracking, Google isn’t over-relying on the earnout to hedge its risk. That suggests they see AdHawk as a foundational piece of their AR and advertising strategy rather than just an experiment.”
Sag added that since Google is one of the few companies that doesn’t have an eye-tracking system of its own, “it makes sense why the company would go out and get Adhawk. That said, this is a much different solution than most other eye-tracking camera-based systems and could enable some unique applications, but [it] could also limit Google’s application of eye-tracking.”
Eye tracking for AR enables better visual acuity and calibration, which can reduce eye strain as well as being used as a biometric authentication method, Sag said. It also makes better user interface experiences possible.
But it’s not just about a better user interface, Geller said: “Let’s not pretend this is just about a better AR experience,” she said. “Eye-tracking might just be the ultimate attention metric, and attention is the currency of digital advertising. If Google can track exactly where users look (and for how long), it could completely reshape ad attribution, engagement, and targeting.”
Sag agreed, noting that “there are also heavy privacy implications of eye-tracking, which I believe most companies have done a good job addressing so far. Most raw eye-tracking data stays on-chip, and no biometric data leaves the device.”
A competitor, Tobii, already uses glasses with eye tracking in consumer scenarios to monitor eye movements in controlled environments; Sag believes Google could also use consumer data “in very compelling ways” — as long as it’s careful about how it manages user privacy and biometric data.
As for the business world, he said, the sky’s the limit.
“I believe the business use cases are virtually unlimited and that AI will be a strong driver of making these AR experiences compelling and financially successful,” Sag said. “It’s why I believe AR will inevitably surpass VR and MR in market size.”
This story has been updated with additional analyst commentary.