Computerworld.com [Hacking News]
How to get paid more in IT
We all go to work to get paid, and IT professionals this week were keen to find out more on the subject, devouring our article on the highest-paying IT skills of 2025.
What they really wanted to understand: how much does AI pay? As an AI-infused chatbot, Smart Answers could be expected to know the answer. Fueled by decades of authoritative human reporting, it does.
Tech professionals with AI expertise earn approximately 18% more than their counterparts without these skills, says Smart Answers. AI engineers experienced more than 12% salary growth compared to the previous year, while mid-level AI workers saw a 20% increase. Employers are willing to pay as much as 47% more for IT professionals who possess generative AI skills.
Find out: How much more do AI skills pay in 2025?
How to Win with Gen AI
While people like to get paid, the ultimate goal for many organizations is to generate cold hard cash. We often hear that enterprise data can be a hidden treasure trove, and this week we reported on how to safely monetize it by zeroing in on actionable value and guarding against common risks. Finding untapped gold isn’t easy, after all.
CIO readers wanted to know more generally how organizations can get beyond the hype and generate real value from their data using generative AI (genAI). It’s a big and much-debated question (see: How to win at AI: think like a systems designer, not a tech shopper).
Smart Answers’ advice for becoming genAI ready includes effective and connected data storage, retrieval, and processing, supported by a well-structured data architecture. Risk governance is essential, as is a focus on long-term strategy, quality data, and clear objectives, ensuring alignment across teams for a successful AI program.
Find out: What needs to be in place for effective enterprise generative AI?
Data Management FTW
What metrics do business and tech leaders trust for measuring ROI on dataops, data governance, and data security? This week, we asked the experts, and the resulting story (‘Measuring success in dataops, data governance, and data security’) was a big hit with the readers of InfoWorld.
This prompted those readers to delve into the wider benefits of a strong data-management strategy. Beyond providing a strong platform for AI success, consistent and uniform data leads to better, more comprehensive decision support. Clear rules for data processes enhance business and IT agility and scalability, and central control mechanisms reduce costs in other areas of data management.
Find out: What are the key benefits of a strong data management strategy?
About Smart Answers
Smart Answers is an AI-based chatbot tool designed to help you discover content, answer questions, and go deep on the topics that matter to you. Each week we send you the three most popular questions asked by our readers, and the answers Smart Answers provides.
Developed in partnership with Miso.ai, Smart Answers draws only on editorial content from our network of trusted media brands—CIO, Computerworld, CSO, InfoWorld, and Network World—and was trained on questions that a savvy enterprise IT audience would ask. The result is a fast, efficient way for you to get more value from our content.
Hype aside, AI may not be turbo-charging employee productivity just yet
Despite the hype that AI is going to fundamentally reinvent work, it has, as yet, had little to no effect on workflows, according to new research.
A report by economists from the University of Chicago and the University of Copenhagen, Large Language Models, Small Labor Market Effects, found that AI chatbots only saved workers about an hour a week, and in some cases, actually created new tasks.
“AI chatbots have had no significant impact on earnings or recorded hours in any occupation,” wrote researchers Anders Humlum and Emilie Vestergaard. “Our findings challenge narratives of imminent labor market transformation due to Generative AI.”
Offering a different narrative on AI
The study specifically looked at the Danish labor market in 2023 and 2024, gathering data from 25,000 workers and 7,000 workplaces. The researchers chose 11 “exposed” occupations: software developers, IT support, financial advisors, HR, accountants, customer support, legal, marketing, office clerks, journalists, and teachers.
The study found that, by late 2024, AI chatbots were widespread: most firms surveyed were encouraging chatbot use, while 38% had their own in-house models, and 30% of employees said they received training on AI tools. Research also revealed that, even with the wide variety of AI tools on the market today, ChatGPT remains the dominant player.
Notably, the researchers found that AI created new tasks for 8.4% of workers, even some who don’t personally use chatbots. These tasks tend to be more sophisticated, such as designing prompts and analyzing outputs, suggesting AI may restructure jobs.
The overwhelming majority of chatbot users — between 64% and 90% in each occupation — did report that AI saved them time. On average, employees said they recouped about 25 minutes per day.
But when the researchers factored in how often workers actually use the tools, the savings worked out to only about 2.8% of working time, or roughly an hour a week.
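As a rough back-of-envelope check (not the researchers' own calculation, and assuming a standard 37-hour, five-day Danish work week), the headline figures hang together: 2.8% of a work week is about an hour, while the self-reported 25 minutes a day would only hold as an upper bound if chatbots helped on every single workday.

```python
# Back-of-envelope check of the study's headline numbers (not the researchers'
# code). Assumption: a 37-hour, five-day Danish work week.
WORK_WEEK_MINUTES = 37 * 60                   # 2,220 minutes per week

# Economy-wide estimate from the study: ~2.8% of working time saved.
study_share = 0.028
print(f"2.8% of a 37-hour week ~= {study_share * WORK_WEEK_MINUTES:.0f} minutes")  # ~62, i.e. about an hour

# Self-reported saving among chatbot users: ~25 minutes per day of use.
upper_bound = (25 * 5) / WORK_WEEK_MINUTES    # if the tools helped on every workday
print(f"Upper bound with daily use: {upper_bound:.1%}")                             # ~5.6%
```

The gap between that ~5.6% upper bound and the 2.8% estimate reflects how often workers actually turn to the tools, which is where the adoption figures above come in.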
Further, they estimated that just 3% – 7% of productivity gains translated into higher wages. That status quo applies at the company level, too: There is not yet evidence of job cuts or hiring tied to chatbots.
“Although the paper hasn’t been peer-reviewed and should be treated as such, it does offer a different narrative than that we hear in other circles,” said Justin St-Maurice, technical counselor at Info-Tech Research Group.
Use goes up dramatically when employer-endorsed
The study also revealed the importance of employer encouragement. When supported and trained by employers, 83% of workers used AI, compared to 47% without encouragement. Similarly, daily adoption was 21% when employers promoted AI use, compared to 8% of those using the tools of their own accord.
“This underscores the importance of firm-led complementary investments in unlocking the productivity potential of new technologies,” Humlum and Vestergaard wrote.
However, they said, it should be noted that even when encouraged, more experienced/older workers were less likely to adopt the technology, reflecting habit inertia (sticking to the established way of doing things).
AI research is more grounded than widespread hype
While the findings seem to contradict bolder claims about AI’s impact, they do align with other, more grounded assessments of AI in the workplace.
“The measured value of generative AI in the workplace so far has been quite mixed,” said Hyoun Park, CEO and chief analyst of Amalgam Insights. For instance, his firm estimates that less than 10% of employees have been able to integrate AI into more than 10% of their work.
Park also pointed out that one of the biggest value propositions for generative AI so far is code creation, yet less than 1% of all US employees are developers, and not every developer can fully integrate genAI into their work.
Similarly, Microsoft has suggested that people may save 14 minutes per day, or 2.9% of their daily working time, using Copilot, noted St-Maurice.
“This is different from other sources suggesting that jobs are under threat, and that job losses have started as a result of the technology,” he said.
AI outputs are remarkable, but need to be customized
One of the challenges in workplace AI adoption is that the value propositions that do exist around data and research summarization are not necessarily applicable to the majority of workers across white collar and blue collar jobs, Park noted.
The reality is that effective AI use requires training and organization-specific configuration, he said. Vendors like to say that employees just need to ask questions and will promptly get answers from AI, but there is still much work to do around designing prompts, accessing data, and contextualizing AI outputs.
“Although the outputs coming from foundation AI models are remarkable compared to what we were able to do two or three years ago, they have not been customized to the vast majority of jobs,” said Park. “Until that happens, AI will not be extremely productive in the workplace.”
Similarly, agentic AI requires ongoing management and maintenance, which can be guided by frameworks such as Model Context Protocol and Agent2Agent. Companies also need to invest in documenting and defining processes to maximize value.
St-Maurice noted that AI may make it easier to complete some tasks, but it also raises the bar on expectations, much as the personal computer did when it replaced the typewriter: it sped up typing, improved document formatting, and enabled file management and spreadsheets. Similarly, genAI is not just speeding up existing work but redefining what “competent output” looks like.
“It changes the nature of work, but doesn’t necessarily make us more productive,” said St-Maurice.
Ultimately, Park emphasized, enterprises should make a conscious effort to identify the higher level and strategic work to be done by AI once mundane and Tier 1 tasks are automated.
“Ideally, companies should look at AI as an opportunity to improve the quality of work rather than to replace employees,” he said.
Tech hiring slows, unemployment rises, jobs report shows
Although the nation’s overall unemployment rate held steady in April, technology worker hiring slowed and unemployment rose markedly.
Tech sector companies reduced staffing by a net 7,000 positions in April, an analysis of data released today by the US Bureau of Labor Statistics (BLS) showed.
“Employers are no longer aggressively expanding their workforce, fewer individuals are leaving their jobs, and those who do are finding it challenging to re-enter the job market,” said Ger Doyle, U.S. Country Manager at employment firm ManpowerGroup. “This highlights a significant shift in labor market dynamics, where churn and confidence are low.”
Hiring gains in the tech services sector were not enough to offset job losses in tech manufacturing, telecommunications and cloud infrastructure, according to a report from tech industry association CompTIA.
Across the economy, tech employment fell by an estimated 214,000 jobs, pushing the sector’s unemployment rate up to 3.5% from 3.1% in March. “It was not a great month of data, but expected given the circumstances,” said Tim Herbert, CompTIA’s chief research officer. “Employer tech job postings continue to hold up, so [that’s] a possible sign that hiring will resume as companies find their bearings.”
Across all job sectors, employers added 177,000 jobs in April despite economic uncertainty around President Donald J. Trump’s international tariffs, BLS data showed. The overall unemployment rate remained unchanged from March at 4.2%.
“Our real-time data shows job openings down 11% year-over-year, signaling a cooling environment,” Doyle said.
Hiring remains slow as employers focus on talent retention and adopt a “wait-and-watch” approach, closely monitoring economic signals, Doyle said. While sectors like medical and executive management grow, concerns about future hiring and AI’s impact on roles persist, he said.
“While our data shows a 13% month-over-month decline in traditional software developer postings, this doesn’t tell the whole story. Developers are evolving into strategic technology orchestrators who harness AI to drive unprecedented business value,” said Kye Mitchell, head of tech employment firm Experis North America, a ManpowerGroup subsidiary.
The impact of AI on hiring was stark, as companies grapple with cleaning, organizing, and sharing data stores for potential use by the technology. Demand for database architects skyrocketed, leaping 2,312%, Mitchell said. Jobs for statisticians also rose sharply (up 382%).
In today’s economy, IT leaders must invest in AI to deliver measurable outcomes, not just for the sake of technology, according to Mitchell. “Tech leaders must be incredibly precise about where they allocate resources. This isn’t about AI for AI’s sake; outcomes [must] justify the investment, even during uncertain times,” she said.
Among vertical industries, employment continued to trend up in healthcare, transportation and warehousing, financial activities, and social assistance. But federal government employment declined amid cuts by the Trump Administration and its unofficial Department of Government Efficiency (DOGE).
Employers continue to pursue skills-based hiring strategies. About one-half of all April tech job postings did not specify a need for a four-year academic degree, according to CompTIA.
Skills-based hiring has been on the rise for several years, as organizations seek to fill specific needs for big data analytics, programming (such as Rust), and AI prompt engineering. In fact, demand for generative AI (genAI) courses is surging, passing all other tech skills courses spanning fields from data science to cybersecurity, project management, and marketing.
GenAI projects will move from pilot phase to production for many companies this year, which means workers are likely to be affected in ways never before imagined, according to Sarah Hoffman, director of AI research at AlphaSense.
“As AI automates more processes, the role of workers will shift,” Hoffman said in an earlier interview. “Jobs focused on repetitive tasks may decline, but new roles will emerge, requiring employees to focus on overseeing AI systems, handling exceptions, and performing creative or strategic functions that AI cannot easily replicate. The future workforce will likely collaborate more closely with AI tools.”
When considering new hires, 80% of corporate executives are expected to prioritize skills over degrees, with half planning to boost freelance hiring this year to fill in for a gap in AI and other skills, according to a recent study from freelancing platform Upwork.
The top 10 highest-paid skills in tech can help workers earn up to 47% more — and the top skill among them is genAI, according to employment website Indeed and other sources. “Let’s be honest, the job opportunities in the AI field for AI scientists has gone up massively,” said Julie Teigland, global vice chair for alliances and ecosystems at Ernst & Young. “There is a huge skills gap in terms of the number of people that can do that and that is not changing. Those are still in massive demand.
“Everywhere else we can talk about what jobs are changing and where the future is, but AI scientists and data scientists continue to be the top two in terms of what we’re looking for,” she said.
Geographically, when it comes to tech job postings, California topped all states in April with 26,280, up 1,037 from March. Texas, Virginia, and New York followed in total postings, while Arizona, West Virginia, and Maryland saw the largest month-over-month percentage gains, CompTIA data showed.
Apple’s latest earnings offer a glimpse at how it’s faring
Apple’s second quarter is usually interesting. As has now become customary, the company delivered new records, generated billions in revenue, and grew its services segment once again — generally good news except for the embarrassment of having to open up its app store while delivering the Q2 results.
Apple announced that iPhone shipments increased 13% despite a global slowdown across the smartphone industry, and confirmed plans to make all US iPhones in India. It also suggested that some of the sales it did achieve might have been pushed forward as consumers accelerated upgrades they intended to do later this year in order to avoid US tariffs.
What’s the cost of tariffs?
Apple CEO Tim Cook explained that he expects the Trump Tariff Tax to affect the company’s June quarter, taking a $900 million chunk out of the company’s multi-billion-dollar business. In other words, while the situation isn’t normal, it’s being managed (if not controlled).
Cook also admitted that Apple did “build ahead inventory” to help tide it over Trump’s Tariff Summer. “Obviously, we’re very engaged on the tariff discussions,” he said. “We believe in engagement and will continue to engage.”
What really hurts with tariffs?
People have always complained that Apple’s accessories cost too much. It looks like they will complain even more in the future. During the call, Cook pointed out that the recent 125% tariff on goods from China has most immediately affected things like spare parts for AppleCare and accessories, all of which currently carry a 145% tax in the now high-tax US economy. The situation is similar across third-party chains, which are threatened by much higher costs.
Apple and the supply chain
Apple flagged potential problems to come, but set out a solid bastion from which to defend itself. From making iPhones in India and Macs in Vietnam to strategic investments in the USA, the company has built a great deal of resilience into its supply chain, with which it hopes to negotiate the US trade war. All the same, it still faces an uncertain tomorrow.
With future US tariffs uncertain, the situation for the entire consumer electronics sector is very much in flux. I suggest, however, that Apple has a good-news story for Q3 buried in here: the spike in consumer sales followed the tariff announcements, which means it occurred over the weekend of April 5 – and the quarter itself ended a week before that.
“For our part, we will manage the company the way we always have, with thoughtful and deliberate decisions, with a focus on investing for the long term, and with dedication to innovation and the possibilities it creates,” Cook said.
Made in the USA
Apple is also increasing its US manufacturing partnerships. Cook confirmed that TSMC’s new manufacturing facility in Arizona will be making tens of millions of processors for Apple’s devices. “Apple is the largest and first customer to take chips made at that factory,” he said. “All told, we have more than 9,000 suppliers in the U.S. across all 50 states.”
Apple clearly believes the US can be a good home in which to manufacture some highly skilled elements of its supply chain, even if it can’t supply the kind of expertise needed elsewhere in its manufacturing supply chain.
Speaking at a meeting of US business executives this week, Cook said: “I want to take a moment to recognize President Trump’s focus on domestic semiconductor manufacturing, and we will continue to work with the administration as we invest in these areas. Needless to say, we are excited for the future of American innovation and the incredible opportunities it will create, and we are honored to do our part.”
On Made In India (and Vietnam)
There’s also news from India, where the company is accelerating its attempt to build manufacturing. “For the June quarter, we do expect the majority of iPhones sold in the US will have India as their country of origin,” Cook said, “and Vietnam to be the country of origin for almost all iPad, Mac, Apple Watch, and AirPods products also sold in the US.”
Cook also confirmed plans to open new stores in India this year. “The operational team has done an incredible job around optimizing the supply chain and the inventory, and we’ll obviously continue to do those things to the degree that we can,” he said.
Opening the App Store
The financial news comes just after US District Judge Yvonne Gonzalez Rogers forced Apple to make immediate changes to its developer agreement, opening up its US store to external payments and app shopping services. Apple plans to appeal the judgement.
Opening the store up to third-party competition is likely to chip away at services revenues while delivering additional risk and confusion to customers, but the scale of that impact remains to be seen. Whatever the impact, I expect Apple to find some way to build a new profit center out of whatever it has left.
Cook, whose judgement was called into question by the judge, said that while Apple has complied with the request, there are risks to the company’s business. Spotify, meanwhile, has forced Apple to approve an update with links to make purchases outside the App Store as the flight begins.
What about the enterprise?
Apple also used the financial call to share a little information about its growing status in the enterprise markets, noting that KPMG has introduced the iPhone 16 to all of its US employees, “reflecting their confidence in Apple’s security and privacy features.”
Cook also said that Nubank, the largest bank in Latin America, has selected the MacBook Air as a standard computer for its thousands of employees, and he highlighted Dassault Systèmes’ decision to integrate Apple Vision Pro into its platform.
What about Apple Intelligence?
Some readers might remember when Apple Intelligence was the biggest Apple story around. Amid tariffs, court cases, and regulatory limitations, the service has become something of a footnote, not helped by the fact that it is running late. “We need more time to complete our work on these features so they meet our high-quality bar,” Cook said. “We are making progress, and we look forward to getting these features into customers’ hands.”
Details from Apple’s Q2
- Quarterly revenue: $95.4 billion, up 5% year-on-year and an all-time second-quarter high (again).
- Products revenue: $68.7 billion, up 3% year-on-year.
- Services revenue: $26.6 billion.
- Company gross margin: 47.1%. Product margin was 35.9%, while services hit 75.7%.
- Profit: $24.8 billion.
- Cash dividend: 26 cents a share.
Apple expects June-quarter revenue growth in the low to mid-single digits, with gross margin down to around 45.5%-46.5% because of the estimated impact of tariff-related costs.
iPhone
iPhone sales climbed just 2% in the quarter, despite the introduction of the iPhone 16e. The category still generated $47 billion, up $1 billion on the same quarter last year. “iPhone was a top-selling model in the US, urban China, the U.K., Germany, Australia, and Japan, and we continue to see high levels of customer satisfaction in the US at 97%, as measured by 451 Research.”
iPhones also accounted for both of the two top-selling smartphones in China, Cook said.
Services
Services grew 11%, generating $27 billion this quarter, up from $24 billion a year ago. Perhaps reflecting how Apple sees this segment, and its future in the current regulatory environment, Apple CFO Kevan Parekh said: “We saw strong momentum in the March quarter, and the growth of our installed base of active devices gives us great opportunities for the future. Customer engagement across our services offerings also continued to grow. Both transacting and paid accounts reached new all-time highs, with paid accounts growing double digits year over year. Paid subscriptions also grew double digits. We have well over a billion paid subscriptions across the services on our platform.”
Mac
Mac sales grew 6% as people rushed to purchase the fantastic M4 MacBook Air. Apple generated $8 billion in Mac revenue, up from $7.5 billion last year. “The Mac installed base reached an all-time high, and we saw strong growth for both upgraders and customers new to the Mac. Customer satisfaction was reported at 95% in the US.”
Wearables
Wearables (which include AirPods, Apple Watch, and Vision Pro) generated $7.5 billion in revenue, down 5% year-on-year, making this the weakest part of Apple’s business in terms of growth.
iPad
iPad sales climbed 16%, thanks to the new iPad Air. The segment generated $6.4 billion, up from $5.6 billion in 2024. “The iPad installed base reached another all-time high, and over half the customers who purchased an iPad during the quarter were new to the product. Based on the latest reports from 451 Research, customer satisfaction was 97% in the US.”
Amazon launches Nova Premier, its ‘most capable’ AI model yet
Amazon Web Services (AWS) has launched Nova Premier, its most advanced AI model to date, via Amazon Bedrock. Designed for enterprise use, the model targets complex, multi-step workflows and supports model distillation, enabling smaller models to inherit its capabilities with improved efficiency and reduced cost.
Nova Premier can handle text, image, and long-form video inputs with a one-million-token context window (equivalent to around 750,000 words) and will support over 200 languages, according to an AWS blog. Nova Premier’s applications would span financial analysis, software automation, and agentic tasks involving orchestration across tools and data layers, AWS said.
According to Deepika Giri, head of research – BD & AI at IDC Asia/Pacific, Nova Premier distinguishes itself by applying LLMs to niche Agentic AI scenarios where performance and cost-efficiency are critical. “Its multimodal capabilities also expand its relevance across a wide range of enterprise use cases,” she said.
While AWS describes it as its “most capable model,” Nova Premier trails key rivals in some third-party benchmarks. It lags behind Google’s Gemini 2.5 Pro in coding (SWE-Bench Verified) and scores lower on math and science evaluations such as GPQA Diamond and AIME 2025. However, Amazon’s internal testing shows Nova Premier performs strongly in knowledge retrieval and visual reasoning, with scores of 86.3 on SimpleQA and 87.4 on MMMU.
Pricing is in line with industry standards — $2.50 per million input tokens and $12.50 per million output tokens — comparable to Google’s Gemini 2.5 Pro.
Distillation capabilities
A key feature of Nova Premier is its support for model distillation within Bedrock. This would allow enterprises to generate synthetic data from Premier and fine-tune smaller models such as Nova Pro, Lite, and Micro for targeted applications.
According to AWS, a distilled Nova Pro model increased API invocation accuracy by 20% while delivering similar output at lower cost and latency. The process eliminates the need for labeled training data and is positioned for edge deployments and use cases with resource constraints.
“Distillation enables customers to create smaller, more efficient models for specific tasks,” AWS noted. This sets it apart from other approaches. OpenAI’s GPT-4o-mini, for instance, leans on fine-tuning, while Anthropic’s Claude prioritizes text optimization.
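For readers unfamiliar with the technique, the sketch below shows the general distillation recipe in plain Python: a large "teacher" model labels synthetic prompts, and a smaller, cheaper "student" model is fine-tuned on those outputs. Both helper functions are hypothetical stand-ins, not AWS's actual Bedrock distillation API.

```python
# Conceptual sketch of model distillation. Both helpers are hypothetical
# stand-ins for real services (a hosted teacher model and a managed
# fine-tuning job); this is not the Bedrock API.

def call_teacher(prompt: str) -> str:
    """Stand-in for a call to the large 'teacher' model (e.g., Nova Premier)."""
    return f"[teacher's answer to: {prompt}]"

def finetune_student(examples: list[dict], base_model: str) -> str:
    """Stand-in for a managed fine-tuning job on a smaller 'student' model."""
    print(f"Fine-tuning {base_model} on {len(examples)} teacher-labeled examples")
    return f"{base_model}-distilled"

# 1. Start from task-specific prompts; no human-labeled answers are needed.
prompts = ["Summarize this claims document ...", "Classify this support ticket ..."]

# 2. The teacher generates the synthetic training data.
synthetic_data = [{"prompt": p, "completion": call_teacher(p)} for p in prompts]

# 3. Fine-tune a smaller base model on the teacher's outputs.
student = finetune_student(synthetic_data, base_model="smaller-base-model")
print("Deploy for inference:", student)
```

The appeal, as AWS frames it, is that the expensive model is only needed at training time; the distilled student then serves production traffic at lower cost and latency.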
Amandeep Singh, principal analyst at the QKS Group, said the launch is a strategic shift for AWS. “Nova Premier signals AWS’s move from a neutral model host to asserting foundational control in the GenAI value chain. This isn’t about building the biggest model, it’s about owning the orchestration, pricing, and architecture.” He added that combining proprietary models with Bedrock’s flexible interface strengthens AWS’s appeal as a sustainable AI stack for enterprises.
Currently, Nova Premier is available only to approved Bedrock users.
The future will be subtitled
A few years ago, Microsoft HoloLens and Magic Leap promised a future of groundbreaking visual experiences that blend digital and real worlds. While the headsets were bulky, expensive, and proprietary — and, in fact, both products are dying slow deaths — the demos got the public used to the idea that the future of augmented reality would be wild and filled with 3D visual content.
HoloLens projected interactive 3D holograms into users’ environments, allowing them to manipulate these holograms with natural hand gestures, eye tracking, and voice commands. Public demos showed fighting virtual robots in the living room and giant anatomical models for education.
Magic Leap showed off hyper-realistic digital humans like its AI assistant, Mica, who could recognize a user’s mood and interact as though present in the room. The company promised architectural walkthroughs and collaborative design sessions, where multiple users could manipulate big 3D models together.
Glorious stuff. But the market rejected these extremely visual appliances.
My prediction is that not only will augmented reality (and heads-up displays) go mainstream fast, but the most common use case for the visual element will be: subtitles.
The incredible power of captioning
Polls from CBS News and Preply found that more than half of home movie and TV watchers watch with subtitles on. A 2024 survey found that 70% of Gen Z adults (ages 18–25) and 53% of Millennials (ages 25–41) watch most of their online video content, including on YouTube, with captions or subtitles enabled.
If people prefer captions even when they control the ambient sound and the volume of what they’re watching, it follows that once captions are available in smart glasses or AI glasses, much of the public will probably choose to leave them on most of the time, especially in situations where context is less clear.
Connected AI glasses that can display visual content to the wearer and which can pass for ordinary, everyday glasses are ideal for providing subtitles to the world. Microphones in the glasses can listen, cameras can watch, and AI can interpret and provide its interpretation in written language visible to the wearer and to nobody else. It’s very powerful stuff.
Here are the many uses for captioning in smart glasses:
1. Hearing aids
While Apple is looking to transform AirPods into hearing aids for the hearing impaired, other companies are doing something better: providing captions for the hearing impaired.
East Coast companies Vuzix and Xander are now shipping captioning glasses for the hearing impaired. Based on Vuzix’s M400 smart glasses and Xander’s software, the product works by picking up speech through built-in microphones and then instantly transcribing and projecting the words onto the lenses. The system runs entirely on the glasses, so it doesn’t need a phone or internet connection. The glasses are available for audiologists and clinics to purchase and use with patients.
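To make the transcribe-and-project pipeline concrete, here is a toy captioning loop built on the open-source SpeechRecognition Python package. It is only an illustration of the microphone-to-caption flow, not Vuzix or Xander's software: their product runs its own models entirely on the glasses, whereas this sketch's default recognizer is cloud-backed.

```python
# Toy captioning loop: microphone -> speech-to-text -> "display".
# Illustration only (pip install SpeechRecognition pyaudio); the default
# recognizer here calls a web service, unlike the on-device Vuzix/Xander stack.
import speech_recognition as sr

recognizer = sr.Recognizer()

def show_caption(text: str) -> None:
    """Stand-in for rendering the caption on a heads-up display."""
    print(f"[CAPTION] {text}")

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    while True:
        audio = recognizer.listen(source, phrase_time_limit=5)  # ~5-second chunks
        try:
            show_caption(recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            pass  # nothing intelligible in this chunk; keep listening
```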
2. Language translation
Google demonstrated the power of captioning for language translation three years ago with an experimental prototype that has since been discontinued. Google’s highly produced video showed a perfect example of how captioning would enable conversations between two people who don’t speak the same language.
Much more recently, Meta rolled out its previously experimental Live Translation feature to all Ray-Ban Meta users. I’ve used Ray-Ban Meta glasses and live translation in Mexico, Spain, Italy, and France (I’m currently in Italy), and the feature falls short precisely because it does not offer captioning in the glasses. It speaks the translation in your ear and also types it out in the app. As a bonus, it translates my English into the other person’s language via the app.
This would be amazing if I could see all the translation happening in my glasses.
By the way, I have also been using Meta’s sister feature, Live AI. With Live AI turned on, I can ask the Meta assistant what foreign language signs mean in English, and it tells me. Imagine having all foreign language signage displayed in your own language at all times while abroad. What’s amazing about this is that, unlike Live Translate, I don’t have to tell the glasses in advance what languages I’m working with. It will translate Japanese signs as quickly as it does French or Portuguese ones.
3. Speaker notes
Shahram Izadi unveiled Google’s new Android XR platform in a recent TED Talk. While he focused on many groundbreaking applications that blend AR with AI, he also pointed out that he could see his speaker notes in his prototype glasses.
In addition to notes during speeches and presentations, captioning will give media types and politicians teleprompter capabilities in their glasses during speeches, TV hits, and podcast appearances.
4. Tours and tourism
It’s common for museums to rent out audio guides that visitors listen to while browsing collections, and for tourists to use various media to get contextual information while wandering around on vacation. AI and AR are ideal for this kind of application: tourists can learn all about any neighborhood, cultural artifact, or museum display through silent captions in their glasses.
5. Content consumption
And finally, we come full circle. With captioning in AR glasses, two people could watch a TV show, movie or YouTube video, and one could get captions while the other person could choose to not see them.
That also applies to lyrics displayed during concerts, language translation during Italian operas, song identification while at bars and restaurants, and other sorts of context spelled out in noisy settings.
The whole point of augmented reality is to provide useful information about the world which the world itself isn’t clearly providing. And while we’ve been dazzled by incredible visuals, the truth is that the best way to augment reality will almost always be with captions and subtitles.
Leaderboard illusion: How big tech skewed AI rankings on Chatbot Arena
A handful of dominant AI companies have been quietly manipulating one of the most influential public leaderboards for chatbot models, potentially distorting perceptions of model performance and undermining open competition, according to a new study.
The research, titled “The Leaderboard Illusion,” was published by a team of experts from Cohere Labs, Stanford University, Princeton University, and other institutions. It scrutinized the operations of Chatbot Arena, a widely used public platform that allows users to compare generative AI models through pairwise voting on model responses to user prompts.
The study revealed that major tech firms — including Meta, Google, and OpenAI — were given privileged access to test multiple versions of their AI models privately on Chatbot Arena. By selectively publishing only the highest-performing versions, these companies were able to boost their rankings, the study found.
“Chatbot Arena currently permits a small group of preferred providers to test multiple models privately and only submit the score of the final preferred version,” the study said.
Chatbot Arena, Google, Meta, and OpenAI did not respond to requests for comments on the study.
Private testing privilege skews rankings
The Chatbot Arena, launched in 2023, has rapidly become the go-to public benchmark for evaluating generative AI models through pairwise human comparisons. However, the new study reveals systemic flaws that undermine its integrity, most notably the ability of select developers to conduct undisclosed private testing.
Meta reportedly tested 27 separate large language model variants in a single month in the lead-up to its Llama 4 release. Google and Amazon also submitted multiple hidden variants. In contrast, most smaller firms and academic labs submitted just one or two public models, unaware that such behind-the-scenes evaluation was possible.
This “best-of-N” submission strategy, the researchers argue, violates the statistical assumptions of the Bradley-Terry model — the algorithm Chatbot Arena uses to rank AI systems based on head-to-head comparisons.
To demonstrate the effect of this practice, the researchers conducted their own experiments on Chatbot Arena. In one case, they submitted two identical checkpoints of the same model under different aliases. Despite being functionally the same, the two versions received significantly different scores — a discrepancy of 17 points on the leaderboard.
In another case, two slightly different versions of the same model were submitted. The variant with marginally better alignment to Chatbot Arena’s feedback dynamics outscored its sibling by nearly 40 points, with nine models falling in between the two in the final rankings.
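The statistical mechanism is easy to demonstrate. The minimal simulation below (a sketch, not the paper's code) assumes each leaderboard measurement of a model equals its true rating plus Gaussian noise; if a provider privately tests N equally capable variants and publishes only the best one, the published score is inflated purely by selection on that noise.

```python
import random
import statistics

# Best-of-N selection on noisy ratings (illustrative values, not Arena data).
TRUE_RATING = 1200   # hypothetical true rating shared by all variants
NOISE_SD = 15        # assumed per-measurement rating noise
TRIALS = 20_000

def observed_rating() -> float:
    """One noisy leaderboard measurement of the same underlying model."""
    return random.gauss(TRUE_RATING, NOISE_SD)

for n_variants in (1, 3, 10, 27):   # 27 echoes Meta's reported variant count
    published = [max(observed_rating() for _ in range(n_variants)) for _ in range(TRIALS)]
    lift = statistics.mean(published) - TRUE_RATING
    print(f"N={n_variants:>2}: published rating inflated by ~{lift:.1f} points")
```

With this illustrative noise level, selection alone adds roughly 13 to 30 points as N grows from 3 to 27, the same order of magnitude as the 17- and 40-point gaps the researchers observed.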
Disproportionate access to data
The leaderboard distortion isn’t just about testing privileges. The study also highlights stark data access imbalances. Chatbot Arena collects user interactions and feedback data during every model comparison — data that can be crucial for training and fine-tuning models.
Proprietary LLM providers such as OpenAI and Google received a disproportionately large share of this data. According to the study, OpenAI and Google received an estimated 19.2% and 20.4% of all Arena data, respectively. In contrast, 83 open-weight models shared only 29.7% of the data. Fully open-source models, which include many from academic and nonprofit organizations, collectively received just 8.8% of the total data.
This uneven distribution stems from preferential sampling rates, where proprietary models are shown to users more frequently, and from opaque deprecation practices. The study uncovered that 205 out of 243 public models had been silently deprecated — meaning they were removed or sidelined from the platform without notification — and that open-source models were disproportionately affected.
“Deprecation disproportionately impacts open-weight and open-source models, creating large asymmetries in data access over time,” the study stated.
These dynamics not only favor the largest companies but also make it harder for new or smaller entrants to gather enough feedback data to improve or fairly compete.
Leaderboard scores don’t always reflect real-world capability
One of the study’s key findings is that access to Arena-specific data can significantly boost a model’s performance — but only within the confines of the leaderboard itself.
In controlled experiments, researchers trained models using different proportions of Chatbot Arena data. When 70% of the training data came from the Arena, the model’s performance on ArenaHard — a benchmark set that mirrors Arena distribution — more than doubled, rising from a win rate of 23.5% to 49.9%.
However, this performance bump did not translate into gains on broader academic benchmarks such as Massive Multitask Language Understanding (MMLU), a benchmark designed to measure the knowledge a model acquires during pretraining. In fact, results on MMLU slightly declined, suggesting the models were tuning themselves narrowly to the Arena environment.
“Leaderboard improvements driven by selective data and testing do not necessarily reflect broader advancements in model quality,” the study warned.
Call for transparency and reform
The study’s authors said these findings highlight a pressing need for reform in how public AI benchmarks are managed.
They have called for greater transparency, urging Chatbot Arena organizers to prohibit score retraction, limit the number of private variants tested, and ensure fair sampling rates across providers. They also recommend that the leaderboard maintain and publish a comprehensive log of deprecated models to ensure clarity and accountability.
“There is no reasonable scientific justification for allowing a handful of preferred providers to selectively disclose results,” the study added. “This skews Arena scores upwards and allows a handful of preferred providers to game the leaderboard.”
The researchers acknowledge that Chatbot Arena was launched with the best of intentions — to provide a dynamic, community-driven benchmark during a time of rapid AI development. But they argue that successive policy choices and growing pressure from commercial interests have compromised its neutrality.
While Chatbot Arena organizers have previously acknowledged the need for better governance, including in a blog post published in late 2024, the study suggests that current efforts fall short of addressing the systemic bias.
What does it mean for the AI industry?
The revelations come at a time when generative AI models are playing an increasingly central role in business, government, and society. Organizations evaluating AI systems for deployment — from chatbots and customer support to code generation and document analysis — often rely on public benchmarks to guide purchasing and adoption decisions.
If those benchmarks are compromised, so too is the decision-making that depends on them.
The researchers warn that the perception of model superiority based on Arena rankings may be misleading, especially when top placements are influenced more by internal access and tactical disclosure than actual innovation.
“A distorted scoreboard doesn’t just mislead developers,” the study noted. “It misleads everyone betting on the future of AI.”
Google now injects hyper-personalized ads into third-party AI chats
As it stands to potentially lose ad revenue after being ruled a monopoly, and also to maintain an edge in the digital ad space as generative AI use soars, Google is reportedly now injecting ads into third-party chatbot conversations.
It’s not a surprising move, particularly given Google’s antitrust loss that could eventually lead to the breakup of its ad business (although there are likely years of appeals to come before any tangible changes). The tech giant is also in fierce competition with the likes of OpenAI, Perplexity, Meta, Microsoft, Salesforce, and a slew of others to get enterprise users to adopt its genAI platforms.
“Google knows its long-dominant search funnel is leaking,” said Julie Geller, principal research director at Info-Tech Research Group. “If conversational AI becomes the way people discover, decide, and buy, Google needs a revenue engine ready before regulators or rivals box it out.”
More than a money move
According to reports, the Google AdSense network expanded to include chatbot conversations earlier this year, after Google tested the capability last year. In particular, according to anonymous sources, the company is working with AI search startups iAsk and Liner.
The move comes as Google grapples with the fallout of not just one, but two, antitrust trials in which it was found guilty of establishing “monopoly power.” Most recently, in April, a federal judge in Virginia ruled that the company monopolized two online advertising markets: publisher ad servers and the ad exchanges that sit between buyers and sellers. Google reportedly earned nearly $265 billion in 2024 alone through ad placement and sales.
Incorporating ads into generative AI is a “risky move at a fragile moment,” noted Ria Delamere, chief technology and product officer at Traject Data. “This isn’t just about making money. It’s about trying to hold onto ground as Google faces pressure from AI-native competitors and antitrust regulators.”
An opportunity for hyper-personalization
Of course, Google isn’t the first to do this. Meta, for one, shows ads in “private” messenger chats, David B. Wright, president and chief marketing officer at W3 Group Marketing, pointed out.
“Google and other companies inserting ads in AI chatbots are just jumping into the next available space to place ads,” he said.
Geller pointed out that, in 2022, company leaders acknowledged that about 40% of Gen Z in the US were turning to TikTok or Instagram, not Google, for local recommendations on where to eat or shop, and that behavior has only accelerated since. Incorporating ads into generative AI sets the stage for “hyper-local, persona-level targeting” which could pull advertisers back from social platforms and keep both discovery and dollars inside Google’s walls, she said.
Enterprises will be able to deliver more relevant ads “at the right time to the right person,” Wright agreed. Consider conversing with an AI chatbot as a very long-tail search, he said: Ad servers can take the data from the conversation and use it to craft hyper-targeted ads.
“In an ideal world, we’d only see the ads we want to see when we’d want to see them: when we’re at the right stage of a buying decision for something we want to buy,” he said. “This could be a step closer to that.”
Preserving trust will be paramount moving forward
Experts emphasize that trust is imperative to all this. Notably, it hinges on “knowing when money changes the message,” said Geller. If users suspect an answer is ranked by revenue over relevance, “confidence tanks,” and they may migrate to more “neutral” assistants.
Google will need to flag sponsored content in real-time, explain why it surfaced the ads it did, and prove that organic answers aren’t “quietly demoted,” she said. “Anything less invites skepticism and churn.”
Delamere agreed that when ads start showing up as part of an “answer,” it gets harder to tell where information ends and influence begins. When AI is driving decisions, transparency and explainability aren’t an option, she said.
“This may help Google in the short term, but credibility is hard to earn and easy to lose,” Delamere emphasized.
From a user interface perspective, if ads distract or cause delays, consumers won’t go near the app, noted Melissa Copeland of Blue Orbit Consulting. “Consumers may try it, but if they don’t get the efficient and effective answer they are looking for, they will abandon the channel or brand.”
Meanwhile, when it comes to privacy, Geller pointed out that chat transcripts are stored and, at times, reviewed by humans, creating a “durable record of anything sensitive a user blurts out.”
“While Google offers opt-outs and deletion tools, they’re far from intuitive,” she said, emphasizing that enterprises must offer secure contract-level clarity on retention windows, human-review policies, and encryption standards, and should also insist on audit rights to verify compliance.
Look beyond a single-vendor strategy
This type of capability could help companies offer chatbot functionality at a lower cost, and potentially surface new insights about customers, noted Neil Chilson, head of AI policy with the Abundance Institute.
As with all ad media, he said, when considering the volume and type of ads, companies will need to balance short-term incentives to monetize against long-term financial incentives to keep customers coming back.
“Google is good at helping companies evaluate those trade offs in other advertising channels; it will be interesting to see how that expertise translates to this new area,” said Chilson.
Info-Tech’s Geller pointed out that the search and discovery landscape is evolving too quickly for a single-vendor strategy. She advised enterprise leaders to stay agile, demand full transparency around data use and monetization, and keep an eye on how AI-driven personalization opens new micro-market opportunities.
It’s also important to build flexibility into customer experience and marketing roadmaps, as ads are “only the opening salvo,” she noted. Further, companies should watch for new revenue models from app providers, such as subscription tiers or usage fees, and potentially embrace the benefits of hyper-local targeting. At the same time, she said, “keep exit routes open and your data governance questions sharp.”
Google rolls out ‘AI Mode’ to improve search results
Google is making changes to its venerable search interface so users can more naturally interact with its AI features.
“AI Mode,” a project brewing in Google’s Search Labs, will slowly roll out to general users within the company’s current search interface. (A new “AI Mode” tab will appear alongside its search box.)
“With AI Mode, you can truly ask Search anything — from complex explanations about tech and electronics to comparisons that help with really specific tasks, like assessing insurance options for a new pet,” Soufi Esmaeilzadeh, director of product management for Google’s search products, said in a blog post.
The new features will migrate from the experimental AI Mode features already being tested by users in Google’s Search Labs. Google has also added features to the experimental AI Mode so users get better search results.
With AI Mode now ready for the real world, Google promises the tool will offer more than AI Overviews, which provides basic insights for questions plugged into the search box. AI Mode is based on the Gemini 2.0 AI model.
“Because our power users are finding it so helpful, we’re starting a limited test outside of Labs. In the coming weeks, a small percentage of people in the US will see the AI Mode tab in Search,” Esmaeilzadeh said.
Google’s experimental AI Mode app had been available only to a limited set of users. The app is available for Apple’s iOS and Android.
A Google spokesperson declined to offer further details about AI Mode search.
Google has been talking about integrating AI into search results more comprehensively since the day it launched Bard, its first generative AI chatbot. The early models hallucinated and malfunctioned, so Google has been cautious about rolling AI into its general search features.
But the company had to roll out core AI features to its search tools as soon as possible, said Jack Gold, principal analyst at J Gold Associates.
OpenAI and Anthropic have built search into their AI interfaces, and Meta recently launched its own chatbot based on Llama 4. Microsoft was already ahead of Google in integrating AI search into Bing results.
“It’s seeing increasing competition for AI from companies like Meta and OpenAI that could take some share away from them…, but it’s not clear that a competent AI model couldn’t essentially duplicate and enhance search functions for many users — see Perplexity, as an example,” Gold said.
Google attaching Gemini closer to its search tools offers several benefits, including feedback from users on how well the answers resonate. Enhancing search with AI could also drive down Google’s compute power and infrastructure costs as it could limit the number of searches needed before users get desired results.
“It can better tune its models for accuracy. It also enhances their ability to target ads at users, as AI will show complementary topics that can then be advertised about,” Gold said.
The experimental AI Mode in Search Labs already delivered useful information about products and places. Google is now adding more rich results and multimedia features, and searches for destinations and products will appear in a more organized format.
“Rolling out over the coming week, you’ll begin to see visual place and product cards in AI Mode with the ability to tap to get more details,” Esmaeilzadeh said.
Shopping, dining, and services results will have more options, real-time pricing, promotions, and ratings. And a new left-side panel on the desktop will make it easier to jump back into past searches on longer-running tasks and projects.
Typically, Google requires consent to record search history to understand user trends. A Google spokesperson declined to comment on whether AI Mode would require that.
What is an AI PC? The power of artificial intelligence locally
Unlike traditional computers, an artificial intelligence PC, or AI PC, comes with AI capabilities built in by design. AI runs locally, right on the machine, allowing it to essentially learn, adapt, reason and problem-solve without having to connect to the cloud. This greatly increases the performance, efficiency and security of computing while enhancing user experience.
How are AI PCs different from traditional PCs?
Traditional PCs run on CPUs and GPUs (though most use a GPU integrated into the CPU for everyday tasks), and their essential components include a motherboard, input devices like keyboards and mice, long-term storage, and random-access memory (RAM) for short-term data. While they excel at tasks such as everyday web searching, data processing and content streaming, they typically don’t come with many built-in AI features — and they struggle to perform complex AI tasks due to limitations with latency, memory, storage and battery life.
[ Related: What is a GPU? Inside the processing power behind AI ]
AI PCs, by contrast, come preloaded with AI capabilities so that users can get started with the technology right out of the box. They feature integrated processors, accelerators and software specifically designed to handle complex AI workloads. While they also incorporate GPUs and CPUs, AI PCs contain a critical third engine: the neural processing unit (NPU).
5 things you should know about AI PCs
- Local AI processing: AI PCs handle AI tasks on-device with specialized hardware (NPUs) for improved performance, privacy, and lower latency.
- Enhanced productivity: AI PCs boost efficiency and enable new capabilities like improved collaboration, personalized experiences, and advanced content creation.
- Robust security is imperative: AI PCs require a strong security framework, including hardware, data, software, and supply chain considerations.
- The market is growing: The AI PC market is expanding rapidly, with increasing availability, decreasing costs, and a growing software ecosystem.
- Big IT impact: AI PCs will require updates to IT infrastructure and management practices, including device management, application development, network infrastructure, and cost analysis.
NPUs perform parallel computing in a way that simulates the human brain, processing large amounts of data all at once — at trillions of operations per second (TOPS). This allows the machine to perform AI tasks much faster and more efficiently than regular PCs — and locally on the machine itself.
The key components of AI PCs
The generally agreed-upon definition of an AI PC is a PC embedded with an AI chip and algorithms specifically designed to improve the experience of AI workloads across the CPU, GPU and NPU.
All of the major PC vendors — Microsoft, Apple, Intel, AMD, Dell, HP, Lenovo — are building their own versions of AI PCs. Microsoft, which offers a line of Copilot+ AI PCs powered by Snapdragon X Elite and Snapdragon X Plus processors, has set a generally accepted baseline for what constitutes an AI PC. Required components include the following (a rough sketch of what the NPU requirement implies follows the list):
- Purpose-built hardware: An NPU works in tandem with CPUs and GPUs. NPU speed is measured in TOPS, and the machine should be able to handle at least 40 TOPS to support on-device AI workloads.
- System RAM: An AI PC must have at least 16GB of RAM. That’s the minimum; having twice as much (or more) improves performance.
- System storage: AI PCs should have a minimum of 256GB of solid-state drive (SSD) storage — preferably non-volatile memory express (NVMe) — or universal flash storage (UFS).
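To put the 40-TOPS NPU requirement in perspective, here is a rough, compute-bound back-of-envelope calculation. It uses the common approximation that a transformer forward pass costs about two operations per parameter per generated token; in practice, memory bandwidth, numeric precision, and scheduling matter as much as raw TOPS, so treat it purely as an order-of-magnitude sketch.

```python
# Order-of-magnitude sketch: what does a 40-TOPS NPU imply for a small,
# on-device language model? Assumes ~2 ops per parameter per generated token
# (a common approximation); real throughput is usually memory-bandwidth-bound.
npu_ops_per_second = 40e12        # the 40-TOPS baseline above
model_parameters = 3e9            # an illustrative 3-billion-parameter local model

ops_per_token = 2 * model_parameters                          # ~6 GOPs per token
ceiling_tokens_per_second = npu_ops_per_second / ops_per_token
print(f"Compute-bound ceiling: ~{ceiling_tokens_per_second:,.0f} tokens/second")  # ~6,667
```

Even with generous real-world losses, that leaves ample headroom for transcription, translation, and assistant-style tasks, which is why vendors position the NPU as able to handle such workloads without waking the GPU or calling the cloud.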
Benefits of AI PCs
AI PCs represent a move beyond traditional static machines that require constant human input, and they offer these benefits:
Enhanced productivity and computing that is truly personalized
AI has the capability to learn from what it sees and evolve based on that information; it is also increasingly agentic, meaning it can perform some approved tasks autonomously.
With AI directly integrated into a device and across various workflows, users can automate routine and repetitive tasks — such as drafting emails, scheduling meetings, compiling to-do lists, getting alerts about urgent messages, or sourcing important information from websites and databases.
Beyond that, AI PCs can support advanced content creation and real-time data processing; perform financial analysis; compile reports; enhance collaboration through voice recognition, real-time translation and transcription capabilities; and provide predictive text and writing help. Over time, AI PCs can adapt to individual workflows and eventually anticipate needs and make decisions based on user habits.
As AI agents become ever more intuitive and complex, they can serve as on-device coworkers, answering intricate business questions and helping with corporate strategy and business planning.
Reduced cloud costs, reduced latency
Building, training, deploying and maintaining AI models requires significant resources, and costs can quickly add up in the cloud. Running AI locally can significantly reduce cloud costs. Offline processing can also improve speed and lower latency, as data does not need to be transferred back and forth to the cloud.
Users can perform more complex tasks on-device involving natural language processing (NLP), generative AI (genAI), multimodal AI (for more advanced content generation such as 3D modeling, video, audio) and image and speech recognition.
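As a concrete illustration of what "running AI locally" looks like in practice, the snippet below uses the widely available Hugging Face Transformers library to run a small text-generation model entirely on the machine after a one-time download. This is a generic sketch, not any vendor's bundled AI PC software, and this toy demo runs on the CPU; vendor-specific runtimes are what actually target the NPU.

```python
# Minimal local-inference sketch (pip install transformers torch).
# After the one-time model download, generation runs entirely on-device,
# with no per-query cloud round trips. Runs on CPU here; NPU offload is
# handled by vendor-specific runtimes, which are not shown.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")   # small demo model

prompt = "Draft a short status update about this week's migration work:"
result = generator(prompt, max_new_tokens=60, do_sample=True)
print(result[0]["generated_text"])
```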
Enhanced security
Security is top of mind for every enterprise today, and AI PCs can help bolster cybersecurity posture. Local processing means data stays on the device (instead of being sent to cloud servers), and users have far more control over what data gets shared.
Further, AI PCs can run threat detection algorithms right on the NPU, allowing them to flag potential issues and respond more quickly. AI PCs can also be continually updated based on the latest threat intel, allowing them to adapt as cyberattackers change tactics.
Longer battery life, energy savings
While some AI workloads have been feasible on regular PCs, they quickly drain the battery because they require so much power. NPUs can help preserve battery life as users run more complex AI algorithms. They are also more sustainable: each on-device query or prompt is estimated to use roughly one-tenth the energy of a comparable cloud query.
Important considerations when evaluating AI PCs
Even though they represent the state of the art, AI PCs are not (yet) for every enterprise. There are important factors IT buyers should consider, including:
- Higher up-front cost: Because they incorporate specialized hardware (NPUs) and have higher memory and power requirements, AI PCs are generally more expensive than regular PCs (even if they save on cloud costs in the long run).
- Increased technical knowledge: Users well-versed in everyday PCs might struggle with built-in AI features at first, requiring more training resources. More advanced technical knowledge is also required to train AI models and develop applications. Further, genAI is still in its early phases, so enterprise leaders have many concerns about AI misuse, whether intentional or not.
- Not-yet-proven business use cases beyond nifty gadgets: There has yet to be a “killer app” for AI PCs that makes them a must-have across enterprises. If a business’s primary computing requirements are everyday tasks — think email, web searching, simple data processing — AI PCs may be too much muscle, making the increased cost difficult to justify.
While the question of whether you need an AI PC might be relevant now, that won’t be the case for much longer. “The debate has moved from speculating which PCs might include AI functionality, to the expectation that most PCs will eventually integrate AI NPU capabilities,” Ranjit Atwal, senior director analyst at Gartner, said last September. “As a result, NPU will become a standard feature for PC vendors.”
Gartner forecasts AI PCs will represent 43% of all PC shipments by the end of the year, up from 17% in 2024. Demand for AI laptops is projected to outpace that for AI desktops, with AI laptops expected to account for 51% of total laptop shipments in 2025.
AI PCs – what’s there to think about?
AI PCs represent the next generation of computing, and experts predict they will soon be the only choice of laptop available to large businesses looking to refresh. But they are still in their early proving phases, and IT buyers have important considerations to keep in mind when it comes to cost, relevance and necessity.
Download the ‘AI-Savvy IT Leadership Strategies’ Enterprise Spotlight
Download the May 2025 issue of the Enterprise Spotlight from the editors of CIO, Computerworld, CSO, InfoWorld, and Network World.
Apple could face criminal contempt charges over ‘Apple tax’
Apple’s salad days are over.
The company sits on the precipice of reinvention, and may become even more inward-facing in response to a damning US court judgment that could yet see it face criminal contempt charges.
US District Judge Yvonne Gonzalez Rogers has ruled that Apple wilfully violated a 2021 court injunction that required it to change some of its business practices in terms of permitting developers to offer customers ways to purchase digital products outside of Apple’s App Store.
‘Will not be tolerated’
The judge told the company to stop preventing developers from sharing external purchasing options and barred it from imposing commissions on transactions made outside its stores. “Apple’s continued attempts to interfere with competition will not be tolerated,” the judge wrote in her decision, finding Apple in contempt of court and noting that Apple’s Vice President of Finance, Alex Roman, lied under oath.
Documents shared during the trial reveal “that Apple knew exactly what it was doing and at every turn chose the most anticompetitive option,” Rogers wrote.
She also noted that Apple CEO Tim Cook ignored Apple Fellow Phil Schiller’s advice that Apple should comply with the injunction, and permitted former CFO, Luca Maestri, to convince him not to do so. “Cook chose poorly,” said the judge.
An Epic win
This is just the latest instalment in the long-running dispute between Apple and the developer Epic. The latter has been engaged in a multi-year, international campaign against Apple’s so-called “Apple tax,” and for now, at least, Apple appears to have lost.
Apple said in a statement it will appeal the judgement: “We strongly disagree with the decision. We will comply with the court’s order, and we will appeal.”
But what has really gone wrong here is the finding of contempt of court. This is a very serious matter, and it means a criminal component to the case has now emerged.
The judge referred the matter to the US Attorney for the Northern District of California who will consider pressing contempt charges, presumably against Roman and the company that employs him. Could this conceivably put Apple CEO Tim Cook in the dock?
A global challenge
It’s well-known by now that Apple has faced steady and unrelenting attacks against the business model that evolved around its App Store. The company has been subject to anti-trust litigation across the planet, and regulators seem to be settling on a position that will outlaw those practices — even in Apple’s key target market of India.
Apple will not be able to prevent app developers from offering their software and services via external stores and will not be able to take a percentage of sales made via those stores.
These victories might well please some developers for a while. But they will likely come at a cost to platform security and ease-of-use and could generate confusion as consumers find themselves drawn to multiple stores, not all of which will prove to be as heavily curated or as secure as Apple’s.
For Apple, the consequences of the case could see billions wiped off its revenue as developers find sales outside of its store, and as it sees some of its App Store-related payments disappear. This will damage Apple’s services segment, and as that part of its business is important to keeping the company sailing in a difficult consumer hardware sales sea, it means the company will need to turn its ship.
What will Apple do?
Apple is not without resources. It still has its hardware, platform, and services, and may now choose to compete on more equal terms with external stores — though how that might be implemented is uncertain. In the short term, it seems most likely Apple will need to charge developers more for access to the developer resources it provides. One way the company might do this is to offer a “Pro” tier to developers who want to sell software or services via its own store or any store, or to charge fees for APIs that enable external services.
It has to build those APIs, after all, which means they are a product it could conceivably try to profit from. (Whether this is permitted is unclear.)
For now, the direction of travel is obvious: the company must now swiftly change course to safely traverse these turbulent, shark-infested seas.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Inside Microsoft’s plans to reshape M365 apps with AI
Microsoft had an early software hit when it released Office for Windows 3.0 nearly 35 years ago. The graphical user interface got users away from command-line interfaces and into a new world of productivity.
Since then, the company has kept up with the changing way people work. Office 365 (now Microsoft 365) was its adaptation to the collaborative era of the cloud. And now, another major transition is under way — into the generative AI (genAI) era, with Copilot at the center of the strategy to work smarter.
Microsoft is positioning Copilot as a tool (or series of tools) for users to create, tap into and act on insights at individual, team, and organizational levels. The company also sees genAI as a way to break down the barriers between Word, Excel, PowerPoint and other apps; help users create their own apps; and declutter app user interfaces so features are easier to access.
Computerworld sat down with Microsoft’s Aparna Chennapragada, chief product officer of experiences and devices, to get an inside look at how the company is integrating genAI throughout its productivity apps.
Aparna Chennapragada, chief product officer of experiences and devices for Microsoft.
What stage are you at with Copilot in Microsoft’s productivity suites? “Our first wave brought AI to existing apps like Word, Excel, PowerPoint, Outlook, Teams for tasks like summarizing documents, prioritizing emails, recapping meetings, writing summaries, action items for meetings. As models improved at reasoning, they can now connect insights within a 200-page document way beyond human cognition.
“The second wave is M365 Copilot as a hub beyond the apps you use today. We’re building the AI hub — the productivity browser for the AI world, the one place you start and end your day. It’s a digital chief of staff, a digital assistant to stay on top of information, ask questions, and get coherent answers from the entire data of your organization and the world.”
Is there a fundamental redesign in bringing Copilot into the interface? If I’m using my desktop app, how are you thinking about that? ”We’re going for ‘hub and spokes’ — that’s the model we’re going for. You have a productivity hub, a full app that gives all the power [of Copilot], and then embedded AI in each app that you work with.
“You will have an app on Windows, on Mac, on your phone, and of course the website. That is your hub. Think about this as the full power of Copilot, the best of AI that Microsoft brings to work.
“When you’re in a document or meeting, you’ll have a narrow Sidekick presence that focuses on relevant tasks that will surface information in a thoughtful way. You have the embedded AI in each of these apps you work with.
“In a Word document, you’re unlikely to ask, ‘What’s the weather?’ But if you had this almost like a narrow Sidekick presence, then you say, ‘I’m going to act on this document. Help me with this.’”
We’ve thought of Office apps as being separate. How do you break those classic Word/Excel/PowerPoint walls? How are you coupling them? ”For folks in an organization, some subsets are deeply specialized. If you’re a coder, you live in GitHub; if you’re a lawyer, you live in Word; if you’re an analyst, you live in Excel. For those cases, we want to bring the AI to where you are.
“The lines are blurred and you start with your goal. That’s why we built M365 Copilot app as a hub. You start by saying, ‘I want to write a report. I want to riff on it’ — almost thought processing versus word processing.
“Then you go back and forth. The great thing here is that at the last mile, you can turn that into a Word document, a deck, an email, and all of the above.”
Is it time to change the user interface with the closer coupling of Word, Excel, PowerPoint via Copilot? Looking into the future, will these individual apps still exist or merge into something else? ”We think about this like a pyramid structure. There’s going to be a broad base where every employee would use the universal UI — we call Copilot the UI for AI.
“As models get better and product making gets better, as we harness work data — we think about how to securely and compliantly bring work and world data together. You get 70% of the way there in most cases. Then you have a higher value artifact — you create it through chat.
“In M365 Copilot, we launched Pages. Think of this as a universal file format across Word, Excel, PowerPoint, and…these are AI-forward documents. Once created, you can parlay that into specialized apps and tools.”
It sounds like Pages is similar to XML that will help bring Word, Excel and PowerPoint closer. ”It’s very interesting actually. We GAed this four months ago and we see people create these. Initially, when you start conversations with Copilot, you’re riffing ideas and then hit on some version. You’ve co-processed with AI and now have something of high value.
“We’re seeing people want to move that into a high-value artifact they can go back to. No one does their work in a day. You have these long-lasting things.
“Then folks ask, ‘Is this persistent? Can I share it with my team?’ So, we introduced Pages and said, ‘You don’t have to be prescriptive about the format.’ This is just a canvas as a holder, a universal AI. What is a document in the AI era? From there you can branch off into any file format.”
How are you moving beyond chat interfaces? “While chat offers zero learning curve, some things are more efficient with GUIs. We’re introducing ‘Notebooks’ as an AI canvas for projects — gathering information, iterating on outlines, and working in the background to deliver insights rather than relying on chat. This represents a shift from ‘DOS to GUI’ in the AI world.”
Will Copilot and AI features be available offline? Can I use it on local hardware if I’m somewhere with poor connectivity? “We want universal features to be accessible to everyone. We’re looking at Copilot PCs with local models running on NPUs to provide an acceptable offline feature set.
“We’re working on three key factors: retaining feature power to be useful with spotty connections, managing the model footprint, and creating a seamless orchestration layer that switches between offline capabilities and cloud resources when you’re back online.
“All of these we’re working on. The idea is that as a user, you should have access to intelligence and products wherever you are, at least in these specific ecosystems.”
When you talk about offline productivity, is it a localized model like Microsoft’s homegrown Phi Silica? Or is the AI built into Windows via a driver interface like DirectML or similar? ”It’s an ensemble of local models. The first era of local models was small footprint models. What we are doing is our teams are also looking at post-training these models for specific use cases.
“For example, for writing-versus-analysis-versus-something else or image creation, we’re making sure these things fit the needs of core users for that situation. We use pipeline, we use open-source models.”
It seems like users might create their own apps without seeing Excel or Word working in the background. Is that capability coming — bringing GitHub’s power to users — and when? ”You have a good prediction model of our roadmap. Once we can generate code, you can create lightweight apps on the fly.
“For example, in the Analyst agent we just rolled out, which is a data scientist in a box, I asked it to analyze F1 stats from 2024. I didn’t provide a dataset — I just said search the web for World Bank data and NBA stats, then tell me what’s insightful.
“While that’s an idle pursuit for me, imagine turning that towards work. Internally, we’ve used it to connect to sales data and identify anomalies in scatter plots. As a customer, you don’t care if it’s writing an app, using Excel, or something else — that’s how we’re designing it.
“We see Copilot as the browser with specialized agents working for you. Then there’s a whole slew of invisible tools. We’re building many Office assets as tools so you can simply request a properly cited Word document in a particular format without thinking about the underlying technology.”
Outlook is a daily starting point for many users, but it is confusing — it seems separate from 365 and comes in the Classic and New versions. How are you planning to integrate Copilot there? ”Watch this space. After this meeting, my next session is with the Outlook team where we are going deep into useful scenarios.
“If you think about what’s happening today, even [CEO] Satya [Nadella] mentioned this — no one grew up dreaming they’d wake up every morning to sort through 500 emails and mark 30 as spam. These are gears and mechanisms we use to get work done.
“One of our motivating principles is recognizing that modern work involves 30%-40% core productive work and then a lot of work to manage the work itself — coordination, communication, figuring stuff out, scheduling meetings. Right now, we’re all turning these gears manually, but AI should be able to handle much of that overhead for us.
“We’re looking at how Copilot can help with Outlook by advising on how you’re using time throughout your week, highlighting the most important items requiring your attention, and in some cases helping draft response points or summaries of complex email threads.
“The goal is to remove that administrative overhead so you can focus on the meaningful work rather than the work about work.”
Is your goal with AI to ultimately declutter interfaces like Word and Excel that have too many icons? I get confused sometimes when using Classic Outlook or Word. ”One-hundred percent. Today, there’s such depth in these apps from years of adding valuable features for enterprise users. Traditionally, we faced a trade-off between learning curve and power usage — feature discovery is hard, but once you do, it’s very powerful.
“With AI, we can eliminate that trade-off. You’ll still need to learn how to ask for things, but it’s much easier than learning tools from the ground up.
“Power users can keep their muscle memory while we democratize that power to everyone through Copilot, making those accumulated enterprise features accessible to everybody.”
How are you approaching third-party plugins like Adobe Express? And regarding security, how do you manage appropriate data access with AI handling so much information? ”For plugins, we think there will be high-value tools Microsoft builds, but also many created by others. We’ve seen over 100,000 agents and flows built with Microsoft Copilot Studio already. We aim to provide an agent store where you can discover, install, and pin these tools — making it not just a product but a platform. Some plugins might serve just three people, while others like Adobe Express will reach millions.
“Regarding security, our unique responsibility has three aspects: First, bringing work data together with world knowledge, like combining latest competitor news with internal company data for sales prep. Second, integrating into existing workflows people already use. And third — most importantly — doing this securely, with privacy preservation and compliance.
“Our Copilot control system gives IT admins a complete view of all activities and deployed agents, with controls to manage everything and strong guarantees on security and compliance.”
Microsoft tries to reassure Europe that it can resist the US government. Europe has doubts
Microsoft on Wednesday released a statement aimed at convincing global IT leaders, and particularly those in Europe, that it can still be trusted, but analysts in Europe said its statement was not persuasive.
Much of the European nervousness comes from American tariffs and the inevitable responding tariffs from the European Union (EU). But the fears go beyond that, with some European IT and cybersecurity executives worried about what American technology firms might be forced to do by the Trump administration. Those fears are fueled by the recent politicization of security clearances.
Microsoft’s detailed statement, attributed to vice-chair and president Brad Smith, spent a lot of words recapping all of what Microsoft has done in Europe over the years.
“Our economic reliance on Europe has always run deep. We recognize that our business is critically dependent on sustaining the trust of customers, countries, and governments across Europe,” Smith wrote. “We respect European values, comply with European laws, and actively defend Europe’s cybersecurity. Our support for Europe has always been — and always will be — steadfast. In a time of geopolitical volatility, we are committed to providing digital stability.”
Increasing European capacity
“Today, we are announcing plans to increase our European datacenter capacity by 40% over the next two years,” the statement said. “We are expanding datacenter operations in 16 European countries. When combined with our recent construction, the plans we’re announcing today will more than double our European datacenter capacity between 2023 and 2027. It will result in cloud operations in more than 200 data centers across the continent.”
It added, “this expansion will play an important role in boosting Europe’s economic growth and competitiveness. We believe that broad AI diffusion will be one of the most important drivers of innovation and productivity growth over the next decade. Like electricity and other general-purpose technologies in the past, AI and cloud datacenters represent the next stage of industrialization.”
However, the closest Smith got to addressing the core concerns within the European IT community was a promise to legally fight to continue to maintain its European relationships.
“In the unlikely event we are ever ordered by any government anywhere in the world to suspend or cease cloud operations in Europe, we are committing that Microsoft will promptly and vigorously contest such a measure using all legal avenues available, including by pursuing litigation in court,” his statement said. “By including a new European Digital Resilience Commitment in all of our contracts with European national governments and the European Commission, we will make this commitment legally binding on Microsoft Corporation and all its subsidiaries.”
It continued: “Microsoft has a demonstrated history of pursuing litigation when that has been needed to protect the rights of our customers and other stakeholders. This includes four lawsuits we filed against the US Executive Branch during President Obama’s tenure, including to protect the privacy of our customers’ data in the United States and Europe. It also included, during President Trump’s first term, a successful decision before the US Supreme Court to uphold the rights of employees who are immigrants. When necessary, we’re prepared to go to court.”
Must decide ‘where loyalty lies’
Analysts felt the promises didn’t deliver much.
Michela Menting, digital security research director at ABI Research, said that even Microsoft can only fight for so long.
“Microsoft can say everything they want on their record of litigation and promising to defend European interests, but ultimately they cannot guarantee that they can continue to do so,” Menting said. “They can fight for it, but that is not the same thing as winning that fight.”
“It is not possible for them to guarantee that, under this administration, that they can uphold those rights,” Menting said.
When pushed for an example, Menting said if the Trump administration wants Microsoft “to siphon all kinds of customer data from European companies, or whatever crazy idea comes into his head, they might well have to do Trump’s bidding.”
“These lists of what they have done in the past, it stands for nothing today,” Menting said. “If the rule of law changes in the US, they will have to adapt.”
Menting dismissed the Microsoft statement as “marketing fluff. It’s not soothing anyone. Indeed, it does the opposite. The fact that they are putting out that statement probably means that they are already receiving threats on their end. Microsoft is clearly worried, and this statement shows it.”
Forrester VP/research director Pascal Matzke was even more blunt, suggesting that European IT leaders are worried about what Microsoft, and other tech giants including Google, ServiceNow, and Salesforce, will do when the pressure is turned on.
“Microsoft has to decide where its loyalty lies — [with] the Trump administration or with its clients?” Matzke said. “There is a concern that they will ultimately be listening more to Trump.”
Anxiety is ‘huge’
Matzke said the key fear is that the European tech infrastructure has allowed itself to become far too intertwined with various American tech giants, including Microsoft. European government officials are likely to fight the tariffs with their own, “and the whole thing will spiral out of control. Can we continue then to work in the same collaborative manner?”
Matzke’s argument is that European IT “anxiety is huge” and that some are starting to fear trusting American companies in the same way that they now fear working with Chinese companies. But because of the deep, years-long reliance on American tech players, he fears that a pullback would “kill innovation,” if it were even possible.
“I don’t see a way back. We are now in this global state,” Matzke said, adding that those who think they can separate are wrong. “That’s an illusion. There is just no way. The boat has sailed, that train has left the station.”
Another analyst, Phil Brunkard, executive counselor at Info-Tech Research Group, said, “Microsoft’s new pledges look like they’re designed to calm three groups at once. EU policymakers pressing for digital sovereignty; big European firms drowning in DORA/NIS 2/CRA [regulations]; and global enterprises fearing the next geopolitical shock that could knock out a US hyperscaler.”
Brunkard said he was impressed by Microsoft’s promise for increased capacity.
“The capacity promise is pretty eye-catching: 40% more compute within 2 years, more than double by 2027 across 16 countries and roughly 200 facilities,” Brunkard said. “But the Digital Resilience Commitment is the real headline here. Microsoft is saying that it will fight in court against any foreign order to pull the plug on its EU cloud and, if forced offline, will hand Swiss-escrowed source code to local partners. Add in EU-only data center boards and a Deputy CISO for Europe, and Redmond is telling Brussels ‘OK, we’ll play by your rules now.’”
Is it enough?
But is that enough? Brunkard is not certain.
“Does this make Microsoft less toxic? Partly. Sovereignty optics do improve a bit, but antitrust and licensing complaints are still there, and the CRA will be judging on audited technical controls, not blog posts,” Brunkard said. “Respect for European law is a start and a bold statement, but until auditors and eventually regulators can confirm the new safeguards, the jury’s still out.”
ABI’s Menting said there is yet another problem lurking behind these arguments.
“Despite all that blinding compliance speak, it’s hard to ignore the elephant in the room: the EU’s Anti-Coercion Instrument (ACI). If it comes into play, and the current climate is totally amenable to such a state, this could cripple Microsoft’s ability to operate successfully and lucratively in Europe,” Menting said. “The current US tariff imposition on Europe can most certainly be seen as economic coercion, and the EU would be within its rights to trigger the ACI and hit back against US digital services.”
And if that doesn’t work, Microsoft can leverage its power in controlling how and where it pays taxes. Its statement doesn’t discuss how the company will pay its taxes in Europe.
“How will they be reporting their revenues derived in the European territory? It’s all too common for US digital service providers to route those revenues through their various regional subsidiaries — hello Ireland — and then back to the US, effectively gaming the European tax system,” Menting said. “If things become dire, it can still play its tax card. At best, it could totally divest its European business, with completely separate and independent companies operating in Europe. But that is not the American way of doing business and Microsoft is very much an American company.”
Zoho adds AI capabilities to its low code dev platform
Zoho on Wednesday announced the addition of 10 AI-centric services and features within Zoho Creator, the company’s low code application development platform, that it said are part of its pledge to invest only in “AI capabilities that drive real-time, practical and secure benefits to business users.”
The expanded offerings include CoCreator, the firm’s new AI “development partner” powered by Zia, Zoho’s AI assistant, that it said in a release “facilitates faster, simpler and more intelligent app building with the use of voice and written prompts, process flows and business specification documents.”
New features also include the ability to transform unstructured data from different file types and databases into customized applications, aided by what the company described as “advanced AI-based data prep capabilities that remove inconsistencies and bring logical structure to detail.”
Trump wants kids learning AI in kindergarten — some say that’s too late
President Donald J. Trump recently signed an executive order to bring AI into K–12 education to boost literacy around the technology and create a new White House task force to lead the effort.
The task force plans to form public-private partnerships with AI experts to develop online resources for K-12 AI literacy and critical thinking and will seek industry commitments and federal funding to support the effort; the goal is to ensure resources are available for K-12 instruction within 180 days. As part of the plan, the US Secretary of Education must within 90 days issue guidance on using federal grants to support AI in education and find ways to use existing research programs to help states boost student success.
Some, however, say the executive order on AI in education doesn’t go far enough.
“AI education has to start even earlier than kindergarten!” Karen Panetta, a fellow with the Institute of Electrical and Electronics Engineers (IEEE), wrote via email when asked about Trump’s order. “Why? Because children need to be aware of the influences of things that are and are not real.” (IEEE is a global professional organization that advances technology through standards development and education.)
Children will encounter realistic but fake AI-generated content, so it is imperative to teach them early to question what they see and to ask trusted adults for help, according to Panetta.
Heather Barnhart, an education curriculum lead and fellow at the SANS Institute, agreed that AI training is critical, arguing that predators can leverage the technology to create images young children crave.
“That sentence is disturbing, but true,” Barnhart said. “Yes, AI has guardrails. However, it’s open source and can be taught how to create child sexual abuse material (CSAM). AI can also be used in the art of sex extortion. Here, children and teenagers are targeted in financial extortion with the creation of AI generated images of themselves. Out of fear, the kids resort to trying to pay the ransom or worse, harming themselves.”
Parents and teachers should talk to children about AI early and often, and those conversations should be age-appropriate and based on a child’s maturity, she said. Teaching kids to recognize suspicious behavior — both online and offline— is as important as teaching them about physical safety. Giving a child a device exposes them to potential dangers, often from strangers who appear to be peers or friends, Barnhart said.
“Bottom line, we can’t fear technology,” she said. “We cannot keep our children from technology. We need to learn how to communicate with them about online safety so that their world is not impacted when a threat comes their way. The more you talk to your kids and the more open you are to what they are doing and living — and what they are looking at online — the safer your family will be.”
Panetta said AI will increase phishing and online threats unless the US begins digital and AI education from the moment kids use devices. Just as word processors became standard in schools, AI tools will soon be essential in education and work.
Every school has students using tablets in the classroom and at home, Panetta pointed out, giving them access to standard software such as word processors, animation software and drawing programs, as well as instant internet access to relevant curated learning videos.
Panetta said using AI to help develop customized learning approaches is key. For example, “autistic children can greatly benefit from having AI that knows how to read their facial expressions to gauge their interest or emotions in response to educational materials. This helps develop AI that is more in tune with the needs of different abled children,” she said.
Trump’s executive order calls for educators, industry leaders, and employers to collaborate to create programs that equip students with essential AI skills across all learning paths. And it calls for a strong framework that integrates early exposure, teacher training, and workforce development to help foster innovation and critical thinking.
The order just “makes sense,” according to Emily DeJeu, an assistant professor of Business Management Communication at Carnegie Mellon University’s Tepper School of Business.
Noting China’s recent announcement of a major AI-focused educational overhaul, “this move seems intended to keep American students competitive in a fast-changing global landscape,” DeJeu said. “There’s also historical precedent for it: the 1983 federal report A Nation at Risk called for integrating computer science into high school curricula, sparking decades of STEM-focused education reforms.
“Building AI literacy could benefit students much like past efforts to build digital literacy,” she said.
However, DeJeu added, educators must be cautious because research shows AI can hinder critical thinking, increase plagiarism, and lead to learning loss. Students may rely on AI to do challenging work, gaining polished results without true understanding — risking a generation that uses AI well but lacks deep knowledge and critical skills.
Panetta also advised a cautious approach in light of AI’s tendency to hallucinate and spew erroneous information and expose sensitive information.
“We need to guarantee that standards are in place for both security and privacy,” Panetta said. “The best educational product that unintentionally shares your child’s image or private information will ultimately do more harm than good. At IEEE, our AI and security experts around the globe are leading the efforts to create these safeguards and standards.”
Hands on with the new Apple Mac Studio M4 Max
I can still remember the first time I attended a press launch for a professional Mac – the January 1999 introduction of the Blue and White Power Mac G3, which Apple wanted the world to believe was faster than Intel PCs of similar clock speed. Today, Apple’s new professional Mac Studio absolutely devours any other system when it comes to processor performance and energy efficiency.
What a difference a quarter century makes.
I’ve spent time with the Mac Studio M4 Max in recent weeks. This model was equipped with an M4 Max chip boasting 16 CPU cores, 40 GPU cores, a 1TB SSD, and 128GB of memory. This particular iteration costs $3,699, but you get a lot for your money. (For reference, that original Power Mac G3 started at $1,599, shipped with Apple’s infamous ‘puck’ mouse, and was nicknamed the Smurf for its distinctive blue-and-white color.)
That’s where the comparisons end, of course, as there really is no relevant comparison to make between Apple’s old Power Macs and the new breed of Apple Silicon-driven speed demons.
The Mac Studio is everything Apple 20 years ago couldn’t deliver — the most powerful machine in its class, capable of munching its way through the most demanding tasks, and with benchmark data points that absolutely show these Macs to be the best systems for any professional needing to do intensive work.
Speeds and feeds
Here’s what the numbers show:
- Geekbench single-core: 4,086
- Geekbench multi-core: 26,021
- Geekbench Metal: 187,728
- Geekbench OpenCL: 118,684
The Mac aced its Cinebench tests, too, convincingly topping the list of reference systems and achieving in excess of 3,000 points on the Unigine Heaven benchmark; it’s a good score, but is dented by the fact the test environment needs to run in Darwin emulation.
Supporting the release, Apple published a number of data points to show how powerful these systems can be. The main takeaways: even if you’re using a Mac Studio that’s under a year old, the new model is a welcome speed upgrade, and if you use an M1 Mac Studio you can expect twice the performance (faster rendering, compiling, photo editing).
Numbers aren’t the real world, of course, so to put these into context: they mean this Mac — the latter-day descendant of the “Smurf” — is powerful enough to take anything you throw at it. And with even more powerful models also available, there’s almost no demanding task you can’t expect this Mac to achieve. Apple Silicon is eating the PC industry’s lunch.
Higher and higher
Finally, just as the move to Intel unleashed Apple’s pro Macs from decades in the PowerPC doldrums, the move to Apple Silicon has utterly unshackled the line. If you’ve come across from an Intel Mac, you’ll be stunned by the performance upgrade you experience.
For pros, it means you’ll get more done faster than ever on a Mac.
That really wasn’t the case in 1999, when pro machines really were destined for use by Mac fans and people from the creative departments; while good at handling creative tasks, they didn’t truly match Windows in others — except you didn’t have to run Windows, which has always been an advantage to many of us.
Where’s the ceiling?
The problem with reviewing this piece of kit is that nothing I could do would actually make it break a sweat. For example, I did my usual test of opening up a GarageBand project with 300 instrument tracks; the machine figuratively shrugged and delivered. It then shrugged at everything I could think of doing with it — running multiple video windows, working with Pixelmator Pro transitions, dabbling about with Final Cut. During the week or so I tried to make the Mac stumble, I barely noticed it get warm and never heard the cooling system in action.
For me, these Macs overdeliver, offering performance far beyond what I actually need. To be frank, most of my computing needs are met by the also-available M4-powered MacBook Air, with which I also had a pleasant dalliance. But I’m not the target market; that’s the most cutting-edge pros in design, graphics, architecture, AI, medicine, and research. For those people, these Macs will deliver.
They also open up other opportunities.
For example, Apple researcher Awni Hannun managed to run Deep Seek v3 in 4-bit natively on the even more powerful M3 Ultra Mac Studio: “The new DeepSeek-V3-0324 in 4-bit runs at > 20 tokens/second on a 512GB M3 Ultra with mlx-lm!” he wrote.
The system I tested can’t quite do that, but it will happily run smaller large language models on device, making it possible to build and run bespoke AI systems on hardware you keep on your desk. That’s great for security-conscious businesses seeking an AI edge who want to ensure all the data belongs to them, and not to their AI provider.
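For a sense of what running a smaller model on-device looks like, here is a minimal sketch along the lines of the mlx-lm workflow Hannun references. The model repository name is a placeholder, and the exact function signatures may differ slightly between mlx-lm releases.

```python
# A minimal on-device inference sketch using Apple's MLX tooling,
# assuming the mlx-lm package is installed (pip install mlx-lm).
from mlx_lm import load, generate

# Placeholder repo name: substitute any 4-bit community conversion
# that fits within the machine's unified memory.
model, tokenizer = load("mlx-community/SOME-4BIT-MODEL")

prompt = "Summarize our Q3 hardware sales notes in three bullet points."

# Inference runs entirely in unified memory on the M-series chip;
# neither the prompt nor the response leaves the machine.
text = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(text)
```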
Are there limitations?
There are some drawbacks, I suppose. Some might see the need to supply a display, mouse, and keyboard as a snag. Users might also feel frustrated at the lack of easy upgradeability of Apple’s systems – it would be neat to be able to install your own memory, just as you could with the more upgradeable Power Mac of yore.
Some might want more connectivity options, but that didn’t really worry me; the five USB-C/Thunderbolt 5 ports, 10Gb Ethernet, dual USB-A ports, HDMI, and SDXC card slot seemed more than enough for most people.
If you really want the best and most powerful gaming computer, you might need to use systems with Nvidia chips, at least for a little while longer until gaming firms catch up with the Mac. Again and again, software compatibility with some apps is the only remaining barrier to accelerating Mac adoption.
Summing up
I’ve deliberately tried to avoid the formulaic approach to a Mac review here. You don’t have the time to hear me reprise every data point from the tech sheet you can read here, and I don’t see any value in regurgitating those numbers. Life’s too short to re-read it, right?
And when it comes to looks, here’s a picture:
If you’ve been keeping up with news on these machines, you know they look like a tall Mac mini and come in the form of a nice silver box. You already know what Macs do – they run macOS, can run Windows in emulation, and as Apple builds out the Apple Intelligence system, they’ll do more things more effectively over time.
What is clear is that Apple’s high-end Macs can and will scale to whatever you need them to do. You should also recognize that the velocity of Apple Silicon development means that within the next 12 to 18 months Apple will be able to upgrade the range all over again, inserting even faster processors that raise the bar for what Macs can achieve once more.
That’s a huge change from how things used to be. Back when I met the Power Mac G3, Apple really was playing catch-up with its professional Macs. These days, Apple’s pro machines aren’t playing the same game. The computers set the bar for what competitors hope to achieve. If you need a lot of computational power at significantly lower energy costs, you can’t go wrong with a Mac Studio.
You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
The ultimate guide to using multiple monitors with Windows
Laptops are great, but you don’t have to limit yourself to a tiny laptop screen — especially when you’re using a laptop at a desk. Adding a second screen is easier and cheaper than ever. The same is true if you’re using a desktop PC, too. No matter how you’re working, there’s no reason to limit yourself to just one monitor.
Heck, even the word “monitor” can have multiple meanings. You can project from your laptop to a TV, wirelessly, in just a few clicks. Or you can get a lightweight portable monitor for more screen real estate anywhere you take your laptop.
An optimal multimonitor setup isn’t only about hardware, though. It’s also about the software tricks you need to make multimonitor setups sing on Windows — from tools to troubleshooting tips. So let’s dive in.
Want more Windows knowledge? Check out my free Windows Intelligence newsletter. I’ll send you free in-depth Windows Field Guides (a $10 value) as a special welcome bonus!
Your multiple monitor hardware
Step one in setting up a second Windows monitor is determining what outputs your PC has. If you have a laptop, take a look at its ports. On a modern laptop, you might see HDMI out, and you might also simply be able to connect an external monitor over USB-C. (Other laptops may have DisplayPort or mini DisplayPort; it depends on the specific system.)
If you have a desktop PC, you almost certainly have a way to connect a second monitor. Again, take a look at the outputs on the back of your PC.
You can buy portable monitors built specifically for laptops, too. These are secondary monitors you can fit in a bag, and they connect via a USB-C cable. (The USB-C cable provides power to the monitor, too.) You can often find these monitors for $100 or less — they’re a lot better and more convenient than you might think.
Alternatively, you might already have the monitor you need sitting around your home or office. Even somewhat older hardware can get the job done as a secondary monitor — especially while you’re still deciding whether you like the idea. And you can get a lower-end (or even used) external monitor for very little money if you don’t want a huge high-resolution display.
If you plan on using your laptop with a big monitor at a desk, you should consider investing in a dock, too. You can then connect your monitor and other peripherals — keyboard, mouse, speakers, and whatever else — directly to the dock, then connect your laptop to all those items with a single swift connection.
Bear in mind that your choice of cable very much matters. Cheaper or older HDMI or DisplayPort cables might not have the bandwidth to deliver fast refresh rates at high screen resolutions on a modern display. When in doubt, spend a few bucks to get a modern cable that’s certified for the latest hardware standards. Don’t just dig something out of a drawer and pair it with a high-end display. (Of course, if you’re just testing this out with something older or less demanding, whatever you have lying around will almost certainly work fine.)
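To see why the cable matters, a rough back-of-envelope calculation helps: multiply resolution by refresh rate by bits per pixel and compare the result with the link rate of the cable standard. The sketch below ignores blanking intervals and encoding overhead, so treat its numbers as lower bounds.

```python
# Rough uncompressed bandwidth estimate: pixels/second x 24 bits (8-bit RGB).
# Blanking and encoding overhead are ignored, so real links need some headroom.
def gbps(width: int, height: int, refresh_hz: int, bits_per_pixel: int = 24) -> float:
    return width * height * refresh_hz * bits_per_pixel / 1e9

print(f"1080p @  60 Hz: {gbps(1920, 1080, 60):5.1f} Gbps")   # ~3 Gbps
print(f"4K    @  60 Hz: {gbps(3840, 2160, 60):5.1f} Gbps")   # ~12 Gbps
print(f"4K    @ 144 Hz: {gbps(3840, 2160, 144):5.1f} Gbps")  # ~29 Gbps

# Commonly quoted maximum link rates, for comparison:
#   HDMI 1.4 ~ 10 Gbps   HDMI 2.0 ~ 18 Gbps   HDMI 2.1 ~ 48 Gbps
#   DP 1.2   ~ 21 Gbps   DP 1.4   ~ 32 Gbps   DP 2.x   ~ 80 Gbps
```

In other words, an older HDMI 1.4 or 2.0 cable simply cannot carry a high-refresh 4K signal uncompressed, which is why a newer certified cable is worth the few extra dollars.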
The TV possibility
As I mentioned, even a TV can serve as an effective second monitor for your laptop. You might be able to connect it directly with an HDMI cable — then you’d just need a wireless mouse and keyboard to control it.
Additionally, there are ways to use wireless projection with modern TVs — meaning you can treat your TV like an external monitor wirelessly, with just a few clicks or taps. To start casting and see whether your TV appears as an option on Windows, press Windows+K. Bear in mind that this is better for presenting something or sharing your display, as it’s nowhere near as fast and crisp as a proper wired connection. But it’s yet another way you can do more with your PC.
The Casting dialog supports all sorts of external displays, though you sometimes have to enable wireless screen mirroring on the TV side first. Chris Hoffman, Foundry
The Windows second monitor software setup
Multimonitor setup on Windows is usually pretty easy — just plug in and go. Then open the Settings app in Windows and head to System > Display.
From there, you can tell Windows how your various monitors are physically positioned simply by dragging and dropping them into the correct arrangement in the on-screen interface. You can also change the scaling of text on your screen, choose the orientation of your displays, and choose how you want Windows to handle the displays (mirrored, as two separate displays, or with one — like your laptop — not being used at all and instead staying dark).
One thing to watch: Be sure the correct monitor is set as your primary one. Click whichever one you want on the Display screen and check “Make this my main display.”
The Display settings page makes it extremely easy to set up multiple displays — no third-party software necessary. Chris Hoffman, Foundry
You might also need to change the screen resolution settings here, although Windows should normally pick that properly for you. (Refresh rate is something Windows often doesn’t detect automatically, though, so it’s worth going to Settings > Display > Advanced display and choosing the highest refresh rate your display supports.)
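If you ever want to confirm from a script what Windows thinks is attached, the standard library can do it. This is just an illustrative sketch (Windows only) using documented Win32 system metrics, not something the Settings app requires.

```python
import ctypes

user32 = ctypes.windll.user32
user32.SetProcessDPIAware()  # report physical pixels rather than scaled ones

# Documented GetSystemMetrics indexes.
SM_CMONITORS = 80        # number of display monitors
SM_XVIRTUALSCREEN = 76   # left edge of the combined desktop
SM_YVIRTUALSCREEN = 77   # top edge of the combined desktop
SM_CXVIRTUALSCREEN = 78  # width of the combined desktop
SM_CYVIRTUALSCREEN = 79  # height of the combined desktop

monitors = user32.GetSystemMetrics(SM_CMONITORS)
width = user32.GetSystemMetrics(SM_CXVIRTUALSCREEN)
height = user32.GetSystemMetrics(SM_CYVIRTUALSCREEN)
left = user32.GetSystemMetrics(SM_XVIRTUALSCREEN)
top = user32.GetSystemMetrics(SM_YVIRTUALSCREEN)

print(f"Attached monitors: {monitors}")
print(f"Combined desktop: {width} x {height}, starting at ({left}, {top})")
```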
The projection and taskbar factors
Once your laptop is docked and connected to an external monitor, you can treat it in several different ways. You could keep using it as a secondary (or primary) monitor. Or, you could even power off the screen and shut your laptop, turning it into a desktop PC that powers your monitor and keyboard.
For many purposes, you don’t need both a laptop and a desktop PC at all. You just need a laptop and the right peripherals, and you can then treat the system like a desktop whenever you like. You’ll have the exact same software setup and files both in your on-the-go laptop mode and when you’re using your PC at your desk — no syncing or extra effort required.
You can also press Windows+P at any time to access the Project options popup, where you can choose exactly how Windows treats your various displays.
The Project dialog helps you quickly control how Windows handles an external display. Chris Hoffman, Foundry
Windows also gives you options for how your taskbar appears across multiple displays. Head to Settings > Personalization > Taskbar > Taskbar behaviors and you’ll find settings like “Show my taskbar on all displays” and one that lets you control whether your windowed apps appear on all taskbars, or just the taskbar on the display they’re floating on.
When you’re done using the external monitor, you can simply unplug it; Windows should handle everything properly — no reboots or other setup necessary. Windows 11 is better at this than Windows 10, and it’ll even try to automatically reposition your windows in a way that makes sense as you move between single and multimonitor modes. If you do run into a glitch, the hidden Ctrl+Windows+Shift+B shortcut restarts the graphics driver and often sorts things out.
Advanced Windows multimonitor tricks
Basic multimonitor usage is surprisingly trivial — you can drag and drop windows across the edge between your monitors and reposition them, just as if you were using one massive monitor.
If you’re willing to dig deeper, though, there’s much more you can do. The basic (but surprisingly powerful) Snap features built into Windows — especially on Windows 11 — are very useful in a multiple-monitor scenario. Use the Windows key along with an arrow key or press Windows+Z (on Windows 11) to find them.
If you have a bigger monitor and want to do even more, Microsoft’s FancyZones PowerToy is a must-install. You’ll get extra-customizable, flexible window layouts. (Another thing that will help on big monitors: My Grab to Scroll script, which eliminates the need to carefully position your mouse over those tiny window scroll bars.)
PowerToys Workspaces is also incredibly useful on multiple monitors. Rather than spending time opening windows one by one and carefully repositioning them, you can launch the apps you use once, use PowerToys Workspaces to save that exact layout, and then get a convenient desktop shortcut you can double-click in the future — to launch all your go-to work apps and reposition them exactly as you like.
If you have a large, wide monitor, you might prefer having your taskbar vertically on the left or right edge of your screen rather than on the bottom. That’s easy on Windows 10, but annoyingly, Microsoft removed the option from Windows 11. In that environment, you’ll need to rely on apps like Start11 ($10) or ExplorerPatcher (free) to get a vertical taskbar.
Your perfect multimonitor setup
When it comes to these monitor setups, I’d start slow — perhaps with an external monitor you already have lying around or an inexpensive display you can use to test the waters. Then you’ll get a sense for how much you like the setup and whether a bigger or higher-resolution display might be right for you.
There’s so much you can think through once you’re ready. You don’t have to use a monitor sitting on a desk, for instance; you could instead use external monitor arms that clamp to your desk to let you easily reposition the monitors for improved ergonomics and more desk space.
Some monitors can also spin around from a horizontal orientation to a vertical one. A second monitor in portrait mode next to your widescreen display could be just what you need — a great screen for reading or keeping track of emails or work chats.
Or maybe you’ll discover you prefer the single-screen experience but just want more screen real estate — that’s what ultrawide monitors are for.
Whichever way you go, one thing’s for sure: Having lots of screen real estate is one of the joys of using a Windows PC rather than a small smartphone or a tablet. And even the most humble of laptops can power a multiple-screen setup that’s both awesome to use and surprisingly easy to set up.
Want more Windows advice written for humans — by a human? Sign up for my free Windows Intelligence newsletter. I’ll send you three new things to try each Friday and free Windows Field Guides as a welcome gift.
Microsoft at 50: the 4 worst train wrecks in the company’s history
Microsoft is worth roughly $3 trillion today, but that sky-high valuation didn’t come without a few train wrecks during its first 50 years. I’m not talking about smaller screwups like the brain-dead Clippy, Microsoft’s intrusive Office helper from 1995. Instead, I’m looking at the ones with major consequences that in some cases set Microsoft back years, made it lose out on important markets, and cost the company billions of dollars.
Here are the company’s four worst screw-ups. (And here’s a look at its biggest successes.)
Microsoft’s antitrust trial and the lost decade
During Microsoft’s first 23 years, the company was on a rocket-like trajectory, with only relatively minor bumps. It ruled the tech world, swatting away competitors with ease, building monopolies in operating systems, productivity software, and beyond.
Its iron-fisted rule seemed unlikely ever to end.
Then in 1998, the US Justice Department and 20 state attorneys general filed an antitrust suit charging the company with illegally using its OS monopoly to crush competitors. Notably, the DOJ claimed Microsoft wouldn’t allow Netscape or other browsers to easily be installed on Windows (allowing them to compete with the company’s own Internet Explorer browser).
Microsoft’s bullying tactics were laid bare during the trial, with the government citing evidence such as a Microsoft executive telling an Intel honcho the company would “cut off Netscape’s air supply” by including Internet Explorer for free in Windows, so no one would pay for a rival browser.
Microsoft lost the suit, appealed, and eventually settled with the feds. It avoided a corporate break-up, but was forced to allow alternatives to Internet Explorer to ship with Windows or be easy to install. The penalty at the time seemed like a slap on the wrist. But Microsoft put so much energy into the fight, it had little time and resources to focus on the changing tech world — and lost out on a generation of tech advances.
Steve Ballmer’s Monkey Boy dance and disastrous 14-year reign
Steve Ballmer took over from Bill Gates as Microsoft CEO in 2000 and began a 14-year reign in which he acted like a cross between a clown and the Godfather, with disastrous results.
His solution to almost every problem the company faced was to try to use Windows as a bludgeon to pound and beat competitors. The tactic failed time and time again, and Ballmer learned nothing along the way. In just one example of how arrogance blinded him to the new reality of tech, he insisted a Windows-based mobile operating system would rule the world, telling USA Today in 2007 after the iPhone’s launch: “There’s no chance that the iPhone is going to get any significant market share. No chance.” (More on that below.)
His sometimes clownish public behavior made him a laughingstock, such as his infamous “Monkey Boy Dance,” where he danced, howled, screamed and acted like a madman at a conference to show his enthusiasm for Microsoft. Another YouTube favorite is the famous “Developers” video, which captured him soaked with sweat, screaming “Developers, developers, developers, developers…” — until his voice gave out.
Thanks to his antics, Microsoft under his leadership went from the world’s tech leader to not much more than an afterthought. The company lost out on internet search and web browsing to Google, social networking to Meta (then known as Facebook) and others, and mobile computing to Apple.
A cloudy Vista
Microsoft became the early king of tech based largely on its worldwide monopoly on operating systems, first with the character-based MS-DOS, and later with Windows. So, when one of its operating systems bombed, it had an outsized influence on the tech world and the company itself.
Though it has released plenty of stinkers along the way, Windows Vista stands out for its utter awfulness — so bad that even top executives at the company couldn’t get it to perform the simplest tasks, such as printing.
People hated its resource-hungry interface, it wouldn’t run on older PCs, and it was doomed by countless hardware incompatibilities. To try and juice sales, Microsoft came up with a bone-headed plan to release “Vista-capable PCs” that would run only a stripped-down version of the operating system.
Unfortunately, those PCs couldn’t even do that. Mike Nash, who became a corporate vice president for Windows product management, wrote in an email about the PCs, “I PERSONALLY got burnt…I now have a $2,100 e-mail machine.” Another Microsoft employee wrote in an email, “Even a piece of junk will qualify” to be called Windows Vista Capable.
Steven Sinofsky, then the top executive in charge of Windows, couldn’t get his printer to work with it, and admitted he wasn’t even sure what “Vista-capable” meant.
Windows, don’t phone home
When it comes to the biggest bomb in Microsoft’s past, none comes close to the disaster of Windows Phone, which started out its sad life in 2001 as a mobile operating system called Pocket PC 2002. Although Microsoft launched the mobile operating system six years before the iPhone’s debut, it was doomed from birth because Ballmer decided any mobile operating system had to be closely tied to Windows — not designed from the ground up for mobile.
I won’t go into all the gory details of what a poorly designed, unusable operating system it was. Instead, I’ll let a few numbers tell the tale. When Microsoft launched a revamped, full-blown version in 2012, it had already spent billions in development costs, then spent $400 million publicizing the launch.
But money can’t buy you love. Few people bought the phones, and the company ended up spending $1,666 in marketing and advertising for each one sold, far above the $100 retail price, which Microsoft was forced to slash to $50, to no avail.
Microsoft bought Nokia’s phone business for $7.2 billion in a desperate attempt to salvage the operating system. It didn’t work. When the company finally put Windows Phone out of its misery, it had a pathetic 1.3% market share in the US, and less in most other places, including 1% in Great Britain and Mexico, 1.2% in Germany, and 0% in China.
If you like business horror stories, you can get more gruesome details here.
How to bring a handy future Pixel feature to any Android phone today
Hey — have ya heard? Google’s got a simple-sounding but supremely useful new trick in the works for its homemade Pixel phones.
Under-development code spotted in the latest Android beta reveals plans for adding a new double-tap gesture into future Pixel devices. It may or may not make the cut for the upcoming Android 16 release, but it’s absolutely in progress — and odds are, it’ll show up in some new Android version or quarterly update before long.
The way it’d work is refreshingly straightforward: Just tap twice in a row anywhere on your screen, anytime, and poof: The screen shuts off. It’s a fast ‘n’ easy shortcut for an action you probably take dozens of times a day, and while the difference may seem small on the surface, it really does add up and feel a lot easier than reaching for the physical power button every time you need to perform that feat.
It also goes hand in hand with another common Android gesture: the ability to double-tap anywhere on a screen to turn it on when the phone is resting. This one’s already built into Pixels, Samsung Galaxy gadgets, and other Android devices, and it’s an incredibly handy way to get into your device in a jiffy. (Try it out to see if it works on whatever phone you’re using right now — and if it doesn’t, try searching your system settings for double tap to find the associated setting and make sure it’s enabled!)
But strangely, the double-tap move to turn off a screen has traditionally been less readily available and emphasized across Android — particularly within the Pixel universe, where Google’s core Android operating system is present in its unadulterated form.
It’s good to see that Google’s ready to address that — but hey, this is Android. And you’re a smart and enlightened Android Intelligence reader. Here, anything’s possible, and you definitely don’t have to wait for Google to officially give you a feature to enjoy it.
So stretch those lithe little digits and get your favorite tapping appendage ready: It’s time for an instant upgrade to your Android experience.
[Want even more advanced Android knowledge? Check out my free Android Shortcut Supercourse to learn tons of time-saving tricks.]
Android tap-to-turn-off: The Samsung path
We’ll start with the simplest double-tapping implementation of all — and that’s for anyone using a Samsung Galaxy gizmo.
Few flying squirrels realize this (and FYI, when I say “flying squirrel,” I mean you), but Samsung has actually already added a double-tap-to-turn-the-screen-off gesture into its Android implementation. And all you’ve gotta do is take two seconds to see that it’s activated.
On any reasonably recent Galaxy device, head into your system settings and search for double tap. Look for the item labeled “Double tap to turn off screen” and confirm that the toggle next to it is in the on and active position.
Samsung’s Android devices already have a double-tap screen-off option within their settings. (JR Raphael / Foundry)
And that’s it: From your home or lock screen, you can now just double-tap in any open space to turn the screen off. Paired with the double-tap to turn on option, you’ll never have to shift your fragrant fingies away from the screen whilst working again.
No Samsung? No problem. Keep reading.
Android tap-to-turn-off: The launcher liftoff
No matter what style of Android device you’re using, a custom home screen launcher is one easy way to give yourself immediate access to all sorts of advanced Android gestures.
A launcher, if you aren’t familiar, is a special app that takes over your entire home screen and app drawer interface and replaces the standard out-of-the-box setup with something much more customizable and often also rich with features.
My two favorites of the moment, Niagara and Nova, both have options within their settings to create your own double-tap command — which can absolutely include the ability to have the screen shut off as a result of that action.
Android launchers, like Nova, offer all sorts of custom gesture settings. (JR Raphael / Foundry)
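If you’re curious how launchers like these pull off a custom double-tap action under the hood, it’s generally built on Android’s standard gesture-detection plumbing. Here’s a minimal Kotlin sketch of the idea (it isn’t Nova’s or Niagara’s actual code, and the onDoubleTap extension name and the action you plug into it are made up for illustration):

```kotlin
import android.view.GestureDetector
import android.view.MotionEvent
import android.view.View

// Hypothetical helper: run [action] whenever the user double-taps this view.
// A launcher would attach something like this to its home screen container.
fun View.onDoubleTap(action: () -> Unit) {
    val detector = GestureDetector(context, object : GestureDetector.SimpleOnGestureListener() {
        // Returning true from onDown tells the detector we want the rest of the gesture.
        override fun onDown(e: MotionEvent): Boolean = true

        override fun onDoubleTap(e: MotionEvent): Boolean {
            action()
            return true
        }
    })
    // Feed every touch event on this view into the detector.
    setOnTouchListener { _, event -> detector.onTouchEvent(event) }
}
```

From there, a launcher would call something along the lines of homeScreenRoot.onDoubleTap { ... } and hand the actual screen-off work to the system, typically via an accessibility service, which is the same mechanism the app in the section below relies on.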
If you aren’t interested in using a custom launcher and just want a super-simple way to implement this one feature, though, our final path is the one for you.
Android tap-to-turn-off: The universal add-on
No matter what type of Android phone you’re using or what specific setup you prefer within it, a simple app called Double Tap Screen Off / Lock will give you an easy-as-can-be way to bring a double-tap screen-off superpower into your life this instant.
Just install the app from the Play Store and make your way through its initial setup screens. You’ll have to grant the app permission to operate as an accessibility service, which may sound daunting but is genuinely needed for this kind of function to be possible. (The app is from a known and trusted veteran Android developer, and its privacy policy is clear about the fact that it doesn’t collect any form of personal data or share any manner of info with anyone.)
Then, you’ll just tap the “Turn screen off” option on the app’s main screen and activate the toggle to enable it.
The aptly named Double Tap app brings a double-tap screen-off action onto any Android device. (JR Raphael / Foundry)
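And about that accessibility permission: on Android 9 and up, an accessibility service is the sanctioned way for a regular app to lock the device and switch the screen off, via AccessibilityService.performGlobalAction with GLOBAL_ACTION_LOCK_SCREEN. The Kotlin sketch below is a generic illustration of that mechanism, not the actual source of Double Tap Screen Off / Lock, and the class name is invented for the example.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent

// Hypothetical service showing the general pattern apps like this use.
class ScreenOffService : AccessibilityService() {

    override fun onAccessibilityEvent(event: AccessibilityEvent?) {
        // Nothing to inspect for this use case; the override is required by the base class.
    }

    override fun onInterrupt() {
        // Required override; nothing to clean up here.
    }

    // Locks the device, which also turns the screen off.
    // GLOBAL_ACTION_LOCK_SCREEN requires Android 9 (API 28) or later.
    fun lockScreen() {
        performGlobalAction(GLOBAL_ACTION_LOCK_SCREEN)
    }
}
```

For a service like this to do anything, it also has to be declared in the app’s manifest with the BIND_ACCESSIBILITY_SERVICE permission and then enabled by you under Settings > Accessibility, which is exactly the step the app’s setup screens walk you through.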
Double Tap Screen Off / Lock is sometimes referred to as “Pixel Toolbox,” by the by, but it should work on any Android device — Pixel or otherwise. It’s free to download with an optional $6 pro upgrade that eliminates ads inside its configuration interface and unlocks a handful of extra features, none of which are necessary for this basic purpose. (If you see a prompt to make that upgrade when you first open up the app, you can skip past it by tapping the “x” in the upper-right corner.)
And there ya have it: three possible paths to the same time-saving, ergonomics-enhancing result. Pick the one that makes the most sense for you and enjoy your newfound tap-a-tap-tappin’ power.
Get even more advanced shortcut knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks for whatever Android phone you’re using!