RSS Aggregator
Industries Most Exposed to AI Are Not Only Seeing Productivity Gains but Jobs and Wage Growth Too
New technologies rarely leave work untouched. They also rarely eliminate the need for human contribution altogether.
Forecasts of the impact of artificial intelligence range from the apocalyptic to the utopian. An October 2025 report from Senate Democrats, for example, predicted AI will destroy millions of US jobs. A couple of years earlier, consultant company McKinsey forecast AI will add trillions to the global economy, while emphasizing job losses can be mitigated by training workers to do new things.
The problem is that many of these claims are based on projections, overly simplified surveys, or thought experiments rather than observed changes in the economy. That makes it hard for the public, and often policymakers, to know what to trust.
As a labor economist who studies how technology and organizational change affect productivity and well-being, I believe a better place to start is with actual data on output, employment, and wages—which are all looking relatively more hopeful.
AI and Jobs
In one of my new research papers with economist Andrew Johnston, we studied how exposure to generative AI affected industries across America between 2017 and 2024, using administrative data that covers nearly all employers. Our analysis covered a crucial period when generative AI use exploded, allowing us to analyze the effect within businesses and industries.
We measured AI exposure using occupation-level task data matched to each industry and state’s occupational workforce mix prior to the pandemic. A state and industry with more workers in roles requiring language processing, coding, or data tasks scored higher on exposure, for example, compared with one with more plumbers and electricians.
We then used that occupation-level exposure ranking to ask how a one-standard-deviation difference in occupational exposure related to labor market outcomes and GDP across states and industries from 2017 to 2024.
Think of a standard deviation as roughly the gap between a paramedic—whose work centers on physical assessment, emergency response, and hands-on care that AI cannot easily replicate—and a public relations manager, whose work involves drafting communications, analyzing sentiment, and synthesizing information that AI tools handle well. That gap in AI exposure is roughly what we’re measuring when we ask: Does being on the higher-exposure side of that divide change your industry’s trajectory?
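The mechanics of the exposure measure described above can be sketched in a few lines. The occupation scores and employment shares below are hypothetical stand-ins, not the paper's actual data; the point is only the procedure: weight occupation-level task exposure by each state-industry cell's workforce mix, then standardize so effects can be read per standard deviation.

```python
# Illustrative sketch of an employment-weighted AI-exposure index.
# All occupation scores and shares here are hypothetical stand-ins.
from statistics import mean, pstdev

# Hypothetical occupation-level AI task-exposure scores (0 = low, 1 = high).
exposure = {"pr_manager": 0.85, "software_dev": 0.80, "paramedic": 0.15, "plumber": 0.10}

# Hypothetical pre-pandemic employment shares for two state-industry cells.
workforce_mix = {
    "cell_A": {"pr_manager": 0.5, "software_dev": 0.3, "paramedic": 0.1, "plumber": 0.1},
    "cell_B": {"pr_manager": 0.1, "software_dev": 0.1, "paramedic": 0.4, "plumber": 0.4},
}

def cell_exposure(mix):
    """Employment-share-weighted average of occupational exposure."""
    return sum(share * exposure[occ] for occ, share in mix.items())

raw = {cell: cell_exposure(mix) for cell, mix in workforce_mix.items()}

# Standardize, so results can be stated "per standard deviation of exposure".
mu, sigma = mean(raw.values()), pstdev(raw.values())
z = {cell: (value - mu) / sigma for cell, value in raw.items()}
```

A cell dominated by PR managers and developers lands well above a cell dominated by paramedics and plumbers, which is the paramedic-versus-PR-manager gap in miniature.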
This data allowed us to answer two questions: When AI tools became widely available following the public release of ChatGPT in late 2022, did states and industries that were more exposed to generative AI become more productive, and what happened to workers?
Our answers are more encouraging, and more nuanced, than much of the public debate suggests.
We found that industries in states that were more exposed to AI experienced faster productivity growth beginning in 2021—before ChatGPT reached the public—driven by enterprise tools already embedded in professional workflows, including GitHub Copilot for software development, Jasper for marketing and content writing, and Microsoft’s GPT-3-powered business applications. In 2024, for example, industries whose AI exposure was one standard deviation higher saw 10% higher productivity, 3.9% higher employment, and 4.8% higher wages than comparable industries in the same state.
Those patterns suggest that, at least so far, AI has acted as a productivity-enhancing tool that boosts employment and wages rather than a simple substitute for labor.
Augmentation Versus Displacement
A crucial distinction in the data is between tasks where AI works with people and tasks where AI can act more independently. In sectors where AI mainly complements workers—think marketing, writing, or financial analysis—our data show that employment rose by about 3.6% per standard deviation increase in exposure.
In sectors where AI can execute tasks more autonomously—including basic data processing, generating boilerplate code, or handling standardized customer interactions—we found no significant employment change, though workers in those roles saw slower wage growth.
What these findings suggest is that when AI lowers the cost of completing a task and raises worker productivity, companies expand output enough to increase their demand for labor overall—the same logic that explains why power tools didn’t eliminate construction workers.
The economic question is not whether any given task disappears. It is whether businesses and workers can reorganize fast enough to create new productive combinations. And so far, in most sectors, our evidence suggests they can.
But state policies also matter: These benefits were concentrated in the states with more efficient labor markets, meaning that the impact of generative AI on workers and the economy also depends on the types of policies and institutions of the local economy.
Importantly, these findings hold beyond occupational exposure. In additional work with co-authors at the Bureau of Economic Analysis, we found a similar effect on GDP and employment when looking at actual AI utilization—that is, how often workers use AI. Drawing on the Gallup Workforce Panel, we measured workers actively using AI daily or multiple times a week. We found that each percentage-point increase in the share of frequent AI users in a state and industry is associated with roughly 0.1% to 0.2% higher real output and 0.2% to 0.4% higher employment.
To put that in context: The share of frequent AI users across all occupations rose from about 12% in mid-2024 to 26% by late 2025, a shift our estimates suggest corresponds to roughly 1.4% to 2.8% higher real output—or about 1 to 2 percentage points of annualized growth over that period.
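The back-of-the-envelope arithmetic behind that range is easy to make explicit, using only the figures quoted above:

```python
# Back-of-the-envelope check of the implied output gain quoted above.
share_mid_2024 = 12.0   # % of workers using AI frequently, mid-2024
share_late_2025 = 26.0  # % of workers using AI frequently, late 2025
delta_pp = share_late_2025 - share_mid_2024  # 14 percentage points

# Estimated effect per percentage point of frequent AI use (study's range).
low, high = 0.1, 0.2  # % higher real output per percentage point

output_gain_low = delta_pp * low    # lower bound, in %
output_gain_high = delta_pp * high  # upper bound, in %
print(f"Implied real-output gain: {output_gain_low:.1f}% to {output_gain_high:.1f}%")
```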
New technologies rarely leave work untouched. But they also rarely eliminate the need for human contribution altogether. Instead, they change the composition of work, as our research shows. Some tasks shrink. Others expand. New ones emerge that were previously too costly or too hard to perform at scale. Put simply, some occupations might go away, but most of them just change.
If anything, the trends documented here are likely to strengthen rather than fade. Not only are generative AI tools rapidly improving, but also the experimentation and research and development that many workers and companies are engaging in are likely to pay large dividends. These investments—often referred to as intangible capital—tend to get unlocked a few years after a technology comes onto the scene, once complementary investments have been made.
The Role of Companies and Managers
Whether AI leads to anxiety or adaptation for workers depends in part on what happens inside organizations. Using additional data collected over many years in the Gallup Workforce Panel, covering more than 30,000 US employees from 2023 to 2026, I found in a 2026 paper that workplace adoption of generative AI rose quickly over the period, with the share of workers frequently using AI rising from 9% to 26%.
But the more important finding is that adoption was far more common where workers believed their organization had communicated a clear AI strategy and where employees said they trust leadership. This suggests that growing adoption and effective use of AI depends not only on the availability of the technology but on whether managers make its use clear, credible, and safe.
Where that clarity exists, frequent AI use is associated with higher engagement and job satisfaction, and it even reverses the burnout penalties that appear elsewhere.
In other words, the broader economic effects of AI depend not only on how sophisticated the tools are but on whether companies and managers create environments where workers can experiment, reorganize tasks, and integrate new tools into productive routines. That is, if employees do not feel the psychological safety to experiment, they are less likely to use AI, and they are especially less likely to use it for higher-value work.
That is precisely the kind of adaptation that I believe makes labor markets more resilient than the most alarmist forecasts suggest.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Industries Most Exposed to AI Are Not Only Seeing Productivity Gains but Jobs and Wage Growth Too appeared first on SingularityHub.
Hackers exploit Marimo flaw to deploy NKAbuse malware from Hugging Face
When LKML Patches Signal Exploitation Risk Before CVE Assignment
Picking the best camera phones of the moment. Buy one of these instead of a camera
The stories of Kingdom Come will be expanded by a short story collection. Michal and David return, and new characters appear too
Google expands Gemini AI use to fight malicious ads on its platform
Americans who masterminded Nork IT worker fraud sentenced to 200 months behind bars
Two Americans have been jailed for a combined 200 months for helping North Korea generate $5 million through fraudulent IT worker schemes.…
Gemini leaves the browser and lands right on the desktop. Google has let AI onto Macs
New ATHR vishing platform uses AI voice agents for automated attacks
Most "AI SOCs" Are Just Faster Triage. That's Not Enough.
A new CAPTCHA on the land registry
Lego is getting expensive. Here are the 15 priciest sets currently on sale, starting above 11,000 CZK
Rust 1.95.0
Thunderbolt, an open-source AI client from Mozilla
ThreatsDay Bulletin: Defender 0-Day, SonicWall Brute-Force, 17-Year-Old Excel RCE and 15 More Stories
Git identity spoof fools Claude into giving bad code the nod
Security boffins say Anthropic's Claude can be tricked into approving malicious code with just two Git commands by spoofing a trusted developer's identity.…
A 4K projector for a song. The solid Optoma Photo Life PK31 has dropped to 11,000 CZK
Microsoft’s Windows Recall still allows silent data extraction
Microsoft’s Windows Recall feature remains vulnerable to complete data extraction despite a major security overhaul, according to a cybersecurity researcher. He says malware running in a user’s context can quietly siphon off everything Recall has captured, without administrator privileges, kernel exploits, or breaking encryption.
Alexander Hagenah, executive director at Zürich-based financial infrastructure operator SIX Group, made the claim in a LinkedIn post, where he also published a proof-of-concept tool called TotalRecall Reloaded to demonstrate the issue.
Hagenah first exposed Recall’s security flaws in 2024, forcing Microsoft to pull the feature from preview and rebuild it. Microsoft relaunched Recall in April 2025, saying the new architecture would restrict “attempts by latent malware trying to ‘ride along’ with a user authentication to steal data.” Hagenah said it does not.
“When you use Recall normally, TotalRecall Reloaded silently holds the door open behind you and then extracts what Recall has ever captured. That is precisely the scenario Microsoft’s architecture is supposed to restrict,” he wrote in the post.
Hagenah wrote in the post that he disclosed the research to Microsoft’s Security Response Center on March 6, submitting full source code and reproduction steps. Microsoft reviewed the case for a month and closed it on April 3, telling him the behavior “does not represent a bypass of a security boundary or unauthorized access to data.”
“Microsoft says this is by design,” Hagenah wrote. “That worries me.”
In an email response to CSO, a Microsoft spokesperson said, “After careful investigation, we determined that the access patterns demonstrated are consistent with intended protections and existing controls, and do not represent a bypass of a security boundary or unauthorized access to data. The authorization period has a timeout and anti-hammering protection that limit the impact of malicious queries.”
Hagenah’s research does not challenge Microsoft’s encryption, which he said is sound. The gap, he told CSO, is in how decrypted data is handled once it leaves the enclave.
“Plaintext screenshots and extracted text end up in an unprotected process for display,” he told CSO. “As long as decrypted content crosses into a process that same-user code can access, someone will find a way in.”
What a fix would require
A fix is technically feasible, Hagenah said.
“The short-term fix is fairly straightforward. Microsoft could add stronger code integrity and process protections to AIXHost.exe, the process that renders the Recall timeline. Right now, it has none, which makes the injection path possible. That would block the specific technique I demonstrated and materially raise the bar,” he said.
The longer-term problem runs deeper, he said. “Microsoft should rethink how decrypted data is handled after it leaves the enclave. The cryptography and enclave design are genuinely well done, and I want to be clear about that. The problem is that plaintext screenshots and extracted text end up in an unprotected process for display. As long as decrypted content crosses into a process that same-user code can access, someone will find a way in,” he said.
“A durable fix would mean either rendering inside a protected process or adopting a compositing model where raw data never leaves the trust boundary. That is a bigger effort, but it is the only way to close this class of issue properly,” he said.
Exploitation risk
The barrier to weaponizing this technique is lower than Microsoft’s security messaging would suggest, Hagenah said.
“They only need code running in the user’s context and a way to reuse the authorized Recall session,” he said. “That is a much lower bar than many people would assume from Microsoft’s security messaging.”
While Recall’s limitation to Copilot+ PCs and its opt-in status reduce the scale of exposure, targeted abuse is a realistic near-term risk, he said. “For targeted abuse, surveillance, or high-value user collection, this is absolutely realistic,” he said.
Hagenah said he published the source code deliberately so defenders, EDR vendors, and security teams could build detections before threat actors operationalize the technique independently. “In my view, that gives the defensive side a valuable head start,” he said.
Independent security researcher Kevin Beaumont reached a similar conclusion after separately testing the current Recall implementation. “Yep, you can just read the database as a user process,” Beaumont wrote on Mastodon on March 11. “The database also contains all manner of fields that aren’t publicly disclosed for tracking the user’s activity. No AV or EDR alerts triggered,” he wrote.
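The class of weakness both researchers describe (per-user file permissions do nothing to separate one same-user process from another) can be illustrated with a minimal, self-contained sketch. Nothing below uses real Recall paths, schema, or data; the store is a throwaway SQLite file created in a temp directory purely to demonstrate the access pattern.

```python
# Minimal illustration of the "same-user access" class of issue discussed
# above. Everything here is a stand-in: a throwaway SQLite store in a temp
# directory. No real Recall paths, schema, or data are involved.
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()
store = os.path.join(workdir, "app_data.db")

# An "app" persists captured text in plaintext, relying only on the fact
# that the file lives under the user's own profile.
con = sqlite3.connect(store)
con.execute("CREATE TABLE captures (id INTEGER PRIMARY KEY, text TEXT)")
con.execute("INSERT INTO captures (text) VALUES (?)", ("sensitive screen text",))
con.commit()
con.close()

# Any other code running as the same user, e.g. malware in the user's
# context, can simply open and read it: per-user ACLs are no boundary
# between two processes belonging to the same user.
snoop = sqlite3.connect(store)
rows = snoop.execute("SELECT text FROM captures").fetchall()
snoop.close()
print(rows)
```

This is why Hagenah argues the durable fix is keeping plaintext inside a protected process or trust boundary, rather than relying on where the file is stored.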
The article originally appeared in CSO.
Cal.com closes its source code over the threat of AI
Cisco says critical Webex Services flaw requires customer action



