The trouble with emotion-reading AI
“If you can’t measure it, you can’t fix it.”
That’s a common saying in business, and it tends to be true. But what if the thing you want to fix is your employees’ attitudes?
The AI revolution makes it possible to measure emotions and mental states. So why not use it widely and fix what’s broken?
That’s the idea behind emotion AI, which is also called “affective computing,” “sentiment analysis,” or “algorithmic affect management.” The idea is to use sensors and AI to detect, interpret, classify, and act upon human emotions in the workplace.
Thanks to improvements and breakthroughs in a wide range of technologies (including computer vision, natural language processing, speech and voice analysis, biometrics, machine learning and deep learning, and edge computing hardware) emotion AI is now possible.
Many companies have come forward to provide ready-to-use emotion AI solutions, including Cogito, Affectiva, Hume AI, Entropik, and HireVue.
The idea is simple: Collect data from employees, process it through AI, and get a result that shows how an employee feels. Depending on the solution, the data comes from:
- Vocal features — pitch, tone, cadence, micro-pauses, vocal stress
- Facial expression — video analysis of video calls and through desktop cameras
- Text — mass sentiment analysis on emails, Slack/Teams messages, survey responses, and performance reviews
- Physiological biosignals — heart rate variability, galvanic skin response (via wearables)
- Behavioral telemetry — keystroke cadence, mouse dynamics, app-switching patterns
- Posture and gaze — computer vision analysis from cameras installed in workplaces
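As a deliberately toy illustration of the text channel above, sentiment analysis at its crudest is just lexicon matching. The word lists below are invented for this sketch; commercial products use far richer machine-learning models:

```python
# Toy lexicon-based sentiment scorer: a deliberately minimal sketch of the
# "text" data source above. The word lists are invented for illustration;
# real emotion AI products use far more sophisticated models.
POSITIVE = {"great", "thanks", "happy", "excited", "good"}
NEGATIVE = {"frustrated", "angry", "tired", "late", "bad"}

def sentiment_score(message: str) -> float:
    """Score in [-1, 1]: below zero reads as negative, above zero as positive."""
    words = [w.strip(".,!?'\"").lower() for w in message.split()]
    hits = [w for w in words if w in POSITIVE or w in NEGATIVE]
    if not hits:
        return 0.0  # no emotional vocabulary detected
    positives = sum(w in POSITIVE for w in hits)
    return (2 * positives - len(hits)) / len(hits)

print(sentiment_score("Thanks, the demo went great!"))    # 1.0
print(sentiment_score("I'm tired and frustrated today"))  # -1.0
```

Even this caricature exposes the core problem the rest of this column describes: the same words can carry very different feelings for different people, and the model has no way to know.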
Despite the progress and variety of solutions, this whole area is problematic for businesses.
Why companies want to use emotion AI
The range of business goals driving emotion AI is vast. The most defensible reason is safety. Workers in risky jobs, such as factory workers and truck drivers, could be protected with AI tools that help avoid injury and death. A common example is technology that detects when a truck driver is dozing off and either sounds an alarm or switches to autopilot to take control of the truck and pull over.
Another goal is better customer service. Companies like MetLife use software that monitors call center agents’ voice, tone, and pitch to make sure they don’t get snippy or express frustration with customers.
HR departments could use AI to understand the workplace mood by analyzing company communications and employee surveys. Companies can also check for employee burnout and use the technology for hiring. By applying emotion AI to a video job interview, companies might make better hires.
Emotion AI in the workplace can offer other benefits such as lowering employee turnover, healthcare expenses, and safety risks while boosting customer satisfaction, worker productivity, and insight into team or managerial dysfunction.
What’s wrong with emotion AI
While measuring, then acting upon, the emotions and mental states of employees sounds like a powerful idea, it’s often based on bad science.
Emotion AI systems that lean on facial expressions, for example, are based on a theory by Paul Ekman, an American psychologist at the University of California, San Francisco. He theorized back in the late 1960s that a small set of basic human emotions produces universal, reliably readable facial expressions across cultures.
But Ekman’s theory was shown to be problematic by a 2019 meta-analysis led by Lisa Feldman Barrett, in an article published in Psychological Science in the Public Interest. She looked at more than 1,000 studies and concluded that you can’t always reliably infer people’s emotional states from facial movements alone.
Most emotion AI solutions are based on the assumption that everyone’s emotions can be interpreted the same way, and that’s almost certainly wrong, given how different people can be in appearance, voice, personality and physiology.
As in many areas of business and leadership in recent years, AI is often seen as a solution to the challenge of managing large numbers of employees.
Emotion AI holds out the promise that leaders can bypass the need to inspire, motivate and educate employees so that their actions are aligned with company goals, and instead try to achieve this alignment through hyper-surveillance.
But that’s unfair, say some emotion AI supporters; many organizations deploy emotion AI systems with the stated aim of helping employees in some way. Yet research suggests that even this can backfire.
A 2024 Finnish case study found that workplace emotion-tracking technology tends to undermine wellbeing more than support it, for several recurring reasons. First, the technology often simply fails to work: the mental states it claims to identify, like “stressed” or “engaged,” turn out not to faithfully reflect people’s actual internal moods.
Second, the quality of emotion AI output often varies by race. The study found that the faces of Black participants were wrongly labeled as “angry” or “contemptuous” more often than those of white participants showing the same facial expressions. That’s just one example of the bias that can come from treating employees differently based on an AI’s flawed ability to interpret human emotional expression.
Third, the study found that claims of “anonymous aggregation” turn out to be false in practice with smaller teams. The data can unintentionally reveal identities, leading to privacy violations.
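The arithmetic behind that failure is simple. A hypothetical sketch (all numbers invented): if a dashboard publishes a team’s mean “stress score,” any member who knows their own score can recover the combined score of everyone else, and on a two-person team that is a colleague’s exact score:

```python
# Why "anonymous" aggregates leak on small teams: mean * n - my_score equals
# the sum of everyone else's scores. For a team of two, that is the other
# person's exact score. All numbers here are invented for illustration.

def others_total(team_mean: float, team_size: int, my_score: float) -> float:
    """Combined score of all team members other than me."""
    return team_mean * team_size - my_score

# A two-person team's dashboard shows a mean "stress score" of 70.
# Alice knows she reported 55, so Bob must have reported:
print(others_total(team_mean=70, team_size=2, my_score=55))  # 85.0
```

The same arithmetic narrows the possibilities on teams of three or four, which is why aggregation alone is not anonymization.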
Fourth, emotion AI can effectively impose “emotional labor” (the work of mustering up and conveying the right emotions as part of the job) on an ever-growing range of professions.
And finally, emotion AI is prone to mission creep. Companies often deploy it for one purpose, then drift toward ever-broader worker surveillance.
Emotion AI may have no future
While emotion AI is growing in some sectors of the economy, regulatory action is increasingly constraining it. The European Union last year banned emotion AI in the workplace and in educational settings, with narrow exceptions for medical or safety reasons. Multinational corporations are gravitating to the European standard.
There has also been limited legal or regulatory action against the technology in a few states, including California, New York, and Illinois.
Some companies have voluntarily rejected emotion AI. Microsoft, for example, announced in June 2022 that it would retire the Azure Face API’s emotion-recognition capabilities (along with inference of gender, age, smile, facial hair, hair, and makeup) as part of an overhaul of its Responsible AI Standard.
The company’s Chief Responsible AI Officer, Natasha Crampton, explained the change by citing “the lack of scientific consensus on the definition of ‘emotions,’ the challenges in how inferences generalize across use cases, regions, and demographics, and the heightened privacy concerns around this type of capability.” Microsoft also worried that such technology “can subject people to stereotyping, discrimination, or unfair denial of services.”
So while there are real and helpful uses for emotion AI in some cases, the science behind it is weak, the results are often misleading, employees generally dislike it and find it stressful, bias is likely built in, privacy violations are likely, and it may not even be legal in every country or every American state.
Tempting as it is, emotion AI is too problematic to deploy.
AI disclosures: I don’t use AI for writing. The words you see here are mine. I used a few AI tools via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search as one part of my fact-checking for this column. I used a word processing product called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes.
Zero-day exploit completely defeats default Windows 11 BitLocker protections
A zero-day exploit circulating online allows people with physical access to a Windows 11 system to bypass default BitLocker protections and gain complete access to an encrypted drive within seconds.
The exploit, named YellowKey, was published earlier this week by a researcher who goes by the alias Nightmare-Eclipse. It reliably bypasses default Windows 11 deployments of BitLocker, the full-volume encryption protection Microsoft provides to make disk contents off-limits to anyone without the decryption key, which is stored in a secured piece of hardware known as a trusted platform module (TPM). BitLocker is a mandatory protection for many organizations, including those that contract with governments.
When one disk volume manipulates another
The core of the YellowKey exploit is a custom-made FsTx folder. Online documentation of this folder is hard to find. As explained later, the directory associated with the file fstx.dll appears to involve what Microsoft calls Transactional NTFS, which allows developers to have “transactional atomicity” for file operations in transactions involving a single file, multiple files, or operations that span multiple sources.
Cisco announces record revenue and 4,000 layoffs on the same day
Following a quarter in which his company delivered record revenue, Cisco CEO Chuck Robbins announced that the company's latest round of layoffs begins today.
In a blog post yesterday, Robbins was quick to boast that Cisco’s fiscal Q3 2026 earnings saw revenue increase 12 percent year-over-year to $15.8 billion. He told employees that he and the rest of Cisco’s executive leadership team “could not be prouder of the growth you have all delivered for Cisco.”
But that pride could apparently not save the company’s successful employees from unemployment.
Apple’s App Store model for AI
Apple has a design for AI life. It hopes to build on the outstanding hardware performance its systems already provide to create a fantastic environment in which AI developers can thrive. If this plan sounds familiar, it’s because it’s all about the App Store, and while it’s easy to expect Apple’s revenue share to change, the plan still makes the company the custodian of the AI age.
The way it should work: app developers see that one way to bring their AI services to billions of iPhone, iPad, and Mac users is to make AI agents available via Apple’s own portals. These will likely work via App Intents, enabling Siri to execute actions inside their apps without actively opening them.
The Information reports some developers are resistant to joining the initiative, in part because they want to avoid paying any fees. All the same, consider the moment, consider the meaning, and I think the significance is that Apple has at last got its act together with AI.
Ecosystem, services, store
Apple is going to bet that the advantages its existing store provides will give customers the faith and trust to access AI apps there rather than somewhere else. The company hasn’t announced its plan yet, though there have been hints. Just look at how Apple is laying things out with these moves (both announced and speculated about). It’s:
- Working with Google to build out Apple Intelligence.
- Working with third parties to support AI services as apps with which to replace or supplement Siri.
- Maintaining investment in better hardware to run AI — you can quite happily run some models natively on an iPad.
- Equipping systems with powerful tools such as Unified Memory and the Neural Engine.
- Rolling out Private Cloud Compute to provide an infrastructure to support private AI in the cloud.
- Pulling these elements together to form an ecosystem.
Like a jigsaw, the pieces fit together to provide a fantastic base from which Apple can distribute increasingly powerful AI APIs developers can use to create amazing AI experiences. I spoke with the smart people at the Omni Group just last year, who explained how they already use Apple Intelligence APIs (aka Foundation Models) to add powerful AI features to their apps.
That was just the first lap; the second comes at WWDC 2026; and the third and subsequent races take place over the next 12 to 24 months as Apple implements the elements it’s put in place across its ecosystem.
Making money, one token at a time
The prize? For Apple, it’s about maintaining its own relevance within the AI age while carving out some way to generate revenue as its hardware ecosystem runs AI agents and services. The company will continue to develop and build out Apple Intelligence as a peer player in the competitive AI market. But, as most now agree, it is also focused on ensuring its platforms are the best systems on which to run AI.
Apple’s attempt to build a profitable, secure, and capable way to run AI, supported by customer-focused security and privacy standards, seems like an answer to some of the emerging challenges around AI deployment. Speak to almost anyone in IT right now and you’ll come across stories of corporate data leaks that may fall foul of data regulation. That’s before you even consider the manner in which AI ownership consolidates power over the intellectual future of humanity into such a small number of hands it almost makes media ownership seem democratic.
Getting the band together
With so much at stake, not just for Apple, it feels as if the company has found some of the answers that could enable a less frightening AI future. It has a chance to own the hardware ecosystem while curating the AI services environment for the benefit of its customers — and producing its own trusted systems for casual AI usage.
We’ll find out more in a few weeks.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.