Microsoft Uncovers New XCSSET macOS Malware Variant with Advanced Obfuscation Tactics
Microsoft spots XCSSET macOS malware variant used for crypto theft
Court ban on Google AI stakes would hurt Anthropic clients, say analysts
Anthropic has asked a US court for permission to intervene in the remedy phase of an antitrust case against Google, arguing that the US government’s call for a ban on Google investing in AI developers could hurt it.
Analysts suggest the AI startup’s fears are well-founded and that it risks losing customers if the government’s proposal is adopted.
“Its enterprise clients might face uncertainties regarding the continuity of services and support, potentially affecting their operations,” said Charlie Dai, principal analyst at Forrester.
The government’s proposed remedies, including the ban on AI investments, come after the US District Court for the District of Columbia found the search giant guilty of maintaining a monopoly in the online search and text advertising markets in August 2024.
The proposed investment ban is aimed at stopping Google from gaining control over products that deal with or control consumer search information. In addition to preventing further investment in any AI startup, it would force Google to sell the stakes it currently holds, including its $3 billion stake in Anthropic.
On Friday, Anthropic filed a request to participate in the remedy phase of the trial as an amicus curiae, or friend of the court.
“A forced, expedited sale of Google’s stake in Anthropic could depress Anthropic’s market value and hinder Anthropic’s ability to raise the capital needed to fund its operations in the future, seriously impacting Anthropic’s ability to develop new products and remain competitive in the tight race at the AI frontier,” the AI startup said in a court filing justifying the request.
It said it had contacted representatives for the plaintiffs in the case — the US government and several US states — seeking to influence the proposal.
Remedy wouldn’t just affect Google
While Anthropic’s primary concern is that the proposed investment ban could hurt the value of the company, it is also worried that the remedy could put it on the back foot against rivals.
“This would provide an unjustified windfall to Anthropic’s much larger competitors in the AI space — including OpenAI, Meta, and ironically Google itself, which (through its DeepMind subsidiary) markets an AI language model, Gemini, that directly competes with Anthropic’s Claude line of products,” the company said in the filing.
Abhivyakti Sengar, senior analyst at Everest Group, also shares Anthropic’s view on the effect of the proposed ban.
“Forcing Google to sell its stake in Anthropic throws a wrench into one of the AI industry’s most significant partnerships,” Sengar said, adding that while it might not cause an immediate loss of customers, any disruption to the performance or reliability of Anthropic’s models or its innovation speed could drive business towards its rivals.
Additionally, the AI startup sought to differentiate itself from rivals such as OpenAI by pointing out that, unlike its competitors, it is not owned or dominated by a single technology giant.
“While both Amazon and Google have invested in Anthropic, neither company exercises control over Anthropic. Google, in particular, owns a minority of the company and it has no voting rights, board seats, or even board observer rights,” it said in the filing.
Further, it said that Google doesn’t have any exclusive rights to any of its products despite investing nearly $3 billion since 2022 in two forms: direct equity purchases and purchases of debt instruments that can be converted into equity.
AI was “never part of the case”
Among the arguments that Anthropic makes against the proposed remedy, it notes that neither it nor Google’s other AI investments were ever a part of the case.
“Neither complaint alleged any anticompetitive conduct related to AI, and neither mentioned Anthropic. The only mention of AI in either complaint was a passing reference in the US Plaintiffs’ complaint to AI ‘voice assistants’ as one of several ‘access points’ through which mobile-device users could access Google’s search services,” it said in the filing.
In addition, it claimed that forcing Google to sell its stake could diminish Anthropic’s “ability to fund its operations and potentially depress its market value” as alternative investors deal in millions and not the billions Google invested.
“Forcing Google to sell its entire existing stake in Anthropic within a short period of time would flood the market, sating investors who would otherwise fund Anthropic in the future,” it said in the filing.
Analysts also warned that the future of Anthropic’s operations and its ability to retain customers would depend on the startup’s ability to secure investment if the proposal is adopted.
That, said Everest’s Sengar, “will determine whether it will be a setback or an opportunity for greater independence in the AI race.”
Forrester’s Dai agreed, adding that if Anthropic can quickly reassure its customers and demonstrate a clear plan for continuity and innovation, it may retain their trust and loyalty.
Fintech giant Finastra notifies victims of October data breach
Why enterprises are choosing smart glasses that talk — not overwhelm
Meta’s Ray-Ban smart glasses have quietly achieved a milestone that its enterprise-focused competitors could only dream of — selling over two million pairs since their debut in October 2023.
EssilorLuxottica, the eyewear giant that manufactures the glasses for Meta, recently announced the milestone and said it aims to produce 10 million pairs of Meta glasses annually by the end of 2026.
In contrast, Microsoft’s HoloLens and Apple’s Vision Pro have struggled to gain traction despite their advanced mixed-reality capabilities. (Microsoft has reportedly discontinued production of its HoloLens 2 headset, although existing units are still available for purchase.)
So why has Meta succeeded where they have stumbled? The answer may lie not just in features or branding but in the fundamental user interface itself: Meta’s lightweight, audio-focused design seems to align more closely with enterprise needs than fully immersive mixed-reality headsets do.
“The biggest barriers to AR headset adoption have been cost, efficiency, and battery life, all of which become more challenging with higher levels of immersivity,” said Neil Shah, VP for research and partner at Counterpoint Research. “Additionally, the lack of a standardized OS or UI has made enterprise integration more fragmented.”
“Rather than pushing an entirely new wearable concept, Meta retrofitted VR capabilities into an existing accessory that people were already comfortable with,” said Faisal Kawoosa, founder and lead analyst at Techarc. “The partnership with Ray-Ban also played a key role in making these smart glasses more socially acceptable.”
Enterprise adoption: simplicity over immersion?
While Microsoft’s HoloLens and Apple’s Vision Pro pushed the boundaries of augmented and virtual reality, their enterprise adoption remained limited due to cost, complexity, and user resistance. HoloLens found some traction in industrial training and fieldwork, and Vision Pro positioned itself as the future of spatial computing, but neither saw mass adoption.
“The failure of AR-heavy wearables such as HoloLens and Vision Pro highlights a fundamental mismatch with workplace needs,” said Riya Agrawal, senior analyst at Everest Group. “High costs, complexity of use, and extensive training requirements have slowed deployment. Furthermore, frontline workers — especially in field services — typically need quick, hands-free AI assistance rather than distracting digital overlays.”
Meta’s smart glasses, in contrast, take a different approach. They offer an audio-centric interface with a discreet camera, enabling hands-free communication, real-time guidance, and live transcription without overwhelming users with AR overlays.
This approach fits naturally into enterprise workflows where workers need digital assistance without obstructing their physical environment.
“Enterprise users ideally seek more immersion for use cases like design and development, but current AR/VR limitations make mainstream adoption difficult,” Shah pointed out. “While immersive headsets promise to overlay the digital world onto the physical, limited app integrations and power-hungry designs hinder their viability in real-world enterprise settings.”
“In the enterprise space, VR applications tend to be highly specialized and customized to specific business needs,” Kawoosa added. “Unlike consumer VR, which benefits from broad applications, enterprises see AR as a layer within their existing tech stack rather than a standalone solution. This means generic, one-size-fits-all AR/VR products may struggle in the long run.”
Why do enterprise users prefer audio-centric wearables?
Seamless integration into daily workflows has been a major reason for the success of Meta’s smart glasses. Unlike bulky AR headsets, they resemble traditional eyewear, making them more socially and professionally acceptable in meetings, fieldwork, and customer interactions. Open-ear speakers allow users to receive AI-powered insights, instructions, or language translations while staying engaged with their surroundings.
“In many enterprise use cases, HoloLens and Vision Pro offer more computational power than necessary, which only drives up costs without delivering proportional benefits,” Agrawal said. “Smart glasses or audio-driven interfaces solve this by being more cost-effective and practical, aligning better with enterprise workflows.”
Cost has been another decisive factor.
Vision Pro and HoloLens come at steep prices — Apple’s headset costs $3,499, and HoloLens 2 starts at around $3,500. Meanwhile, Meta’s Ray-Ban smart glasses start at a fraction of that price — less than $380, making them more viable for enterprise deployment at scale. Lower costs encourage broader experimentation, allowing businesses to deploy smart glasses across departments rather than limiting them to niche applications.
For field workers, hands-free assistance is critical. Remote guidance and real-time AI-driven instructions are invaluable in sectors like logistics, healthcare, and maintenance.
“For frontline agents, minimizing visual overload is key,” Agrawal said. “The lightweight design and better battery life of smart glasses make them truly wearable all day, unlike bulkier AR headsets that drain power quickly.”
Meta’s smart glasses enable professionals to stream video to remote experts without interrupting their workflow. In contrast, Vision Pro and HoloLens often require users to engage with floating screens or hand gestures, which may not be practical for workers who need to stay focused on manual tasks.
“Simple, AI-driven smart glasses — such as Meta’s Ray-Ban models — offer a hands-free and eyes-free approach that feels natural,” said Shah. “Features like real-time guidance for warehouse workers, last-mile delivery directions, and field service assistance make them useful in enterprise settings without the complexity of AR overlays.”
Another key advantage is the ease of adoption. Employees are less likely to resist using audio-centric glasses compared to full-fledged AR headsets, which can feel intrusive or overwhelming.
“The appeal of smart glasses extends beyond cost — they also offer faster adoption and return on investment,” Agrawal pointed out. “Compared to full AR headsets, they require minimal training, making enterprise-wide deployment easier and more scalable.”
Training time is minimal, as users can interact naturally through voice commands and AI-based responses, making enterprise adoption smoother.
“Audio-based interfaces make even more sense in enterprise settings, where they function like an AI-powered assistant — essentially a ‘machine colleague’ that can provide real-time guidance, transcriptions, and hands-free instructions,” Kawoosa pointed out.
The future: will more enterprises embrace smart audio glasses?
With plans to scale up production to 10 million units annually by 2026, Meta’s strategy suggests that audio-first smart glasses could become a staple in enterprise environments.
Meanwhile, reports indicate that Meta is working on a version with an integrated display, potentially bringing a hybrid approach that balances visual AR with the audio-first experience that has proven successful.
“While AR and VR can augment meaningful enterprise use cases, their economic and ergonomic limitations have slowed adoption,” Counterpoint’s Shah said. “Simpler AI-powered glasses are serving as an entry point, building familiarity before AR technology matures.”
As immersive AR headsets struggle to find their footing, the rapid success of Meta’s smart glasses may signal a shift in how enterprises perceive wearable technology. Instead of seeking full virtual immersion, businesses may prioritize frictionless, real-world interactions — an area where audio-first smart glasses appear to have the upper hand. “While enterprises currently prefer augmentation over full immersion, AI-driven advancements could accelerate VR adoption in the long term,” Kawoosa said, adding, “However, we are still in the early stages of that transition.”
South Korea Suspends DeepSeek AI Downloads Over Privacy Violations
Microsoft rolls out BIOS update that fixes ASUS blue screen issues
Another wave of scams has hit Czech mobile phones. Suspicious SMS messages can be reported to the free 7726 line
CISO's Expert Guide To CTEM And Why It Matters
GenAI can make us dumber — even while boosting efficiency
Generative AI (genAI) tools based on deep learning are quickly gaining adoption, but their use is raising concerns about how they affect human thought.
A new survey and analysis by Carnegie Mellon and Microsoft of 319 knowledge workers who use genAI tools (such as ChatGPT or Copilot) at least weekly showed that while the technology improves efficiency, it can also reduce engagement in critical thinking, encourage over-reliance, and diminish problem-solving skills over time.
“A key irony of automation is that by mechanizing routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise,” the study found.
Overall, workers’ confidence in genAI’s abilities correlates with less effort in critical thinking. The focus of critical thinking shifts from gathering information to verifying it, from problem-solving to integrating AI responses, and from executing tasks to overseeing them. The study suggests that genAI tools should be designed to better support critical thinking by addressing workers’ awareness, motivation, and ability barriers.
The research specifically examines the potential impact of genAI on critical thinking and whether “cognitive offloading” could be harmful. Cognitive offloading, or the process of using external devices or processes to reduce mental effort, is not new; it’s been used for centuries.
For example, something as simple as writing things down, or relying on others to help with remembering, problem-solving, or decision-making is a form of cognitive offloading. So is using a calculator instead of mental math.
The paper examined how genAI’s cognitive offloading, in particular, affects critical thinking among workers across various professions. The focus was on understanding when and how knowledge workers perceive critical thinking while using genAI tools and whether the effort required for critical thinking changes with their use.
The researchers classified critical thinking into six categories: knowledge, comprehension, application, analysis, synthesis, and evaluation. Each of those six cognitive activities was scored with a one-item, five-point scale, as has been done in similar research.
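To make the scoring scheme concrete, here is a minimal sketch, using hypothetical respondent data and field names rather than the study’s actual instrument or analysis code, of how one-item, five-point ratings for the six categories might be aggregated across respondents.

```python
# Illustrative only: not the Carnegie Mellon/Microsoft study's analysis code.
# Each respondent rates perceived critical-thinking effort per category on a
# one-item, five-point scale; we compute the mean score per category.
from statistics import mean

CATEGORIES = ["knowledge", "comprehension", "application",
              "analysis", "synthesis", "evaluation"]

# Hypothetical responses (1 = very little effort, 5 = a great deal of effort).
responses = [
    {"knowledge": 4, "comprehension": 3, "application": 2,
     "analysis": 3, "synthesis": 2, "evaluation": 5},
    {"knowledge": 2, "comprehension": 4, "application": 3,
     "analysis": 4, "synthesis": 3, "evaluation": 4},
]

# Mean rating for each cognitive activity across all respondents.
category_means = {c: mean(r[c] for r in responses) for c in CATEGORIES}
print(category_means)
```

The snippet only illustrates the shape of the data; the study itself went further, relating such ratings to factors like workers’ confidence in themselves and in the AI.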
The study found that knowledge workers engage in critical thinking primarily to ensure quality, refine AI outputs, and verify AI-generated content. However, time pressures, lack of awareness, and unfamiliarity with domains can hinder reflective thinking.
At college, signs of a decline in thinking abilities
David Raffo, a professor at the Maseeh College of Engineering and Computer Science at Portland State University, said he noticed over a six-year period that students’ writing skills were dropping.
“Year after year, the writing got worse,” he said. “Then, during Covid, I noticed that papers started getting better. I thought, maybe staying at home had a positive effect. Maybe students were putting more energy and effort into writing their papers and getting better at their communication skills as a result.”
Raffo met with one student to discuss their A- grade on a paper. During the Zoom meeting, however, the student struggled to form grammatically correct sentences. Raffo began to question whether they had written the paper themselves, considering their communication skills didn’t match the quality of their work.
“I wondered if they had used a paid service or generative AI tools. This experience, about three years ago, sparked my interest in the role of technology in academic work and has motivated my ongoing study of this topic,” said Raffo, who is also editor-in-chief of the peer-reviewed Journal of Software Evolution and Process.
The difference between using genAI and using calculators or internet search engines lies in which brain functions are engaged and how they affect daily life, said Raffo, who was not involved in the latest study.
GenAI tools offload tasks that involve language and executive functions. The “use it or lose it” principle applies: engaging our brains in writing, communication, planning, and decision-making improves these skills.
“When we offload these tasks to generative AI and other tools, it deprives us of the opportunity to learn and grow or even to stay at the same level we had achieved,” Raffo said.
How AI rewires our brains
The use of technology, in general, rewires brains to think in new ways — some good, some not so good, according to Jack Gold, principal analyst at tech industry research firm J. Gold Associates. “It’s probably inevitable that AI will do the same thing as past rewiring from technology did,” he said. “I’m not sure we know yet just what that will be.”
As agentic AI becomes common, people may come to rely on it for problem-solving — but how will we know it’s doing things correctly? Gold asked. People might accept its results without questioning them, potentially limiting their own skills development by allowing technology to handle tasks.
Lev Tankelevitch, a senior researcher with Microsoft Research, said not all genAI use is bad. He said there’s clear evidence in education that it can enhance critical thinking and learning outcomes. “For example, in Nigeria, an early study suggests that AI tutors could help students achieve two years of learning progress in just six weeks,” Tankelevitch said. “Another study showed that students working with tutors supported by AI were 4% more likely to master key topics.”
The key, he said, is that it was teacher-led. Educators guided the prompts and provided context, showing how a collaboration between humans and AI can drive real learning outcomes, according to Tankelevitch.
The Carnegie Mellon/Microsoft study determined the use of genAI tools shifts knowledge workers’ critical thinking skills in three main ways: from information gathering to verification, from problem-solving to integrating AI responses, and from task execution to task stewardship.
While genAI automates tasks such as information gathering, it also introduces new cognitive tasks, such as assessing AI-generated content and ensuring accuracy. That shift changes the role of workers from doing the work of research to overseeing results, with the responsibility for quality still resting on the human.
Pablo Rivas, assistant professor of computer science at Baylor University, said that while unchecked machine output risks letting people skip the hard mental work that sharpens problem-solving skills, AI doesn’t have to undermine human intelligence.
“It can be a boost if individuals stay curious and do reality checks. One simple practice is to verify the AI’s suggestions with outside sources or domain knowledge. Another is to reflect on the reasoning behind the AI’s output rather than assuming it’s correct,” he said. “With healthy skepticism and structured oversight, generative AI can increase productivity without eroding our ability to think on our own.”
A right way to use genAI?
To support critical thinking, organizations training their workforces should focus on information verification, response integration, and task stewardship, while maintaining foundational skills to avoid overreliance on AI. The study highlights some limitations, such as potential biases in self-reporting, and calls for future research that considers cross-linguistic and cross-cultural perspectives and uses long-term studies to track changes in AI use and critical thinking.
Research on genAI’s impact on cognition is key to designing tools that promote critical thinking, Tankelevitch said. Deep reasoning models are helping by making AI processes more transparent, allowing users to better review, question, and learn from their insights.
“Across all of our research, there is a common thread: AI works best as a thought partner, complementing the work people do,” Tankelevitch said. “When AI challenges us, it doesn’t just boost productivity; it drives better decisions and stronger outcomes.”
The Carnegie Mellon-Microsoft study isn’t alone in its findings. Verbal reasoning and problem-solving skills in the US have been steadily dropping, according to a paper published in June 2023 by US researchers Elizabeth Dworak, William Revelle and David Condon. And while IQ scores had been increasing steadily since the beginning of the 20th century — as recently as 2012, IQ scores were rising about 0.3 points a year — a study by Northwestern University in 2023 showed a decline in three key intelligence testing categories.
All technology affects our abilities in various ways, according to Gold. For example, texting undermines the ability to write proper sentences, calculators reduce long division and multiplication skills, social media affects communication, and a focus on typing has led to neglecting cursive and signature skills, he noted.
“So yes, AI will have effects on how we problem solve, just like Google did with our searches,” Gold said. “Before Google, we had to go to the library and actually read multiple source materials to come up with a concept, which required our brain to process ideas and form an opinion. Now it’s just whatever Google search shows. AI will be the same, only accelerated.”
Net neutrality under Trump? Not so neutral
Even before President Donald J. Trump returned to office last month, net neutrality took a punch to the jaw. On Jan. 2, the US Court of Appeals for the Sixth Circuit struck down the Federal Communications Commission’s (FCC) net neutrality rules.
Oh well, it was nice while it lasted.
The latest set of rules, the FCC’s 2024 “Safeguarding and Securing the Open Internet Order,” would have established the three rules of net neutrality:
- No blocking: Broadband providers may not block access to legal content, applications, services, or non-harmful devices.
- No throttling: Broadband providers may not impair or degrade lawful Internet traffic on the basis of content, applications, services, or non-harmful devices.
- No paid prioritization: Broadband providers may not favor some lawful Internet traffic over other lawful traffic in exchange for consideration — in other words, no “fast lanes.” This rule also bans ISPs from prioritizing the content and services of their partners.
There’s nothing new about these rules. They’ve been the cornerstone of the internet you’ve known and used for decades. In 1992, the Commercial Internet Exchange (CIX) brought the first Internet Service Providers (ISPs) together to agree to let traffic run back and forth between them without restrictions. The rules they adopted would become what we call net neutrality.
It only makes sense, right? As Jessica Rosenworcel, former chairperson of the Federal Communications Commission (FCC) and a Democrat, said: “Consumers across the country have told us again and again that they want an internet that is fast, open, and fair.”
In a way, the court decision doesn’t matter. With Trump back in charge, there was no way net neutrality would survive.
After all, the Republicans argue, we can trust ISPs to do the right thing for their customers. As Brendan Carr, current FCC chairperson and a Republican, crowed: “[The January] decision is a good win for the country. Over the past four years, the Biden Administration has worked to expand the government’s control over every feature of the Internet ecosystem. You can see it in the Biden Administration’s efforts to pressure social media companies into censoring the free speech rights of everyday Americans.”
Funny that. Since Carr took over as chairperson, he’s launched investigations of American-led media companies and organizations such as NPR, PBS, Disney, CBS, NBC, and Comcast. Why? Because they’re not kowtowing to Trump and they’ve broadcast news that annoys him.
Nothing is surprising about this. Before Trump was elected again, he and his pack of billionaire buddies were already threatening to revoke network TV broadcast licenses because they didn’t like their news coverage. Carr, of course, is all in favor of this; as he said in a pre-election interview, “The law is very clear. The Communications Act says you have to operate in the public interest. And if you don’t, yes, one of the consequences is potentially losing your license.”
He then listed ABC, NBC, and CBS — but not Fox for some curious reason — as potentially running afoul of his take on the Communications Act of 1934, from which the FCC derives its authority.
As Nilay Patel, editor-in-chief of The Verge, recently wrote: “The FCC is pretty much the only government agency with some authority to directly regulate speech in America because it controls the spectrum used to broadcast radio and television. Carr has started using that authority to punish broadcasters for speech Trump doesn’t like or even for having internal business practices that don’t align with the administration.”
Aside from the national networks, there’s nothing saying Carr, directed by Trump’s sidekick Elon Musk, couldn’t restrict independent social networks such as Bluesky, Counter.social, and Mastodon while leaving X, Threads, and Truth.Social to do what they want.
This could be done, for example, by abusing Section 230 of the Communications Decency Act. In Project 2025’s FCC section, which Carr authored, he stated: “FCC should work with Congress to ensure that anti-discrimination provisions are applied to Big Tech — including ‘back-end’ companies that provide hosting services and DDoS protection. Reforms that prohibit discrimination against core political viewpoints are one way to do this.”
Core political viewpoints, in this case, means, of course, pro-Trump speech. What this might look like is charging Universal Service Fund fees to non-Trump-friendly network owners.
Speaking of money and networks, Carr also happens to be a big satellite internet supporter. We all know, of course, that Musk’s Starlink is the only major satellite ISP.
What all this means for you is that you can expect ISP fees to go ever higher and even less choice among ISPs in your neighborhood. Of course, that’s mostly the same old, same old, I’m sorry to say. The internet under Trump will come with more restrictions on news and, in all likelihood, even on what you can say about the news.
Freedom of news and speech depends on a free Internet; under the current regime, we’re already losing it.
⚡ THN Weekly Recap: Google Secrets Stolen, Windows Hack, New Crypto Scams and More
New Golang-Based Backdoor Uses Telegram Bot API for Evasive C2 Operations
Google Chrome's AI-powered security feature rolls out to everyone
Cybersecurity Regulations and Compliance for Linux Users
Critical CUPS Vulnerability Exposes Linux Systems to Remote Hijacking
