RSS Aggregator
Using the password 'admin123' wasn't as bad as sharing it on Slack
Welcome back to PWNED, the column where we celebrate the people who’ve taught us how not to secure a server. If you’ve ever tied your own shoelaces together, then tripped over them, or attempted to dive into a swimming pool but hit your head on the diving board, we’ll be talking about your cyber equivalent.…
China-Linked GopherWhisper Infects 12 Mongolian Government Systems with Go Backdoors
Smartwatches aren’t as accurate as they pretend to be. These six metrics are the most distorted
Vercel Finds More Compromised Accounts in Context.ai-Linked Breach
Apple Fixes iOS Flaw That Let FBI Recover Deleted Signal Messages
Pass the key, passwords have passed their sell-by date
The UK's National Cyber Security Centre (NCSC) has officially endorsed passkeys as the default authentication standard, marking the first time the agency has told consumers to move away from passwords entirely.…
Apple’s macOS support shorter than Microsoft’s Windows support? Intel Macs are at the end of the line
Baudiš and Kučera have sold another company. Americans acquire a technological jewel from Brno
Tim Cook’s legacy: a successful CEO who stumbled over AI
Apple’s Tim Cook was viewed as a worthy successor to Steve Jobs when he took over as CEO in August 2011, two months before Jobs’ death.
Apple products became successful (and profitable) in large part thanks to his earlier success as COO, where he whipped the company’s operations and supply chains into shape. As CEO, Cook expanded the product portfolio into new devices such as the Vision Pro and Apple Watch, rolled out a plethora of profitable services, and killed off failed projects such as the rumored Apple car.
But Cook, who announced this week he will step down as CEO on Sept. 1 to become executive chairman, has one major blemish on his legacy. He missed perhaps one of the most important moments in computing history — the AI revolution.
Apple could still win the AI war, and it now falls to incoming CEO John Ternus, formerly Apple’s senior vice president of hardware engineering, to play catch-up with AI rivals Google, Microsoft, and OpenAI.
When ChatGPT took the world by storm in late 2022, Apple was years behind in the AI race and had to scramble to catch up. Cook’s failure is mostly attributable to Apple believing it could always go it alone when it comes to technology, said Jack Gold, principal analyst at J. Gold Associates.
“They have a bias to doing everything in house,” Gold said.
The company did not believe in AI until ChatGPT emerged to take the tech industry by storm, according to a 2025 Bloomberg news report, and Cook did not provide the resources needed to develop an AI-powered Siri in-house. Apple was forced in early 2025 to look at partnering with Anthropic and OpenAI to push Siri into the AI era; it finally settled on working with Google’s Gemini.
The stumble was all the more apparent because Apple was onto the AI trend early, introducing neural chips for AI in iPhones as early as 2017. (Qualcomm added AI acceleration to its smartphone chips around the same time.)
At the time, the neural engine was hailed as a revolutionary development for developers to plug AI into applications. Apple wanted to use the neural chip’s matrix computing features to accelerate image and video processing, and users loved the results.
Then in 2024, Apple introduced Apple Intelligence to much fanfare, hoping to bring AI technology across its devices and platforms. The AI layer was based on homegrown foundation models.
Though it arrived nearly two years after ChatGPT, Apple Intelligence at the time offered truly innovative features. It had better OS and app integration and privacy features that kept user data secure — in part because of Private Cloud Compute. Other AI providers at the time were harvesting user data to improve their AI models.
Behind the scenes, however, Apple leadership and technological challenges slowed the development of Apple Intelligence. Apple in 2018 had hired former Google executive John Giannandrea to bring AI to Apple products. Giannandrea’s leadership was largely seen as ineffective, and he retired last December. In early 2026, Apple turned to former Microsoft AI leader Amar Subramanya to lead the company’s AI efforts.
As a result, many of the main Apple Intelligence features were delayed, with hopes they would finally arrive in 2026.
Then in January came the partnership with Google: “The next generation of Apple Foundation Models will be based on Google’s Gemini models and cloud technology,” the companies said in a joint statement.
Incoming Apple CEO John Ternus and current CEO Tim Cook. Apple

Now, the work for Apple’s soon-to-be CEO Ternus is clear: bring the company fully into the AI age. “Unlike previous market inflections and transitions, Apple can’t afford to come in late to the party this time around,” said Jim McGregor, principal analyst at Tirias Research.
With its experience in devices and talent, Apple is well equipped to build AI into a personal experience, McGregor said. “Apple has to chart a path to reinvent the personal experience, which will drastically change devices and how we use them,” he said.
To succeed, Ternus will have to look at more partnerships and drop Apple’s reliance on in-house development, analysts said.
“When AI is moving at such a fast pace, no one company like Apple can really move fast enough to keep up,” Gold said.
The Google partnership is a solid step in that direction, as hiring the right AI talent is a major hurdle given the compensation packages that others are offering, Gold said.
Whether the company has finally righted the AI ship should become clear in a little over a month at its annual Worldwide Developers Conference — the last one of the Cook era.
Vibe coding worth $60 billion. Musk wants Cursor; SpaceX would use it to wipe out its robotics-programming debt
Czechia offers the DPET radar for the Strait of Hormuz. It can track hundreds of targets at once and cannot be detected or jammed
Better late than never: Micron arrives with non-binary 3GB GDDR7
#7 MobileLinux Hackday
Claude Mythos signals a new era in AI-driven security, finding 271 flaws in Firefox
The Claude Mythos Preview appears to be living up to the hype, at least from a cybersecurity standpoint. The model, which Anthropic rolled out to a small group of users, including Firefox developer Mozilla, earlier this month, has discovered 271 vulnerabilities in version 148 of the browser. All have been fixed in this week’s release of Firefox 150, Mozilla emphasized.
These findings set a new precedent in AI’s ability to unearth bugs, and could turbocharge cybersecurity efforts.
“Nothing Mythos found couldn’t have been found by a skilled human,” said David Shipley of Beauceron Security. “The AI is not finding a new class of AI-exclusive super bugs. It’s just finding a lot of stuff that was missed.”
However, the news comes as Anthropic is reportedly investigating unauthorized use of Mythos by a small group that gained access via a third-party vendor environment, revealing the double-edged nature of AI.
Closing the fuzzing gap

Firefox has previously pointed AI tools, notably Anthropic’s Claude Opus 4.6, at its browser in a quest for vulnerabilities, but Opus discovered just 22 security-sensitive bugs in Firefox 148, while Mythos uncovered more than ten times that many.
Firefox CTO Bobby Holley described the sense of “vertigo” his team felt when they saw that number. “For a hardened target, just one such bug would have been red-alert in 2025,” he wrote in a blog post, “and so many at once makes you stop to wonder whether it’s even possible to keep up.”
Firefox uses a defense-in-depth strategy, with internal red teams applying multiple layers of “overlapping defenses” and automated analysis techniques, he explained. Teams run each website in a separate process sandbox.
However, no layer is impenetrable, Holley noted, and attackers combine bugs in the rendering code with bugs in the sandboxes in an attempt to gain privileged access. While his team has now adopted a more secure programming language, Rust, the developers can’t afford to stop and rewrite the decades’ worth of existing C++ code, “especially since Rust only mitigates certain (very common) classes of vulnerabilities.”
While automated analysis techniques like fuzzing, which feeds generated inputs into code to uncover vulnerabilities and bugs, are useful, some bits of code are harder to fuzz than others, “leading to uneven coverage,” Holley pointed out. Human teams can find bugs that fuzzing can’t by reasoning through source code, but this is time-consuming and bottlenecked by limited human resources.
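To make the contrast concrete, here is a toy mutation fuzzer. Everything in it is invented for illustration (the record format, the planted bounds-check bug, the mutation strategy) and has nothing to do with Firefox’s actual harnesses; it just shows the loop Holley is describing: mutate known-good input, run the target, and keep whatever crashes.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy binary parser: [0x7F magic][length][payload...][checksum]."""
    if len(data) < 2 or data[0] != 0x7F:
        raise ValueError("bad magic")
    length = data[1]
    payload = data[2:2 + length]
    checksum = data[2 + length]  # planted bug: no bounds check -> IndexError
    if checksum != sum(payload) % 256:
        raise ValueError("bad checksum")
    return payload

def mutate(seed: bytes) -> bytes:
    """Apply a few random byte flips or truncations to a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        if data and random.random() < 0.7:
            data[random.randrange(len(data))] = random.randrange(256)
        else:
            data = data[: random.randrange(len(data) + 1)]
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> list[bytes]:
    """Hammer the parser with mutated inputs; collect anything that crashes."""
    random.seed(0)  # deterministic for the example
    crashes = []
    for _ in range(iterations):
        case = mutate(seed)
        try:
            parse_record(case)
        except ValueError:
            pass  # graceful rejection, not a bug
        except IndexError:
            crashes.append(case)  # unhandled crash: the fuzzer found the bug
    return crashes

# A valid seed record: magic, length 2, payload "hi", correct checksum.
seed = bytes([0x7F, 2]) + b"hi" + bytes([sum(b"hi") % 256])
```

The “uneven coverage” problem is visible even here: random mutation quickly trips the missing bounds check, but a bug hidden behind the checksum comparison would almost never be reached, because random bytes rarely produce a valid checksum. That is exactly the kind of code path that needs a human (or now, a reasoning model) rather than a fuzzer.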
Now, Claude Mythos Preview is closing this gap, detecting bugs that fuzzing doesn’t surface.
“Computers were completely incapable of doing this a few months ago, and now they excel at it,” Holley noted. Mythos Preview is “every bit as capable” as human researchers, he asserted, and there is no “category or complexity” of vulnerability that humans can find that Mythos can’t.
Defenders now able to win ‘decisively’?

Gaps between human-discoverable and AI-discoverable bugs favor attackers, who can afford to concentrate months of human effort to find just one bug they can exploit, Holley noted. Closing this gap with AI can help defenders erode that long-term advantage.
The industry has largely been fighting security “to a draw,” he acknowledged, and security has been “offensively-dominant” due to the size of the attack surface, giving adversaries an “asymmetric advantage.” In the face of this, both Mozilla and security vendors have “long quietly acknowledged” that bringing exploits to zero was “unrealistic.”
But now with Mythos (and likely subsequent models), defenders have a chance to win, “decisively,” Holley asserted. “The defects are finite, and we are entering a world where we can finally find them all.”
What security teams should do now

Finding 271 flaws in a mature codebase like Firefox illustrates the fact that AI-driven vulnerability discovery is now operating at a scale and depth that can outpace traditional human-led review, noted Ensar Seker, CISO at cyber threat intelligence company SOCRadar.
Holley’s “vertigo,” he said, was because defenders are realizing the attack surface is larger, and “more rapidly discoverable than previously assumed.”
Security teams must respond by shifting from periodic testing to continuous validation, Seker advised. That means integrating AI-assisted code analysis into continuous integration/continuous delivery (CI/CD) pipelines, prioritizing “patch velocity over perfection,” and assuming that any externally reachable code path will eventually be discovered and weaponized.
“The goal is no longer just finding vulnerabilities first, but reducing the window between discovery and remediation,” he said.
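That “window between discovery and remediation” can be enforced mechanically rather than tracked on a dashboard. The sketch below is a hypothetical CI gate; the findings schema, field names, and seven-day threshold are all invented for illustration, not taken from any real scanner.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=7)  # example remediation budget, not a standard

def overdue(findings: list[dict], now: datetime) -> list[dict]:
    """Return unresolved findings discovered more than MAX_AGE before `now`.

    Each finding is assumed to carry an ISO 8601 `discovered` timestamp
    and a `status` field; anything not marked "fixed" counts as open.
    """
    late = []
    for f in findings:
        discovered = datetime.fromisoformat(f["discovered"])
        if f.get("status") != "fixed" and now - discovered > MAX_AGE:
            late.append(f)
    return late
```

A CI job would load the scanner’s JSON output, call `overdue()`, and exit non-zero if the list is non-empty, turning patch velocity into a build-breaking property of the pipeline.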
Shipley agreed that any company building software must evaluate resourcing so it can quickly and proactively find and fix vulnerabilities. “But stuff will happen,” he acknowledged. So, in addition to doing proactive work, enterprises must regularly exercise their incident response playbooks.
“The next few years are going to be a marathon, not a sprint,” said Shipley.
Dual-use nature of AI is a challenge

However, the dual-use nature of these systems presents a big challenge. The same capability that helps defenders identify hundreds of flaws can be turned against them if the model or its outputs are exposed, Seker pointed out.
The reported unauthorized access to Mythos “reinforces that AI systems themselves are now high-value targets, effectively becoming part of the attack surface,” he said.
It’s not at all surprising that people found a way to access Mythos, Shipley agreed; it was inevitable. “Nor does Anthropic have some unique, insurmountable or exclusive AI capability for hacking,” he said, pointing out that OpenAI is already catching up in that regard, and others will “catch and surpass” Mythos.
Striking a balance requires treating AI models like privileged infrastructure, Seker noted. Enterprises need strict access controls, output monitoring, and isolation of sensitive workflows. Developers, meanwhile, must adapt by writing code that is resilient to automated scrutiny; this requires stronger input validation, safer defaults, and “fewer assumptions about obscurity.”
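As a minimal sketch of what “stronger input validation, safer defaults, and fewer assumptions about obscurity” can look like in code, consider validating an externally supplied target address. The function, its name, and the deny-internal-ranges policy are invented for illustration; the point is failing closed on anything unexpected rather than trusting the caller.

```python
import ipaddress

def parse_probe_target(raw: str) -> ipaddress.IPv4Address:
    """Validate an externally supplied IPv4 target, failing closed."""
    raw = raw.strip()
    if not raw or len(raw) > 15:  # longest dotted quad is 15 characters
        raise ValueError("target has invalid length")
    addr = ipaddress.IPv4Address(raw)  # raises ValueError on malformed input
    # Safe default: internal ranges are denied unless explicitly allow-listed.
    if addr.is_private or addr.is_loopback or addr.is_link_local:
        raise ValueError("internal addresses are not allowed")
    return addr
```

Nothing here relies on an attacker not knowing the endpoint exists; every input is treated as hostile until it passes explicit checks, which is precisely the property that holds up under automated scrutiny.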
“In this paradigm, security isn’t just about defending systems; it’s about defending the tools that are now capable of breaking them at scale,” Seker emphasized.
This article originally appeared on CSOonline.
Another npm supply chain worm is tearing through dev environments
Yet another npm supply-chain attack is worming its way through compromised packages, stealing secrets and sensitive data as it moves through developers' environments, and it shares significant overlap with the open source infections attributed to TeamPCP last month.…
Where to save when filling up: an overview of gas station discounts and loyalty programs
When multiple people cause damage, the injured party can choose who pays for it. The rule has an exception, though
Experiments with MeshCore: a custom repeater, a link across the country, and the Meshy client
SciPy: convolution, filtering, and other operations on two-dimensional signals