Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs many interesting services and supports its community in interesting projects.

Categories

Amazon says 175 million customers now use passkeys to log in

Bleeping Computer - 15 October, 2024 - 22:52
Amazon has seen massive adoption of passkeys since the company quietly rolled them out a year ago, announcing today that over 175 million customers use the security feature. [...]
Category: Hacking & Security

Intel, AMD unite in new x86 alliance to tackle AI, other challenges

Computerworld.com [Hacking News] - 15 October, 2024 - 22:36

Semiconductor rivals Intel and AMD announced the formation of an x86-processor advisory group that will try to address ever-increasing AI workloads, custom chiplets, and advances in 3D packaging and system architectures.

Members of the x86 Ecosystem Advisory Group include Broadcom, Dell, Google, Hewlett Packard Enterprise, HP, Lenovo, Meta, Microsoft, Oracle, and Red Hat. Notably missing: TSMC — the world’s largest chipmaker. Linux creator Linus Torvalds and Epic Games CEO Tim Sweeney are also members.

The mega-tech companies plan to collaborate on architectural interoperability and hope to “simplify software development” across the world’s most widely used computing architecture, according to a news announcement.

“We are on the cusp of one of the most significant shifts in the x86 architecture and ecosystem in decades — with new levels of customization, compatibility and scalability needed to meet current and future customer needs,” Intel CEO Pat Gelsinger said in a statement.

Generative AI (genAI) is moving into smartphones, PCs, cars, and Internet of Things (IoT) devices because edge hardware can process data locally, return results faster, and keep data more secure.

That’s why, over the next several years, silicon makers are turning their attention to fulfilling the promise of AI at the edge, which will allow developers to essentially offload processing from data centers — giving genAI app makers a free ride as the user pays for the hardware and network connectivity.

Apple, Samsung, and other smartphone and silicon manufacturers are rolling out AI capabilities on their hardware, fundamentally changing the way users interact with edge devices. On the heels of Apple rolling out an early preview of iOS 18.1 with its first genAI tools, IDC released a report saying nearly three in four smartphones will be running AI features within four years.

The release of the next version of Windows — perhaps called Windows 12 — later this year is also expected to be a catalyst for genAI adoption at the edge; the new OS is expected to have AI features built in.

At the 2024 Consumer Electronics Show in January, PC vendors and chipmakers showcased advanced AI-driven functionalities. But despite the enthusiasm generated by those selling or making genAI tools and platforms, enterprises are expected to adopt a more measured approach over the next year, according to one Forrester Research report.

“CIOs face several barriers when considering AI-powered PCs, including the high costs, difficulty in demonstrating how user benefits translate into business outcomes, and the availability of AI chips and device compatibility issues,” said Andrew Hewitt, principal analyst at Forrester Research.

Category: Hacking & Security

Finland seizes servers of 'Sipulitie' dark web drugs market

Bleeping Computer - 15 October, 2024 - 22:08
The Finnish Customs office took down the website and seized the servers for the darknet marketplace 'Sipulitie' where criminals sold illegal narcotics anonymously. [...]
Category: Hacking & Security

EDRSilencer red team tool used in attacks to bypass security

Bleeping Computer - 15 October, 2024 - 20:47
A tool for red-team operations called EDRSilencer has been observed in malicious incidents attempting to identify security tools and mute their alerts to management consoles. [...]
Category: Hacking & Security

Safer with Google: Advancing Memory Safety

Google Security Blog - 15 October, 2024 - 19:44
Posted by Alex Rebert, Security Foundations, and Chandler Carruth, Jen Engel, Andy Qin, Core Developers

Error-prone interactions between software and memory [1] are widely understood to create safety issues in software. It is estimated that about 70% of severe vulnerabilities [2] in memory-unsafe codebases are due to memory safety bugs. Malicious actors exploit these vulnerabilities and continue to create real-world harm. In 2023, Google’s threat intelligence teams conducted an industry-wide study and observed a near-record number of vulnerabilities exploited in the wild. Our internal analysis estimates that 75% of CVEs used in zero-day exploits are memory safety vulnerabilities.

At Google, we have been mindful of these issues for over two decades, and are on a journey to continue advancing the state of memory safety in the software we consume and produce. Our Secure by Design commitment emphasizes integrating security considerations, including robust memory safety practices, throughout the entire software development lifecycle. This proactive approach fosters a safer and more trustworthy digital environment for everyone.

This post builds upon our previously reported Perspective on Memory Safety, and introduces our strategic approach to memory safety.

Our journey so far

Google's journey with memory safety is deeply intertwined with the evolution of the software industry itself. In our early days, we recognized the importance of balancing performance with safety. This led to the early adoption of memory-safe languages like Java and Python, and the creation of Go. Today these languages comprise a large portion of our code, providing memory safety among other benefits. Meanwhile, the rest of our code is predominantly written in C++, previously the optimal choice for high-performance demands.

We recognized the inherent risks associated with memory-unsafe languages and developed tools like sanitizers, which detect memory safety bugs dynamically, and fuzzers like AFL and libFuzzer, which proactively test the robustness and security of a software application by repeatedly feeding it unexpected inputs. By open-sourcing these tools, we've empowered developers worldwide to reduce the likelihood of memory safety vulnerabilities in C and C++ codebases. Taking this commitment a step further, we provide continuous fuzzing to open-source projects through OSS-Fuzz, which has helped identify, and subsequently fix, over 8,800 vulnerabilities across 850 projects.
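
To make that concrete, here is a minimal libFuzzer-style harness of the kind these tools run. The target function parse_record is a hypothetical example (not code from the post) containing a classic bounds bug; compiled with -fsanitize=fuzzer,address, the fuzzer repeatedly mutates inputs until AddressSanitizer reports the overflow:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Hypothetical example target: copies a length-prefixed payload into a
    // fixed buffer, but trusts the attacker-controlled length byte.
    static void parse_record(const uint8_t *data, size_t size) {
      if (size < 1) return;
      uint8_t declared_len = data[0];
      if (size - 1 < declared_len) return;  // checks against the input size...
      char buf[16];
      memcpy(buf, data + 1, declared_len);  // ...but not against the 16-byte
                                            // buffer: declared_len > 16 is a
                                            // stack overflow ASan will flag
      (void)buf;
    }

    // libFuzzer entry point, called repeatedly with mutated inputs.
    // Build sketch: clang++ -g -fsanitize=fuzzer,address harness.cc
    extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
      parse_record(data, size);
      return 0;  // a non-crashing input; the fuzzer keeps exploring
    }

A fuzzer typically finds a crashing input for code like this in seconds, and the same harness can run unchanged under OSS-Fuzz-style continuous fuzzing.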

Today, with the emergence of high-performance memory-safe languages like Rust, coupled with a deeper understanding of the limitations of purely detection-based approaches, we are focused primarily on preventing the introduction of security vulnerabilities at scale.

Going forward: Google's two-pronged approach

Google's long-term strategy for tackling memory safety challenges is multifaceted, recognizing the need to address both existing codebases and future development, while maintaining the pace of business.

Our long-term objective is to progressively and consistently integrate memory-safe languages into Google's codebases while phasing out memory-unsafe code in new development. Given the amount of C++ code we use, we anticipate a residual amount of mature and stable memory-unsafe code will remain for the foreseeable future.

[Figure: memory-safe language growth as memory-unsafe code is hardened and gradually decreased over time.]

Migration to Memory-Safe Languages (MSLs)

The first pillar of our strategy is centered on further increasing the adoption of memory-safe languages. These languages drastically drive down the risk of memory-related errors through features like garbage collection and borrow checking, embodying the same Safe Coding [3] principles that successfully eliminated other vulnerability classes like cross-site scripting (XSS) at scale. Google has already embraced MSLs like Java, Kotlin, Go, and Python for a large portion of our code.
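
To illustrate the bug class these language features eliminate (our sketch, not the post's), the C++ below compiles cleanly yet leaves a dangling reference once the vector reallocates; a borrow checker rejects the equivalent program at compile time, and a garbage collector would keep the old storage alive instead:

    #include <iostream>
    #include <vector>

    int main() {
      std::vector<int> v = {1, 2, 3};
      int &first = v[0];   // reference into the vector's current heap buffer
      v.push_back(4);      // may reallocate and free that buffer; `first` dangles
      std::cout << first;  // use-after-free: undefined behavior that compiles
                           // cleanly and often goes unnoticed in testing
    }

In Rust, holding a shared borrow of the element across the mutating push is a compile-time error, which is the sense in which whole vulnerability classes are prevented by design rather than detected after the fact.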

Our next target is to ramp up memory-safe languages with the necessary capabilities to address the needs of even more of our low-level environments where C++ has remained dominant. For example, we are investing to expand Rust usage at Google beyond Android and other mobile use cases and into our server, application, and embedded ecosystems. This will unlock the use of MSLs in low-level code environments where C and C++ have typically been the language of choice. In addition, we are exploring more seamless interoperability with C++ through Carbon, as a means to accelerate even more of our transition to MSLs.

In Android, which runs on billions of devices and is one of our most critical platforms, we've already made strides in adopting MSLs, including Rust, in sections of our network, firmware and graphics stacks. We specifically focused on adopting memory safety in new code instead of rewriting mature and stable memory-unsafe C or C++ codebases. As we've previously discussed, this strategy is driven by vulnerability trends as memory safety vulnerabilities were typically introduced shortly before being discovered.

As a result, the number of memory safety vulnerabilities reported in Android has decreased dramatically and quickly, dropping from more than 220 in 2019 to a projected 36 by the end of this year, demonstrating the effectiveness of this strategic shift. Given that memory-safety vulnerabilities are particularly severe, the reduction in memory safety vulnerabilities is leading to a corresponding drop in vulnerability severity, representing a reduction in security risk.

Risk Reduction for Memory-Unsafe Code

While transitioning to memory-safe languages is the long-term strategy, and one that requires investment now, we recognize the immediate responsibility we have to protect the safety of our billions of users during this process. This means we cannot ignore the reality of a large codebase written in memory-unsafe languages (MULs) like C and C++.

Therefore the second pillar of our strategy focuses on risk reduction & containment of this portion of our codebase. This incorporates:

  • C++ Hardening: We are retrofitting safety at scale in our memory-unsafe code, based on our experience eliminating web vulnerabilities. While we won't make C and C++ memory safe, we are eliminating sub-classes of vulnerabilities in the code we own, as well as reducing the risks of the remaining vulnerabilities through exploit mitigations.

    We have allocated a portion of our computing resources specifically to bounds-checking the C++ standard library across our workloads. While bounds-checking overhead is small for individual applications, deploying it at Google's scale requires significant computing resources. This underscores our deep commitment to enhancing the safety and security of our products and services. Early results are promising, and we'll share more details in a future post. (A brief sketch of what standard-library bounds checking looks like follows this list.)

    In Chrome, we have also been rolling out MiraclePtr over the past few years, which effectively mitigated 57% of use-after-free vulnerabilities in privileged processes, and has been linked to a decrease in in-the-wild exploits.

  • Security Boundaries: We are continuing [4] to strengthen critical components of our software infrastructure through expanded use of isolation techniques like sandboxing and privilege reduction, limiting the potential impact of vulnerabilities. For example, earlier this year, we shipped the beta release of our V8 heap sandbox and included it in Chrome's Vulnerability Reward Program.
  • Bug Detection: We are investing in bug detection tooling and innovative research such as Naptime and making ML-guided fuzzing as effortless and widespread as testing. While we are increasingly shifting towards memory safety by design, these tools and techniques remain a critical component of proactively identifying and reducing risks, especially against vulnerability classes currently lacking strong preventative controls.

    In addition, we are actively working with the semiconductor and research communities on emerging hardware-based approaches to improve memory safety. This includes our work to support and validate the efficacy of Memory Tagging Extension (MTE). Device implementations are starting to roll out, including within Google’s corporate environment. We are also conducting ongoing research into Capability Hardware Enhanced RISC Instructions (CHERI) architecture which can provide finer grained memory protections and safety controls, particularly appealing in security-critical environments like embedded systems.
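
As a sketch of what the standard-library bounds checking mentioned in the C++ Hardening bullet can look like in practice, the snippet below uses libc++'s hardening mode; the _LIBCPP_HARDENING_MODE macro is a recent libc++ mechanism used here for illustration, as the post names no specific toolchain or flag:

    // Build sketch (assumes a recent Clang/libc++ toolchain):
    //   clang++ -stdlib=libc++ \
    //     -D_LIBCPP_HARDENING_MODE=_LIBCPP_HARDENING_MODE_EXTENSIVE demo.cc
    #include <vector>

    int main() {
      std::vector<int> v = {1, 2, 3};
      // Without hardening, the out-of-bounds operator[] below silently reads
      // adjacent memory (undefined behavior). With hardening enabled, the
      // library validates the index and traps deterministically, turning a
      // potentially exploitable bug into a safe crash.
      return v[3];
    }

Each check is cheap and branch-predictable, which is consistent with the post's observation that the overhead is small for an individual application even though enabling it across every workload has a real fleet-wide cost.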

Looking ahead

We believe it’s important to embrace the opportunity to achieve memory safety at scale, and that it will have a positive impact on the safety of the broader digital ecosystem. This path forward requires continuous investment and innovation to drive safety and velocity, and we remain committed to the broader community to walk this path together.

We will provide future publications on memory safety that will go deeper into specific aspects of our strategy.

Notes
  1. Anderson, J. Computer Security Technology Planning Study, Vol. II. ESD-TR-73-51, Electronic Systems Division, Air Force Systems Command, Hanscom Field, Bedford, MA (Oct. 1972). https://seclab.cs.ucdavis.edu/projects/history/papers/ande72.pdf
  2. https://www.memorysafety.org/docs/memory-safety/#how-common-are-memory-safety-vulnerabilities
  3. Kern, C. Developer Ecosystems for Software Safety. Commun. ACM 67, 6 (June 2024), 52–60. https://doi.org/10.1145/3651621
  4. Barth, A., et al. "The Security Architecture of the Chromium Browser." Technical report, Stanford University, 2008. https://seclab.stanford.edu/websec/chromium/chromium-security-architecture.pdf

Category: Hacking & Security

Bringing new theft protection features to Android users around the world

Google Security Blog - 15 October, 2024 - 17:59
Posted by Jianing Sandra Guo, Product Manager and Nataliya Stanetsky, Staff Program Manager, Android

Janine Roberta Ferreira was driving home from work in São Paulo when she stopped at a traffic light. A man suddenly appeared and broke the window of her unlocked car, grabbing her phone. She struggled with him for a moment before he wrestled the phone away and ran off. The incident left her deeply shaken. Not only was she saddened by the loss of precious data, like pictures of her nephew, but she also felt vulnerable knowing her banking information was on the phone that had just been stolen.

Situations like Janine’s highlighted the need for a comprehensive solution to phone theft that exceeded existing tools on any platform. Phone theft is a widespread concern in many countries – 97 phones are robbed or stolen every hour in Brazil. The GSM Association reports millions of devices stolen every year, and the numbers continue to grow.

With our phones becoming increasingly central to storing sensitive data, like payment information and personal details, losing one can be an unsettling experience. That’s why we developed, and thoroughly beta tested, a full suite of features designed to protect you and your data at every stage – before, during, and after device theft.

These advanced theft protection features are now available to users around the world through Android 15 and a Google Play Services update (Android 10+ devices).

AI-powered protection for your device the moment it is stolen

Theft Detection Lock uses powerful AI to proactively protect you at the moment of a theft attempt. By using on-device machine learning, Theft Detection Lock is able to analyze various device signals to detect potential theft attempts. If the algorithm detects a potential theft attempt on your unlocked device, it locks your screen to keep thieves out.

To protect your sensitive data if your phone is stolen, Theft Detection Lock uses device sensors to identify theft attempts. We’re working hard to bring this feature to as many devices as possible. This feature is rolling out gradually to ensure compatibility with various devices, starting today with Android devices that cover 90% of active users worldwide. Check your theft protection settings page periodically to see if your device is currently supported.

In addition to Theft Detection Lock, Offline Device Lock protects you if a thief tries to take your device offline to extract data or avoid a remote wipe via Android’s Find My Device. If an unlocked device goes offline for prolonged periods, this feature locks the screen to ensure your phone can’t be used in the hands of a thief.

If your Android device does become lost or stolen, Remote Lock can quickly help you secure it. Even if you can’t remember your Google account credentials in the moment of theft, you can use any device to visit Android.com/lock and lock your phone with just a verified phone number. Remote Lock secures your device while you regain access through Android’s Find My Device – which lets you secure, locate or remotely wipe your device. As a security best practice, we always recommend backing up your device on a continuous basis, so remotely wiping your device is not an issue.

These features are now available on most Android 10+ devices [1] via a Google Play Services update and must be enabled in settings.

Advanced security to deter theft before it happens

Android 15 introduces new security features to deter theft before it happens by making it harder for thieves to access sensitive settings, apps, or reset your device for resale:

  • Changes to sensitive settings like Find My Device now require your PIN, password, or biometric authentication.
  • Multiple failed login attempts, which could be a sign that a thief is trying to guess your password, will lock down your device, preventing unauthorized access.
  • And enhanced factory reset protection makes it even harder for thieves to reset your device without your Google account credentials, significantly reducing its resale value and protecting your data.

Later this year, we’ll launch Identity Check, an opt-in feature that will add an extra layer of protection by requiring biometric authentication when accessing critical Google account and device settings, like changing your PIN, disabling theft protection, or accessing Passkeys from an untrusted location. This helps prevent unauthorized access even if your device PIN is compromised.

Real-world protection for billions of Android users

By integrating advanced technology like AI and biometric authentication, we're making Android devices less appealing targets for thieves to give you greater peace of mind. These theft protection features are just one example of how Android is working to provide real-world protection for everyone. We’re dedicated to working with our partners around the world to continuously improve Android security and help you and your data stay safe.

You can turn on the new Android theft features in the theft protection settings on a supported Android device. Learn more about our theft protection features by visiting our help center.

Notes
  1. Android Go smartphones, tablets and wearables are not supported 

Category: Hacking & Security

Apple powers up the iPad mini for Apple Intelligence

Computerworld.com [Hacking News] - 15 October, 2024 - 17:57

As expected, Apple has introduced a much faster Apple Intelligence-capable iPad mini equipped with the same A17 Pro chip used in the iPhone 15 Pro series. That’s a good improvement from the A15 Bionic in the previous model, and makes for faster graphics, computation, and AI calculation. 

It also sets the scene for the public release of the first Apple Intelligence features on Oct. 28, when I expect all of Apple’s heavily promoted wave of current hardware ads to at last make more sense. (We can also expect new Macs before the end of October.)

The iPad mini turns 7

Announcing the new mini by press release, Apple broke with tradition twice in this heavily telegraphed (we all expected it) product iteration.

First, in what from memory seems a fairly rare move, Apple unveiled the new hardware right after a US holiday; second, the release wasn’t flagged by Apple industry early-warning system Mark Gurman, though he did anticipate an October update. The introduction of a highly performant Apple tablet is likely to further accelerate Apple’s iPad sales, which increased 14% in Q2 2024, according to Counterpoint. Apple will remain the world’s leading tablet maker, and earlier reports about the death of this particular component of Apple’s tablet range proved unfounded.

What’s new in iPad mini?

At first glance, the new iPad mini will seem familiar to most users. The biggest change is pretty much an updated chip inside a similar device, with the same height, width, and weight as the model it replaces. Available in blue, purple, starlight, and space gray, the iPad mini has an 8.3-in. Liquid Retina display, similar to before. Remarkably, pricing on the new models starts at $499 for 128GB storage — which is twice the storage at the same starting price as the 2021 iPad mini this one replaces. 

There are other highlights here.

A better, faster AI processor

The A17 Pro processor means the iPad mini now has a 6-core CPU, which makes for a 30% boost in CPU performance in comparison to the outgoing model. You also get a 25% boost to graphics performance, along with the necessary AI-based computation capability enhancements required to run Apple Intelligence. Of course, the chip is far more capable of handling the kind of professionally focused apps used by designers, pilots, or doctors.

While we all recognize at this stage that Apple’s decision to boost all its products with more powerful chips is because it wants to ensure support for Apple Intelligence, this also means you get better performance for other tasks as well. All the same, it will be interesting to discover the extent to which a far more contextually-capable Siri and the many handy writing assistance tools offered by Apple’s AI will boost existing tablet-based workflows in enterprise, education, and domestic use.

Better for conferencing

If you use your iPad for work, it is likely to be good news that the new iPad mini has a 12-megapixel (MP) back camera and 12MP conferencing camera. While the last-generation model also boasted 12MP cameras, the 5x digital zoom is a welcome enhancement, while the 16-core Neural Engine inside the iPad mini’s chip means those images you do capture are augmented on the fly by AI to improve picture/video quality. Overall, you’ll get better results when taking images or capturing video.

What Apple said

“There is no other device in the world like iPad mini, beloved for its combination of powerful performance and versatility in our most ultraportable design,” said Bob Borchers, Apple’s vice president of Worldwide Product Marketing. “iPad mini appeals to a wide range of users and has been built for Apple Intelligence, delivering intelligent new features that are powerful, personal, and private.

“With the powerful A17 Pro chip, faster connectivity, and support for Apple Pencil Pro, the new iPad mini delivers the full iPad experience in our most portable design at an incredible value.”

In common with all its latest products, Apple is putting every possible focus on AI tools, making crystal clear its plans to continue investing in its unique blend of privacy and the personal augmentation promised by its human-focused AI. The current selection of tools the company is providing should really be seen as the beginning of this part of its journey.

What else stands out?

Additional improvements in the new iPad mini include:

  • Wi-Fi 6E support, which increases bandwidth if you happen to be on a compatible wireless network; 5G cellular is also available.
  • A 12MP wide back camera with Smart HDR 4 support and a built-in document scanner in the Camera app.
  • Apple Pencil Pro support.
  • Available for pre-order today, shipping on Oct. 23.
  • Apple Intelligence arrives with its first wave of features five days later.

There’s an environmental mission visible in the product introduction, too. The new iPad uses 100% recycled aluminium in its enclosure along with 100% recycled rare earth elements in all its magnets and recycled gold and tin in the printed circuit boards.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.

Category: Hacking & Security

TrickMo Banking Trojan Can Now Capture Android PINs and Unlock Patterns

The Hacker News - 15 October, 2024 - 17:47
New variants of an Android banking trojan called TrickMo have been found to harbor previously undocumented features to steal a device's unlock pattern or PIN. "This new addition enables the threat actor to operate on the device even while it is locked," Zimperium security researcher Aazim Yaswant said in an analysis published last week. First spotted in the wild in 2019, TrickMo is so named for...
Category: Hacking & Security

New Malware Campaign Uses PureCrypter Loader to Deliver DarkVision RAT

The Hacker News - 15 October, 2024 - 17:20
Cybersecurity researchers have disclosed a new malware campaign that leverages a malware loader named PureCrypter to deliver a commodity remote access trojan (RAT) called DarkVision RAT. The activity, observed by Zscaler ThreatLabz in July 2024, involves a multi-stage process to deliver the RAT payload. "DarkVision RAT communicates with its command-and-control (C2) server using a custom network...
Category: Hacking & Security

New FIDO proposal lets you securely move passkeys across platforms

Bleeping Computer - 15 October, 2024 - 17:18
The Fast IDentity Online (FIDO) Alliance has published a working draft of a new specification that aims to enable the secure transfer of passkeys between different providers. [...]
Category: Hacking & Security

New Linux Variant of FASTCash Malware Targets Payment Switches in ATM Heists

The Hacker News - 15 October, 2024 - 16:43
North Korean threat actors have been observed using a Linux variant of a known malware family called FASTCash to steal funds as part of a financially motivated campaign. The malware is "installed on payment switches within compromised networks that handle card transactions for the means of facilitating the unauthorized withdrawal of cash from ATMs," a security researcher who goes by HaxRob said.
Category: Hacking & Security

Over 200 malicious apps on Google Play downloaded millions of times

Bleeping Computer - 15 October, 2024 - 16:26
Google Play, the official store for Android, distributed more than 200 malicious applications over a one-year period, which were cumulatively downloaded nearly eight million times. [...]
Category: Hacking & Security

The Russian section of the ISS is full of scratches and cracks, but NASA is not overly concerned

Zive.cz - bezpečnost - 15 October, 2024 - 15:45
The Russian segment of the International Space Station (ISS) is covered in various cracks and scratches. As the Washington Post reported, however, NASA associate administrator James Free and spokesperson Kathryn Hambleton have tended to downplay the matter. Free stated that the Roskosmos agency had repeatedly been ...
Category: Hacking & Security

Google bets on nuclear power to drive AI expansion

Computerworld.com [Hacking News] - 15 October, 2024 - 14:33

Google has signed its first corporate deal to purchase power from multiple small modular reactors (SMRs) to meet the energy needs of its AI systems, marking a key step as AI companies shift toward nuclear power.

In a blog post, Google announced an agreement with Kairos Power to source nuclear energy, aiming to bring the first SMR online by 2030, with more reactors planned by 2035.

Continue reading on Network World.

Category: Hacking & Security

Microsoft’s AI research VP joins OpenAI amid fight for top AI talent

Computerworld.com [Hacking News] - 15 October, 2024 - 13:42

Sebastien Bubeck, Microsoft’s vice president of GenAI research, is leaving the company to join OpenAI, the maker of ChatGPT.

Bubeck, a 10-year veteran at Microsoft, played a significant role in driving the company’s generative AI strategy, with a focus on designing more efficient small language models (SLMs) to rival OpenAI’s GPT systems.

His work culminated in the creation of the compact and cost-effective Phi models, which have since been incorporated into key Microsoft products like the Bing chatbot and Office 365 Copilot, gradually replacing OpenAI’s models in specific functions. His contributions helped enhance AI efficiency while reducing operational costs.

Microsoft confirmed the news but has not disclosed the exact role Bubeck will assume at the AI startup, Reuters reported.

“We appreciate the contributions Sebastian has made to Microsoft and look forward to continuing our relationship through his work with OpenAI,” Reuters reported, quoting a Microsoft statement. Most of Bubeck’s co-authors on Microsoft’s Phi LLM research are expected to remain at the company and continue advancing the technology.

Bubeck is expected to contribute his expertise toward OpenAI’s mission of developing artificial general intelligence (AGI), which refers to autonomous systems capable of outperforming humans in most economically valuable tasks, the report added.

Bubeck’s move comes as OpenAI focuses on achieving AGI, a key goal for the company. As per the report, while Microsoft has heavily invested in OpenAI, the company expressed no concerns about Bubeck’s departure.

“Sebastien Bubeck leads the Machine Learning Foundations group at Microsoft Research Redmond. He joined MSR in 2014, after three years as an assistant professor at Princeton University,” reads Bubeck’s profile on Microsoft’s yet-to-be-removed “About” page.

Bubeck’s X profile still shows him as “VP AI and Distinguished Scientist, Microsoft.”

Queries to Microsoft, OpenAI, and Bubeck did not elicit any response.

The great migration at OpenAI

Sebastien Bubeck’s departure from Microsoft to join OpenAI adds to a growing list of high-profile executive shifts in the AI industry, underscoring the intense competition for top talent as tech giants race to develop artificial general intelligence (AGI). While talent mobility is common in the fast-evolving AI landscape, OpenAI has been hit particularly hard with several key figures leaving in recent months.

Of the 11 founding members of OpenAI, only CEO Sam Altman and Wojciech Zaremba, head of the Codex and Research team, remain with the company. In September, Mira Murati, OpenAI’s high-profile CTO, stepped down, followed by co-founder John Schulman, who left to join Anthropic—a public benefit corporation focused on ethical AI development. These exits came on the heels of the departure of another co-founder, Ilya Sutskever, who resigned earlier this year to start his own venture, Safe Superintelligence Inc (SSI), dedicated to developing responsible AI systems.

Earlier in the year, Jan Leike, another leading OpenAI researcher, also left to join Anthropic, publicly expressing concerns that OpenAI’s “safety culture and processes have taken a backseat.” This wave of exits has raised questions about the company’s internal dynamics as it navigates the highly competitive AI landscape.

Despite these setbacks, OpenAI and its key collaborator, Microsoft, remain steadfast in their pursuit of AGI. Microsoft, which has heavily invested in OpenAI, has integrated its AI technology into core products like Bing and Office 365, while OpenAI continues to push the boundaries of AGI development.

“Leaders at big tech companies have either explicitly stated or signaled that they are deliberately working towards AGI,” said Anil Vijayan, partner at Everest Group. “There’s clearly strong belief that it will end up in a winner-take-all scenario, which is heating up the race to be first to the post.”

The race to AGI has intensified the demand for top-tier AI talent, with larger companies having a clear advantage. “We will see these handful of executives move between the big tech companies that can afford to attract high-profile AI executives. Smaller organizations and startups will struggle to retain high-quality AI talent,” Vijayan said.

For executives, the allure of AGI goes beyond compensation. “Top-tier talent is likely to be attracted by alignment to vision, stated goals, and the chance to be part of history — whether that’s AGI or otherwise,” said Vijayan.

This explains why many top AI professionals gravitate toward companies like OpenAI and Anthropic, which push the boundaries of AI and AGI development.

As the AI landscape continues to evolve, the talent war will likely shape the future of AGI, with big tech companies remaining at the forefront of the race.

Category: Hacking & Security

The Rise of Zero-Day Vulnerabilities: Why Traditional Security Solutions Fall Short

The Hacker News - 15 October, 2024 - 13:00
In recent years, the number and sophistication of zero-day vulnerabilities have surged, posing a critical threat to organizations of all sizes. A zero-day vulnerability is a security flaw in software that is unknown to the vendor and remains unpatched at the time of discovery. Attackers exploit these flaws before any defensive measures can be implemented, making zero-days a potent weapon for...
Category: Hacking & Security

How Ernst & Young’s AI platform is ‘radically’ reshaping operations

Computerworld.com [Hacking News] - 15 October, 2024 - 12:00

Multinational consultancy Ernst & Young (EY) said generative AI (genAI) is “radically reshaping” the way it operates, and the company boasts a 96% adoption rate of the technology by employees.

After spending $1.4 billion on a customized generative AI platform called EY.ai, the company said the technology is creating new efficiencies and allowing its employees to focus on higher-level tasks. Following an initial pilot with 4,200 EY tech-focused team members in 2023, the global organization released its large language model (LLM) to its nearly 400,000 employees.

Even so, the company’s executive leadership insists it’s not handing off all of its business functions and operations to an AI proxy and that humans remain at the center of innovation and development. Looking to the future, EY sees the next evolution as artificial general intelligence (AGI) — a neural network that will be able to think for itself and perform any intellectual task a human can. At that point, it will become a “strategic partner shifting the focus from task automation to true collaboration between humans and machines,” according to Beatriz Sanz Saiz, EY global consulting data and AI leader.

Computerworld interviewed Saiz about how genAI is changing the way the company operates and how its employees perform their jobs.


You launched EY.ai a year ago. How has that transformed your organization? What kinds of efficiencies and/or productivity gains have you seen? “Over the past year, we’ve harnessed AI to radically reshape the way we operate, both internally and in service to our clients. We’ve integrated AI into numerous facets of our operations, from enhancing client service delivery to improving our internal efficiencies. Teams are now able to focus more on high-value activities that truly drive innovation and business growth, while AI assists with complex data analysis and operational tasks.

“What is fascinating is the level of adoption: 96.4% of EY employees are users of the platform, which is enriching our collective intelligence. EY.ai is a catalyst for changing the way we work and re-skilling EY employees at pace.

“We’ve approached this journey by using ourselves as a perfect test case for the many ways in which we can provide transformational assistance to clients. This is central to our Client Zero strategy, in which we refine solutions and demonstrate their effectiveness in real-world settings — then adapt the crucial learnings from that process and apply them to driving innovation and growth for clients.”

How has EY.ai changed over the past year? “EY.ai has evolved in tandem with the rapid pace of technological advancement. Initially, we focused on testing and learning, but now we’re deeply embedding AI across every function of our business. This shift from experimentation to full-scale implementation is enabling us to be more agile, efficient, and responsive to our clients’ needs. In this journey, we’ve learned that AI’s potential isn’t just about isolated use cases — its true power lies in how it enables transformation at scale.

“The platform’s integration has been refined to ensure that it aligns with our core strategy — especially around making AI fit for purpose within the organization. It evolved from Fabric — an EY core data platform — to EY.ai, which incorporates a strong knowledge layer and AI technology ecosystem. In that sense, we’ve put a lot of effort into understanding the nuances of how AI can best serve each business, function and industry. We are rapidly building industry verticals that challenge the status quo of traditional value chains. We are constantly evolving its ethical framework to ensure the responsible use of AI, with humans always at the heart of the decision-making process.”

Can you describe EY.ai in terms of the model behind it, its size, and the number of instances you have (i.e., an instance for each application, or one model for all applications)? “EY.ai isn’t a one-size-fits-all solution; it operates as a flexible ecosystem tailored to the unique needs of different functions within our organization. We deploy a combination of models, ranging from [LLMs] to smaller, more specialized models designed for specific tasks. This multi-model approach allows us to leverage both open-source and proprietary technologies where they best fit, ensuring that our AI solutions are scalable, efficient, and agile across different applications.”

What advice do you have for other enterprises considering implementing their own AI instances? Go big with LLMs or choose small language models based on both open-source or proprietary (such as Llama-3 type) models? What are the advantages of each? “My advice is to start with a clear understanding of your business goals. Large language models are incredibly powerful, but they’re resource-intensive and can sometimes feel like a sledgehammer for tasks that require a scalpel. Smaller models offer more precision and can be fine-tuned to specific needs, allowing for greater efficiency and control. It’s all about finding the right balance between ambition and practicality.”

What is knowledge engineering and who’s responsible for that role? “Knowledge engineering involves structuring, curating, and governing the knowledge that feeds AI systems, ensuring that they can deliver accurate, reliable, and actionable insights. Unlike traditional data science, which focuses on data manipulation, knowledge engineering is about understanding the context in which data exists and how it can be transformed into useful knowledge.

“Responsibility for this role often falls to Chief Knowledge Officers or similar roles within organizations. These individuals ensure that AI is not only ingesting high-quality data, but also making sense of it in ways that align with the organization’s goals and ethical standards.”

What kind of growth are you seeing in the number of Chief Knowledge Officers, and why are they growing in numbers? “The rise of the Chief Knowledge Officer (CKO) is directly tied to the increasing importance of knowledge engineering in today’s AI-driven world. We are witnessing a fundamental shift where data alone isn’t enough. Businesses need structured, actionable knowledge to truly harness AI’s potential.

“CKOs are becoming indispensable, because in the scenario of agent-based workflows in the enterprise, it is knowledge, not just data, that agents will deploy to accomplish an outcome: i.e. customer service, back-office operations, etc. The CKO’s role is pivotal in aligning AI’s capabilities with business strategy, ensuring that insights derived from AI are both accurate and actionable. It’s not just about managing information, it’s about driving strategic value through knowledge.”

What kind of decline are you seeing in data science roles, and why? “We’re seeing a decline in roles focused purely on data wrangling or basic analytics, as these functions are increasingly automated by AI. However, this shift doesn’t mean data science is becoming obsolete — it means it’s evolving.

“Today, the focus is on data architects, knowledge engineering, agent development and AI governance — roles that ensure AI systems are deployed responsibly and aligned with business goals. We’re also seeing a greater emphasis on roles that do the vital job of managing the ethical dimensions of AI, ensuring transparency and accountability in its use and compliance as the new EU AI Act obligations become effective.”

Many companies have invested resources in cleaning up their unstructured and structured data lakes so they can be used for generating AI responses. Why then do you see fewer and not more investments in data scientists? “Companies are prioritizing AI tools that can automate much of the data preparation and curation process. The role of the data scientist, over time, will evolve into one that’s more about overseeing these automated processes and ensuring the integrity of the knowledge being generated from the data, rather than manually analyzing or cleaning it. This shift also highlights the growing importance of knowledge engineering over traditional data science roles.

“The focus is shifting from manual data analysis to systems that can automatically clean, manage, and analyze data at scale. As AI takes on more of these tasks, the need for traditional data science roles diminishes. Instead, the emphasis is on data architects, knowledge engineering — understanding how to structure, govern, and utilize knowledge in ways that enhance AI’s performance and inform AI agent developers.”

What do you see as the top AI roles emerging as the technology continues to be adopted? “We’re seeing a new wave of AI roles emerging, with a strong focus on governance, ethics, and strategic alignment. Chief AI Officers, AI governance leads, knowledge engineers and AI agent developers are becoming critical to ensuring that AI systems are trustworthy, transparent, and aligned with both business goals and human needs.

“Additionally, roles like AI ethicists and compliance experts are on the rise, especially as governments begin to regulate AI more strictly. These roles go beyond technical skills —  they require a deep understanding of policy, ethics, and organizational strategy. As AI adoption grows, so too will the need for individuals who can bridge the gap between the technology and the focus on human-centered outcomes.”

How will artificial general intelligence (AGI) transform the enterprise long term? “AGI will revolutionize the enterprise in ways we can barely imagine today. Unlike current AI, which is designed for specific tasks, AGI will be capable of performing any intellectual task a human can, which will fundamentally change how businesses operate. AGI has the potential to be a strategic partner in decision-making, innovation, and even customer engagement, shifting the focus from task automation to true collaboration between humans and machines. The long-term impact will be profound, but it’s crucial that AGI is developed and governed responsibly, with strong ethical frameworks in place to ensure it serves the broader good.”

Many believe AGI is the more frightening AI evolution. Do you believe AGI has a place in the enterprise, and can it be trusted or controlled? “I understand the concerns around AGI, but with the right safety controls, I believe it has enormous potential to bring positive change if it’s developed responsibly. AGI will certainly have a place in the enterprise. It will fundamentally transform the way companies achieve outcomes. This technology is driven by goals, outcomes — not by processes. It will disrupt the pillar of process in the enterprise, which will be a game changer.

“For that reason, trust and control will be key. Transparency, accountability, and rigorous governance will be essential in ensuring AGI systems are safe, ethical, and aligned with human values. At EY, we strongly advocate for a human-centered approach to AI, and this will be even more critical with AGI. We need to ensure that it’s not just about the technology, but about how that technology serves the real interests of society, businesses, and individuals alike.”

How do you go about ensuring “a human is at the center” of any AI implementation, especially when you may some day be dealing with AGI? “Keeping humans at the center, especially as we approach AGI, is not just a guiding principle — it’s an absolute necessity. The EU AI Act is the most developed effort yet in establishing the guardrails to control the potential impacts of this technology at scale. At EY, we are rapidly adapting our corporate policies and ethical frameworks in order to, first, be compliant, but also to lead the way in showing the path of responsible AI to our clients.

“At EY, we believe that AI implementation should always be framed by ethics, human oversight, and long-term societal impacts. We actively work to embed trust and transparency into every AI system we deploy, ensuring that human wellbeing and ethical considerations remain paramount at all times. AGI will be no different: its success will depend on how well we can align it with human values, protect individual rights, and ensure that it enhances, rather than detracts from, our collective future.”

Category: Hacking & Security

China Accuses U.S. of Fabricating Volt Typhoon to Hide Its Own Hacking Campaigns

The Hacker News - 15 October, 2024 - 10:03
China's National Computer Virus Emergency Response Center (CVERC) has doubled down on claims that the threat actor known as Volt Typhoon is a fabrication of the U.S. and its allies. The agency, in collaboration with the National Engineering Laboratory for Computer Virus Prevention Technology, went on to accuse the U.S. federal government, intelligence agencies, and Five Eyes countries of...
Category: Hacking & Security

Researchers Uncover Hijack Loader Malware Using Stolen Code-Signing Certificates

The Hacker News - 15 October, 2024 - 08:43
Cybersecurity researchers have disclosed a new malware campaign that delivers Hijack Loader artifacts that are signed with legitimate code-signing certificates. French cybersecurity company HarfangLab, which detected the activity at the start of the month, said the attack chains aim to deploy an information stealer known as Lumma. Hijack Loader, also known as DOILoader, IDAT Loader, and...
Category: Hacking & Security

WordPress Plugin Jetpack Patches Major Vulnerability Affecting 27 Million Sites

The Hacker News - 15 October, 2024 - 06:56
The maintainers of the Jetpack WordPress plugin have released a security update to remediate a critical vulnerability that could allow logged-in users to access forms submitted by others on a site. Jetpack, owned by WordPress maker Automattic, is an all-in-one plugin that offers a comprehensive suite of tools to improve site safety, performance, and traffic growth. It's used on 27 million sites.
Category: Hacking & Security
Syndicate content