Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks
Crypto-stealing malware posing as a meeting app targets Web3 pros
Conquering the Complexities of Modern BCDR
More_eggs MaaS Expands Operations with RevC2 Backdoor and Venom Loader
Hackers Leveraging Cloudflare Tunnels, DNS Fast-Flux to Hide GammaDrop Malware
Nebraska Man pleads guilty to $3.5 million cryptojacking scheme
Romania's election systems targeted in over 85,000 cyberattacks
U.S. org suffered four-month intrusion by Chinese hackers
US arrests Scattered Spider suspect linked to telecom hacks
Meta: AI created less than 1% of the disinformation around 2024 elections
AI-generated content accounted for less than 1% of the disinformation fact-checkers linked to political elections that took place worldwide in 2024, according to social media giant Meta. The company cited political elections in the United States, Great Britain, Bangladesh, Indonesia, India, Pakistan, France, South Africa, Mexico and Brazil, as well as the EU elections.
“At the beginning of the year, many warned about the potential impact that generative AI could have on the upcoming elections, including the risk of widespread deepfakes and AI-powered disinformation campaigns,” Meta President of Global Affairs Nick Clegg wrote. “Based on what we have monitored through our services, it appears that these risks did not materialize in a significant way and that any impact was modest and limited in scope.”
Meta did not provide detailed information on how much AI-generated disinformation its fact-checking uncovered related to major elections.
Apple shops at Amazon for Apple Intelligence services
Apple shops at Amazon.
In this case, it is using artificial intelligence (AI) processors from Amazon Web Services (AWS) for some of its Apple Intelligence and other services, including Maps, Apps, and search. Apple is also testing advanced AWS chips to pretrain some of its AI models as it continues its rapid pivot toward becoming the world’s most widely deployed AI platform.
That’s the big — and somewhat unexpected — news to emerge from this week’s AWS re:Invent conference.
Apple watchers will know that the company seldom, if ever, sends speakers to other people’s trade shows. So, it matters that Apple’s Senior Director of Machine Learning and AI, Benoit Dupin, took to the stage at the Amazon event. That appearance can be seen as a big endorsement both of AWS and its AI services, and the mutually beneficial relationship between Apple and AWS.
Not a new relationship
Apple has used AWS servers for years, in part to drive its iCloud and Apple One services and to scale additional capacity at times of peak demand. “One of the unique elements of Apple’s business is the scale at which we operate, and the speed with which we innovate. AWS has been able to keep the pace,” Dupin said.
Some might note that Dupin (who once worked at AWS) threw a small curveball when he revealed that Apple has begun to deploy Amazon’s Graviton and Inferentia chips for machine learning services such as streaming and search. He explained that moving to these chips has generated an impressive 40% efficiency increase in Apple’s machine learning inference workloads compared to x86 instances.
Dupin also confirmed Apple is in the early stages of evaluating the newly introduced AWS Trainium2 AI training chip, which he expects to deliver a 50% improvement in efficiency when pre-training AI models.
Scale, speed, and Apple Intelligence
On the AWS connection to Apple Intelligence, he explained: “To develop Apple Intelligence, we needed to further scale our infrastructure for training.” As a result, Apple turned to AWS because the service could provide access to the most performant accelerators in quantity.
Dupin revealed that key areas where Apple uses Amazon’s services include fine-tuning AI models, optimizing trained models to fit on small devices, and “building and finalizing our Apple Intelligence adapters, ready to deploy on Apple devices and servers. We work with AWS services across virtually all phases of our AI and ML lifecycle,” he said.
Apple Intelligence is a work in progress, and the company is already developing additional services and feature improvements. “As we expand the capabilities and features of Apple Intelligence, we will continue to depend on the scalable, efficient, high-performance accelerator technologies AWS delivers,” he said.
Apple CEO Tim Cook recently confirmed more services will appear in the future. “I’m not going to announce anything today. But we have research going on. We’re pouring all of ourselves in here, and we work on things that are years in the making,” Cook said.
TSMC, Apple, AWS, AI, oh my!
There’s another interesting connection between Apple and AWS. Apple’s M- and A-series processors are manufactured by Taiwan Semiconductor Manufacturing (TSMC), with devices made by Foxconn and others. TSMC also makes the processors used by AWS. And it manufactures the AI processors Nvidia provides; we think it will be tasked with churning out Apple Silicon server processors to support Private Cloud Compute services and Apple Intelligence.
It is also noteworthy that AWS believes it will be able to link more of its processors together for huge cloud intelligence servers beyond what Nvidia can manage. Speaking on the fringes of AWS re:Invent, AWS AI chip business development manager Gadi Hutt claimed his company’s processors will be able to train some AI models at 40% lower cost than Nvidia chips.
Up next?
While the appearance of an Apple exec at the AWS event suggests a good partnership, I can’t help but be curious about whether Apple has its own ambitions to deliver server processors, and the extent to which these might deliver significant performance/energy efficiency gains, given the performance efficiency of Apple silicon.
Speculation aside, as AI injects itself into everything, the gold rush for developers capable of building and maintaining these services and the infrastructure (including energy infrastructure) required for the tech continues to intensify; these kinds of fast-growing industry-wide deployments will surely be where opportunity shines.
You can watch Dupin’s speech here.
You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
Announcing the launch of Vanir: Open-source Security Patch Validation
Today, we are announcing the availability of Vanir, a new open-source security patch validation tool. Introduced at Android Bootcamp in April, Vanir gives Android platform developers the power to quickly and efficiently scan their custom platform code for missing security patches and identify applicable available patches. Vanir significantly accelerates patch validation by automating this process, allowing OEMs to ensure devices are protected with critical security updates much faster than traditional methods. This strengthens the security of the Android ecosystem, helping to keep Android users around the world safe.
By open-sourcing Vanir, we aim to empower the broader security community to contribute to and benefit from this tool, enabling wider adoption and ultimately improving security across various ecosystems. While initially designed for Android, Vanir can be easily adapted to other ecosystems with relatively small modifications, making it a versatile tool for enhancing software security across the board. In collaboration with the Google Open Source Security Team, we have incorporated feedback from our early adopters to improve Vanir and make it more useful for security professionals. This tool is now available for you to start developing on top of, and integrating into, your systems.
The need for Vanir
The Android ecosystem relies on a multi-stage process for vulnerability mitigation. When a new vulnerability is discovered, upstream AOSP developers create and release upstream patches. The downstream device and chip manufacturers then assess the impact on their specific devices and backport the necessary fixes. This process, while effective, can present scalability challenges, especially for manufacturers managing a diverse range of devices and old models with complex update histories. Managing patch coverage across diverse and customized devices often requires considerable effort due to the manual nature of backporting.
To streamline this vital security workflow, we developed Vanir. Vanir provides a scalable and sustainable solution for security patch adoption and validation, helping to ensure Android devices receive timely protection against potential threats.
The power of Vanir
Source-code-based static analysis
Vanir’s first-of-its-kind approach to Android security patch validation uses source-code-based static analysis to directly compare the target source code against known vulnerable code patterns. Vanir does not rely on traditional metadata-based validation mechanisms, such as version numbers, repository history and build configs, which can be prone to errors. This unique approach enables Vanir to analyze entire codebases with full history, individual files, or even partial code snippets.
A main focus of Vanir is automating the time-consuming and costly process of identifying missing security patches in the open source software ecosystem. During Vanir’s early development, it became clear that manually identifying a high volume of missing patches is not only labor-intensive but can also leave user devices inadvertently exposed to known vulnerabilities for a period of time. To address this, Vanir utilizes novel automatic signature refinement techniques and multiple pattern analysis algorithms, inspired by the vulnerable code clone detection algorithms proposed by Jang et al. [1] and Kim et al. [2]. These algorithms have low false-alarm rates and can effectively handle broad classes of code changes that might appear in code patch processes. In fact, over our two-year operation of Vanir, only 2.72% of signatures triggered false alarms. This allows Vanir to efficiently find missing patches, even in modified code, while minimizing unnecessary alerts and manual review effort.
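As a rough sketch of the idea behind source-pattern signatures (not Vanir's actual algorithm, and far simpler than the refined signatures described above), a vulnerable snippet can be reduced to a fingerprint over normalized tokens, so cosmetic edits such as variable renames still match while the actual fix does not:

```python
import hashlib
import re

KEYWORDS = {"if", "else", "for", "while", "return", "int", "char", "sizeof"}

def normalize(code):
    """Tokenize C-like code, mapping identifiers to a placeholder so
    variable renames do not defeat the match."""
    tokens = re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]", code)
    out = []
    for t in tokens:
        if re.match(r"[A-Za-z_]", t) and t not in KEYWORDS:
            out.append("ID")   # any identifier collapses to a placeholder
        else:
            out.append(t)      # keyword, operator, or literal kept as-is
    return out

def signature(code):
    """Hash of the normalized token stream: a crude code-pattern signature."""
    return hashlib.sha256(" ".join(normalize(code)).encode()).hexdigest()

vulnerable = "if (len > buf_size) return -1;"   # known-bad bounds check
renamed    = "if (n > capacity)  return -1;"    # same bug, renamed variables
patched    = "if (len >= buf_size) return -1;"  # off-by-one fixed

print(signature(vulnerable) == signature(renamed))   # True: rename doesn't hide it
print(signature(vulnerable) == signature(patched))   # False: the fix changes the pattern
```

Real clone-detection algorithms go much further (abstracting literals, tolerating inserted or reordered statements), which is where the signature refinement described above earns its low false-alarm rate.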
Vanir's source-code-based approach also enables rapid scaling across any ecosystem. It can generate signatures for any source files written in supported languages. Vanir's signature generator automatically generates, tests, and refines these signatures, allowing users to quickly create signatures for new vulnerabilities in any ecosystem simply by providing source files with security patches.
Android’s successful use of Vanir highlights its efficiency compared to traditional patch verification methods. A single engineer used Vanir to generate signatures for over 150 vulnerabilities and verify missing security patches across downstream branches – all within just five days.
Vanir for Android
Currently, Vanir supports C/C++ and Java targets and covers 95% of Android kernel and userspace CVEs with public security patches. The Google Android Security team consistently incorporates the latest CVEs into Vanir’s coverage to provide a complete picture of the Android ecosystem’s patch adoption risk profile.
The Vanir signatures for Android vulnerabilities are published through the Open Source Vulnerabilities (OSV) database. This allows Vanir users to seamlessly protect their codebases against the latest Android vulnerabilities without any additional updates. Currently, there are over 2,000 Android vulnerabilities in OSV, and scanning an entire Android source tree takes 10-20 minutes on a modern PC.
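To illustrate what consuming such published data can look like, the sketch below parses a minimal record whose field names follow the OSV schema; the id, repo, and commit values are invented for illustration, and a real Android entry carries Vanir signatures alongside this metadata:

```python
import json

# A minimal record in the OSV schema's shape; values are made up.
record = json.loads("""
{
  "schema_version": "1.6.0",
  "id": "ASB-A-000000000",
  "summary": "Example out-of-bounds write in an Android component",
  "affected": [{
    "package": {"ecosystem": "Android", "name": "example-component"},
    "ranges": [{
      "type": "GIT",
      "repo": "https://android.googlesource.com/example",
      "events": [{"introduced": "0"}, {"fixed": "3f1c2ab"}]
    }]
  }]
}
""")

# Collect the fix commits a patch checker would need to verify.
fixes = []
for affected in record["affected"]:
    for rng in affected["ranges"]:
        for event in rng["events"]:
            if "fixed" in event:
                fixes.append((rng["repo"], event["fixed"]))

print(fixes)  # [('https://android.googlesource.com/example', '3f1c2ab')]
```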
Flexible integration, adoption, and expansion
Vanir is developed not only as a standalone application but also as a Python library. Users who want to integrate automated patch verification into their continuous build or test chain can do so by wiring their build tooling to Vanir’s scanner libraries. For instance, Vanir is integrated into a continuous testing pipeline at Google, ensuring all security patches are adopted in the ever-evolving Android codebase and its first-party downstream branches.
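Vanir's actual library API isn't shown here, but the integration pattern is straightforward to sketch: run the scan in the pipeline and fail the build step whenever a missing patch is reported. The finding shape below is invented for illustration:

```python
def gate(findings):
    """CI exit code: 0 when the scan is clean, 1 when any finding
    reports a missing patch."""
    missing = [f for f in findings if f["status"] == "missing"]
    for f in missing:
        print(f"MISSING PATCH: {f['cve']} in {f['file']}")
    return 1 if missing else 0

# Hypothetical findings; a real report would come from the scanner library.
findings = [
    {"cve": "CVE-2024-0001", "file": "kernel/net/foo.c",         "status": "missing"},
    {"cve": "CVE-2024-0002", "file": "frameworks/base/Bar.java", "status": "patched"},
]

print(gate(findings))  # 1: the build step should fail
```

In a real pipeline the return value would be passed to `sys.exit()` so the build system treats an unpatched vulnerability the same way as a failing test.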
Vanir is fully open source under the BSD-3-Clause license. Because Vanir is not fundamentally limited to the Android ecosystem, you can adapt it to the ecosystem you want to protect with relatively small modifications. In addition, since Vanir’s underlying algorithm is not limited to security patch validation, you can modify the source and use it for other purposes, such as licensed code detection or code clone detection. The Android Security team welcomes contributions to Vanir in any direction that expands its capability and scope. You can also contribute by providing vulnerability data with Vanir signatures to OSV.
Vanir Results
Since early last year, we have partnered with several Android OEMs to test the tool’s effectiveness. Internally, we have integrated the tool into our build system, continuously testing against over 1,300 vulnerabilities. Currently, Vanir covers 95% of all Android, Wear, and Pixel vulnerabilities with public fixes across the Android kernel and userspace. It has a 97% accuracy rate, which has saved our internal teams over 500 hours of patch-validation time to date.
Next steps
We are happy to announce that Vanir is now available for public use. Vanir is not technically limited to Android, and we are actively exploring problems it may help address, such as general C/C++ dependency management via integration with OSV-Scanner. If you are interested in using or contributing to Vanir, please visit github.com/google/vanir. Please join our public community to submit your feedback and questions on the tool.
We look forward to working with you on Vanir!
Police shuts down Manson cybercrime market, arrests key suspects
New Android spyware found on phone seized by Russian FSB
This $3,000 Android Trojan Targeting Banks and Cryptocurrency Exchanges
Latrodectus malware and how to defend against it with Wazuh
Critical Mitel MiCollab Flaw Exposes Systems to Unauthorized File and Admin Access
Europol Shuts Down Manson Market Fraud Marketplace, Seizes 50 Servers
Google DeepMind and World Labs unveil AI tools to create 3D spaces from simple prompts
Google DeepMind and startup World Labs this week both revealed previews of AI tools that can be used to create immersive 3D environments from simple prompts.
World Labs, the startup founded by AI pioneer Fei-Fei Li and backed by $230 million in funding, announced its 3D “world generation” model on Tuesday. It turns a static image into a computer game-like 3D scene that can be navigated using keyboard and mouse controls.
“Most GenAI tools make 2D content like images or videos,” World Labs said in a blog post. “Generating in 3D instead improves control and consistency. This will change how we make movies, games, simulators, and other digital manifestations of our physical world.”
One example is the Vincent van Gogh painting “Café Terrace at Night,” which the AI model used to generate additional content, creating a small area to view and move around in. Others are more like first-person computer games.
World Labs’ 3D “world generation” model turns a static image into a computer game-like 3D scene that can be navigated with keyboard and mouse controls.
World Labs
World Labs also demonstrated the ability to add effects to 3D scenes and control elements such as virtual camera zoom. (You can try out the various scenes here.)
Creators who have tested the technology said it could help cut the time needed to build 3D environments and help users brainstorm ideas much faster, according to a video in the blog post.
The 3D scene builder is a “first early preview” and is not available as a product yet.
Separately, Google’s DeepMind AI research division announced in a blog post Wednesday its Genie 2, a “foundational world model” that enables an “endless variety of action-controllable, playable 3D environments.”
It’s the successor to the first Genie model, unveiled earlier this year, which can generate 2D platformer-style computer games from text and image prompts. Genie 2 does the same for 3D games that can be navigated in first-person view or via an in-game avatar that can perform actions such as running and jumping.
It’s possible to generate “consistent worlds” for up to a minute, DeepMind said, with most of the examples showcased in the blog post lasting between 10 and 20 seconds. Genie 2 can also remember parts of the virtual world that are no longer in view, reproducing them accurately when they’re observable again.
DeepMind said its work on Genie is still at an early stage; it’s not clear when the technology might be more widely available. Genie 2 is described as a research tool that can “rapidly prototype diverse interactive experiences” and train AI agents.
Google also announced that its generative AI (genAI) video model, Veo, is now available in a private preview to business customers using its Vertex AI platform. The image-to-video model will open up “new possibilities for creative expression” and streamline “video production workflows,” Google said in a blog post Tuesday.
Amazon Web Services also announced its range of Nova AI models this week, including AI video generation capabilities; OpenAI is thought to be launching Sora, its text-to-video software, later this month.