Meta's Llama Framework Flaw Exposes AI Systems to Remote Code Execution Risks
TalkTalk investigates breach after data for sale on hacking forum
PayPal to pay $2 million settlement over 2022 data breach
Meta wants everyone to know that it, too, is investing a lot in AI
Not to be outdone by its close rival OpenAI, Meta has announced its plans to spend $60 to $65 billion on AI infrastructure this year, and is building a data center almost as big as Manhattan.
In a Facebook post, Meta CEO Mark Zuckerberg announced his company’s intent to build a 2GW data center, bring roughly 1GW of compute online in 2025, and end the year with more than 1.3 million GPUs.
Included in his post was a blueprint of the planned “Richland Parish Data Center” superimposed on a map of Manhattan (the data center will actually be in northeast Louisiana).
DOJ indicts North Korean conspirators for remote IT work scheme
The US Department of Justice this week announced that it had indicted two North Korean nationals and three other men, accusing them of participating in a conspiracy designed to trick US companies into funding the North Korean regime.
According to the indictment, which was filed in federal court in Miami, the scheme leveraged stolen identity documents and paid henchmen in the US to direct well-paid IT work and company computers to two North Korean men, Jin Sung-Il and Pak Jin-Song. The idea, the Justice Department said, was to funnel money back to the North Korean regime, which has limited opportunities to generate cash through legal means thanks to heavy international sanctions.
The conspiracy, according to the indictment, centers on North Korean nationals posing as foreign workers in other nations, or as US nationals, and gaining employment via online platforms that allow companies to advertise for contract IT workers. Using fake or altered identity documents, the North Koreans took on contracts for several US companies, which were not identified by name in the indictment. Those businesses then shipped company laptops to three US-based co-conspirators, Pedro Ernesto Alonso De Los Reyes, Erick Ntekereze Prince, and Emanuel Ashtor, who, the Justice Department said, installed remote access software on them so that they could be operated by Jin and Pak.
The US-based members of the group also used their own companies as fronts for the conspiracy, invoicing several of the victim firms and funneling payments to the North Koreans. The indictment stated that at least 64 US companies were victimized, and payments from ten of them generated at least $866,255 in revenue over the duration of the scheme, which ran for more than six years.
All five defendants are charged with conspiracy to damage a protected computer, mail and wire fraud, money laundering, and transferring false identification documents. The two North Koreans are additionally charged with violating the International Emergency Economic Powers Act. Each could face up to 20 years in prison.
Highlights risk from North Korea
“The indictments announced today should highlight to all American companies the risk posed by the North Korean government,” said Assistant Director of the FBI’s Cyber Division, Bryan Vorndran, in a statement.
While the indictments announced Thursday characterized this conspiracy as largely focused on diverting money to the heavily embargoed North Korean government, similar efforts by that country have been aimed at compromising corporate secrets and sensitive information. The “laptop farm” — in which US-based associates such as Prince and Ashtor host the provided company laptops in their own homes to conceal the North Korean involvement — has been a known technique for North Korean cyberwarfare since at least 2022, and has been used not just to collect a salary, but to steal data, explore sensitive parts of strategically significant infrastructure, and attempt to extort victimized firms.
The operations are growing in both numbers and sophistication, according to security firms that spoke to CSO in November. One recent case saw a bad actor use deepfake video technology and automated voice translation in a video interview, though the fakery worked so poorly that the interviewers could easily tell something was wrong.
“Her eyes weren’t moving, the lips weren’t in sync, and the voice was mechanical,” Kirkwood told CSO. “It was like something from a 1970s Japanese Godzilla movie.”
Google-owned threat intelligence provider Mandiant told CSO that North Korean IT workers seeking lucrative freelance positions number in the thousands, and although not all are engaged in purely nefarious activity, the number of intrusion incidents linked to North Korean workers is high.
Zyxel warns of bad signature update causing firewall boot loops
Microsoft to deprecate WSUS driver synchronization in 90 days
Trump’s move to lift Biden-era AI rules sparks debate over fast-tracked advances — and potential risks
President Donald Trump’s executive order removing Biden-administration rules governing AI development is being cast as an opening of the AI development floodgates, one that could fast-track advances in the still-new technology but could also pose risks.
Signed on Thursday, the executive order (EO) overturns former President Joe Biden’s 2023 policy, which mandated that AI developers conduct safety testing and share results with the government before releasing systems that could pose risks to national security, public health, or the economy.
The revocation of the 2023 EO shifts federal oversight from mandates to voluntary commitments, reducing requirements such as submitting safety test results and providing notice of large-scale computing cluster acquisitions, enabling less-regulated innovation.
“This means some states may continue to follow the regulatory guidance in the 2023 EO, while others may not,” said Lydia Clougherty Jones, a senior director analyst at Gartner Research.
Trump’s policy states its purpose is to “sustain and enhance America’s dominance in AI,” and promote national security. The EO directs the creation of an “AI Action Plan” within 180 days, led by the Assistant to the President for Science and Technology, the White House AI and Crypto Czar, and the National Security Advisor. Michael Kratsios (former US CTO during the first Trump administration), David Sacks (venture capitalist and former PayPal executive), and US Rep. Mike Waltz (R-FL) have been nominated or appointed, respectively, to these positions.
The EO states part of its purpose is to “enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
Mark Brennan, who leads the global technology and telecommunications industry sector group for the Washington-based law firm Hogan Lovells, said setting a 180-day deadline to develop a new AI action plan means the group drafting the plan will need to quickly “gather input and start writing.”
The mention of “human flourishing” is also “sure to spark diverse interpretations,” Brennan said.
A public-private partnership on AI
Along with the order, Trump also unveiled the Stargate initiative, a public-private venture that would create a new company to build out the nation’s AI infrastructure, including new data centers and new power plants to feed them. Initially, Stargate will pair the US government with OpenAI, Oracle, and SoftBank. The companies will initially invest $100 billion in the project, with plans to reach $500 billion. Trump said the move would create 100,000 US jobs.
Oracle CEO Larry Ellison, for example, said 10 new AI data centers are already under construction. He linked the project to the use of AI for digital health records, noting the technology could help develop customized cancer vaccines and improve disease treatment.
Not everyone, however, is upbeat about the loosening of government oversight of AI development and partnerships with the private sector.
The Stargate announcement, along with the Trump Administration’s reversal of the earlier AI safety order, could replace many federal workers in key public service roles, according to Cliff Jurkiewicz, vice president of global strategy at Phenom, a company specializing in AI-enabled human resources.
“While it’s impressive to see such a significant investment by the federal government and private businesses into the nation’s AI infrastructure, the downside is that it has the potential to disenfranchise federal workers who are not properly trained and ready to use AI,” Jurkiewicz said. “Federal employees need training to use AI effectively; it can’t just be imposed on them.”
Stargate will speed up what Jurkiewicz called “the Great Recalibration” — a shift in how work is performed through a human-AI partnership. Over the next 12 to 18 months, businesses will realize they can’t fully replace human knowledge and experience with technology, “since machines don’t perceive the world as we do,” he said.
The move could put smaller AI companies at a competitive disadvantage by stifling innovation, Jurkiewicz said. “Stargate could also deepen inequities, as those who know how to use AI will have a significant advantage over those who don’t.”
Removing AI regulations, however, won’t inherently lead to a completely unbridled technology that can mimic human intelligence in areas such as learning, reasoning, and problem-solving.
Commercial risk will drive responsible AI, with investment and success shaped by the private market and state regulations, according to Gartner. Industry commitments and consortia will advance AI safety and development to meet societal needs, independent of federal or state policies.
AI unleashed to become Skynet?
Some predict AI will become as ubiquitous as electricity or the internet, in that it will eventually be operating behind the scenes and woven into everyday life, silently powering countless systems and services without drawing much attention.
“I’m sure the whole Terminator thing could happen. I don’t consider it likely,” said John Veitch, dean of the School of Business and Management at Notre Dame de Namur in Belmont, CA. “I see lots of positive things with AI and taking the guardrails off of it.”
Regulating something as transformative as AI is challenging, much like the early internet. “If we had foreseen social media’s impact in 1999, would we have done things differently? I don’t know,” Veitch said.
Given AI’s complexity, less regulation might be better than more, at least for now, he said.
AI is valuable as the US faces an aging population and a shrinking labor force, Veitch said. With skilled workers harder to find and expensive to hire, AI can replace call centers or assist admissions teams, offering cost-effective solutions. For example, Notre Dame de Namur’s admissions team uses generative AI to follow up on enrollment requests.
Trump’s executive order prioritizes “sovereign AI” affecting the private market, while shifting most regulatory oversight to state and local governments. For example, New York plans to restrict government use of AI for automated decisions without human monitoring, while Colorado’s new AI law, effective in 2026, will require businesses to inform consumers when they’re interacting with AI, Gartner’s Jones said.
The revocation of Biden’s 2023 order reduces federal oversight of model development, removing requirements such as submitting safety training results or sending notifications about large-scale computer cluster acquisitions, which could encourage faster innovation, according to Jones. “Thus, it was not a surprise to see the Stargate announcement and the related public-private commitments,” she said.
Strengthening sovereign AI, Jones said, will boost public-private partnerships like Stargate to maintain US competitiveness and tech leadership.
What enterprises should focus on
Now that the regulatory buck has been passed to states, so to speak, organizations should monitor US state AI executive orders, laws, and pending legislation, focusing on mandates that differentiate genAI from other AI techniques and apply to government use, according to a Gartner report.
“We have already seen diverse concerns unique to individual state goals across the nearly 700 pieces of proposed state-level AI legislation in 2024 alone,” Gartner said.
According to Gartner:
- By 2029, 10% of corporate boards globally are expected to use AI to challenge key executive decisions.
- By 2027, Fortune 500 companies will redirect $500 billion from energy operating expenses to microgrids to address energy risks and AI demands.
- By 2027, 15% of new applications will be fully generated by AI, up from 0% today.
Executives should identify patterns in new laws, especially those addressing AI biases or errors, and align responsible AI with company goals. Companies are also being urged to document AI decision-making and manage high-risk use cases to ensure compliance and reduce harm.
Organizations should also assess opportunities and risks from federal investments in AI and IT modernization. For global operations, companies will need to monitor AI initiatives in regions like the EU, UK, China, and India, Gartner said.
“Striking a balance between AI innovation and safety will be challenging, as it will be essential to apply the appropriate level of regulation,” the researcher said. “Until the new administration determines this balance, state governments will continue to lead the way in issuing regulations focusing on AI innovation and safety-centric measures that impact US enterprises.”
OpenAI’s new Operator agent can use the web for you
OpenAI is releasing a preview version of its first AI agent, Operator, which is specifically designed to use a web browser and can, for example, book a table at a restaurant for the user on its own. (An AI agent is a system that can be given a task and then work on it independently.)
While it works, the user can either watch Operator navigate the web or do something else entirely. In some cases, Operator will wait for the user — for example, if it needs to log in somewhere or confirm that an email should be sent.
OpenAI says its Operator will not be able to perform malicious tasks. The company also notes that the tool currently has problems with more complex user interfaces, such as managing a calendar or creating a slideshow for a presentation.
Operator will initially be available to ChatGPT Pro users, the most expensive ChatGPT tier at $200 a month. ChatGPT Plus users will also get access to the agent in the coming months.
The AI agent will first be available in the US, followed by other countries.
Subaru Starlink flaw let hackers hijack cars in US and Canada
Hackers use Windows RID hijacking to create hidden admin account
Smile! You can now control your Chromebook with your face
Google has started rolling out a number of new features in Chrome OS, according to The Verge, including one that allows a user to control their Chromebook using head movements (to move the mouse pointer) and facial expressions (to click or activate dictation).
To use the new feature, your Chromebook must have at least 8GB of RAM. Separately, about twenty new Chromebook models are expected to launch in 2025.
Epic v. Apple — are you not entertained?
It’s game on, as Epic takes a direct stab at both Apple and Google with its latest overture to woo developers to offer content via the European Epic Games Store: the company will pick up the fees for participants in its free games program so customers can play at no cost, an attempt to exploit the EU’s Digital Markets Act (DMA), which has forced Apple to open up more.
Apple charges developers who want to sell games outside of the App Store a small fee once a game exceeds 1 million installations in a year. Apple notes that the Core Technology Fee is not applied in numerous cases:
- No fee is applied to free apps.
- No fee is applied to apps from nonprofits, educational institutions, or government entities.
- There’s no charge for the first 1 million app installs.
- Small developers (classified as those earning under $10 million in global business revenue) get a three-year fee on-ramp, which means they won’t pay the CTF during the first three years.
- No fees are applied for patches and updates, and users are not charged when they reinstall their apps using iCloud transfer.
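For a rough sense of scale (and assuming the CTF stays at its announced rate of €0.50 per first annual install above the threshold), a game that racks up 1.2 million first installs in a year would owe roughly 200,000 × €0.50, or about €100,000, for that year; a title that stays under the million-install mark owes nothing.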
With those restrictions, it’s pretty clear that as taxes go, the vast majority of entities don’t pay any tax at all when they choose to distribute applications using the CTF process.
Which puts Epic’s offer in perspective.
What Epic is doing
Here’s that offer:
- Epic’s mobile store will open to any game developer seeking distribution later this year.
- For one year, Epic will pay the Apple Core Technology Fee for participants in its free games program.
- The first 20 games to make it to Epic’s store are set to appear in the coming weeks.
“Our aim here isn’t just to launch a bunch of different stores in different places, but to build a single, cross-platform store in which, within the era of multi-platform games, if you buy a game or digital items in one place, you have the ability to own them everywhere,” Epic CEO Tim Sweeney told The Verge.
The idea that you can buy once to play anywhere is probably the best argument for openness I’ve heard yet. We’ll see if it turns out that way.
Another strong argument for Epic’s approach is that Apple’s CTF arrangement means a developer of multiple apps is more likely to cross that 1 million download threshold, as it is applied across their app catalog rather than per app. Apple also seeks the CTF for downloads across every store.
Who really benefits?
All the same, once you do the math, it should be clear that the vast majority of developers already pay nothing when they choose to offer up apps — including games — through Apple’s App Store, or even via third-party stores in the regions in which they can. In fact, the games publishers most likely to pay these fees will be the largest publishers, particularly those who have succeeded in developing strong in-app economies. (Some games, such as Diablo Immortal, really take that to the extreme, with gamers complaining the entire game is overtly built around convincing people to spend money in-game.)
But what Epic is doing with this offer is directly pitching its own app distribution service to the largest games developers, who are already making good money through Apple’s ecosystem. As I see it, the offer gives Epic a chance to grab a slice of lucrative future income while hitting Apple where it feels it most — revenue. In business terms, that makes sense.
At the same time, by positioning itself as some kind of freedom fighter, Epic manages to make this commercially driven grab for revenue look like the act of the good guy in the story. Though, as most conflict-resolution experts will probably tell you, everyone on every side is usually the hero of their own story.
Apple thinks it is a hero, too.
What’s really going on here is a game of millionaires, with well-heeled companies on all sides strenuously negotiating for different business terms so that revenue is shared differently.
Epic is making a probably accurate guess that the biggest App Store earners don’t care much that, if they don’t pay more, smaller developers will have to do so. Nor, I think, do these commercial entities worry much that, while consumers might get slightly better deals in some ways, they will eventually find they end up paying more for the same experience. Smaller developers saddled with rising platform-related development costs will just charge customers more, and Apple will seek to guard its own bottom line.
It always does.
End result?
Sure, you might find some developers racing off to Epic’s store, convinced by all the “Apple Tax” rhetoric, until they eventually find they are paying a different tax to Epic instead. Some larger developers will go all-in on third-party outlets, offering inducements to bring consumers across. It won’t be too long until Epic reaches its target of 100 million store users, as people will probably follow the content.
Eventually, people will get their software at Epic’s, Apple’s, and other stores, all with varying tech support and security levels, and Apple will still receive the first panic calls from the consumers who don’t understand who to call when things go wrong.
Apple will legitimately argue that it should be compensated for this tech support, as well as platform and ecosystem development. However, if it fails to win that argument, you can anticipate the cost of both developing on and purchasing its platforms will increase. Don’t forget that it wasn’t really that long ago that Apple charged for operating system upgrades. It could do so again.
What I think will happen is that in exchange for buying apps you can use across platforms (good), and slightly better income for developers (also good), most prices won’t change that much, other costs will increase, and while Apple’s App Store power may be mitigated somewhat, the security environment will degrade.
No matter, however, as at least a handful of millionaires will have a few more dollars, while the cost of that wealth transfer will ultimately be shouldered by the group all sides claim to care about, consumers who simply want to use apps on their devices. This will not be a redemption song.
You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
Hacker infects 18,000 "script kiddies" with fake malware builder
Microsoft: Outdated Exchange servers fail to auto-mitigate security bugs
Managed Detection and Response – How are you monitoring?
Hackers get $886,250 for 49 zero-days at Pwn2Own Automotive 2025
RANsacked: Over 100 Security Flaws Found in LTE and 5G Network Implementations
2025 State of SaaS Backup and Recovery Report
MDM and genAI: A match made in Heaven — or something less?
Unless you’ve been in a coma for the past couple of years, you’ve seen countless headlines about how generative AI (genAI) is revolutionizing how we work, automating some jobs out of existence while empowering workers to move into new and more creative roles. GenAI features have been added to almost every type of business app there is, some with more real-world utility than others.
The mobile device management (MDM) market is no exception. But is genAI really changing how IT professionals manage and secure devices in the workplace?
Several major MDM vendors have announced or introduced genAI feature sets in recent months. But just what they mean by “AI” varies widely when it comes to their solutions and workflows, so it’s often unclear just how useful these additions will be for IT.
Here’s what to look for when eyeing the overall market and deciding which direction to take.
Working with MDM data
One obvious way genAI can deliver value is by making it easier (and faster) to request, interpret, summarize, and react to data. This isn’t the most ground-breaking function, but it can be extremely useful and save a lot of time. Simply being able to ask about a given device or policy — or to query information about a large swath of devices and their configurations and use — is a big advantage, particularly if you want granular data about various groups or subsets of devices you manage.
MDM users aren’t so much getting new information as getting to ask questions about that data in a more straightforward way. Instead of having to run a series of queries and reports and then collating and summarizing that information, IT admins can simply ask questions about device configuration, usage, policies and groups.
Of the major MDM options available, Intune with Copilot from Microsoft seems to have gone further in this area than any other. In addition to natural language processing, Copilot has several Intune-specific prompts that users can try, such as comparing multiple devices to each other or against corporate policies.
This might not take the place of running regular reports about device inventory or compliance, but it can make it much easier if you need to get specific information quickly in addition to those regular reports.
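To make the pattern concrete, here is a minimal sketch of the general idea (not Microsoft's Copilot implementation, whose internals aren't public): a device inventory export is handed to a general-purpose LLM along with a plain-English question. The OpenAI Python client stands in for whatever model sits behind a given product, and the inventory data and model name are purely illustrative.

```python
import json
from openai import OpenAI  # generic LLM client used only as a stand-in here

# Illustrative inventory export; in practice this would come from your MDM's reporting API.
devices = [
    {"name": "SALES-LT-014", "os": "iOS 17.6",   "encrypted": True,  "last_checkin": "2025-01-20"},
    {"name": "ENG-MBP-221",  "os": "macOS 14.2", "encrypted": False, "last_checkin": "2024-12-02"},
]

question = "Which devices are unencrypted or haven't checked in during the last 30 days?"

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "Answer questions about the JSON device inventory you are given."},
        {"role": "user", "content": f"Inventory:\n{json.dumps(devices)}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

The point is less the model call than the workflow: the admin asks one question instead of building a report, and the answer comes back scoped to the data they supplied.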
Automating tasks
Another common use for genAI is to automate tasks. Most MDM suites already offer some degree of automation, but creating those automations in the first place often involves a learning curve. Being able to describe a task that needs to be performed (repeatedly or for specific needs) can be a game changer.
VMware has made this a major focus, allowing its tool to create and run scripts based on natural-language prompts.
Put natural language and automation together and you have a powerful way to allocate resources. One use case: being able to automate app licensing based on how devices are actually being used as opposed to how you might expect them to be used.
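As a sketch of what that licensing use case might look like once a natural-language request ("reclaim licenses for apps nobody has launched in 90 days") has been turned into an automation, the snippet below is one plausible shape for the logic. The get_app_usage and revoke_license helpers are hypothetical stand-ins for whatever reporting and license-management calls your MDM actually exposes.

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)  # reclaim licenses for apps unused this long

def get_app_usage(app_id: str) -> list[dict]:
    """Hypothetical stand-in for an MDM reporting call: per-device last-launch times."""
    return [
        {"device": "SALES-LT-014", "last_launched": datetime(2025, 1, 18, tzinfo=timezone.utc)},
        {"device": "ENG-MBP-221",  "last_launched": datetime(2024, 9, 3,  tzinfo=timezone.utc)},
    ]

def revoke_license(app_id: str, device: str) -> None:
    """Hypothetical stand-in for the MDM's license-management call."""
    print(f"Reclaiming {app_id} license from {device}")

def reclaim_stale_licenses(app_id: str) -> None:
    cutoff = datetime.now(timezone.utc) - STALE_AFTER
    for record in get_app_usage(app_id):
        if record["last_launched"] < cutoff:
            revoke_license(app_id, record["device"])

reclaim_stale_licenses("com.example.design-suite")
```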
Threat detection
Threat and malware detection, along with policy compliance, is one of the biggest ways genAI can unlock value when it comes to MDM. JAMF, Kandji and Intune all boast features that leverage genAI’s ability to retrieve, interpret, and act on information indicating suspicious or malicious activity. Not only do MDM tools give you information about potential threats, but responses can be automated. This means that if something looks concerning, access to resources can be immediately halted and the user informed they must work with IT to regain access to information and features.
This allows for a much more proactive approach to security, particularly when threat detection is based on user behavior not just on configuration or policy compliance data.
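A rough sketch of that automated-response loop might look like the following, with block_access and notify_user as hypothetical stand-ins for a product's conditional-access and user-messaging hooks; the risk threshold is illustrative, since real products apply their own behavioral models.

```python
RISK_THRESHOLD = 70  # illustrative score; real products apply their own risk models

def block_access(device_id: str) -> None:
    """Hypothetical stand-in for revoking tokens / marking the device non-compliant."""
    print(f"Access blocked for {device_id}")

def notify_user(device_id: str, message: str) -> None:
    """Hypothetical stand-in for the MDM's user-messaging channel."""
    print(f"[{device_id}] {message}")

def handle_signal(device_id: str, risk_score: int, reason: str) -> None:
    # Cut off access first, then tell the user to contact IT to get it restored.
    if risk_score >= RISK_THRESHOLD:
        block_access(device_id)
        notify_user(device_id, f"Access suspended ({reason}). Contact IT to restore it.")

handle_signal("ENG-MBP-221", risk_score=85, reason="credential use from an unusual location")
```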
Device troubleshooting and support
One of the features Copilot in Intune boasts is the ability to identify errors, their causes, and potential resolutions. Copilot can provide general device, configuration, or app information related to troubleshooting and even provide information about specific errors encountered on a device. It’s not a complete self-service tool or something that will walk you through each step of troubleshooting a specific problem. But being able to find relevant device data — and get a suggested explanation for a problem — are real advantages when it comes to supporting mobile devices in your environment. This can save significant time (and user frustration) when it comes to responding to a problem.
Troubleshooting MDM systems and exploring functionality
Also in the support bucket is the ability to resolve problems with configurations and learn more about how to proactively work with your MDM software. This could represent a big win for organizations, particularly as you onboard additional IT staffers, switch MDM products, or seek to remain updated on the latest capabilities your solution offers.
JAMF and Hexnode are both implementing chatbots designed to help IT workers troubleshoot problems, learn about new features, and understand how to use them.
JAMF gives its genAI model access to an exceptional set of resources, including product information, knowledge base articles, curated posts from its JAMF Nation forums, sessions from the company’s user conference, and selected Apple support documents.
Should you switch MDM providers based on genAI tweaks?
Though genAI technology is still evolving, it’s good to see each MDM vendor staking out its own territory when it comes to adding new functions. (It’ll be interesting to see if these differing tacks become a meaningful area of differentiation or whether everyone will eventually end up offering essentially the same feature set.)
As for the market right now, it’s too soon to say that AI alone should influence whether you continue with your current provider or consider switching. If you’re already actively planning to migrate to a new vendor, the AI roadmaps of the competition are worth taking into consideration (with a grain of salt). If you’re satisfied with what you have, taking a wait-and-see approach to how this plays out over the next couple of years makes more sense than any rash moves.