Agregátor RSS

China-Linked Earth Alux Uses VARGEIT and COBEACON in Multi-Stage Cyber Intrusions

The Hacker News - 1 Duben, 2025 - 13:03
Cybersecurity researchers have shed light on a new China-linked threat actor called Earth Alux that has targeted various key sectors such as government, technology, logistics, manufacturing, telecommunications, IT services, and retail in the Asia-Pacific (APAC) and Latin American (LATAM) regions. "The first sighting of its activity was in the second quarter of 2023; back then, it was
Kategorie: Hacking & Security

New Case Study: Global Retailer Overshares CSRF Tokens with Facebook

The Hacker News - 1 Duben, 2025 - 13:03
Are your security tokens truly secure? Explore how Reflectiz helped a giant retailer to expose a Facebook pixel that was covertly tracking sensitive CSRF tokens due to human error misconfigurations. Learn about the detection process, response strategies, and steps taken to mitigate this critical issue. Download the full case study here.  By implementing Reflectiz's recommendations, the
Kategorie: Hacking & Security

Case Study: Are CSRF Tokens Sufficient in Preventing CSRF Attacks?

The Hacker News - 1 Duben, 2025 - 13:03
Explore how relying on CSRF tokens as a security measure against CSRF attacks is a recommended best practice, but in some cases, they are simply not enough. Introduction As per the Open Web Application Security Project (OWASP), CSRF vulnerabilities are recognized as a significant threat and are historically part of their top risks. The implications of CSRF attacks are far-reaching and could [email protected]
Kategorie: Hacking & Security

China-Linked Earth Alux Uses VARGEIT and COBEACON in Multi-Stage Cyber Intrusions

The Hacker News - 1 Duben, 2025 - 13:03
Cybersecurity researchers have shed light on a new China-linked threat actor called Earth Alux that has targeted various key sectors such as government, technology, logistics, manufacturing, telecommunications, IT services, and retail in the Asia-Pacific (APAC) and Latin American (LATAM) regions. "The first sighting of its activity was in the second quarter of 2023; back then, it was Ravie Lakshmananhttp://www.blogger.com/profile/[email protected]
Kategorie: Hacking & Security

Ente Photos 1.0

AbcLinuxu [zprávičky] - 1 Duben, 2025 - 12:46
Ente, tj. open source služba pro ukládání a sdílení fotografií a videí, alternativa k výchozím aplikacím od Googlu a Applu, dospěla do verze 1.0.
Kategorie: GNU/Linux & BSD

Code::Blocks 25.03

AbcLinuxu [zprávičky] - 1 Duben, 2025 - 12:25
Po pěti letech byla vydána nová verze 25.03 multiplatformního vývojového prostředí Code::Blocks (Wikipedie). Přehled novinek v changelogu.
Kategorie: GNU/Linux & BSD

Čekali jste na Apple Intelligence pro iPhony? Tady máte osm nových emoji a čekejte dál

Živě.cz - 1 Duben, 2025 - 12:15
** Zpožděná Apple Intelligence dorazila po půl roce do Evropy společně s iOS 18.4 ** Čeština se samozřejmě nekoná, a tak se můžete alespoň těšit na nové smajlíky ** Umělá inteligence od Applu stále jen marně dohání konkurenci
Kategorie: IT News

AI agents can (and will) be scammed

Computerworld.com [Hacking News] - 1 Duben, 2025 - 12:00

Generative AI’s newest superstars — independent-acting agents — are on a tear. Organizations are adopting the technology at a staggering rate because they can use APIs or be embedded with standard apps and automate all kinds of business processes.

An IDC report predicts that within three years, 40% of Global 2000 businesses will be using AI agents and workflows to automate knowledge work, potentially doubling productivity where successfully implemented.

Gartner Research is similarly bullish on the technology. It predicts AI agents will be implemented in 60% of all IT operations tools by 2028, sharply up from less than 5% at the end of 2024. And it expects total agentic AI sales to reach $609 billion over the next five years, Gartner said.

Agentic AI is gaining popularity so quickly because it can autonomously make decisions, take actions, and adapt to achieve specific business goals. AI agents like OpenAI’s Operator, DeepSeek, and Alibaba’s Qwen aim to optimize workflows with minimal human oversight.

Essentially, AI agents or bots are becoming a form of digital employee. And, like human employees, they can be gamed and scammed.

For instance, there have been reports of AI-driven bots in customer service being tricked into transferring funds or sharing sensitive data due to social engineering tactics. Similarly, AI agents handling financial transactions or investments could be vulnerable to hacking if not properly secured.

In November, a cryptocurrency user tricked an AI agent named Freysa to send $50,000 to their account. The autonomous AI agent had been integrated with the Base blockchain, designed to manage a cryptocurrency prize pool.

To date, large-scale malicious abuse of autonomous agents remains limited, but it’s a nascent technology. Experimental instances show potential for misuse through prompt injection attacks, disinformation, and automated scams, according to Leslie Joseph, a principal analyst with Forrester Research.

Avivah Litan, a vice president and distinguished analyst at Gartner Research, said AI Agent mishaps “are still relatively new to the enterprise. [But] I have heard of plenty potential mishaps discovered by researchers and vendors.”

And AI agents can be weaponized for cybercrime.

Gartner Research

“There will be a great AI awakening — people learning how easily AI agents can be manipulated to enact data breaches,” said Ev Kontsevoy, CEO of Teleport, an identity and access management firm. “I think what makes AI agents so unique, and potentially dangerous, is that they represent the first example of software that is vulnerable to both malware and social engineering attacks. That’s because they’re not as deterministic as a typical piece of software.”

Unlike a large language model (LLM) or genAI tools, which usually focus on creating content such as text, images, and music, agentic AI is designed to emphasize proactive problem-solving and complex task execution, much as a human would. The key word is “agency” — software that can act on its own.

Like humans, AI agents can be unpredictable and easily manipulated by creative prompts. That makes them too dangerous to be given unrestricted access to data sources, Kontsevoy said.

Unlike human roles, which have defined permissions, similar constraints haven’t been applied to software. But with AI capable of unpredictable behavior, IT shops are finding they need to impose limits. Leaving AI agents with excessive privileges is risky, as they could be tricked into dangerous actions, such as stealing customer data — something traditional software couldn’t do.

Organizations, Kontsevoy said, must actively manage AI agent behavior and continually update protective measures. Treating the technology as fully mature too soon could expose organizations to significant operational and reputational risks.

Joseph agreed, saying businesses using AI agents should prioritize transparency, enforce access controls, and audit agent behavior to detect anomalies. Secure data practices, strong governance, frequent retraining, and active threat detection can reduce risks with autonomous AI agents.

Growing use cases amplify vulnerabilities

According to Capgemini, 82% of organizations plan to adopt AI agents over the next three years, primarily for tasks such as email generation, coding, and data analysis. Similarly, Deloitte predict enterprises using AI agents this year will grow their use of the technology by 50% over the next two years.

Benjamin Lee, a professor of engineering and computer science at the University of Pennsylvania, called agentic AI a potential ”paradigm shift.” That’s because the agents could boost productivity by enabling humans to delegate large jobs to an agent instead of individual tasks.

But by virtue of their autonomy, Joseph said, AI agents amplify vulnerabilities around unintended actions, data leakage, and exploitation through adversarial prompts. Unlike traditional AI/ML models with limited attack surfaces, agents operate dynamically, making oversight harder.

“Unlike static AI systems, they can independently propagate misinformation or rapidly escalate minor errors into broader systemic failures,” he said. “Their interconnectedness and dynamic interactions significantly raise the risk of cascade failures, where a single vulnerability or misstep triggers a domino effect across multiple systems.”

Some common ways AI agents can be targeted include:

  1. Data Poisoning: AI models can be manipulated by introducing false or misleading data during training. This can affect the agent’s decision-making process and potentially cause it to behave maliciously or incorrectly.
  2. Adversarial Attacks: These involve feeding the AI agent carefully crafted inputs designed to deceive or confuse it. In some cases, adversarial attacks can make an AI model misinterpret data, leading to harmful decisions.
  3. Social Engineering: Scammers might exploit human interaction with AI agents to trick users into revealing personal information or money. For example, if an AI agent interacts with customers, a scammer could manipulate it to act in ways that defraud users.
  4. Security Vulnerabilities: If AI agents are connected to larger systems or the internet, they can be hacked through security flaws, enabling malicious actors to gain control over them. This can be particularly concerning in areas like financial services, autonomous vehicles, or personal assistants.

Conversely, if the agents are well-designed and governed, their very AI’s autonomy could be used to enable adaptive security, allowing them to identify and respond to threats.

Gartner’s Litan pointed to emerging solutions called “guardian agents” — autonomous systems that can oversee agents across domains. They ensure secure, trustworthy AI by monitoring, analyzing, and managing agent actions, including blocking or redirecting them to meet predefined goals.

An AI guardian agent governs AI applications, enforcing policies, detecting anomalies, managing risks, and ensuring compliance within an organization’s IT infrastructure, according to business consultancy EA Principles.

While guardian agents are emerging as one method of keeping agentic AI in line, AI agents still need strong oversight, guardrails, and ongoing monitoring to reduce risks, according to Forrester’s Joseph.

“It’s very important to remember that we are still very much in the Wild West era of agentic AI,” Joseph said. “Agents are far from fully baked, demanding significant maturation before organizations can safely adopt a hands-off approach.”

Kategorie: Hacking & Security

NASA finds generative AI can’t be trusted

Computerworld.com [Hacking News] - 1 Duben, 2025 - 12:00

Although many C-suite and line-of-business (LOB) execs are doing everything they can to focus on generative AI (genAI) efficiency and flexibility — and not about how often the technology delivers wrong answers — IT decision-makers can’t afford to do the same thing.

This isn’t just about hallucinations, although the increasing rate at which these kinds of errors crop up is terrifying. This lack of reliability is primarily caused by elements from one of four buckets:

  • Hallucinations, where genAI tools simply make up answers;
  • Bad training data, whether that means data that’s insufficient, outdated, biased or of low-quality;
  • Ignored query instructions, which is often a manifestation of biases in the training data;
  • Disregarded guardrails, (For a multi-billion-dollar licensing fee, one would think the model would at least try to do what it is told to do.)

Try and envision how your management team would react to a human employee who pulled these kinds of stunts. Here’s the scenario: the boss in his or her office with the problematic employee and that employee’s supervisor.

Exec: “You have been doing excellent work lately. You are far faster than your colleagues and the number of tasks you have figured out how to master is truly amazing. But 20 times over the last month, we found claims in your report that you simply made up. That is just not acceptable. If you promise to never do that again, everything should be fine.”

Supervisor: “Actually, boss, this employee has certain quirks and he is definitely going to continue to make stuff up. So, yes, this will not go away. Heck, I can’t even promise that this worker won’t make up stuff far more often.”

Exec: “OK. We’ll overlook that. But my understanding is that he ignored your instructions repeatedly and did only what he wanted. Can we at least get him to stop doing that?”

Supervisor: “Nope. That’s just what he does. We knew that when we hired him.”

Exec: “Very well. But on three occasions this month, he was found in the restricted part of the building where workers need Top Secret clearance. Can you at least get him to abide by our rules?”

Supervisor: “Nope. And given that his licensing fee was $5.8 billion this year, we’ve invested too much to turn back.”

Exec: “Fair enough. Carry on.”

And yet, that is precisely what so many enterprises are doing today, which is why a March report from the US National Aeronautics and Space Administration (NASA) is so important.

The NASA report found that genAI could not be relied on for critical research.

The “point” of conducting the assessment was “to filter out systems that create unacceptable risk. Just as we would not release a system with the potential to kill into service without performing appropriate safety analysis and safety engineering activities, we should not adopt technology into the regulatory pipeline without acceptable reasons to believe that it is fit for use in the critical activities of safety engineering and certification,” the NASA report said. “There is reason to doubt LLMs as a technology for writing or reviewing assurance arguments. LLMs are machines that BS, not machines that think, and thinking is precisely the task that must be automated if the technology is to improve safety or lower cost.”

In a wonderful display of scientific logic, the report wondered — in a section that should become required reading for CIOs on down the IT food chain — what genAI models could be truly used for.

“It’s worth mentioning the obvious potential alternative to using empirical research to establish the fitness for use of a proposed LLM-based automation before use, namely putting it into practice and seeing what happens. That’s certainly been done before, especially in the early history of industries such as aviation,” NASA researchers wrote. 

“But it is worth asking two questions here: (1) How can this be justified when there are existing practices we are more familiar with? and (2) How would we know whether it was working out? The first question might turn largely on the specifics of a proposed application and the tolerability of the potential harm that failure of the argument-based processes themselves might lead to: if one can find circumstances where failure is an option, there is more opportunity to use something unproven.”

The report then points out the logical contradiction in this kind of experimentation: “But that leaves the second question and raises a wrinkle: ongoing monitoring of less-critical systems is often also less rigorous than for more critical systems. Thus, the very applications in which it is most possible to take chances are those that produce the least reliable feedback about how well novel processes might have worked.”

It also pointed out the flaw in assuming this kind of model would know when circumstances would make a decision a bad idea. “Indeed, it is in corner cases that we might expect the BS to be most likely erroneous or misleading. Because the LLM does not reason from principles, it has no capacity for looking at a case and recognizing features that might make the usual reasoning inapplicable. Training data comprised of ISO 26262-style automotive safety arguments wouldn’t prepare an LLM to recognize, as a human would, that a submersible Lotus is a very different kind of vehicle than a typical sedan or light utility vehicle, and thus that typical reasoning — e.g., about the appropriateness of industry-standard water intrusion protection ratings — might be inapplicable.”

These same logical questions should apply to every enterprise. If the mission-critical nature of sensitive work would preclude genAI use — and if the low monitoring involved in the typical low-risk work makes it an unfit environment for experimenting — where should it be used? 

Gartner analyst Lauren Kornutick agreed these can be difficult decisions, but CIOs must take the reins and act as the “voice of reason.”

Enterprise technology projects in general “can fail when the business is misaligned on expectations versus reality, so someone needs to be a voice of reason in the room. (The CIO) needs to be helping drive solutions and not just running to the next shiny thing. And those are some very challenging conversations to have,” Kornutick said. 

“These are things that need to go to the executive committee to decide the best path forward,” she said. “Are we going to assume this risk? What’s the trade-off? What does this risk look like against the potential ROI? They should be working with the other leaders to align on what their risk tolerance is as a leadership team and then bring that to the board of directors.”

Rowan Curran, senior analyst at Forrester, suggested a more tactical approach. He suggests IT decision-makers insist they be far more involved in the beginning, when each business unit discusses where and how they will use genAI technology.

“You need to be very particular about the new use case they are going for,” Curran said. “Push governance much further to the left, so when they are developing the use case in the first place, you are helping them determine the risk and setting data governance controls.”

Curran also suggested that teams should take genAI data as a starting point and nothing more. “Do not rely on it for the exact answer.”

Trust genAI too much, in other words, and you might be living April Fool’s Day every day of the year.

Kategorie: Hacking & Security

V noci odstartovala mise Fram2 bitcoinového milionáře. Čtyřčlenná posádka se poprvé podívala na polární oběžnou dráhu

Živě.cz - 1 Duben, 2025 - 11:45
Uprostřed noci odstartoval z floridské rampy LC-39A další falkon. Na palubě nicméně tentokrát nenesl další várku starlinků nebo špionážní satelity, ale obytný Crew Dragon Resilience a soukromou misi se čtyřčlennou posádkou Fram2 . Video: Přelet mise Fram2 nad zamrzlou polární oblastí z paluby Crew ...
Kategorie: IT News

Amazon, OpenAI, and China’s Zhipu unveil new AI tools amid intensifying competition

Computerworld.com [Hacking News] - 1 Duben, 2025 - 10:57

A wave of new AI products is hitting the market, signaling a shift toward more autonomous, task-completing systems that could reshape how businesses and consumers interact with digital services.

Amazon has unveiled Nova Act, an AI agent designed to operate a web browser much like a human user. In parallel, OpenAI said it will release an open-weight language model.

Meanwhile, China’s Zhipu AI introduced a free AI assistant aimed at strengthening its position in the domestic market and competing with Western tech giants.

The announcements reflect a growing competitive push among US and Chinese firms to define the next generation of AI capabilities, with a particular focus on agentic AI -systems that can plan, reason, and take action on behalf of users.

These tools are expected to become central across industries, including productivity applications, customer service platforms, and e-commerce systems.

AI agent race heats up


Agentic AI – systems that can autonomously perform tasks – has captured the imagination of the tech industry, but experts caution that the technology is still maturing before it can be widely deployed in high-stakes enterprise environments.

“The launch of Amazon Nova Act seems to be an interesting one for use cases that are public facing which don’t have too much of risk associated with it (even if operations go slightly wrong),” said Sharath Srinivasamurthy, associate vice president of Research at IDC. “The potential for enterprise use cases for Agentic AI is huge but needs some time to evolve as every enterprise is different, workflows are different, and risks associated with operations going wrong are high.”

While Amazon’s Nova is designed to tackle relatively low-risk browser-based tasks, OpenAI’s move toward releasing an open-weight model could have more immediate implications for enterprise adoption strategies.

An open-weight language model provides public access to its trained parameters, enabling developers to adapt the model for custom applications without needing the original data used during training.

In contrast, open-source models go a step further by releasing the underlying code, datasets, and training processes.

“The world is moving towards open models as enterprises prefer to have more control over what they are using,” Srinivasamurthy said. “With Meta and DeepSeek gaining traction due to their open nature and democratization of AI. A shift towards open models was a matter of time for a pioneer like OpenAI. This move will make the market even more competitive.”

As companies reassess their AI strategies, the tension between control, performance, and innovation is expected to shape how and when agentic AI tools are adopted at scale.

Reshaping enterprise strategy

The new launches underscore the shift in how AI is poised to integrate into the enterprise technology stack. Analysts say these developments could accelerate adoption while also forcing businesses to rethink control, customization, and competitive strategy.

“The recent developments from OpenAI, Amazon, and Zhipu AI signal a transformative period in the AI industry,” said Abhivyakti Sengar, practice director at Everest Group. “Amazon’s Nova Act has the potential to redefine enterprise workflows by embedding AI directly into browser environments, streamlining tasks across various sectors.”

Beyond automation, OpenAI’s decision to offer an open-weight model could mark a turning point in how organizations approach AI deployment, particularly in regulated industries or complex internal environments.

Zhipu AI’s emergence as a serious player from China introduces new geopolitical considerations for global companies operating in or partnering with firms in the region.

“OpenAI’s release of an open-weight model democratizes AI development, offering enterprises enhanced customization capabilities while also posing new security considerations,” Sengar added. “Meanwhile, the ascent of Zhipu AI highlights China’s escalating role in AI innovation, urging multinational firms to thoughtfully assess partnerships and competitive strategies in this evolving landscape.” Together, these shifts signal not only the rapid evolution of agentic AI, but also the growing need for enterprises to evaluate AI readiness, governance, and the strategic value of openness in a highly competitive global market.

Kategorie: Hacking & Security

GCHQ intern took top secret spy tool home, now faces prison

The Register - Anti-Virus - 1 Duben, 2025 - 10:51
Not exactly Snowden levels of skill

A student at Britain's top eavesdropping government agency has pleaded guilty to taking sensitive information home on the first day of his trial.…

Kategorie: Viry a Červi

Chytrá zásuvka, která v Česku skutečně odpojuje fázový vodič. Teď zlevnila na 498 Kč

Živě.cz - 1 Duben, 2025 - 10:45
Není nejhezčí ani nejkompaktnější, ale je jedna z mála, která bere ohled na tradiční české elektrické zapojení. Chytrá zásuvka Smart Tuya AP-SP02 totiž rozpojuje fázový vodič na levé straně. Připojený spotřebič tak opravdu nebude pod napětím. Běžně stojí 598 Kč, teď ji stejnojmenný obchod a ...
Kategorie: IT News

Bláznivý Apríl, bláznivě nízké ceny Windows 11! Získejte jen za €20

AbcLinuxu [články] - 1 Duben, 2025 - 10:00

Bláznivé ceny v bláznivém měsíci plném změn počasí, které se v dubnu mění každou chvíli. Bláznivě nízké jsou teď ceny i na Goodoffer24.com, tak kupte klíč k doživotní licenci Windows 11 CDkey a Office.

Kategorie: GNU/Linux & BSD

Mindfactory: Prodeje Radeonu RX 9070 XT dosahují všech GeForce RTX 50 dohromady

CD-R server - 1 Duben, 2025 - 10:00
Německá Mindfactory, která po stěhování skladu a restrukturalizaci obnovila činnost, opět zveřejňuje statistiky prodeje. Potvrzuje, že prodeje Radeonu RX 9070 XT jsou na špičce…
Kategorie: IT News

Ventusky zrychlilo srážkový radar a Windy má mapu pro piloty. Novinky v českých meteoglóbech

Živě.cz - 1 Duben, 2025 - 09:45
České meteoglóby Windy a Ventusky se pochlubily další várkou novinek. Začněme druhým jmenovaným, Ventusky zrychlilo srážkový radar a teď ho nabízí s pětiminutovým intervalem. A to včetně předpovědi zhruba pro příští hodinu a půl. Srážkový radar na Ventusky Stejnou frekvenci najdeme i u českého ...
Kategorie: IT News

Dechberoucí obrázky v ChatuGPT můžete vytvářet už i s bezplatnými účty

Živě.cz - 1 Duben, 2025 - 09:35
Oživeno 1. dubna 2025 | OpenAI zpřístupnilo nový generátor obrázků v ChatuGPT také bezplatným účtům. Lze s nimi vygenerovat tři výstupy za den. [TWITTER:id='1906867488320843823'||] Je to ale zvláštní krok. Šéf OpenAI Sam Altman si totiž před několika dny posteskl , jak generování obrázků „taví ...
Kategorie: IT News

Neoretro: HP Pavilion ze4300 - AMD AthlonXP v notebooku podruhé

CD-R server - 1 Duben, 2025 - 08:45
Po nedávné recenzi DELL Latitude C840 se podíváme opět na další notebook s procesorem společnosti AMD.
Kategorie: IT News
Syndikovat obsah