Zendesk ticket systems hijacked in massive global spam wave
Millions of people imperiled through sign-in links sent by SMS
Websites that authenticate users through links and codes sent in text messages are imperiling the privacy of millions of people, leaving them vulnerable to scams, identity theft, and other crimes, recently published research has found.
The links are sent to people seeking a range of services, including those offering insurance quotes, job listings, and referrals for pet sitters and tutors. To spare themselves the hassle of managing usernames and passwords, and to spare users the chore of creating and entering them, many such services instead require users to provide a cell phone number when signing up for an account. The services then send authentication links or passcodes by SMS when the users want to log in.
Easy to execute at scale
A paper published last week found more than 700 endpoints delivering such texts on behalf of more than 175 services that put user security and privacy at risk. One practice that jeopardizes users is the use of links that are easily enumerated, meaning scammers can guess them simply by modifying the security token, which usually appears at the end of a URL. By incrementing the token, for instance by changing 123 to 124 or ABC to ABD and so on, the researchers were able to access accounts belonging to other users. From there, the researchers could view personal details, such as partially completed insurance applications.
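Defeating this kind of enumeration is straightforward: sign-in tokens should be long and cryptographically random, so that no neighboring value is ever valid. As a rough sketch of what that looks like (the service domain below is hypothetical), Python's secrets module does the job:

```python
import secrets

def make_signin_token() -> str:
    # 32 random bytes yield a ~43-character URL-safe string; unlike a
    # sequential token, incrementing one valid value never produces another.
    return secrets.token_urlsafe(32)

# Hypothetical sign-in URL, for illustration only. A real service would
# store the token server-side (ideally hashed, with a short expiry) and
# compare it when the link is clicked.
link = f"https://example-service.test/signin?token={make_signin_token()}"
print(link)
```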
Chainlit AI framework bugs let hackers breach cloud environments
Cisco fixes Unified Communications RCE zero day exploited in attacks
New Android malware uses AI to click on hidden browser ads
Online retailer PcComponentes says data breach claims are fake
OpenAI to add age verification to ChatGPT
OpenAI is adding age verification to ChatGPT following reports that several children and young people have taken their own lives after conversations with the popular chatbot. The move echoes a recent decision by TikTok to do the same to protect underage users from accessing inappropriate content.
ChatGPT already has restrictions for users who state that they are under 18. Unsurprisingly, some users lie about their age in order to discuss sensitive topics. What’s new is that OpenAI is now adding algorithms to detect such lies. In such cases, the restrictions will be imposed automatically.
If the algorithm draws the wrong conclusion about a user’s age, the user can correct it by uploading a photograph of themselves, according to TechCrunch.
Fortinet admins report patched FortiGate firewalls getting hacked
North Korean PurpleBravo Campaign Targeted 3,136 IP Addresses via Fake Job Interviews
GDPR violations are rising sharply
It’s becoming increasingly common for companies and organizations to be reported for violations of the GDPR personal data protection law, according to a new report from the DLA Piper law firm. On average, there are now 443 reports of GDPR violations per day in the EU, an increase of 22% compared to 2024.
“The report confirms that cybersecurity issues are intensifying,” Gustav Lundin from DLA Piper said in a statement. “For Swedish organizations, this means that both technical and organizational protective measures need to be reviewed in order to keep pace with the emerging risks and requirements.”
In 2025, total fines amounted to €1.2 billion, most of which involved technology and social media companies. That figure was in line with the previous year.
Jamf has a warning for macOS vibe coders
Just yesterday, we noted the growing threat of ransomware. Now, Jamf Threat Labs is warning that North Korean threat actors are abusing Visual Studio Code task configuration files for malware delivery in a campaign aimed at macOS software developers.
It’s a classic lure: developers are tricked into opening booby-trapped GitHub/GitLab projects that contain malicious JavaScript code.
“When the project is opened, Visual Studio Code prompts the user to trust the repository author,” Jamf said. “If that trust is granted, the application automatically processes the repository’s tasks.json configuration file, which can result in embedded arbitrary commands being executed on the system.”
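The auto-run hook Jamf describes can be checked for before an unfamiliar project is ever opened. Here is a minimal sketch in Python; the repository path is hypothetical, and it assumes plain JSON even though real tasks.json files may contain comments that json.loads rejects:

```python
import json
from pathlib import Path

def flag_autorun_tasks(repo: Path) -> list[str]:
    """Return labels of tasks set to run as soon as the folder is opened."""
    tasks_file = repo / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return []
    config = json.loads(tasks_file.read_text())
    suspicious = []
    for task in config.get("tasks", []):
        # "runOn": "folderOpen" is the trigger abused in this campaign:
        # once the user trusts the repo, the task executes automatically.
        if task.get("runOptions", {}).get("runOn") == "folderOpen":
            suspicious.append(task.get("label", "<unnamed task>"))
    return suspicious

# Usage: inspect a freshly cloned repo before opening it in VS Code.
for label in flag_autorun_tasks(Path("some-cloned-repo")):
    print(f"auto-run task found: {label}")
```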
What is this new threat?
The malware enables execution of arbitrary JavaScript code on an infected system and collects system information and the public-facing IP address. Jamf also uncovered a JavaScript-based backdoor that provides remote code execution, persistent communication with command-and-control infrastructure, and system fingerprinting on macOS systems; among other things, that means the malware authors can switch the backdoor off and on again remotely.
The bottom line, of course, is that developers must be careful what they use when building apps. “Developers should remain cautious when interacting with third-party repositories, especially those shared directly or originating from unfamiliar sources,” Jamf warned.
AI, the new attack surface
What’s critical about this particular attack vector is its attempt to exploit the “vibe-coding” trend. Visual Studio Code is, after all, the open source AI code editor used by developers across the planet.
This latest exploit does make me wonder if and when we’ll begin to see additional exploits built to capitalize on the latest trends in coding. To what extent can the decision-making systems within AI code companion services be tricked into connecting to malware-infested packages?
Given the growing sophistication of threat actors and the involvement of nation states in creating threats, the latest exploit makes it crystal clear that people are already looking at this. The magnitude of this threat can only grow in future as quantum computers are used to find weaknesses in AI models that can themselves be exploited to distribute malware.
AI already famously hallucinates, so weaponizing those visions is nothing other than a logical next step. “Slopsquatting,” where attackers create malicious software packages using names AI models have already hallucinated into existence, is already a thing.
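One cheap, partial countermeasure is to check whether a dependency an assistant suggests actually exists before installing it. A minimal sketch follows; the package names are hypothetical examples, and note that existence alone proves nothing, since slopsquatters register the hallucinated names themselves:

```python
import json
import urllib.error
import urllib.request

def pypi_info(name: str) -> dict | None:
    """Return PyPI metadata for a package, or None if it was never registered."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # name doesn't exist -- likely a hallucination
        raise

# Hypothetical names an AI assistant might have suggested.
for name in ["requests", "definitely-not-a-real-pkg-xyz"]:
    info = pypi_info(name)
    if info is None:
        print(f"{name}: not on PyPI -- do not pip install blindly")
    else:
        print(f"{name}: exists; still check the maintainer and release history")
```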
What to do?
When using AI agents to help craft code, it becomes even more important to put that code through human review. Developers, and users, absolutely must subject their code to robust security checks, particularly for rogue permissions, data sharing, or worse. AI-generated code should never be allowed to bypass established security processes.
Further out, app distribution providers must also wake up to the need for additional layers of protection in automated and human-driven code review, to guard against this kind of weaponization of vibe coding.
This could emerge as a particular threat in the current legislative environment around app stores. In Europe, for example, there is a danger that as new app stores appear, not every code review process they put in place will be capable of catching these kinds of inserted risks. Think of the complex tapestries of spoofs, infected repositories, and lookalike malware packages that can be created to sidestep automated code verification services.
The answer to the AI threat will be…more AI!
AI will eventually be used to combat AI. But like everything else in life, there will always be a more powerful AI waiting in the wings to take out both protagonists and open a new chapter in the fight.
Acclaimed author and enthusiastic Mac user Douglas Adams once posited that Deep Thought, the computer, told us the answer to the ultimate question of life, the universe, and everything was 42, which only made sense once the question was redefined. But in today’s era, we cannot be certain the computer did not hallucinate.
Returning to Earth with a gentle bump, Jamf’s latest security story should be seen as a warning to coders everywhere to be wary when using third-party code. Verify before you ship, because, as Adams also wrote: “A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.”
Fake Lastpass emails pose as password vault backup alerts
Zoom and GitLab Release Security Updates Fixing RCE, DoS, and 2FA Bypass Flaws
Microsoft shares workaround for Outlook freezes after Windows update
You Got Phished? Of Course! You're Human...
Hackers exploit security testing apps to breach Fortune 500 firms
GitLab warns of high-severity 2FA bypass, denial-of-service flaws
OpenAI advertising paid per impression will launch next month, says report
OpenAI is already working with advertisers to test showing ads to ChatGPT users, and could launch the service commercially as early as next month, according to a news report.
The company announced only last week its plans to insert ads in chats with its AI bots, something CEO Sam Altman once said would be a “last resort” for the company.
Advertisers will initially pay per impression (PPM), and not per click (PPC) as is more common with web-based advertising, The Information reported on Wednesday, citing two people familiar with the company’s plans.
The PPM model will bring in revenue for OpenAI even if users do not interact with the ads but will give advertisers little indication of the impact of their spending.
OpenAI has pitched early access to the service to dozens of advertisers, each of whom has been asked to commit to spending less than $1 million over a trial period of several weeks, the report said, citing the same sources. No details of pricing for the service have been released, so there’s no telling how many impressions the testers will get for their money.
The company is clearly still building the infrastructure for the service: It won’t offer advertisers self-service tools for buying and managing their ads during the test phase, the report said.
OpenAI announced its plans to slip ads into chats with its bots last Friday, the same day it unveiled ChatGPT Go, a new $8/month pricing tier that will be supported by advertising. Ads will also be displayed to users of its free service, but not (yet) to users of the $20/month ChatGPT Plus or $200/month ChatGPT Pro tiers, or to enterprise customers.
In December, ChatGPT appeared to jump the gun, showing what users took for ads, but that were in fact simply the app suggestions OpenAI announced in October.
Last resort
Altman told attendees at a May 2024 Harvard Business School event, “I kind of think of ads as a last resort for us for a business model,” adding, “I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn’t do that, I would prefer that.”
The company’s rushed announcement of the ad service on Friday, before it was ready for testing, prompted speculation that it had failed to find other ways to fund its operations.
OpenAI CFO Sarah Friar took to the company’s blog over the weekend to rebut such concerns, saying that while the company’s compute costs are tripling every year, so is its revenue. Compute capacity grew from 0.2 GW in 2023 to 0.6 GW in 2024 and around 1.9 GW in 2025, Friar wrote, while “ARR” rose from $2 billion in 2023 to $6 billion in 2024 and over $20 billion in 2025.
But there were some critical omissions from Friar’s post.
She didn’t spell out what she meant by ARR. One meaning, annual recurring revenue, can be a reliable, if hard-to-calculate, indicator of a company’s future fortunes, measuring present (and likely future) spend from committed customers. But the abbreviation can also have a much looser sense, annual run rate, which is typically calculated as the best or most-recent month’s revenue multiplied by 12 — an easily inflated measure of performance for fast-growing companies like OpenAI, or those with more volatile month-to-month performance.
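To see how much the run-rate reading can flatter a fast grower, consider a toy calculation with entirely hypothetical monthly figures (OpenAI has published no monthly breakdown):

```python
# Hypothetical monthly revenues, in $M, for a company roughly doubling
# over the year -- illustrative numbers only.
monthly = [0.5, 0.6, 0.75, 0.9, 1.1, 1.3, 1.55, 1.8, 2.1, 2.4, 2.75, 3.1]

actual_annual = sum(monthly)   # what the year really brought in: 18.85
run_rate = monthly[-1] * 12    # best month annualized: 37.2

print(f"actual annual revenue: ${actual_annual:.2f}M")
print(f"annual run rate:       ${run_rate:.2f}M")
# The run rate comes out at nearly twice the realized total -- the
# inflation the looser reading of "ARR" invites.
```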
Most notably, Friar did not say whether OpenAI’s ARR, by either definition, matched or exceeded its costs, key for the company’s long-term survival.
The bottom line
If OpenAI is counting on advertising to bolster its bottom line, the bottom line is exactly where they will appear during the test phase, as the company intends to add them at the end of answers in ChatGPT, clearly labelled as “Sponsored” and separated from what it calls the “organic” answer written by its AI model.
In the blog post announcing the imminent introduction of the service, OpenAI suggested that users might choose to ask questions about an ad in order to make a purchase decision, hinting at why it is favoring pay-per-impression over pay-per-click: Those questions could provide advertisers with the interaction data they crave, and answering them could potentially open up another monetization stream for OpenAI.
Tesla hacked, 37 zero-days demoed at Pwn2Own Automotive 2026
Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.
On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.
"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
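Humanizer points the list at generation, telling the model what not to write, but the same patterns can just as easily drive detection. Here is a minimal sketch using a paraphrased handful of the documented tells (an illustrative subset, not the project's actual list of 24):

```python
import re

# A small, paraphrased subset of the tells the Wikipedia editors catalog;
# the real guide lists 24 language and formatting patterns.
AI_TELLS = [
    r"\bdelve\b",
    r"\brich tapestry\b",
    r"\bstands as a testament\b",
    r"\bit is important to note\b",
    r"\bin conclusion\b",
]

def flag_ai_tells(text: str) -> list[str]:
    """Return the tell patterns found in the text, case-insensitively."""
    return [p for p in AI_TELLS if re.search(p, text, re.IGNORECASE)]

sample = "In conclusion, the city stands as a testament to its rich tapestry."
print(flag_ai_tells(sample))  # matches three of the five patterns
```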