RSS Aggregator
Researchers claim their protocol can create truly random numbers on a current quantum computer
A team that included researchers at a US bank says it has created a protocol that can generate certified truly random numbers, opening the possibility that current generation quantum computers can be used for secure applications in finance, cryptography, cybersecurity, and privacy.
However, an industry analyst is cautious.
“The JPMorgan team’s findings are interesting, but won’t be applicable in the near term for most CSOs, unless they are responsible for high security environments,” said Sandy Carielli, a principal analyst at Forrester Research.
“Quantum random number generation has been around for a while,” she pointed out, “and some CSOs may already be using products in that area. The certification could be a nice extra for highly regulated environments.”
Cyber-crew claims it cracked American cableco, releases terrible music video to prove it
A cyber-crime ring calling itself Arkana has made a cringe music video to boast of an alleged theft of subscriber account data from Colorado-based cableco WideOpenWest (literally, WOW!)…
[webapps] Progress Telerik Report Server 2024 Q1 (10.0.24.305) - Authentication Bypass
[webapps] Rejetto HTTP File Server 2.3m - Remote Code Execution (RCE)
[webapps] Sonatype Nexus Repository 3.53.0-01 - Path Traversal
[webapps] CodeCanyon RISE CRM 3.7.0 - SQL Injection
[webapps] Litespeed Cache 6.5.0.1 - Authentication Bypass
Hijacked Microsoft web domain injects spam into SharePoint servers
The week on ScienceMag.cz: Preparations continue in Ostrava for the installation of a quantum computer
Dark energy apparently changes over time. A hidden population of around ten thousand smaller black holes has been discovered in the vicinity of a supermassive black hole. Amazing tricks can be performed with water waves. Physicists have set stricter limits on sterile neutrinos.
Digital trends in banking: simplify, but offer more, and ideally right away
The “timber bonds” were trouble from the start. A court is untangling fraud, swapped identities, and sloppy practices
Another target for atherosclerosis prevention: lipoprotein(a)
The Ryzen 5 9600 is only 2-3% slower than the Ryzen 5 9600X
Subsidies for IT courses: let the EU pay for your training and save your family budget
Warner Bros’ films are dying on DVD, and it’s a bigger problem than it might seem
China’s FamousSparrow flies back into action, breaches US org after years off the radar
The China-aligned FamousSparrow crew has resurfaced after a long period of presumed inactivity, compromising a US financial-sector trade group and a Mexican research institute. The gang also likely targeted a governmental institution in Honduras, along with other yet-to-be-identified victims.…
With the rise of genAI, it’s time to follow Apple’s Security Recommendations
Apple’s Safari browser has a really useful password management feature, which is now also available as a standalone app called Passwords. If you’ve ever taken a look at it, you may have seen a section called Security Recommendations where you’ll find a collection of all the accounts and passwords that might have been compromised.
If you haven’t already, it’s time to take those collections seriously, because generative AI (genAI) adoption means the scale and nature of the threats posed by purloined passwords and broken IDs are about to grow far greater. That’s because, armed with stolen emails and passwords, criminals will find it relatively easy to throw those credentials at the most popular online services.
If they know you, they know you
They do this already, of course. If you have a known email address and password you still use that is now being sold on the dark web (for about $10 a collection), it’s a no-brainer for attackers to try it out on a range of different services. Sometimes they may get lucky.
Augmented efficiency just means that, using genAI, those same attackers can plough through huge collections of stolen accounts and passwords far more swiftly. Stolen credentials were the big attack vector last year, according to Verizon, and were used in around 80% of exploits.
There are around 15 billion compromised credentials available online.
The vast majority of these are useless, which means credential stuffing attacks might not generate much of a success rate. When they do succeed, most victims learn from the experience and secure everything pretty quickly, meaning only a very small number of that 15 billion are truly vulnerable. All the same, attackers get lucky from time to time, and getting lucky now and then is what makes that part of the account login exploitation industry tick.
Money in the middle
These attacks generate millions of dollars of losses every year. With billions of people on the planet, there’s probably another fool coming along in a minute or two, and you don’t want it to be you. That’s why you should spend a little time auditing Apple’s Security Recommendations regularly: you don’t want any service that holds your personal, payment, health, or other valuable data to be abused.
That’s true for everyone, but for enterprise users there’s a dual challenge. We all know that employees (including business owners) are and will always be the biggest security weakness in the system. The phishing industry has evolved to exploit this.
But that tendency is equally threatening when it comes to account IDs, and the two together pose a double-whammy threat once empowered by AI. How many company-related credentials have already slipped, and to what extent do these two vulnerabilities reinforce each other?
If someone at Iworkatthisbusiness.com foolishly used their work email and complex work password to secure their access to trivialbuthackedwebsite.com, how long might it be until someone figures that out and sees if they can use this data to crack your corporate systems?
Phisherman’s blues
These attacks don’t even need to be that smart; they can simply be used to analyze personal patterns to help craft super-effective phishing attacks against specific targets. Really sophisticated attackers could turn to a little agentic AI to gather any available social media data on entities they designate as ripe for attack, helping them create really effective phishing emails — Spear AI, as it may one day be recognized.
Artificial intelligence will help with all of this. It’s really good at identifying patterns in disparate data sets, and analyzing the data that’s already been exfiltrated into the world will be a relatively trivial task — it all just comes down to the questions the machines are asked to answer. They can even use identified patterns in passwords to predict likely password patterns based on user data for brute force attacks. I could go on.
Passwords are not the only fruit, of course.
If you are wise you’ll be using 2FA security and/or Passkeys on all your most important websites, and certainly to protect any with access to your financial details or payment information.
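For readers curious what 2FA codes actually are under the hood: most authenticator apps implement TOTP as specified in RFC 6238, which is simply HOTP (RFC 4226) computed over a time-step counter. A minimal sketch in Python, using only the standard library:

```python
import hashlib
import hmac
import struct
import time


def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over floor(time/step)."""
    t = int(time.time() if for_time is None else for_time)
    return hotp(key, t // step, digits)


# Current 6-digit code for a shared secret (secret value is illustrative only):
code = totp(b"12345678901234567890")
```

This matches the published RFC 6238 test vectors, so real authenticator apps and server-side verifiers interoperate with it; production systems add secret provisioning, clock-drift windows, and replay protection on top.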
Along with different forms of biometric ID, the industry is shifting to adopt more resilient access control systems — though, of course, subverting those systems is just a new challenge in the cat-and-mouse security game. Only recently, we learned of a new AI attack designed to compromise Google Chrome’s Password Manager, and there will be more attacks of this kind. That’s even before you consider the significance of attacks made against enterprise AI in their own right.
Death to security complacency
The main takeaway is this: You should act on the warnings given to you by Apple’s Security Recommendations tool. You should avoid re-using passwords, no matter the service. You should use a password manager and other forms of security, such as 2FA, and you should very much beware if you receive an email from a trusted source that contains a link to something that sounds like it was made for you; chances are, it was.
Most of all, I want you to check the credentials that have been leaked, change them, close accounts, and delete payment information from any service you don’t intend to use again. As a person or an enterprise, you certainly need to build a response plan for what to do if an account is compromised, or suspected to be compromised; security training, even for your most experienced employees, is almost certainly going to be of value. And never, ever use one of those leaked passwords again.
Alternatively, ignore Safari’s friendly warning and leave yourself open to having your genuine account credentials sold online for up to $45 a time.
Why not take the time to secure your accounts? The tools are right there in your browser. What are you waiting for?
New security requirements adopted by HTTPS certificate industry
The Chrome Root Program launched in 2022 as part of Google’s ongoing commitment to upholding secure and reliable network connections in Chrome. We previously described how the Chrome Root Program keeps users safe, and described how the program is focused on promoting technologies and practices that strengthen the underlying security assurances provided by Transport Layer Security (TLS). Many of these initiatives are described on our forward-looking, public roadmap named “Moving Forward, Together.”
At a high-level, “Moving Forward, Together” is our vision of the future. It is non-normative and considered distinct from the requirements detailed in the Chrome Root Program Policy. It’s focused on themes that we feel are essential to further improving the Web PKI ecosystem going forward, complementing Chrome’s core principles of speed, security, stability, and simplicity. These themes include:
- Encouraging modern infrastructures and agility
- Focusing on simplicity
- Promoting automation
- Reducing mis-issuance
- Increasing accountability and ecosystem integrity
- Streamlining and improving domain validation practices
- Preparing for a "post-quantum" world
Earlier this month, two “Moving Forward, Together” initiatives became required practices in the CA/Browser Forum Baseline Requirements (BRs). The CA/Browser Forum is a cross-industry group that works together to develop minimum requirements for TLS certificates. Ultimately, these new initiatives represent an improvement to the security and agility of every TLS connection relied upon by Chrome users.
If you’re unfamiliar with HTTPS and certificates, see the “Introduction” of this blog post for a high-level overview.
Multi-Perspective Issuance Corroboration
Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as "domain control validation" and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value’s presence has been published by the certificate requestor.
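The random-value method described above can be sketched as a toy in Python. This is an illustrative simplification, not any CA's real implementation: the `fetch` callable stands in for an HTTP GET against the candidate domain, and the well-known path is only an example of where such a challenge value might be published.

```python
import secrets


def issue_challenge() -> str:
    # The CA generates an unpredictable random value for the certificate
    # requestor to publish on the website they claim to control.
    return secrets.token_urlsafe(32)


def check_domain_control(fetch, domain: str, path: str, token: str) -> bool:
    # 'fetch' is injected so the sketch stays self-contained: in a real
    # validator it would perform an HTTP request to the domain.
    published = fetch(domain, path)
    return published is not None and published.strip() == token


# Toy usage: a dict stands in for the requestor's web server.
token = issue_challenge()
path = "/.well-known/pki-validation/challenge.txt"
site = {("example.com", path): token}
fetch = lambda d, p: site.get((d, p))
controls_domain = check_domain_control(fetch, "example.com", path, token)
```

The unpredictability of the token is what matters: only someone who both received the CA's challenge and can publish content on the domain will pass the check.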
Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy (CITP) of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical, as it was demonstrated that attackers successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million of direct losses.
Multi-Perspective Issuance Corroboration (referred to as "MPIC") enhances existing domain control validation methods by reducing the likelihood that routing attacks can result in fraudulently issued certificates. Rather than performing domain control validation and authorization from a single geographic or routing vantage point, which an adversary could influence as demonstrated by security researchers, MPIC implementations perform the same validation from multiple geographic locations and/or Internet Service Providers. This has been observed as an effective countermeasure against ethically conducted, real-world BGP hijacks.
The Chrome Root Program led a work team of ecosystem participants, which culminated in a CA/Browser Forum Ballot to require adoption of MPIC via Ballot SC-067. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process. Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations.
We’d especially like to thank Henry Birge-Lee, Grace Cimaszewski, Liang Wang, Cyrill Krähenbühl, Mihir Kshirsagar, Prateek Mittal, Jennifer Rexford, and others from Princeton University for their sustained efforts in promoting meaningful web security improvements and ongoing partnership.
Linting
Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication.
Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security. Linting improves interoperability and helps CAs reduce the risk of non-compliance with industry standards (e.g., CA/Browser Forum TLS Baseline Requirements). Non-compliance can result in certificates being "mis-issued". Detecting these issues before a certificate is in use by a site operator reduces the negative impact associated with having to correct a mis-issued certificate.
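To make the idea concrete, here is a toy linter over a simplified certificate description. Real linters such as zlint or pkilint parse actual X.509 DER; the field names and thresholds below are illustrative simplifications of the kinds of rules the Baseline Requirements impose, not the real rule text.

```python
def lint(cert: dict) -> list:
    """Return a list of findings for a simplified certificate record."""
    findings = []
    # Obsolete signature algorithms are prohibited for publicly-trusted TLS.
    if cert.get("signature_algorithm") in {"md5WithRSA", "sha1WithRSA"}:
        findings.append("weak or obsolete signature algorithm")
    # RSA keys below 2048 bits are considered too weak.
    if cert.get("key_type") == "RSA" and cert.get("key_bits", 0) < 2048:
        findings.append("RSA key smaller than 2048 bits")
    # TLS certificates must carry subjectAltName entries.
    if not cert.get("subject_alt_names"):
        findings.append("missing subjectAltName entries")
    # Validity periods beyond 398 days are not permitted.
    if cert.get("validity_days", 0) > 398:
        findings.append("validity period longer than 398 days")
    return findings


bad_cert = {
    "signature_algorithm": "sha1WithRSA",
    "key_type": "RSA",
    "key_bits": 1024,
    "subject_alt_names": [],
    "validity_days": 825,
}
problems = lint(bad_cert)
```

Running checks like these at issuance time, before the certificate ever reaches a site operator, is exactly the point: a finding caught pre-issuance costs a retry, while one caught post-issuance costs a revocation and replacement.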
There are numerous open-source linting projects in existence (e.g., certlint, pkilint, x509lint, and zlint), in addition to numerous custom linting projects maintained by members of the Web PKI ecosystem. “Meta” linters, like pkimetal, combine multiple linting tools into a single solution, offering simplicity and significant performance improvements to implementers compared to implementing multiple standalone linting solutions.
Last spring, the Chrome Root Program led ecosystem-wide experiments, emphasizing the need for linting adoption due to the discovery of widespread certificate mis-issuance. We later participated in drafting CA/Browser Forum Ballot SC-075 to require adoption of certificate linting. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process.
What’s next?
We recently landed an updated version of the Chrome Root Program Policy that further aligns with the goals outlined in “Moving Forward, Together.” The Chrome Root Program remains committed to proactive advancement of the Web PKI. This commitment was recently realized in practice through our proposal to sunset demonstrated weak domain control validation methods permitted by the CA/Browser Forum TLS Baseline Requirements. The weak validation methods in question are now prohibited beginning July 15, 2025.
It’s essential we all work together to continually improve the Web PKI, and reduce the opportunities for risk and abuse before measurable harm can be realized. We continue to value collaboration with web security professionals and the members of the CA/Browser Forum to realize a safer Internet. Looking forward, we’re excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography. We’ll have more to say about quantum-resistant PKI later this year.
As big tech circles, UK government struggles to reap promised AI benefits
The UK government’s grand plan for AI in the public sector is struggling in the face of growing technological challenges, a report by the Parliamentary Public Accounts Committee (PAC), a bipartisan group of elected members of parliament, has found.
Many of these problems will be familiar to anyone who has tried to make AI work inside an organization: the dead hand of obsolete systems, poor quality data, and a chronic lack of skilled people to implement the technology.
But beyond these issues lies another problem that could prove just as difficult: the monopolistic power of tech vendors that control the AI technology the government so badly desires.
Coming only weeks after the Government Digital Service (GDS) was created to drive AI, the committee’s initial assessment in the AI in Government report is a sobering reality check.
For the birds
The committee’s report identifies several areas of concern, starting with poor-quality data “locked away in out-of-date legacy IT systems.” Of the 72 systems previously identified as legacy barriers, 21 had not yet received remediation funding to overcome these problems, it found.
It also noted a lack of transparency in government data use in AI, which risked creating public mistrust and a future withdrawal by citizens of their consent for its use. Other problems included the perennial shortage of AI and digital skills, an issue mentioned by 70% of government bodies responding to a 2024 National Audit Office (NAO) survey.
Additionally, government departments were running AI test pilots in a siloed way, making it difficult to learn wider lessons, said the committee.
“The government has said it wants to mainline AI into the veins of the nation, but our report raises questions over whether the public sector is ready for such a procedure,” said committee chair, Sir Geoffrey Clifton-Brown.
“Unfortunately, those familiar with our committee’s past scrutiny of the government’s frankly sclerotic digital architecture will know that any promises of sudden transformation are for the birds,” he added.
AI oligopoly
There’s a lot at stake here. AI is often talked up by ministers as the key to overhauling the state, getting it to work more efficiently and cheaply. It’s a story that has become hugely important in many countries. If progress slows, that promise will be questioned.
In its report, the committee drew attention to the market power of a small band of AI companies. The tech industry has a tendency towards monopolies over time, it said, but with AI it was starting from this position, which might lead to technological lock-in and higher costs, hindering development in the long term.
According to the Open Cloud Coalition (OCC), a recently formed lobby group of smaller cloud providers backed by Google, the UK government’s struggles with AI mirror what happened with cloud deployment from the 2010s onwards, which included the lack of competition.
“This report shows that the dominance of a few large technology suppliers in the public procurement of AI risks stifling competition and innovation, while also hampering growth, exactly the same problems we’ve seen with cloud contracts,” commented Nicky Stewart, senior advisor to the OCC.
Cloud and AI are symbiotic, she noted, and the domination of one or both by a small group of mostly US tech companies risks building monopolies it might be difficult to escape from.
“Without reform, the government will remain over-reliant on a handful of major providers, limiting flexibility and access to innovative, leading edge technology, whilst locking taxpayers into expensive, restrictive agreements,” she said.
Sylvester Kaczmarek, CTO at OrbiSky Systems, a UK company specializing in integrating AI into aerospace applications, agreed that supplier dominance could stifle innovation, but remained just as skeptical of AI’s projected cost savings. Implementation was always where technologies proved themselves, he pointed out.
“Are savings over-sold? Most likely, in the short run,” said Kaczmarek. “There is a lot of groundwork to be laid before large-scale, reliable AI deployment can safely deliver meaningful savings. [governments need to] prioritize realistic roadmaps and more comprehensive value.”
Infostealer campaign compromises 10 npm packages, targets devs
