Security-Portal.cz is an internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports its community in interesting projects.

Categories

GitHub Token Leak Exposes Python's Core Repositories to Potential Attacks

The Hacker News - 3 hours 1 min ago
Cybersecurity researchers said they discovered an accidentally leaked GitHub token that could have granted elevated access to the repositories of the Python language, the Python Package Index (PyPI), and the Python Software Foundation (PSF). JFrog, which found the GitHub Personal Access Token, said the secret was leaked in a public Docker container hosted on Docker Hub.
Category: Hacking & Security

Where does Apple Intelligence come from?

Computerworld.com [Hacking News] - 4 hours 58 min ago

Apple Intelligence isn’t entirely Apple’s intelligence; like so many other artificial intelligence (AI) tools, it also leans on the human experience shared across the internet, because all that data informs the AI models the company builds.

That said, the company explained where it gets the information it uses when it announced Apple Intelligence last month: “We train our foundation models on licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot,” Apple explained.

Your internet, their product

Apple isn’t alone in doing this. In using the public internet this way, it is following the same approach as others in the business. The problem: that approach is already generating arguments between copyright holders and AI firms, as both sides grapple with questions around copyright, fair use, and the extent to which data shared online is commodified to pour even more cash into the pockets of Big Tech firms. 

Getty Images last year sued Stability AI for training its AI using 12 million images from its collection without permission. Individual creatives have also taken a stance against these practices. The concern is the extent to which AI firms are unfairly profiting from the work humans do, without consent, credit, or compensation.

In a small attempt to mitigate such accusations, Apple has told web publishers what they must do to stop their content being used for Apple product development.
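Apple’s stated mechanism for this is a robots.txt rule targeting the Applebot-Extended user agent, which opts a site’s content out of model training without blocking ordinary Applebot crawling (used for features like Siri and Spotlight suggestions). A minimal example:

```txt
# robots.txt: opt this site out of Apple AI training
# while still allowing normal Applebot crawling
User-agent: Applebot-Extended
Disallow: /
```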

Can you unmake an AI model?

What isn’t clear is the extent to which information already scraped by Applebot for use in Apple Intelligence (or any generative AI service) can then be winnowed out of the models Apple has already made. Once the model is created using your data, to what extent can your data be subsequently removed from it? The learning — and potential for copyright abuse — has already been baked in.

But where is the compensation for those who’ve made their knowledge available online? 

In most cases, the AI firms argue that what they are doing can be seen as fair use rather than any violation of copyright laws. But, given that what constitutes fair use differs from nation to nation, it seems highly probable that the evolving AI industry is heading directly toward regulatory and legal challenges over its use of content.

That certainly seems to be part of the concern coming from regulators in some jurisdictions, and we know the legal framework around these matters is subject to change. This might also be part of what has prompted Apple to say it will not introduce the service in the EU just yet.

Move fast and take things

Right now, AI companies are racing ahead of government regulation. Some in the space are attempting to side-step such debates by placing constraints on the data their models are trained on. Adobe, for example, claims to train its imaging models only on legitimately licensed data.

In this case, that means licensed Adobe Stock content and older content that is out of copyright.

Adobe isn’t just being altruistic in this — it knows customers using its generative AI (genAI) tools will be creating commercial content and recognizes the need to ensure its customers don’t end up being sued for illegitimate use of images and other creative works. 

What about privacy?

But when it comes to Apple Intelligence, it looks like the data you’ve published online has now become part of the company product, with one big exception: private data.

“We never use our users’ private personal data or user interactions when training our foundation models, and we apply filters to remove personally identifiable information like social security and credit card numbers that are publicly available on the Internet,” it said. 
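As an illustration of the kind of filtering Apple describes, and emphatically not Apple’s actual pipeline, a toy scrubber might combine pattern matching with a Luhn checksum so that arbitrary digit runs aren’t flagged as card numbers:

```python
import re

# Toy PII scrubber (illustrative only). Patterns cover US-style SSNs and
# 13-16 digit card numbers; the Luhn checksum filters out random digit runs.

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum over the digits of `number`."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                for i, d in enumerate(digits))
    return total % 10 == 0

def scrub(text: str) -> str:
    """Replace likely SSNs and Luhn-valid card numbers with placeholders."""
    text = SSN_RE.sub("[SSN]", text)
    return CARD_RE.sub(
        lambda m: "[CARD]" if luhn_valid(m.group()) else m.group(), text)

print(scrub("SSN 123-45-6789, card 4111 1111 1111 1111"))
# → SSN [SSN], card [CARD]
```

Real training pipelines do this at far greater scale and sophistication, but the principle (detect, validate, redact before training) is the same.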

Apple deserves credit for its consistent attempts to maintain data privacy and security, but perhaps it should develop a stronger and more public framework toward the protection of the creative endeavors of its customer base.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Category: Hacking & Security

Exim 4.98 Addresses Critical Vulnerabilities, Bolsters Email Server Security

LinuxSecurity.com - 6 hours 26 min ago
Exim is one of the most widely used mail transfer agents on Unix-like systems. It is essential for email delivery and handling and is a significant part of the Internet's email infrastructure.
Category: Hacking & Security

AI chip battleground shifts as software takes center stage

Computerworld.com [Hacking News] - 6 hours 44 min ago

The AI landscape is undergoing a transformative shift as chipmakers, traditionally focused on hardware innovation, are increasingly recognizing the pivotal role of software.

This strategic shift is redefining the AI race, where software expertise is becoming as crucial as hardware prowess.

AMD’s recent acquisitions: a case study

AMD’s recent acquisition of Silo AI, Europe’s largest private AI lab, exemplifies this trend. Silo AI brings to the table a wealth of experience in developing and deploying AI models, particularly large language models (LLMs), a key area of focus for AMD.

This acquisition not only enhances AMD’s AI software capabilities but also strengthens its presence in the European market, where Silo AI has a strong reputation for developing culturally relevant AI solutions.

“Silo AI plugs important capability gap [for AMD] from software tools (Silo OS) to services (MLOps) to helping tailor sovereign and open source LLMs and at the same time expanding its footprint in the important European market,” said Neil Shah, partner & co-founder at Counterpoint Research.

AMD’s move follows its previous acquisitions of Mipsology and Nod.ai, further solidifying its commitment to building a robust AI software ecosystem. Mipsology’s expertise in AI model optimization and compiler technology, coupled with Nod.ai’s contributions to open-source AI software development, provides AMD with a comprehensive suite of tools and expertise to accelerate its AI strategy.

“These strategic moves strengthen AMD’s ability to offer open-source solutions tailored for enterprises seeking flexibility and interoperability across platforms,” said Prabhu Ram, VP of industry research group at Cybermedia Research. “By integrating Silo AI’s capabilities, AMD aims to provide a comprehensive suite for developing, deploying, and managing AI systems, appealing broadly to diverse customer needs. This aligns with AMD’s evolving market position as a provider of accessible and open AI solutions, capitalizing on industry trends towards openness and interoperability.”

Beyond AMD: A broader industry trend

This strategic shift towards software is not limited to AMD. Other chip giants like Nvidia and Intel are also actively investing in software companies and developing their own software stacks.

“If you look at the success of Nvidia, it is driven not by silicon but by software (CUDA) and services (NGC with MLOps, TAO, etc.) it offers on top of its compute platform,” Shah said. “AMD realizes this and has been investing in building software (ROCm, Ryzen Aim, etc.) and services (Vitis) capabilities to offer an end-to-end solution for its customers to accelerate AI solution development and deployment.”

Nvidia’s recent acquisition of Run:ai and Shoreline.io, both specializing in AI workload management and infrastructure optimization, also underscores the importance of software in maximizing the performance and efficiency of AI systems.

But this doesn’t mean chipmakers follow similar trajectories toward their goals. Manish Rawat, semiconductor analyst at TechInsights, pointed out that, in large part, Nvidia’s AI ecosystem has been established through proprietary technologies and a robust developer community, giving it a strong foothold in AI-driven industries.

“AMD’s approach with Silo AI signifies a focused effort to expand its capabilities in AI software, positioning itself competitively against Nvidia in the evolving AI landscape,” Rawat added.

Another relevant example in this regard is Intel’s acquisition of Granulate Cloud Solutions, a provider of real-time continuous optimization software. Granulate assists cloud and data center clients in optimizing compute workload performance while lowering infrastructure and cloud expenses.

Software to drive differentiation

The convergence of chip and software expertise is not just about catching up with competitors. It’s about driving innovation and differentiation in the AI space.

Software plays a crucial role in optimizing AI models for specific hardware architectures, improving performance, and reducing costs. Eventually, software could decide who rules the AI chip market.

“The bigger picture here is that AMD is obviously competing with NVIDIA for supremacy in the AI world,” said Hyoun Park, CEO and chief analyst at Amalgam Insights. “Ultimately, this is not just a question of who makes the better hardware, but who can actually back the deployment of enterprise-grade solutions that are high-performance, well-governed, and easy to support over time. And although Lisa Su and Jensen Huang are both among the absolute brightest executives in tech, only one of them can ultimately win this war as the market leader for AI hardware.” 

The rise of full-stack AI solutions

The integration of software expertise into chip companies’ offerings is leading to the emergence of full-stack AI solutions. These solutions encompass everything from hardware accelerators and software frameworks to development tools and services.

By offering a comprehensive suite of AI capabilities, chipmakers can cater to a wider range of customers and use cases, from cloud-based AI services to edge AI applications.

For instance, Silo AI first and foremost brings an experienced talent pool, especially one working on optimizing AI models, tailored LLMs, and more, according to Shah. Silo AI’s SiloOS in particular is a powerful addition to AMD’s offerings, allowing customers to leverage advanced tools and modular software components to customize AI solutions to their needs. This was a big gap for AMD.

“Thirdly, Silo AI also brings in MLOps capabilities which are a critical capability for a platform player to help its enterprise customers deploy, refine and operate AI models in a scalable way,” Shah added. “This will help AMD develop a service layer on top of the software and silicon infrastructure.”

Implications for enterprise tech

The shift of chipmakers from purely hardware to also providing software toolkits and services has significant ramifications for enterprise tech companies.

Shah stressed that these developments are crucial for enabling enterprise and AI developers to fine-tune their AI models for enhanced performance on specific chips, applicable to both training and inference phases.

This advancement not only speeds up product time-to-market but also aids partners, whether they are hyperscalers or manage on-premises infrastructures, in boosting operational efficiencies and reducing total cost of ownership (TCO) by improving energy usage and optimizing code.

“Also, it’s a great way for chipmakers to lock these developers within their platform and ecosystem as well as monetize the software toolkits and services on top of it. This also drives recurring revenue, which chipmakers can reinvest and boost the bottom line, and investors love that model,” Shah said.

The future of AI: a software-driven landscape

As the AI race continues to evolve, the focus on software is set to intensify. Chipmakers will continue to invest in software companies, develop their own software stacks, and collaborate with the broader AI community to create a vibrant and innovative AI ecosystem.

The future of AI is not just about faster chips — it’s about smarter software that can unlock the full potential of AI and transform the way we live and work.

Category: Hacking & Security

OpenAI reportedly stopped staffers from warning about security risks

Computerworld.com [Hacking News] - 8 hours 13 min ago

A whistleblower letter obtained by The Washington Post accuses OpenAI of illegally restricting employees from communicating with authorities about the risks their technology may pose. The letter was reportedly sent to the US Securities and Exchange Commission (SEC) — the agency that oversees the trading of securities — urging the regulators to review OpenAI.

According to the letter, OpenAI allegedly used illegal non-disclosure agreements that, among other things, forced employees to waive their rights to whistleblower incentives and required them to disclose whether they had been in contact with authorities.

OpenAI has come under previous criticism for the restrictive design of its non-disclosure agreements, which it said it would modify. In a statement to The Washington Post, OpenAI spokesperson Hannah Wong said: “Our whistleblower policy protects employees’ rights to make protected disclosures.”

More OpenAI news:

Category: Hacking & Security

OpenAI is working on new reasoning AI technology

Computerworld.com [Hacking News] - 8 hours 20 min ago

ChatGPT developer OpenAI is developing a new kind of reasoning AI model under the project name “Strawberry” that can be used for research, according to a report by Reuters. Strawberry was apparently known earlier by the name “Q*” and is considered a potential breakthrough within OpenAI.

The plan is for the new Strawberry models to not only be able to generate answers based on instructions, but also to be able to plan ahead by navigating the internet independently and reliably perform what OpenAI calls “deep research.”

How Strawberry works under the hood remains unclear; it is also unknown how long the technology may be from completion. In a comment to Reuters, an OpenAI spokesperson said continued research into new AI opportunities is ongoing within the industry. However, the spokesperson did not say anything specific about Strawberry in particular.

More OpenAI news:

Category: Hacking & Security

OpenAI whistleblowers seek SEC probe into ‘restrictive’ NDAs with staffers

Computerworld.com [Hacking News] - 8 hours 1 min ago

Some employees of ChatGPT-maker OpenAI have reportedly written to the US Securities and Exchange Commission (SEC) seeking a probe into some employee agreements, which they term restrictive non-disclosure agreements (NDAs).

These staffers-turned-whistleblowers have written to the SEC alleging that the company forced its employees to sign agreements that were not in compliance with the SEC’s regulations.

“Given the well-documented potential risks posed by the irresponsible deployment of AI, we urge the commissioners to immediately approve an investigation into OpenAI’s prior NDAs, and to review current efforts apparently being undertaken by the company to ensure full compliance with SEC rules,” read the letter shared with Reuters by the office of Senator Chuck Grassley.

The same letter alleges that OpenAI made employees sign agreements that curb their federal rights to whistleblower compensation and urges the financial watchdog to impose individual penalties for each such agreement signed.

Further, the whistleblowers allege that OpenAI’s agreements restricted employees from making any disclosure to authorities without checking with management first, and that any failure to comply would attract penalties for the staffers.

The company, according to the letter, also did not create any separate or specific exemptions in the employee non-disparagement clauses for disclosing securities violations to the SEC.

An email sent to OpenAI about the letter went unanswered.

The Senator’s office also cast doubt on the practices at OpenAI. “OpenAI’s policies and practices appear to cast a chilling effect on whistleblowers’ right to speak up and receive due compensation for their protected disclosures,” the Senator was quoted as saying.

Experts in the field of AI have been warning against the use of the technology without proper guidelines and regulations.

In May, more than 150 leading artificial intelligence (AI) researchers, ethicists, and others signed an open letter calling on generative AI (genAI) companies to submit to independent evaluations of their systems to maintain basic protection against the risks of using large-scale AI.

Last April, the who’s who of the technology industry called for AI labs to stop training the most powerful systems for at least six months, citing “profound risks to society and humanity.”

That open letter, which now has more than 3,100 signatories including Apple co-founder Steve Wozniak, called out San Francisco-based OpenAI Lab’s recently announced GPT-4 algorithm in particular, saying the company should halt further development until oversight standards were in place. OpenAI, on the other hand, in May formed a safety and security committee led by board members as it started researching its next large language models.

More OpenAI news:

Category: Hacking & Security

Analysts expect weak demand for Apple Vision Pro

Computerworld.com [Hacking News] - 8 hours 26 min ago

On Friday, Apple Vision Pro launched in Europe. But analysts do not expect it to be a major sales success.

According to research firm IDC, fewer than 500,000 units of the mixed-reality headset will be sold in 2024, partly due to the high price. Apple’s headset costs $3,499, which corresponds to almost 50,000 Swedish kronor including VAT and other fees.

By comparison, Facebook’s parent company Meta sells its Meta Quest 3 headset for $499, and its predecessor, the Meta Quest 2, retails for $299.

According to rumors, a cheaper variant of the Apple Vision Pro will be launched in 2025, but release dates and details remain unclear.

More on Apple Vision Pro:

Category: Hacking & Security

10,000 Victims a Day: Infostealer Garden of Low-Hanging Fruit

The Hacker News - 9 hours 17 min ago
Imagine you could gain access to any Fortune 100 company for $10 or less, or even for free. Terrifying thought, isn’t it? Or exciting, depending on which side of the cybersecurity barricade you are on. Well, that’s basically the state of things today. Welcome to the infostealer garden of low-hanging fruit. Over the last few years, the problem has grown bigger and bigger, and only now are we…
Category: Hacking & Security

CRYSTALRAY Hackers Infect Over 1,500 Victims Using Network Mapping Tool

The Hacker News - 9 hours 45 min ago
A threat actor that was previously observed using an open-source network mapping tool has greatly expanded their operations to infect over 1,500 victims. Sysdig, which is tracking the cluster under the name CRYSTALRAY, said the activities have witnessed a tenfold surge, adding it includes "mass scanning, exploiting multiple vulnerabilities, and placing backdoors using multiple [open-source…
Category: Hacking & Security

The promise and peril of ‘agentic AI’

Computerworld.com [Hacking News] - 10 hours 9 min ago

Amazon last week made an unusual deal with a company called Adept in which Amazon will license the company’s technology and also poach members of its team, including the company’s co-founders.

The e-commerce, cloud computing, online advertising, digital streaming and artificial intelligence (AI) giant is no doubt hoping the deal will propel Amazon, which is lagging behind companies like Microsoft, Google and Meta in the all-important area of AI. (In fact, Adept had previously been in acquisition talks with both Microsoft and Meta.) 

Adept specializes in “agentic AI,” the hottest area of AI that hardly anyone is talking about but which some credibly claim is the next leap forward for AI technology.

But wait, what exactly is agentic AI? The easiest way to understand it is by comparison to LLM-based chatbots. 

How agentic AI differs from LLM chatbots

We know all about LLM-based chatbots like ChatGPT. Agentic AI systems are based on the same kind of large language models, but with important additions. While LLM-based chatbots respond to specific prompts, trying to deliver what’s asked for literally, agentic systems take that further by incorporating autonomous goal-setting, reasoning, and dynamic planning. They’re also designed to integrate with applications, systems, and platforms. 

While LLMs, such as ChatGPT, reference huge quantities of data and hybrid systems, like Perplexity AI, combine that with real-time web searches, agentic systems further incorporate changing circumstances and contexts to pursue goals, causing them to reprioritize tasks and change methods to achieve those goals. 

While LLM chatbots have no ability to make actual decisions, agentic systems are characterized by advanced contextual reasoning and decision-making. Agentic systems can plan, “understand” intent, and more fully integrate with a much wider range of third-party systems and platforms. 
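The loop that separates an agentic system from a one-shot chatbot call can be sketched in a few lines. Everything here (the planner, the tools, the scheduling scenario) is a toy stand-in, not any vendor’s actual API:

```python
# Toy sketch of an agentic control loop: plan -> act -> observe -> replan.
# The planner stands in for an LLM reasoning step; TOOLS stands in for
# real integrations (calendar APIs, email, and so on).

def planner(goal, state):
    """Stand-in for an LLM planning step: choose the next tool call."""
    if "availability" not in state:
        return ("check_calendar", None)
    if "meeting" not in state:
        return ("book_meeting", state["availability"][0])
    return ("done", None)

TOOLS = {
    "check_calendar": lambda _arg: {"availability": ["Tue 10:00", "Wed 14:00"]},
    "book_meeting": lambda slot: {"meeting": slot},
}

def run_agent(goal, max_steps=10):
    """Pursue the goal by repeatedly planning, acting, and folding each
    observation back into state. The step cap is a crude safety rail."""
    state = {}
    for _ in range(max_steps):
        action, arg = planner(goal, state)
        if action == "done":
            break
        state.update(TOOLS[action](arg))
    return state

print(run_agent("schedule a meeting with the client"))
# → {'availability': ['Tue 10:00', 'Wed 14:00'], 'meeting': 'Tue 10:00'}
```

The key difference from a chatbot prompt is the loop itself: the system decides its own next step based on what it has observed so far, rather than producing a single response.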

What’s it good for?

One obvious use for agentic AI is as a personal assistant. Such a tool could — based on natural-language requests — schedule meetings and manage a calendar, change times based on others’ and your availability, and remind you of the meetings. And it could be useful in the meetings themselves, gathering data in advance, creating an agenda, taking notes and assigning action items, then sending follow-up reminders. All this could theoretically begin with a single plain-language, but vague, request.

It could read, categorize and answer emails on your behalf, deciding which to answer and which to leave for you to respond to. 

You could tell your agentic AI assistant to fill out forms for you or subscribe to services, entering the requested information and even processing any payment. It could even theoretically surf the web for you, gathering information and creating a report.

Like today’s LLM chatbots, agentic AI assistants could use multimodal input and could receive verbal instructions along with audio, text, and video inputs harvested by cameras and microphones in your glasses. 

Another obvious application for agentic AI is for customer service. Today’s interactive voice response (IVR) systems seem like a good idea in theory — placing the burden on customers to navigate complex decision trees while struggling with inadequate speech recognition so that a company doesn’t need to pay humans to interface with customers — but fail in practice. 

Agentic AI promises to transform automated customer service. Such technology should be able to function as if it not only understands the words but also the problems and goals of a customer on the phone, then perform multi-step actions to arrive at a solution.

Agentic AI systems can do all kinds of things a lower-level employee might do: qualify sales leads, do initial outreach for sales calls, automate fraud detection and loan-application processing at a bank, autonomously screen job candidates, and even conduct initial interviews, among other tasks.

Agentic AI should be able to achieve very large-scale goals as well — optimize supply chains and distribution networks, manage inventory, optimize delivery routes, reduce operating costs, and more.

The risk of agentic AI

Let’s start with the basics. The idea of AI that can operate “creatively” and autonomously — capable of doing things across sites, platforms and networks, directed by a human-created prompt with limited human oversight — is obviously problematic.

Let’s say a salesperson directs agentic AI to set up a meeting with a hard-to-reach potential client. The AI understands the goal and has vast information about how actual humans do things, but no moral compass and no explicit direction to conduct itself ethically.  

One way to reach that client (based on the behavior of real humans in the real world) could be to send an email, tricking the person into clicking on self-executing malware, which would open a trojan on the target’s system to be used for exfiltrating all personal data and using that data to find out where that person would be at a certain time. The AI could then place a call to that location, and say there’s an emergency. The target would then take the call, and the AI would try to set up a meeting.

This is just one small example of how an agentic AI without coded or prompted ethics could do the wrong thing. The possibilities for problems are endless. 

Agentic AI could be so powerful and capable that there’s no way this ends well without a huge effort on the development and maintenance of AI governance frameworks that include guidelines, safety measures, and constant oversight by well-trained people.

Note: The rise of LLMs, starting with ChatGPT, engendered fears that AI could take jobs away from people; agentic AI is the technology that could really do that at scale.

The worst-case scenario would be for millions of people to be let go and replaced by agentic AI. The best case is that the technology on its own would remain inferior to a human partnering with it. With such a tool, human work could be made far more efficient and less error-prone.

I’m pessimistic that agentic AI can benefit humanity if the ethical considerations remain completely in the hands of Silicon Valley tech bros, investors, and AI technologists. We’ll need to combine expertise from AI, ethics, law, academia, and specific industry domains and move cautiously into the era of agentic AI.

It’s reasonable to feel both thrilled by the promise of agentic AI and terrified about the potential negative effects. One thing is certain: It’s time to pay attention to this emerging technology. 

With a giant, ambitious, capable, and aggressive company like Amazon making moves to lead in agentic AI, there’s no ignoring it any longer. 

More by Mike Elgan:

Category: Hacking & Security

Renegade business units trying out genAI will destroy the enterprise before they help

Computerworld.com [Hacking News] - 10 hours 9 min ago

One of the more tired cliches in IT circles refers to “Cowboy IT” or “Wild West IT,” but it’s the most appropriate way to describe enterprise generative AI (genAI) efforts these days. As much as IT is struggling to keep on top of internal genAI efforts, the biggest danger today involves various business units globally creating or purchasing their very own experimental AI efforts.

We’ve talked extensively about Shadow AI (employees/contractors purchasing AI tools outside of proper channels) and Sneaky AI (longtime vendors silently adding AI features into systems without telling anyone). But Cowboy AI is perhaps the worst of the bunch, because no one can get into trouble for it. Most boards and CEOs are openly encouraging all business units to experiment with genAI and see what enterprise advantages they can unearth.

The nightmare is that almost none of those line of business (LOB) teams understand how much they are putting the enterprise at risk. Uncontrolled and unmanaged, genAI apps are absolutely dangerous.

Longtime Gartner analyst Avivah Litan (whose official title these days is Distinguished VP Analyst) wrote on LinkedIn recently about the cybersecurity dangers from these kinds of genAI efforts. Although her points were intended for security talent, the problems she describes are absolutely a bigger problem for IT.

“Enterprise AI is under the radar of most Security Operations, where staff don’t have the tools required to protect use of AI,” she wrote. “Traditional Appsec tools are inadequate when it comes to vulnerability scans for AI entities. Importantly, Security staff are often not involved in enterprise AI development and have little contact with data scientists and AI engineers. Meanwhile, attackers are busy uploading malicious models into Hugging Face, creating a new attack vector that most enterprises don’t bother to look at. 

“Noma Security reported they just detected a model a customer had downloaded that mimicked a well-known open-source LLM model. The attacker added a few lines of code that caused a forward function. Still, the model worked perfectly well, so the data scientists didn’t suspect anything. But every input to the model and every output from the model were also sent to the attacker, who was able to extract it all. Noma also discovered thousands of infected data science notebooks. They recently found a keylogging dependency that logged all activities on their customer’s Jupyter notebooks. The keylogger sent the captured activity to an unknown location, evading Security which didn’t have the Jupyter notebooks in its sights.”

IT leaders: How many of the phrases above sound a little too familiar? 

Your team “often not involved in enterprise AI development and have little contact with data scientists and AI engineers?” Bad guys “creating a new attack vector that most enterprises don’t bother to look at?” Or maybe “the model worked perfectly well so the data scientists didn’t suspect anything. But every input to the model and every output from the model were also sent to the attacker, who was able to extract it all” or a manipulated external app which your IT team “didn’t have in its sights?”
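One modest defense against the swapped-model scenario described above is to pin and verify the digest of every downloaded model artifact before loading it. A minimal sketch (the file name and pinned digest below are placeholders, not a real registry):

```python
import hashlib

# Minimal supply-chain check: refuse to load a downloaded model artifact
# unless its SHA-256 digest matches one pinned out-of-band. The entry
# below is a placeholder for illustration.

PINNED_DIGESTS = {
    "model.bin": "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_file(path: str, chunk: int = 1 << 20) -> str:
    """Stream the file so large model weights don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify_artifact(path: str, name: str) -> str:
    """Return the path only if the on-disk digest matches the pinned one."""
    digest = sha256_file(path)
    if digest != PINNED_DIGESTS.get(name):
        raise RuntimeError(f"{name}: digest mismatch ({digest}), refusing to load")
    return path
```

A digest check would not have caught the keylogging dependency Litan mentions, but it does stop the quieter failure mode of a look-alike model file that “works perfectly well” while exfiltrating data.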

Some enterprises have debated creating a new AI executive, but that’s unlikely to help. It will more than likely be an executive with lots of responsibilities, far too little budget, and no actual authority to get any business unit to comply with the AI chief’s edicts. It’s sort of like many CISOs today: a toothless manager, but with even more headaches.

The better answer is to use the best power in the world to force LOB executives to take AI efforts seriously: make it an HR-approved criterion for their annual bonus. Put massive financial penalties on any problems that result from AI efforts their unit undertakes. (Paycheck hits get their attention because it is literally money out of their pockets.) Then add a caveat: if IT approves the effort in writing, the unit is fully blameless for anything bad that later happens.

Magically, getting IT signoff becomes important to those LOB leaders. Then and only then, the CIO will have the clout to protect the company from errant AI.

Another possible outcome of this carrot-stick approach is that business execs will still want to maintain control and will instead hire AI experts for their units directly. That works, too. 

The cost of trying out many of these genAI efforts — especially for a relatively short time — is often negligible. That can be bad because it makes it easy for LOB workers to underestimate the risks to the business that they are accepting. 

The potential of genAI is unlimited and exciting, but if strict rules aren’t put in place right away, it could well destroy a business before it has a chance to help. 

Yippee-ki-yay, CIO.

Category: Hacking & Security

Singapore Banks to Phase Out OTPs for Online Logins Within 3 Months

The Hacker News - 12 hours 50 min ago
Retail banking institutions in Singapore have three months to phase out the use of one-time passwords (OTPs) for authentication when signing into online accounts, to mitigate the risk of phishing attacks. The decision was announced by the Monetary Authority of Singapore (MAS) and The Association of Banks in Singapore (ABS) on July 9, 2024. "Customers who have activated their digital…
Category: Hacking & Security

New HardBit Ransomware 4.0 Uses Passphrase Protection to Evade Detection

The Hacker News - 14 hours 59 min ago
Cybersecurity researchers have shed light on a new version of a ransomware strain called HardBit that comes packaged with new obfuscation techniques to deter analysis efforts. "Unlike previous versions, HardBit Ransomware group enhanced the version 4.0 with passphrase protection," Cybereason researchers Kotaro Ogino and Koshi Oyama said in an analysis. "The passphrase needs to be provided during…
Category: Hacking & Security

Navigating the Cybersecurity Maze: Advanced Linux Security Practices for Professionals

LinuxSecurity.com - 13 July 2024 - 13:00
As cyber threats rapidly advance, Linux administrators and InfoSec professionals are essential defenders against increasingly sophisticated attacks. As protectors of critical infrastructure and sensitive data, these experts must implement a wide array of security practices tailored specifically to their unique challenges.
Category: Hacking & Security

Open Source Vulnerability Assessment Tools & Scanners

LinuxSecurity.com - 13 July 2024 - 13:00
Computer systems, software, applications, and other interfaces are vulnerable to network security threats. Failure to find these cybersecurity vulnerabilities can lead to the downfall of a company. Therefore, businesses must regularly run vulnerability scanners across their systems and servers to identify existing loopholes and weaknesses that can be resolved through security patching.
Category: Hacking & Security

File Explorer in Windows 11 can create 7Z and TAR archives. The new features arrive in a mandatory servicing update

Zive.cz - security - 13 July 2024 - 12:45
  • Microsoft released servicing updates for Windows on the evening of July 9
  • File Explorer in Windows 11 supports creating 7Z and TAR archives
  • The development team fixed 139 security vulnerabilities
Category: Hacking & Security

AT&T Confirms Data Breach Affecting Nearly All Wireless Customers

The Hacker News - 13 July 2024 - 07:51
American telecom service provider AT&T has confirmed that threat actors managed to access data belonging to "nearly all" of its wireless customers as well as customers of mobile virtual network operators (MVNOs) using AT&T's wireless network. "Threat actors unlawfully accessed an AT&T workspace on a third-party cloud platform and, between April 14 and April 25, 2024, exfiltrated…
Category: Hacking & Security

For July, Microsoft’s Patch Tuesday update fixes four zero-day flaws

Computerworld.com [Hacking News] - 12 July 2024 - 21:00

Microsoft released 132 updates in its July Patch Tuesday release while addressing four zero-days (CVE-2024-35264, CVE-2024-37985, CVE-2024-38080 and CVE-2024-38112) affecting Windows desktop, Microsoft .NET and Visual Studio. This is a very significant patch cycle for Microsoft SQL Server, but there are no updates for Microsoft browsers and only a low-profile set of patches for Microsoft Office. No major revisions require attention, with testing focused squarely on SQL-dependent applications. 

The team at Readiness has provided a useful infographic detailing the risks with each of the updates this cycle. 

Known issues 

Each month, Microsoft publishes a list of known issues included in its latest release, including two reported minor issues:

  • After you install KB5034203 (dated 01/23/2024) or later updates, some Windows devices that use DHCP Option 235 to discover Microsoft Connected Cache (MCC) nodes in their network might be unable to use those nodes. Microsoft offers two options to mitigate the issue: setting the Cache Hostname or using group policies. Microsoft is still working on a resolution.
  • Context menus and dialog buttons in some Windows apps, or parts of the Windows OS user interface (UI), might display in English when English is not set as the display language. This might also affect font size.

We fully expect to see more issues relating to how the Windows UI is presented over the coming months as Microsoft works through core-level issues with the new ARM builds. Even non-ARM builds will be affected (see CVE-2024-37985). Look out for input method editor, language pack, and dialog box language issues on non-English builds.

Major revisions 

This Patch Tuesday saw Microsoft publish the following major revisions to past security and feature updates:

  • CVE-2024-30098: Windows Cryptographic Services Security Feature Bypass. Microsoft has added a FAQ to explain how this vulnerability is being addressed and the further actions customers must take to be protected from it. This is an informational change only; no further action is required.
Mitigations and workarounds

Microsoft published the following vulnerability-related mitigations for this month’s release cycle: 

Each month, the Readiness team analyses the latest Patch Tuesday updates and provides detailed, actionable testing guidance based on assessing a large application portfolio and a detailed analysis of the patches and their potential impact on the Windows platforms and app installations.

For this cycle, we have grouped the critical updates and required testing efforts into different functional areas:

Microsoft Office
  • Test out your Teams logins (which shouldn’t take too long).
  • Because SharePoint was updated, third-party extensions or dependencies will require testing.
  • Due to the change in Outlook, Internet Calendars (ICS files) will require testing.
  • With the Visio update, large CAD drawings will require a basic import and load test.
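A quick way to exercise the Outlook/Internet Calendars item above is a scripted ICS round trip: write a calendar file to disk, read it back, and confirm the structure Outlook depends on is intact. This is a minimal pure-Python sketch; the event details, UID and the `ics_roundtrip_ok` helper are hypothetical, not part of any Microsoft tooling.

```python
# Minimal ICS smoke test: write a calendar entry to disk, read it back,
# and check that the structural properties survive the round trip.
# The event content below is made up; any valid VEVENT would do.
import os
import tempfile

ICS_BODY = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//PatchTest//EN",
    "BEGIN:VEVENT",
    "UID:patch-test-1@example.com",
    "DTSTART:20240715T090000Z",
    "DTEND:20240715T100000Z",
    "SUMMARY:Post-patch ICS import check",
    "END:VEVENT",
    "END:VCALENDAR",
])

def ics_roundtrip_ok(text: str) -> bool:
    """Write the ICS text to a temp file, re-read it, verify key properties."""
    with tempfile.NamedTemporaryFile("w", suffix=".ics", delete=False,
                                     newline="") as f:
        f.write(text)
        path = f.name
    try:
        with open(path, newline="") as f:
            data = f.read()
    finally:
        os.unlink(path)
    required = ("BEGIN:VCALENDAR", "BEGIN:VEVENT", "DTSTART", "END:VCALENDAR")
    return all(tag in data for tag in required)

print(ics_roundtrip_ok(ICS_BODY))  # True
```

In practice you would follow the scripted check by importing the same file into a patched Outlook client and confirming the event renders.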
Microsoft .NET and developer tools

Microsoft has updated Microsoft .NET, the MSI Installer and Visual Studio, and offers the following testing guidance:

  • PowerShell updates will require a diagnostics test. Try the command “Import-Module Microsoft.PowerShell.Diagnostics -Verbose” and validate that you are getting the correct results from your home directory.
  • Due to the change in the Windows core installation technology (MSI), please validate that User Account Control (UAC) still functions as expected.
Microsoft SQL Server

This month is a big update for both Microsoft SQL Server and the local, or workstation-level, supporting elements of OLE. The primary focus for this kind of complex effort should be your line-of-business or core applications: those with multiple data connections that rely on complex, multi-object/session requirements. Due to the changes this month, we can’t recommend specific Windows feature testing regimes, as we are most concerned that the business logic (and resulting data) of the application in question might be affected. Only you will know what looks good; we advise a comparative testing regime across unpatched and newly patched systems, looking for data disparities.
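The comparative regime can be as simple as running the same query against an unpatched and a newly patched server and diffing the result sets. The sketch below is a hypothetical pure-Python illustration: the sample rows stand in for query results, which in practice you would fetch with a driver such as pyodbc.

```python
# Sketch of a comparative data-disparity check between an unpatched
# (baseline) and a newly patched SQL Server. The rows are stand-ins;
# real rows would come from identical queries against both servers.
from collections import Counter

def result_disparities(baseline_rows, patched_rows):
    """Return (missing, added): rows whose multiplicity differs between
    the two result sets. Counter-based so duplicate rows compare correctly."""
    base, patched = Counter(baseline_rows), Counter(patched_rows)
    missing = list((base - patched).elements())  # rows lost after patching
    added = list((patched - base).elements())    # rows that newly appear
    return missing, added

baseline = [(1, "alice", 100.0), (2, "bob", 250.5), (2, "bob", 250.5)]
patched = [(1, "alice", 100.0), (2, "bob", 250.5), (3, "carol", 75.0)]

missing, added = result_disparities(baseline, patched)
print(missing)  # [(2, 'bob', 250.5)]  -- a duplicate row was dropped
print(added)    # [(3, 'carol', 75.0)]
```

An empty result from both lists for every core query is the signal that the patched system’s business logic still produces the same data.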

Windows

Microsoft made another update to the Win32 and GDI subsystems with a recommendation to test out a significant portion of your application portfolio. We also recommend that you test the following functional areas in the Windows platform:

  • File compression has been updated, so file and archive extraction scenarios will need to be exercised.
  • Due to the Microsoft codec updates, perform a system reboot and test that your audio and camera still work together.
  • Security updates will require the testing of the creation of new Windows certificates.
  • Networking changes will require a test of DNS and DHCP, specifically the DHCP R_DhcpAddSubnetElement API. As part of these changes, testing VPN authentication will be required. Try to include your Network Policy Server (NPS) as part of the connection creation and deletion effort.
  • This month’s update to Remote Desktop Services (RDS) will require the creation and revocation of license requests.
  • A significant update to the Network Driver Interface Specification (NDIS) will require testing of network traffic involving repeated bursts of large files. Try using Teams while this networking burst testing is in progress.
  • Backup and printing have been updated, so test your volumes and ensure that when you print out a test page, your OS does not crash (yes, really). Try printing out TIFF files. (Hey, you might like it.)
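The file-compression item in the list above lends itself to an automated smoke test: create an archive, extract it, and verify the payload survives. This is a hedged sketch using Python’s zipfile and tarfile modules as stand-ins for whichever tooling you use to exercise the updated OS components; the file names and payload are arbitrary.

```python
# Smoke test for the updated compression paths: create, extract, and
# verify both a ZIP and a TAR archive. All names and contents are made up.
import os
import tarfile
import tempfile
import zipfile

def archive_roundtrip_ok() -> bool:
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "payload.txt")
        with open(src, "w") as f:
            f.write("patch verification payload")

        # ZIP round trip
        zpath = os.path.join(tmp, "test.zip")
        with zipfile.ZipFile(zpath, "w") as z:
            z.write(src, arcname="payload.txt")
        with zipfile.ZipFile(zpath) as z:
            z.extract("payload.txt", os.path.join(tmp, "zip_out"))

        # TAR round trip
        tpath = os.path.join(tmp, "test.tar")
        with tarfile.open(tpath, "w") as t:
            t.add(src, arcname="payload.txt")
        with tarfile.open(tpath) as t:
            t.extract("payload.txt", os.path.join(tmp, "tar_out"))

        expected = "patch verification payload"
        return all(
            open(os.path.join(tmp, out, "payload.txt")).read() == expected
            for out in ("zip_out", "tar_out")
        )

print(archive_roundtrip_ok())  # True on a healthy system
```

Running the same script before and after patching gives you a like-for-like comparison of the extraction scenarios Microsoft flags.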

As part of the ongoing effort to support the new ARM architecture, Microsoft released the first patch for this new platform, CVE-2024-37985. This is an Intel-assigned, processor-level vulnerability that has been mitigated by a Microsoft OS-level patch. The Readiness team has provided guidance on potential ARM-related compatibility and testing issues. 

Specifically, the Readiness team was concerned with Input Method Editors (IMEs). We suggest a full test cycle of Windows input-related features such as keyboard, mouse, touch, pen, gesture, and dictation. Some internet shortcuts, as well as wallpapers, might be affected.

Windows lifecycle update 

This section contains important changes to servicing (and most security updates) to Windows desktop and server platforms.

  • Home and Pro editions of Windows 11, version 22H2 will reach end of service on Oct. 8, 2024. Until then, these editions will only receive security updates. They will no longer receive non-security, preview updates.

Each month, we break down the update cycle into product families (as defined by Microsoft) with the following basic groupings: 

  • Browsers (Microsoft IE and Edge);
  • Microsoft Windows (both desktop and server);
  • Microsoft Office;
  • Microsoft Exchange Server;
  • Microsoft development platforms (ASP.NET Core, .NET Core and Chakra Core);
  • Adobe (if you get this far).
Browsers

Microsoft did not release any updates for its non-Chromium browsers. Following the stable channel release of Chrome (applicable until July 25, 2024), we have not seen any changes, deprecations, or testing profile updates to this browser. No further action is required.

Windows

Microsoft released four critical and 83 updates rated as important with two zero-day patches (CVE-2024-38080 and CVE-2024-38112) affecting the Microsoft Hyper-V and MSHTML feature groups, respectively. In addition to these critical updates, Microsoft patches for July affect the following Windows feature groups:

  • Windows NTLM, Kernel, GDI and Graphics;
  • Windows Backup;
  • Windows Codecs;
  • Microsoft Hyper-V;
  • Windows (Line) Print and Fax;
  • Windows Remote Desktop and Gateway;
  • Windows Secure Boot and Enrollment Manager.

Add these Windows updates to your Patch Now release cycle.

Microsoft Office 

Microsoft returns to form with a critical update for Office this month (CVE-2024-38023) for the SharePoint platform. We have another update for Outlook related to spoofing (CVE-2024-38020), but this vulnerability is not wormable and requires user interaction. There are four more, lower-rated updates; please add all of these updates to your standard release schedule.

Microsoft SQL (nee Exchange) Server 

There were no updates for Microsoft Exchange Server this month. However, we have seen the largest release of Microsoft SQL updates in the past few years. These SQL-related updates cover 37 separate reported vulnerabilities (CVEs) and the following main product features:

  • SQL Server Native Client OLE DB Provider;
  • Microsoft OLE DB Driver for SQL.

We covered the testing requirements for this SQL update in our testing guidance section above. This month’s SQL updates will require some preparation and dedicated testing before adding to your standard release schedule.

Microsoft development platforms 

Microsoft released four low-profile updates to the Microsoft .NET and Visual Studio platforms. We do not expect serious testing requirements for these vulnerabilities. However, CVE-2024-35264 has been reported by Microsoft as publicly disclosed, which makes it an unusually urgent patch for Microsoft Visual Studio, attracting a “Patch Now” rating this month.

Adobe Reader (and other third-party updates) 

Much as our Microsoft Exchange section was “hijacked” by SQL Server updates this month, we’re using the Adobe section for third-party updates. (There are no updates to Adobe Reader.) 

  • CVE-2024-3596: NPS RADIUS Server. A vulnerability exists in the RADIUS protocol that potentially affects many products and implementations of RFC 2865 in the UDP version of the protocol. 
  • CVE-2024-38517 and CVE-2024-39684: GitHub Active Directory Rights Management. The vulnerability assigned to these CVEs is in the RapidJSON library, which is consumed by the Microsoft Active Directory Rights Management Services Client, hence their inclusion with this update.
  • CVE-2024-37985: This memory-related update from Intel relates to its prefetcher technology. Affected: core Windows OS memory-related components, particularly the new ARM builds, which I find both confusing and ironic.
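For context on the RADIUS item: RFC 2865 authenticates server replies with nothing stronger than an MD5 hash over the packet fields, the request authenticator, and the shared secret, which is why an MD5 collision attack against the protocol is viable. The sketch below computes that Response Authenticator for a bare Access-Accept packet; the identifier, authenticator bytes, and secret are made-up illustrative values.

```python
# Response Authenticator per RFC 2865 section 3:
#   MD5(Code + Identifier + Length + RequestAuth + Attributes + Secret)
# Illustrates why the protocol's integrity rests entirely on MD5.
import hashlib
import struct

def response_authenticator(code: int, identifier: int, attributes: bytes,
                           request_auth: bytes, secret: bytes) -> bytes:
    """Compute the 16-byte MD5 Response Authenticator for a RADIUS reply."""
    length = 20 + len(attributes)  # fixed RADIUS header is 20 bytes
    header = struct.pack("!BBH", code, identifier, length)
    return hashlib.md5(header + request_auth + attributes + secret).digest()

request_auth = bytes(range(16))  # illustrative 16-byte Request Authenticator
auth = response_authenticator(
    code=2,                      # 2 = Access-Accept
    identifier=42,
    attributes=b"",              # no attributes in this minimal example
    request_auth=request_auth,
    secret=b"shared-secret",
)
print(len(auth))  # 16
```

Because nothing else protects the reply, forging a colliding MD5 input lets an on-path attacker turn an Access-Reject into an Access-Accept, which is the essence of the CVE-2024-3596 attack.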

Read Greg Lambert‘s 2024 Patch Tuesday reports:

Category: Hacking & Security