RSS Aggregator

Initial Wayland support for IntelliJ IDEA

AbcLinuxu [news] - 9 July 2024 - 17:23
JetBrains' IntelliJ-based IDEs will be able to use Wayland natively. The support arrives in version 2024.2 EAP (Early Access Program).
Category: GNU/Linux & BSD

Privacy filters are becoming popular again. They are the first obstacle between a phone and a potential thief

Živě.cz - 9 July 2024 - 17:15
Privacy filters are coming back into fashion • They appeal mainly to young users of so-called Generation Z • They can be the first obstacle between your phone and a potential thief
Category: IT News

Firefox 128.0

AbcLinuxu [news] - 9 July 2024 - 17:13
The Mozilla Firefox web browser has been released in version 128.0. Release notes for enterprises are also available; this is a new Extended Support Release (ESR).
Category: GNU/Linux & BSD

The G309 Lightspeed gaming mouse is the cheapest Logitech mouse with wireless charging

Živě.cz - 9 July 2024 - 16:45
Logitech today introduced the new G309 Lightspeed gaming mouse. It went on sale immediately on the company's website for CZK 2,249 and should appear in other stores toward the end of the month. It will be available in black and white. It will please more demanding gamers who want a fairly light mouse with a symmetrical shape (albeit with side buttons for ...
Category: IT News

Elexon's Insight into UK electricity felled by expired certificate

The Register - Anti-Virus - 9 July 2024 - 16:01
Understanding the power needs of the UK begins with knowing when renewals are due

Certificate Watch: Demonstrating that Microsoft is not alone in its inability to keep track of certificates is UK power market biz Elexon.…

Category: Viruses & Worms

This Enormous Computer Chip Beat the World’s Top Supercomputer at Molecular Modeling

Singularity HUB - 9 July 2024 - 16:00

Computer chips are a hot commodity. Nvidia is now one of the most valuable companies in the world, and the Taiwanese manufacturer of Nvidia’s chips, TSMC, has been called a geopolitical force. It should come as no surprise, then, that a growing number of hardware startups and established companies are looking to take a jewel or two from the crown.

Of these, Cerebras is one of the weirdest. The company makes computer chips the size of tortillas bristling with just under a million processors, each linked to its own local memory. The processors are small but lightning quick as they don’t shuttle information to and from shared memory located far away. And the connections between processors—which in most supercomputers require linking separate chips across room-sized machines—are quick too.

This means the chips are stellar for specific tasks. Recent preprint studies in two of these—one simulating molecules and the other training and running large language models—show the wafer-scale advantage can be formidable. The chips outperformed Frontier, the world’s top supercomputer, in the former. They also showed a stripped down AI model could use a third of the usual energy without sacrificing performance.

Molecular Matrix

The materials we make things with are crucial drivers of technology. They usher in new possibilities by breaking old limits in strength or heat resistance. Take fusion power. If researchers can make it work, the technology promises to be a new, clean source of energy. But liberating that energy requires materials to withstand extreme conditions.

Scientists use supercomputers to model how the metals lining fusion reactors might deal with the heat. These simulations zoom in on individual atoms and use the laws of physics to guide their motions and interactions at grand scales. Today’s supercomputers can model materials containing billions or even trillions of atoms with high precision.

But while the scale and quality of these simulations has progressed a lot over the years, their speed has stalled. Due to the way supercomputers are designed, they can only model so many interactions per second, and making the machines bigger only compounds the problem. This means the total length of molecular simulations has a hard practical limit.

Cerebras partnered with Sandia, Lawrence Livermore, and Los Alamos National Laboratories to see if a wafer-scale chip could speed things up.

The team assigned a single simulated atom to each processor. So that the atoms could quickly exchange information about their position, motion, and energy, the processors modeling atoms that would be physically close in the real world were neighbors on the chip too. Depending on their properties at any given time, atoms could hop between processors as they moved about.

The team modeled 800,000 atoms in three materials—copper, tungsten, and tantalum—that might be useful in fusion reactors. The results were pretty stunning, with simulations of tantalum yielding a 179-fold speedup over the Frontier supercomputer. That means the chip could crunch a year’s worth of work on a supercomputer into a few days and significantly extend the length of simulation from microseconds to milliseconds. It was also vastly more efficient at the task.

“I have been working in atomistic simulation of materials for more than 20 years. During that time, I have participated in massive improvements in both the size and accuracy of the simulations. However, despite all this, we have been unable to increase the actual simulation rate. The wall-clock time required to run simulations has barely budged in the last 15 years,” Aidan Thompson of Sandia National Laboratories said in a statement. “With the Cerebras Wafer-Scale Engine, we can all of a sudden drive at hypersonic speeds.”

Although the chip increases modeling speed, it can’t compete on scale. The number of simulated atoms is limited to the number of processors on the chip. Next steps include assigning multiple atoms to each processor and using new wafer-scale supercomputers that link 64 Cerebras systems together. The team estimates these machines could model as many as 40 million tantalum atoms at speeds similar to those in the study.

AI Light

While simulating the physical world could be a core competency for wafer-scale chips, they’ve always been focused on artificial intelligence. The latest AI models have grown exponentially, meaning the energy and cost of training and running them has exploded. Wafer-scale chips may be able to make AI more efficient.

In a separate study, researchers from Neural Magic and Cerebras worked to shrink the size of Meta’s 7-billion-parameter Llama language model. To do this, they made what’s called a “sparse” AI model where many of the algorithm’s parameters are set to zero. In theory, this means they can be skipped, making the algorithm smaller, faster, and more efficient. But today’s leading AI chips—called graphics processing units (or GPUs)—read algorithms in chunks, meaning they can’t skip every zeroed out parameter.

Because memory is distributed across a wafer-scale chip, it can read every parameter and skip zeroes wherever they occur. Even so, extremely sparse models don’t usually perform as well as dense models. But here, the team found a way to recover lost performance with a little extra training. Their model maintained performance—even with 70 percent of the parameters zeroed out. Running on a Cerebras chip, it sipped a meager 30 percent of the energy and ran in a third of the time of the full-sized model.
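To make the zero-skipping idea concrete, here is a minimal sketch (illustrative only, not Neural Magic's or Cerebras' actual code) that prunes 70 percent of a weight matrix and counts how many multiply-adds a sparsity-aware pass actually needs compared with a dense one:

import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
weights = rng.standard_normal((4096, 4096))

# Zero out roughly 70% of the parameters, mirroring the pruned Llama experiment.
weights[rng.random(weights.shape) < 0.70] = 0.0
sparse_weights = csr_matrix(weights)

x = rng.standard_normal(4096)
assert np.allclose(weights @ x, sparse_weights @ x)   # same result either way

dense_ops = weights.size          # a chunk-reading pass still touches every weight
sparse_ops = sparse_weights.nnz   # a zero-skipping pass touches only the ~30% that remain
print(f"{sparse_ops / dense_ops:.0%} of the dense multiply-adds are actually needed")

A GPU reading weights in fixed-size chunks still fetches and multiplies the zeros; per-processor memory on the wafer is what lets the hardware turn that arithmetic saving into real energy and time savings.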

Wafer-Scale Wins?

While all this is impressive, Cerebras is still niche. Nvidia’s more conventional chips remain firmly in control of the market. At least for now, that appears unlikely to change. Companies have invested heavily in expertise and infrastructure built around Nvidia.

But wafer-scale may continue to prove itself in niche, but still crucial, applications in research. And the approach may become more common overall. The ability to make wafer-scale chips is only now being perfected. In a hint at what's to come for the field as a whole, the biggest chipmaker in the world, TSMC, recently said it's building out its wafer-scale capabilities. This could make the chips more common and capable.

For their part, the team behind the molecular modeling work say wafer-scale’s influence could be more dramatic. Like GPUs before them, adding wafer-scale chips to the supercomputing mix could yield some formidable machines in the future.

“Future work will focus on extending the strong-scaling efficiency demonstrated here to facility-level deployments, potentially leading to an even greater paradigm shift in the Top500 supercomputer list than that introduced by the GPU revolution,” the team wrote in their paper.

Image Credit: Cerebras

Category: Transhumanism

Evolve Bank & Trust confirms LockBit stole 7.6 million people's data

The Register - Anti-Virus - 9 July 2024 - 15:52
Making cyberattack among the largest ever recorded in finance industry

Evolve Bank & Trust says the data of more than 7.6 million customers was stolen during the LockBit break-in in late May, per a fresh filing with Maine's attorney general.…

Category: Viruses & Worms

Boeing has admitted guilt for two tragic crashes of its 737 Max aircraft that claimed 346 lives

Živě.cz - 9 July 2024 - 15:45
One of the world's largest aircraft manufacturers, America's Boeing, has agreed to an out-of-court guilty plea connected with two tragic crashes of its 737 MAX aircraft. A representative of the US government announced the agreement in a July 7 filing with a district court. Details are covered by Ars ...
Category: IT News

AI managing AI that is monitoring AI: What could possibly go wrong?

Computerworld.com [Hacking News] - 9 July 2024 - 15:33

If IT leaders were in a statistical analysis class, many would be in a lot of trouble. If students were given a very low-reliability element and told to pair it with another low-reliability element, a good student would know that the error rate — the risk of bad data results — would get higher. Quite likely much higher.
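A quick back-of-the-envelope calculation, with made-up reliability numbers purely for illustration, shows why stacking imperfect components expands the ways a pipeline can fail:

# Hypothetical, illustrative reliabilities -- not measurements of any real model.
p_generator_ok = 0.90   # probability the generating model is right
p_checker_ok = 0.90     # probability the checking model judges correctly

both_ok = p_generator_ok * p_checker_ok   # 0.81 if the failures are independent
print(f"P(both components behave correctly) = {both_ok:.2f}")
print(f"P(at least one component errs)      = {1 - both_ok:.2f}")

Whether the checker nets out as a gain depends on how its errors correlate with the generator's, which is the question the rest of this piece turns on.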

And yet, some tech leaders seem fine with the idea of combatting generative AI’s bad data — a.k.a. hallucinations — by marrying different genAI programs. Even worse, they are now embracing the idea of using genAI to monitor/manage other genAI as a way to negate hallucinations. Math doesn’t work that way.

Consider: OpenAI recently launched a genAI program to try and identify errors made by other genAI programs. “We’ve trained a model, based on GPT-4, called CriticGPT to catch errors in ChatGPT’s code output. We found that when people get help from CriticGPT to review ChatGPT code, they outperform those without help 60% of the time,” the company wrote in a post announcing the new app.

OpenAI predicts that hallucinations are likely to become harder for humans to find. The company talks about the limits of its Reinforcement Learning from Human Feedback (RLHF) approach, in which human AI trainers evaluate ChatGPT responses. 

“As we make advances in reasoning and model behavior, ChatGPT becomes more accurate and its mistakes become more subtle. This can make it hard for AI trainers to spot inaccuracies when they do occur, making the comparison task that powers RLHF much harder,” OpenAI wrote. “This is a fundamental limitation of RLHF, and it may make it increasingly difficult to align models as they gradually become more knowledgeable than any person that could provide feedback.”

This is consistent with many other reports on genAI efforts, which suggest that, despite what experienced IT folk have come to expect from software (namely, that software gets generally better as it goes through updates), hallucinations are likely to get worse.

“Worse” in this context is a complex word. The hallucinations may not necessarily become more frequent and/or the lies genAI chatbots tell may not become more outlandish. But “worse” in that they will become more nuanced, making it more likely that humans won’t catch them. That is a legitimate problem.

That said, it’s not at all certain that throwing more genAI at this problem will help as much as it will create more problems.

OpenAI’s argument is not that the software will work on its own, but that this new genAI software will train humans to be better at spotting hallucinations created by a different genAI program. 

“CriticGPT’s suggestions are not always correct, but we find that they can help trainers to catch many more problems with model-written answers than they would without AI help,” the company wrote. “Additionally, when people use CriticGPT, the AI augments their skills, resulting in more comprehensive critiques than when people work alone, and fewer hallucinated bugs than when the model works alone.”

And therein lies the logic problem here. One of the criticisms of generative AI is that it is terrific at mimicking humans but fails to actually understand humans. I’m reminded of a column I wrote more than a decade ago, about engineers creating a product that tests for true love. (It was an actual product: a Bluetooth bra that would unhook only when it detected true love. Really. To be clear, I am not officially suggesting that engineers are as bad at understanding human emotions as genAI. Not disputing it, but also not officially saying it.)

Getting back to genAI logic, the flawed assumption that OpenAI is making is that humans will continue checking their systems for lies. Humans are lazy, and human IT employees are overworked and under-resourced. The far more likely outcome is that humans will trust the AI-watching-AI more and more. That is where the real danger exists.

Another example of this “trust AI to find errors in other AI” comes from Morgan Stanley. In a CIO.com piece looking at Morgan Stanley’s recent genAI rollout, the CEO of another financial company spoke of using multiple genAI models to check on each other. 

Morgan Stanley wants to use genAI to create transcripts and summaries of its client meetings. What Aaron Cirksena, founder and CEO of MDRN Capital, suggested was that Morgan could also run transcripts and summaries from the genAI capabilities within Zoom, Google, Microsoft, or Apple — and then use yet another genAI program to compare the results and flag any informational conflicts. “How likely is it that both AI systems will get the same thing wrong?” Cirksena asked. 

It is a legitimate question. But so is the opposite question: How likely is it that one or more of these genAI programs will introduce more hallucinations into the process? What if the checker program hallucinates that there are no conflicts when there are? 

An even worse problem is if the checker app labels things as disconnects that are actually fine. Why is that worse? This brings us back to the human nature issue. The more hassles that the checker program delivers to humans, the less inclined they will be to use it or believe it. 

Consider mobile voice recognition today. Its accuracy is strong enough (often topping 99% and certainly topping 98%) that people are inclined to dictate a message and then send it. This has caused confusion and embarrassment. 

I recently crafted a reply where I told a colleague, “Fine. You can do that.” But the iPhone’s voice recognition heard the words “fine” and “you” next to each other and decided that the most likely F-word was a very different one. It was fine on the screen, so I hit Send and then it changed it to the “other” word and did indeed send. Apple, can you please block your system from changing a word after the message is proofread?

When voice recognition accuracy percentages were in the low 90s, mistakes were so common that people carefully checked. I fear the same disaster is going to hit with AI checking AI. Wonder what disasters that will deliver?

Category: Hacking & Security

Apple removes VPN apps in Russia; here’s what to do next

Computerworld.com [Hacking News] - 9 July 2024 - 15:15

Russia’s state communications watchdog, Roskomnadzor, has forced Apple to stop offering Virtual Private Network (VPN) apps via the App Store in Russia as that nation continues to censor internal dissent.

The regulator has already blocked access to dozens of VPNs in Russia, and Apple has now removed apps for 25 VPN services, including Proton VPN, Red Shield VPN, and Le VPN.

Millions in Russia use a VPN

Millions of people in or near conflict zones rely on VPNs to gain access to information that is not published via official channels. The number of Russians using such services spiked since the invasion of Ukraine, and adoption has not slowed. One VPN provider reports that Web traffic from nations with high degrees of censorship (including Russia) climbed an astounding 212% in 2023.

Russia doesn’t like its people avoiding censorship, which is why it forced Apple to remove the apps from its store. Some industry observers, including security consultant and Objective-See founder Patrick Wardle, have argued that if app sideloading were supported on iPhones, users might have options to download these apps elsewhere.

Apple isn’t the only big US tech firm to have acted against VPN apps in Russia. In 2022, Surfshark revealed that Google was forced to delist over 36,000 URLs that linked to VPN services from Russia.

A state of digital isolation

“While users on other operating systems can request mirror download links from VPN providers, it’s much trickier for iOS users who don’t want to jailbreak their devices to download the VPN apps that have been removed from the official store,” said Simon Migliano, head of research at Top10VPN.com. “It’s very disappointing to see Apple complying with the Russian authorities’ increasingly draconian crackdown on VPNs that pushes the country ever closer to digital isolation, cut off from the global internet.”

Apple is also just one component of a larger attack on VPN use in Russia. The UK Ministry of Defence (MoD) points out that the ban is “almost certainly intended to restrict the ability of Russian citizens to access independent Russian, and international media, as well as to simplify the ability of the security services to monitor Russian citizens.” 

The MoD also notes that simultaneously with the crackdown on VPN apps distributed in Russia, state authorities demanded telecom providers there end support for Voice over Internet Protocol (VoIP) telephony services.

It’s a pattern of repression, control, and erosion of communication that was ongoing in Russia even before the invasion of Ukraine. “While there are a shrinking number of VPNs still available in the Russian version of the App Store, fewer and fewer high-quality services remain, which means they are less likely to work as they lack the sophisticated traffic obfuscation offered by bigger brands,” said Migliano.

Apple says nothing

Apple has made no public comment on the removal so far. If it did, I imagine it would argue that failure to comply with the request could also threaten the interests of existing iPhone users in Russia, as it is possible Apple would be forced to cease processing software updates and other forms of tech support to customers there. This would make their devices vulnerable to attack by state-sponsored hackers. 

It is also worth noting that any current or former Apple employees in Russia might have been exposed to reprisals by Russian authorities had the company refused to comply.

How to (still) access VPNs in Russia

There are some ways people in Russia (or elsewhere) can still use VPNs on iPhones without an app, principally by using an additional device as a hotspot and a non-Russian VPN server. This requires changing your country in your Apple ID settings, so you can access another nation’s App Store. You might also need a non-Russian payment method.

“This should allow the installation of VPN apps that have been removed from the Russian app store. My advice would be to install several and cycle through them whenever they get blocked,” said Migliano.

“If you don’t already have a working VPN, it’s also possible to set up Tor on a non-iOS device that can act as a hotspot for connected mobile devices to access the App Store from international IP addresses. Currently, the best options for Russia are Astrill, PrivateVPN, and Windscribe, as they have the best connection success rate, despite the crackdown,” he added.

Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

Category: Hacking & Security

Developing and prioritizing a detection engineering backlog based on MITRE ATT&CK

Kaspersky Securelist - 9 July 2024 - 15:00

Detection is a traditional type of cybersecurity control, along with blocking, adjustment, administrative and other controls. Whereas before 2015 teams asked themselves what it was that they were supposed to detect, as MITRE ATT&CK evolved, SOCs were presented with practically unlimited space for ideas on creating detection scenarios.

With the number of scenarios becoming virtually unlimited, another question inevitably arises: “What do we detect first?” This and the fact that SOC teams forever play the long game, having to respond with limited resources to a changing threat landscape, evolving technology and increasingly sophisticated malicious actors, makes managing efforts to develop detection logic an integral part of any modern SOC’s activities.

The problem at hand is easy to put into practical terms: the bulk of the work done by any modern SOC – with the exception of certain specialized SOC types – is detecting, and responding to, information security incidents. Detection is directly associated with preparation of certain algorithms, such as signatures, hard-coded logic, statistical anomalies, machine learning and others, that help to automate the process. The preparation consists of at least two processes: managing detection scenarios and developing detection logic. These cover the life cycle, stages of development, testing methods, go-live, standardization, and so on. These processes, like any others, require certain inputs: an idea that describes the expected outcome at least in abstract terms.

This is where the first challenges arise: thanks to MITRE ATT&CK, there are too many ideas. The number of described techniques currently exceeds 200, and most are broken down into several sub-techniques – MITRE T1098 Account Manipulation, for one, contains six sub-techniques – while SOC’s resources are limited. Besides, SOC teams likely do not have access to every possible source of data for generating detection logic, and some of those they do have access to are not integrated with the SIEM system. Some sources can help with generating only very narrowly specialized detection logic, whereas others can be used to cover most of the MITRE ATT&CK matrix. Finally, certain cases require activating extra audit settings or adding selective anti-spam filtering. Besides, not all techniques are the same: some are used in most attacks, whereas others are fairly unique and will never be seen by a particular SOC team. Thus, setting priorities is both about defining a subset of techniques that can be detected with available data and about ranking the techniques within that subset to arrive at an optimized list of detection scenarios that enables detection control considering available resources and in the original spirit of MITRE ATT&CK: discovering only some of the malicious actor’s atomic actions is enough for detecting the attack.

A slight detour. Before proceeding to specific prioritization techniques, it is worth mentioning that this article looks at options based on tools built around the MITRE ATT&CK matrix. It assesses threat relevance in general, not in relation to specific organizations or business processes. Recommendations in this article can be used as a starting point for prioritizing detection scenarios. A more mature approach must include an assessment of a landscape that consists of security threats relevant to your particular organization, an allowance for your own threat model, an up-to-date risk register, and automation and manual development capabilities. All of this requires an in-depth review, as well as liaison between various processes and roles inside your SOC. We offer more detailed maturity recommendations as part of our SOC consulting services.

MITRE Data Sources

Optimized prioritization of the backlog as it applies to the current status of monitoring can be broken down into the following stages:

  • Defining available data sources and how well they are connected;
  • Identifying relevant MITRE ATT&CK techniques and sub-techniques;
  • Finding an optimal relation between source status and technique relevance;
  • Setting priorities.

A key consideration in implementing this sequence of steps is the possibility of linking information that the SOC receives from data sources to a specific technique that can be detected with that information. In 2021, MITRE completed its ATT&CK Data Sources project, its result being a methodology for describing a data object that can be used for detecting a specific technique. The key elements for describing data objects are:

  • Data Source: an easily recognizable name that defines the data object (Active Directory, application log, driver, file, process and so on);
  • Data Components: possible data object actions, statuses and parameters. For example, for a file data object, data components are file created, file deleted, file modified, file accessed, file metadata, and so on.

MITRE Data Sources

Virtually every technique in the MITRE ATT&CK matrix currently contains a Detection section that lists data objects and relevant data components that can be used for creating detection logic. A total of 41 data objects have been defined at the time of publishing this article.

MITRE most relevant data components

The column on the far right in the image above (Event Logs) illustrates the possibilities of expanding the methodology to cover specific events received from real data sources. Creating a mapping like this is not one of the ATT&CK Data Sources project goals. This Event Logs example is rather intended as an illustration. On the whole, each specific SOC is expected to independently define a list of events relevant to its sources, a fairly time-consuming task.
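As a purely illustrative sketch of such a mapping (the event IDs and component names below are assumptions to validate against your own telemetry, not part of the MITRE project), a SOC could start from a simple lookup table and extend it source by source:

# Hypothetical event-to-data-component lookup in the spirit of the Event Logs column above.
# Verify every entry against your own log sources before relying on it.
EVENT_TO_COMPONENT = {
    ("Windows Security", 4688): ("Process", "Process Creation"),
    ("Sysmon", 1):              ("Process", "Process Creation"),
    ("Sysmon", 3):              ("Network Traffic", "Network Connection Creation"),
    ("Windows Security", 4624): ("User Account", "User Account Authentication"),
    ("Windows Security", 4663): ("File", "File Access"),
}

def components_for(source: str, event_id: int):
    """Return the (data source, data component) pair an event maps to, if known."""
    return EVENT_TO_COMPONENT.get((source, event_id))

print(components_for("Sysmon", 1))   # ('Process', 'Process Creation')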

To optimize your approach to prioritization, you can start by isolating the most frequent data components that feature in most MITRE ATT&CK techniques.

The graph below presents the up-to-date top 10 data components for MITRE ATT&CK matrix version 15.1, the latest at the time of writing this.

The most relevant data components
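If you want to reproduce or refresh this ranking, one option (a sketch that assumes network access to the MITRE CTI repository and that x_mitre_data_sources keeps its current "Source: Component" string format) is to count components straight from the enterprise-attack STIX bundle:

import collections
import json
import urllib.request

# Enterprise ATT&CK STIX bundle as published in the MITRE CTI repository.
URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")

with urllib.request.urlopen(URL) as resp:
    bundle = json.load(resp)

counter = collections.Counter()
for obj in bundle["objects"]:
    if obj.get("type") != "attack-pattern":          # techniques only
        continue
    if obj.get("revoked") or obj.get("x_mitre_deprecated"):
        continue
    for component in obj.get("x_mitre_data_sources", []):   # e.g. "Process: Process Creation"
        counter[component] += 1

for component, technique_count in counter.most_common(10):
    print(f"{technique_count:4d}  {component}")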

For these data components, you can then determine which of your own sources will yield the most results. The following will be of help:

  • Expert knowledge and overall logic. Data objects and data components are typically informative enough for the engineer or analyst working with data sources to form an initial judgment on the specific sources that can be used.
  • Validation directly inside the event collection system. The engineer or analyst can review available sources and match events with data objects and data components.
  • Publicly available resources on the internet, such as Sensor Mappings to ATT&CK, a project by the Center for Threat-Informed Defense, or this excellent resource on Windows events: UltimateWindowsSecurity.

That said, most sources are fairly generic and typically connected when a monitoring system is implemented. In other words, the mapping can be reduced to selecting those sources which are connected in the corporate infrastructure or easy to connect.

The result is an unranked list of integrated data sources that can be used for developing detection logic, such as:

  • For Command Execution: OS logs, EDR, networked device administration logs and so on;
  • For Process Creation: OS logs, EDR;
  • For Network Traffic Content: WAF, proxy, DNS, VPN and so on;
  • For File Modification: DLP, EDR, OS logs and so on.

However, this list is not sufficient for prioritization. You also need to consider other criteria, such as:

  • The quality of source integration. Two identical data sources may be integrated with the infrastructure differently, with different logging settings, one source being located only in one network segment, and so on.
  • Usefulness of MITRE ATT&CK techniques. Not all techniques are equally useful in terms of optimization. Some techniques are more specialized and aimed at detecting rare attacker actions.
  • Detection of the same techniques with several different data sources (simultaneously). The more options for detecting a technique have been configured, the higher the likelihood that it will be discovered.
  • Data component variability. A selected data source may be useful for detecting not only those techniques associated with the top 10 data components but others as well. For example, an OS log can be used for detecting both Process Creation components and User Account Authentication components, a type not mentioned on the graph.

Prioritizing with DeTT&CT and ATT&CK Navigator

Now that we have an initial list of data sources available for creating detection logic, we can proceed to scoring and prioritization. You can automate some of this work with the help of DeTT&CT, a tool created by developers unaffiliated with MITRE to help SOCs with using MITRE ATT&CK for scoring and comparing the quality of data sources, coverage and detection scope according to MITRE ATT&CK techniques. The tool is available under the GPL-3.0 license.

DETT&CT supports an expanded list of data sources as compared to the MITRE model. This list is implemented by design and you do not need to redefine the MITRE matrix itself. The expanded model includes several data components, which are parts of MITRE’s Network Traffic component, such as Web, Email, Internal DNS, and DHCP.

You can install DETT&CT with the help of two commands: git clone and pip install -r. This gives you access to DETT&CT Editor: a web interface for describing data sources, and DETT&CT CLI for automated analysis of prepared input data that can help with prioritizing detection logic and more.
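For reference, and following the repository location published by the DeTT&CT maintainers (adjust paths and the Python environment to your own setup), the installation and editor launch look roughly like this:

git clone https://github.com/rabobank-cdc/DeTTECT
cd DeTTECT
pip install -r requirements.txt
python dettect.py editor

The editor mode serves the web interface locally; the ds, v and g modes used later in this article run from the same script.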

The first step in identifying relevant data sources is describing these. Go to Data Sources in DETT&CT Editor, click New file and fill out the fields:

  • Domain: the version of the MITRE ATT&CK matrix to use (enterprise, mobile or ICS).
  • Name: this field is not used in analytics; it is intended for distinguishing between files with the description of sources.
  • Systems: selection of platforms that any given data source belongs to. This helps to both separate platforms, such as Windows and Linux, and specify several platforms within one system. Going forward, keep in mind that a data source is assigned to a system, not a platform. In other words, if a source collects data from both Windows and Linux, you can leave one system with two platforms, but if one source collects data from Windows only, and another, from Linux only, you need to create two systems: one for Windows and one for Linux.

After filling out the general sections, you can proceed to analyzing data sources and mapping to the MITRE Data Sources. Click Add Data Source for each MITRE data object and fill out the relevant fields. Follow the link above for a detailed description of all fields and example content on the project page. We will focus on the most interesting field: Data quality. It describes the quality of data source integration as determined according to five criteria:

  • Device completeness. Defines infrastructure coverage by the source, such as various versions of Windows or subnet segments, and so on.
  • Data field completeness. Defines the completeness of data in events from the source. For example, information about Process Creation may be considered incomplete if we see that a process was created, but not the details of the parent process, or for Command Execution, we see the command but not the arguments, and so on.
  • Timeliness. Defines the presence of a delay between the event happening and being added to a SIEM system or another detection system.
  • Consistency. Defines the extent to which the names of the data fields in an event from this source are consistent with standard naming.
  • Retention. Compares the period for which data from the source is available for detection with the data retention policy defined for the source. For instance, data from a certain source is available for one month, whereas the policy or regulatory requirements define the retention period as one year.

A detailed description of the scoring system for filling out this field is available in the project description.

It is worth mentioning that at this step, you can describe more than just the top 10 data components that cover the majority of the MITRE ATT&CK techniques. Some sources can provide extra information: in addition to Process Creation, Windows Security Event Log provides data for User Account Authentication. This extension will help to analyze the matrix without limitations in the future.

After describing all the sources on the list defined earlier, you can proceed to analyze these with reference to the MITRE ATT&CK matrix.

The first and most trivial analytical report identifies the MITRE ATT&CK techniques that can be discovered with available data sources one way or another. This report is generated with the help of a configuration file with a description of data sources and DETT&CT CLI, which outputs a JSON file with MITRE ATT&CK technique coverage. You can use the following command for this:

python dettect.py ds -fd <data-source-yaml-dir>/<data-sources-file.yaml> -l

The resulting JSON is ready to be used with the MITRE ATT&CK matrix visualization tool, MITRE ATT&CK Navigator. See below for an example.

MITRE ATT&CK coverage with available data sources

This gives a literal answer to the question of what techniques the SOC can discover with the set of data sources that it has. The numbers in the bottom right-hand corner of some of the cells reflect sub-technique coverage by the data sources, and the colors, how many different sources can be used to detect the technique. The darker the color, the greater the number of sources.

DETT&CT CLI can also generate an XLSX file that you can conveniently use as the integration of existing sources evolves, a parallel task that is part of the data source management process. You can use the following command to generate the file:

python dettect.py ds -fd <data-source-yaml-dir>/<data-sources-file.yaml> -e

The next analytical report we are interested in assesses the SOC’s capabilities in terms of detecting MITRE ATT&CK techniques and sub-techniques while considering the scoring of integrated source quality as done previously. You can generate the report by running the following command:

python dettect.py ds -fd <data-source-yaml-dir>/<data-sources-file.yaml> --yaml

This generates a DETT&CT configuration file that both contains matrix coverage information and considers the quality of the data sources, providing a deeper insight into the level of visibility for each technique. The report can help to identify the techniques for which the SOC in its current shape can achieve the best results in terms of completeness of detection and coverage of the infrastructure.

This information too can be visualized with MITRE ATT&CK Navigator. You can use the following DETT&CT CLI command for this:

python dettect.py v -ft output/<techniques-administration-file.yaml> -l

See below for an example.

MITRE ATT&CK coverage with available sources considering their quality

For each technique, the score is calculated as an average of all relevant data source scores. For each data source, it is calculated from specific parameters. The following parameters have increased weight:

  • Device completeness;
  • Data field completeness;
  • Retention.

To set up the scoring model, you need to modify the project source code.

It is worth mentioning that the scoring system presented by the developers of DETT&CT tends to be fairly subjective in some cases, for example:

  • You may have one data source out of the three mentioned in connection with the specific technique. However, in some cases, one data source may not be enough even to detect the technique on a minimal level.
  • In other cases, the reverse may be true, with one data source giving exhaustive information for complete detection of the technique.
  • Detection may be based on a data source that is not currently mentioned in the MITRE ATT&CK Data Sources or Detections for that particular technique.

In these cases, the DETT&CT configuration file techniques-administration-file.yaml can be adjusted manually.

Now that the available data sources and the quality of their integration have been associated with the MITRE ATT&CK matrix, the last step is ranking the available techniques. You can use the Procedure Examples section in the matrix, which defines the groups that use a specific technique or sub-technique in their attacks. You can use the following DETT&CT command to run the operation for the entire MITRE ATT&CK matrix:

python dettect.py g

In the interests of prioritization, we can merge the two datasets (technique feasibility considering available data sources and their quality, and the most frequently used MITRE ATT&CK techniques):

python dettect.py g -p PLATFORM -o output/<techniques-administration-file.yaml> -t visibility

The result is a JSON file containing techniques that the SOC can work with and their description, which includes the following:

  • Detection ability scoring;
  • Known attack frequency scoring.

See the image below for an example.

Technique frequency and detection ability

As you can see in the image, some of the techniques are colored shades of red, which means they have been used in attacks (according to MITRE), but the SOC has no ability to detect them. Other techniques are colored shades of blue, which means the SOC can detect them, but MITRE has no data on these techniques having been used in any attacks. Finally, the techniques colored shades of orange are those which groups known to MITRE have used and the SOC has the ability to detect.

It is worth mentioning that groups, attacks and software used in attacks, which are linked to a specific technique, represent retrospective data collected throughout the period that the matrix has existed. In some cases, this may result in increased priority for techniques that were relevant for attacks, say, from 2015 through 2020, which is not really relevant for 2024.

However, isolating a subset of techniques ever used in attacks produces more meaningful results than simple enumeration. You can further rank the resulting subset in the following ways:

  • By using the MITRE ATT&CK matrix in the form of an Excel table. Each object (Software, Campaigns, Groups) contains the property Created (date when the object was created) that you can rely on when isolating the most relevant objects and then use the resulting list of relevant objects to generate an overlap as described above:
    python dettect.py g -g sample-data/groups.yaml -p PLATFORM -o output/<techniques-administration-file.yaml> -t visibility
  • By using the TOP ATT&CK TECHNIQUES project created by MITRE Engenuity.

TOP ATT&CK TECHNIQUES was aimed at developing a tool for ranking MITRE ATT&CK techniques and accepts inputs similar to DETT&CT's. The tool produces a list of the 10 MITRE ATT&CK techniques most relevant for detection with available monitoring capabilities in various areas of the corporate infrastructure: network communications, processes, the file system, cloud-based solutions and hardware. The project also considers the following criteria:

  • Choke Points, or specialized techniques where other techniques converge or diverge. Examples of these include T1047 WMI, as it helps to implement a number of other WMI techniques, or T1059 Command and Scripting Interpreter, as many other techniques rely on a command-line interface or other shells, such as PowerShell, Bash and others. Detecting this technique will likely lead to discovering a broad spectrum of attacks.
  • Prevalence: technique frequency over time.

MITRE ATT&CK technique ranking methodology in TOP ATT&CK TECHNIQUES

Note, however, that the project is based on MITRE ATT&CK v.10 and is no longer supported.

Finalizing priorities

By completing the steps above, the SOC team obtains a subset of MITRE ATT&CK techniques that feature to this or that extent in known attacks and can be detected with available data sources, with an allowance for the way these are configured in the infrastructure. Unfortunately, DETT&CT does not offer any way of creating a convenient XLSX file with an overlap between techniques used in attacks and those that the SOC can detect. However, we have a JSON file that can be used to generate the overlap with the help of MITRE ATT&CK Navigator. So, all you need to do for prioritization is to parse the JSON, say, with the help of Python. The final prioritization conditions may be as follows:

  • Priority 1 (critical): Visibility_score >= 3 and Attacker_score >= 75. From an applied perspective, this isolates MITRE ATT&CK techniques that most frequently feature in attacks and that the SOC requires minimal or no preparation to detect.
  • Priority 2 (high): (Visibility_score < 3 and Visibility_score >= 1) and Attacker_score >= 75. These are MITRE ATT&CK techniques that most frequently feature in attacks and that the SOC is capable of detecting. However, some work on logging may be required, or monitoring coverage may not be good enough.
  • Priority 3 (medium): Visibility_score >= 3 and Attacker_score < 75. These are MITRE ATT&CK techniques with medium to low frequency that the SOC requires minimal or no preparation to detect.
  • Priority 4 (low): (Visibility_score < 3 and Visibility_score >= 1) and Attacker_score < 75. These are all other MITRE ATT&CK techniques that feature in attacks and the SOC has the capability to detect.
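Below is a minimal parsing sketch for the conditions above. It assumes the merged Navigator layer produced earlier exposes per-technique visibility and attacker scores in its metadata entries; the file name and metadata key names are placeholders that vary between DETT&CT versions, so inspect your own layer file and adjust them. This is an illustration of the thresholds, not part of DETT&CT itself:

import json

def priority(visibility: float, attacker: float) -> int:
    """Map visibility and attacker scores to the four priority buckets defined above."""
    if visibility >= 3 and attacker >= 75:
        return 1   # critical
    if 1 <= visibility < 3 and attacker >= 75:
        return 2   # high
    if visibility >= 3:
        return 3   # medium
    if 1 <= visibility < 3:
        return 4   # low
    return 0       # not currently detectable

with open("output/groups_visibility_overlay.json") as fh:   # placeholder file name
    layer = json.load(fh)

backlog = []
for technique in layer.get("techniques", []):
    # Metadata key names are assumptions -- check how your layer file labels the scores.
    meta = {m["name"]: m["value"] for m in technique.get("metadata", []) if "name" in m}
    visibility = float(meta.get("Visibility score", 0) or 0)
    attacker = float(meta.get("Attackers score", 0) or 0)
    bucket = priority(visibility, attacker)
    if bucket:
        backlog.append((bucket, technique["techniqueID"], visibility, attacker))

for bucket, technique_id, visibility, attacker in sorted(backlog):
    print(f"P{bucket}  {technique_id}  visibility={visibility:.1f}  attackers={attacker:.1f}")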

As a result, the SOC obtains a list of MITRE ATT&CK techniques ranked into four groups and mapped to its capabilities and global statistics on malicious actors’ actions in attacks. The list is optimized in terms of the cost to write detection logic and can be used as a prioritized development backlog.

Prioritization extension and parallel tasks

In conclusion, we would like to highlight the key assumptions and recommendations for using the suggested prioritization method.

  • As mentioned above, it is not fully appropriate to use the MITRE ATT&CK statistics on the frequency of techniques in attacks. For more mature prioritization, the SOC team must rely on relevant threat data. This requires defining a threat landscape based on analysis of threat data, mapping applicable threats to specific devices and systems, and isolating the most relevant techniques that may be used against a specific system in the specific corporate environment. An approach like this calls for in-depth analysis of all SOC activities and links between processes. Thus, when generating a scenario library for a customer as part of our consulting services, we leverage Kaspersky Threat Intelligence data on threats relevant to the organization, Managed Detection and Response statistics on detected incidents, and information about techniques that we obtained while investigating real-life incidents and analyzing digital evidence as part of Incident Response service.
  • The suggested method relies on SOC capabilities and essential MITRE ATT&CK analytics. That said, the method is optimized for effort reduction and helps to start developing relevant detection logic immediately. This makes it suitable for small-scale SOCs that consist of a SIEM administrator or analyst. In addition to this, the SOC builds what is essentially a detection functionality roadmap, which can be used for demonstrating the process, defining KPIs and justifying a need for expanding the team.

Lastly, here are several points on how the approach described above can be improved, along with parallel tasks that can be accomplished with the tools described in this article.

You can use the following to further improve the prioritization process.

  • Grouping by detection. On a basic level, there are two groups: network detection or detection on a device. Considering the characteristics of the infrastructure and data sources in creating detection logic for different groups helps to avoid a bias and ensure a more complete coverage of the infrastructure.
  • Grouping by attack stage. Detection at the stage of Initial Access requires more effort, but it leaves more time to respond than detection at the Exfiltration stage.
  • Criticality coefficient. Certain techniques, such as all those associated with vulnerability exploitation or suspicious PowerShell commands, cannot be fully covered. If this is the case, the criticality level can be used as an additional criterion.
  • Granular approach when describing source quality. As mentioned earlier, DETT&CT helps with creating quality descriptions of available data sources, but it lacks exception functionality. Sometimes, a source is not required for the entire infrastructure, or there is more than one data source providing information for similar systems. In that case, a more granular approach that relies on specific systems, subnets or devices can help to make the assessment more relevant. However, an approach like that calls for liaison with internal teams responsible for configuration changes and device inventory, who will have to at least provide information about the business criticality of assets.

Besides improving the prioritization method, the tools suggested can be used for completing a number of parallel tasks that help the SOC to evolve.

  • Expanding the list of sources. As shown above, the coverage of the MITRE ATT&CK matrix requires diverse data sources. By mapping existing sources to techniques, you can identify missing logs and create a roadmap for connecting or introducing these sources.
  • Improving the quality of sources. Scoring the quality of data sources can help create a roadmap for improving existing sources, for example in terms of infrastructure coverage, normalization or data retention.
  • Detection tracking. DETT&CT offers, among other things, a detection logic scoring feature, which you can use to build a detection scenario revision process.

RADIUS Protocol Vulnerability Exposes Networks to MitM Attacks

The Hacker News - 9 July 2024 - 14:39
Cybersecurity researchers have discovered a security vulnerability in the RADIUS network authentication protocol called BlastRADIUS that could be exploited by an attacker to stage Mallory-in-the-middle (MitM) attacks and bypass integrity checks under certain circumstances. "The RADIUS protocol allows certain Access-Request messages to have no integrity or authentication checks," InkBridge
Category: Hacking & Security

Small is big: Meta bets on AI models for mobile devices

Computerworld.com [Hacking News] - 9 July 2024 - 14:27

Facebook-parent Meta has been working on developing a new small language model (SLM) compatible with mobile devices with the aim of running on-device applications while mitigating energy consumption during model inferencing tasks, a paper published by company researchers showed.  

To set the context, large language models (LLMs) have a lot more parameters. For instance, Mistral-22B has 22 billion parameters while GPT-4 has 1.76 trillion parameters. In contrast, smaller language models have relatively fewer parameters, such as Microsoft’s Phi-3 family of SLMs, which have different versions starting from 3.8 billion parameters.  

A parameter helps an LLM decide between different answers it can provide to queries; the more parameters a model has, the larger the computing infrastructure it needs.

However, Meta researchers believe that effective SLMs with less than a billion parameters can be developed and it would unlock the adoption of generative AI across use cases involving mobile devices, which have relatively less compute infrastructure than a server or a rack.

According to the paper, the researchers ran experiments with differently architected models of 125 million and 350 million parameters and found that smaller models prioritizing depth over width enhance model performance.

“Contrary to prevailing belief emphasizing the pivotal role of data and parameter quantity in determining model quality, our investigation underscores the significance of model architecture for sub-billion scale LLMs,” the researchers wrote.

“Leveraging deep and thin architectures, coupled with embedding sharing and grouped-query attention mechanisms, we establish a strong baseline network denoted as MobileLLM, which attains a remarkable 2.7%/4.3% accuracy boost over preceding 125M/350M state-of-the-art models,” they added.

The 125-million and 350-million-parameter models, dubbed MobileLLM, were, according to the researchers, as effective as large language models such as Llama 2 in handling chat and several API calling tasks, highlighting the capability of small models for common on-device use cases. While MobileLLM is not available in any of Meta’s products for public use, the researchers have made the code and data for the experiment available along with the paper.
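As a rough back-of-the-envelope illustration of why a technique like embedding sharing matters at this scale (the vocabulary size, width and parameter budget below are placeholders, not MobileLLM's actual configuration):

# Illustrative numbers only -- not the MobileLLM paper's real hyperparameters.
vocab_size = 32_000
hidden_dim = 576
total_budget = 125_000_000

embedding_table = vocab_size * hidden_dim     # ~18.4M parameters per table

untied = 2 * embedding_table                  # separate input and output embeddings
tied = embedding_table                        # embedding sharing reuses one table

saved = untied - tied
print(f"Embedding sharing frees {saved:,} parameters, "
      f"about {saved / total_budget:.0%} of a 125M budget")

At tens of billions of parameters such a saving is negligible; at sub-billion scale it is a sizeable slice of the budget, which is why the researchers stress architectural choices over raw size.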

Category: Hacking & Security

The Rise of Eldorado: Addressing the New Wave of Ransomware-as-a-Service Threats Targeting Linux Systems

LinuxSecurity.com - 9 July 2024 - 14:00
Cybersecurity has always been dynamic, and threats are evolving rapidly. One of the latest entrants into this dangerous arena is Eldorado, a ransomware-as-a-service (RaaS) that targets Windows and Linux systems. As revealed by Group-IB's recent discovery, this new ransomware has been making waves since it was first spotted in March 2024.
Category: Hacking & Security

Hackers Exploiting Jenkins Script Console for Cryptocurrency Mining Attacks

The Hacker News - 9 July 2024 - 13:50
Cybersecurity researchers have found that it's possible for attackers to weaponize improperly configured Jenkins Script Console instances to further criminal activities such as cryptocurrency mining. "Misconfigurations such as improperly set up authentication mechanisms expose the '/script' endpoint to attackers," Trend Micro's Shubham Singh and Sunil Bharti said in a technical write-up
Category: Hacking & Security

Google will offer people 50Gb/s internet and is already eyeing twice-as-fast fiber

Živě.cz - 9 July 2024 - 13:45
GFiber, Google's sister company, has tested 50G-PON passive optical connectivity in Kansas City, USA, offering speeds of up to 50 Gb/s. The company did not reach the theoretical limit in the demonstration, measuring "only" 41.89 Gb/s on the downlink and 19.6 Gb/s on the uplink. The equipment was supplied by Finland's Nokia, ...
Category: IT News