Positive Research Center


We need to talk about IDS signatures

19 March 2018 - 12:20

The names Snort and Suricata are known to everyone who works in the field of network security. WAF and IDS are two classes of security systems that analyze network traffic, parse top-level protocols, and signal the presence of malicious or unwanted network activity. Whereas a WAF helps web servers detect and avoid attacks targeted only at them, an IDS detects attacks in all network traffic.

Many companies install an IDS to control traffic inside the corporate network. The DPI mechanism lets them collect traffic streams, peer inside packets at the IP, HTTP, DCE/RPC, and other levels, and identify both the exploitation of vulnerabilities and network activity by malware.

At the heart of both systems are signature sets used for detecting known attacks, developed by network security experts and companies worldwide.
We at the @attackdetection team also develop signatures to detect network attacks and malicious activity. Later in the article, we'll discuss a new approach we discovered that disrupts the operation of the Suricata IDS and then hides all traces of such activity.

How does an IDS work?

Before plunging into the technical details of this IDS bypass technique and the stage at which it is applied, let's refresh our concept of the operating principle behind an IDS.

First of all, incoming traffic is divided into TCP, UDP, or other traffic streams, after which the parsers mark and break them down into high-level protocols and their related fields, normalizing them, if required. The decoded, decompressed, and normalized protocol fields are then checked against the signature sets that detect network attack attempts or malicious packets in the network traffic.

Incidentally, the signature sets are the product of numerous individual researchers and companies. Among the vendors are such names as Cisco Talos and Emerging Threats, and the open set of rules currently counts more than 20,000 active signatures.

Common IDS evasion methods

IDS flaws and software errors sometimes mean that attacks go unspotted in network traffic. The following are fairly well-known bypass techniques at the stream-parsing stage:
  • Non-standard fragmentation of packets, including at the IP, TCP, and DCERPC levels, which the IDS is sometimes unable to cope with.
  • Packets with borderline or invalid TTL or MTU values can also be incorrectly processed by the IDS.
  • Ambiguous overlapping of TCP segments (overlapping TCP sequence numbers) can be handled differently by the IDS than by the server or client for which the TCP traffic was intended.
  • A dummy TCP FIN packet with an invalid checksum (so-called TCP un-sync) can, for instance, be interpreted as the end of the session instead of being ignored.
  • A mismatch between the TCP session timeouts of the IDS and the client can also serve as a tool for hiding attacks.

As for the protocol-parsing and field-normalization stage, many WAF bypass techniques can be applied to an IDS. Here are just some of them:
  • HTTP double-encoding.
  • A Gzip-compressed HTTP packet without a corresponding Content-Encoding header might not be decompressed at the normalization stage; this technique can sometimes be seen in malware traffic.
  • The use of rare encodings, such as Quoted-Printable for POP3/IMAP, can also render some signatures useless.

And don't forget about bugs specific to each IDS vendor or to the third-party libraries used inside the IDS, which can be found on public bug trackers. One such bug, which disabled signature checks under certain conditions, was discovered by the @attackdetection team in Suricata; the error could be exploited to conceal attacks like BadTunnel.

During this attack, the vulnerable client opens an HTML page generated by the attacker, establishing a UDP tunnel through the network perimeter to the attacker's server, with port 137 on both ends. Once the tunnel is established, the attacker is able to spoof names inside the network of the vulnerable client by sending fake responses to NBNS requests. Although three packets went to the attacker's server, responding to just one of them was sufficient to establish the tunnel.

The error occurred because, when the response to the client's first UDP packet was an ICMP packet (for example, ICMP Destination Unreachable), the imprecise stream-matching algorithm caused the stream to be checked against ICMP signatures only. Any further attacks, including name spoofing, remained unspotted by the IDS, as they were carried out on top of the UDP tunnel. Despite the lack of a CVE identifier for this vulnerability, it led to the evasion of IDS security functions.

The above-mentioned bypass techniques are well known and have long since been eliminated in modern, mature IDS systems, while specific bugs and vulnerabilities work only against unpatched versions.

Since our team investigates network security and network attacks, and develops and tests network signatures first hand, we couldn't fail to notice bypassing techniques linked to the signatures themselves and their flaws.

Bypassing signatures

Wait a sec, how can signatures be a problem?

Researchers study emerging threats and form an understanding of how an attack can be detected at the network level on the basis of operational features or other network artifacts, and then translate the resulting picture into one or more signatures in an IDS-friendly language. Due to the limited capabilities of the system or researcher error, some methods of exploiting vulnerabilities remain undetected.

While the protocol and message format of a particular malware family or generation remain unchanged and the signatures for them work just fine, vulnerability exploitation is a different story: the more complex and variable the protocol, the easier it is for the attacker to change the exploit with no loss of functionality—and bypass the signatures.
Although you can find many decent signatures from different vendors for the most dangerous and high-profile vulnerabilities, other signatures can be evaded by simple methods. Here's an example of a very common signature error for HTTP: at times it's enough just to change the order of the HTTP GET arguments to bypass a signature check.

And you'd be right to think that substring checks with a fixed order of arguments are encountered in signatures—for example, "?action=checkPort" or "action=checkPort&port=". All that's needed is to carefully study the signature and check whether it contains such hardcoded strings.
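
As a rough illustration (not the actual rule, whose full text is not shown here), a signature that hardcodes the argument order behaves like the following sketch, in which the URL and parameter names are made up:

  # A simplified stand-in for a signature that hardcodes the argument order,
  # similar to the "action=checkPort&port=" substring mentioned above.
  SIGNATURE_SUBSTRING = "action=checkPort&port="

  def naive_signature_match(request_line):
      # Fires only if the hardcoded substring is present as-is.
      return SIGNATURE_SUBSTRING in request_line

  # Original exploit request: the check fires.
  original = "GET /cgi-bin/diag?action=checkPort&port=1337 HTTP/1.1"
  # Same request with the GET arguments reordered: the server-side effect is
  # identical, but the fixed-order substring no longer matches.
  reordered = "GET /cgi-bin/diag?port=1337&action=checkPort HTTP/1.1"

  print(naive_signature_match(original))   # True  -> detected
  print(naive_signature_match(reordered))  # False -> bypassed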

Other protocols and formats that are just as hard to check include DNS, HTML, and DCERPC, all of which have extremely high variability. Therefore, to cover all attack variations with signatures that are not only high-quality but also fast, the developer must possess wide-ranging skills and solid knowledge of network protocols.

The inadequacy of IDS signatures is old hat, and you can find plenty of other opinions in various reports: 1, 2, 3.

How much does a signature weigh?

As already mentioned, signature speed is the developer's responsibility, and naturally the more signatures there are, the more scanning resources are required. The "golden mean" rule for Suricata recommends adding one CPU per thousand signatures or per 500 Mbps of network traffic.

The required resources thus depend on both the number of signatures and the volume of network traffic. Although this formula looks reasonable, it leaves out the fact that signatures can be fast or slow and that traffic can be extremely diverse. So what happens if a slow signature encounters bad traffic?

Suricata can log data on signature performance. The log gathers data on the slowest signatures and generates a list specifying, for each, the execution time in ticks (CPU time) and the number of checks performed. The slowest signatures are at the top.

The highlighted signatures are the slow ones. The list is constantly updated; a different traffic profile would be sure to surface other signatures. This is because a signature generally consists of a set of simple checks, such as searching for a substring or matching a regular expression, arranged in a certain order. When checking a network packet or stream, the signature runs through its entire contents for all valid combinations of those checks. As such, the tree of checks for one and the same signature can have more or fewer branches, and the execution time will vary depending on the traffic analyzed. One of the developer's tasks, therefore, is to optimize the signature to perform well on any kind of traffic.

What happens if the IDS is not properly implemented and not capable of checking all network traffic? Generally, if the load on CPU cores is on average more than 80%, it means the IDS is already starting to skip some packet checks. The higher the load on the cores, the more network traffic checks are skipped, and the greater the chances that malicious activity will go unnoticed.

What if an attempt is made to increase this effect when the signature spends too much time checking network packets? Such an operating scheme would sideline the IDS by forcing it to skip packets and attacks. For starters, we already have a top list of hot signatures on live traffic, and we'll try to amplify the effect on them.

Let's operate

One of these signatures reveals an attempt in the traffic to exploit the vulnerability CVE-2013-0156 RoR YAML Deserialization Code Execution.

All HTTP traffic directed to corporate web servers is checked for the presence of three strings in a strict sequence—"type", "yaml", "!Ruby"—and then checked with a regular expression.

Before we set about generating "bad" traffic, I'll present some hypotheses that might help our investigation:

  • It's easier to find a matching substring than prove there is no such match.
  • For Suricata, checking with a regular expression is slower than searching for a substring.

This means that if we want the signature to perform long checks, those checks should fail and should involve regular expressions.

In order to get to the regex check, there must be three substrings in the packet one after the other.

Let's try combining them in this order and running the IDS to perform a check. To construct files with HTTP traffic in Pcap format from the text, I used the Cisco Talos file2pcap tool:

Another log, keyword_perf.log, helps us see that the chain of checks successfully made it through the substring checks (content matches—3) to the regular expression (PCRE) and then failed there (PCRE matches—0). If we want to benefit from the resource-intensive PCRE check later, we need to fully parse the regular expression and craft traffic that exercises it effectively.

Reversing a regular expression, although easy to do manually, is hard to automate because of constructions such as backreferences and named capture groups; I found no tools that automatically generate a string satisfying a given regular expression.

The following construction was the minimum string required for such an expression. To test the theory that an unsuccessful search is more resource-intensive than a successful one, we'll trim the rightmost character from the string and run the regex again.

It turns out that the same principle also applies to regular expressions: the unsuccessful check took more steps than its successful counterpart. In this case, the difference was greater than 50%. You can see this for yourself.

Further study of this regular expression produced another eye-opener. If we repeatedly duplicate the minimum required string without the last character, it is reasonable to expect an increase in the number of steps taken to complete the check, but the growth curve is explosive:

The scan time for several dozen such strings is already around one second, and increasing their number risks a timeout error. This effect in regular expressions is called catastrophic backtracking, and there are many articles devoted to it. Such errors are still encountered in common products; for example, one was recently found in the Apache Struts framework.
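
The effect is easy to reproduce with any backtracking regex engine. The pattern below is purely illustrative, not the PCRE from the Suricata rule, but it shows the same explosive growth on a failed match:

  import re
  import time

  # Classic catastrophic-backtracking pattern: nested quantifiers plus an
  # input that almost matches but fails at the very last character.
  pattern = re.compile(r'(a+)+$')

  for n in range(18, 25):
      subject = 'a' * n + 'b'            # the trailing 'b' forces a failed match
      start = time.perf_counter()
      pattern.match(subject)             # returns None, but only after ~2^n backtracking steps
      print('n=%2d  %.3f s' % (n, time.perf_counter() - start))
  # The time roughly doubles with each extra character: exponential growth
  # of work from a linear increase in input size.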

Let's take the strings obtained and check them with Suricata:

  Keyword   Ticks     Checks   Matches
  -------   -------   ------   -------
  content   19135     4        3
  pcre      1180797   1        0

However, instead of catastrophic backtracking, the IDS barely notices the load—only about 1 million ticks. This is how, after debugging and digging through the Suricata source code and the libpcre library it uses, I stumbled upon these PCRE limits:


These limits protect against catastrophic backtracking in many regex libraries. The same limits can be found in WAFs, where regex checks predominate. Sure, the limits can be changed in the IDS configuration, but they ship as defaults, and changing them isn't recommended.

Using only a regular expression won't help us achieve the desired result. But what if we use the IDS to check a network packet with this content?

In this case, we get the following log values:

  Keyword   Avg. Ticks   Checks   Matches
  -------   ----------   ------   -------
  content   3338         7        6
  pcre      12052        3        0

There were 4 checks, which became 7 only because of duplication of the initial string. Although the mechanism remains unclear, we should expect the number of checks to snowball if we further duplicate the strings. In the end, I got the following values:

  Keyword   Checks   Matches
  -------   ------   -------
  content   1508     1507
  pcre      1492     0

In total, the number of checks of substrings and regular expressions does not exceed 3000, no matter what content is checked by the signature. Clearly, the IDS itself also has an internal limiter, which goes by the name of inspection-recursion-limit, set by default to that same figure of 3000. With all the PCRE and IDS limits and restrictions on the one-time size of content being checked, by modifying the content and using snowballing regex checks, you get the result you're after:

  Keyword   Avg. Ticks   Checks   Matches
  -------   ----------   ------   -------
  content   3626         1508     1507
  pcre      1587144      1492     0

Although the complexity of one regex check has not changed, the number of such checks has shot up to the 1500 mark. Multiplying the number of checks by the average number of clock cycles spent on each check, we get the coveted figure of 3 billion ticks.

  Num   Rule      Avg Ticks
  ---   -------   ----------
  1     2016204   3302218139

That's more than a thousand-fold increase! The operation requires only the curl utility for generating the minimum HTTP POST request. It looks something like this:

The minimum set of HTTP fields and HTTP body with a repeating pattern.
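
Since the exact curl command is not reproduced here, an equivalent request can be sketched in Python; the target URL and the repeated pattern below are placeholders rather than the actual payload:

  from urllib import request

  PATTERN = b"type yaml !Ruby "                # placeholder for the real repeating string
  TARGET = "http://victim.example/index.php"   # hypothetical protected web server

  body = PATTERN * 200                         # a few kilobytes of repeating content

  # Minimal HTTP POST: just enough headers for the IDS to parse it as HTTP
  # and run its HTTP signatures against the body.
  req = request.Request(
      TARGET,
      data=body,
      headers={"Content-Type": "application/x-www-form-urlencoded"},
      method="POST",
  )
  with request.urlopen(req, timeout=5) as resp:
      print(resp.status)
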
Such content cannot be made infinitely large in order to force the IDS to spend vast resources checking it: although the TCP segments are reassembled into a single stream, neither the stream nor the reassembled HTTP packets are checked in their entirety, no matter how big they are. Instead, they are checked in small chunks about 3-4 kilobytes in size. The size of the segments to be checked, as well as the depth of the checks, is set in the config (like everything in the IDS). The segment size "wobbles" slightly from launch to launch to prevent fragmentation attacks on segment boundaries, where an attacker who knows the default segment size splits the network packets so that the attack is divided across two neighboring segments and cannot be detected by the signature.

So, we just got our hands on a powerful weapon that loads the IDS with more than 3,000,000,000 CPU ticks per request. What does that even mean?

The actual figure obtained is roughly one second of an average CPU core's time. Basically, by sending a single 3 KB HTTP request, we occupy one IDS core for a full second. The more cores the IDS has, the more data streams it can process simultaneously.

Remember that the IDS does not sit idle and generally spends some resources on monitoring background network traffic, thereby lowering the attack threshold.

Taking metrics on a working IDS configuration using 8 of the 40 cores of an Intel Xeon E5-2650 v3 (2.30 GHz), without background traffic, the threshold at which all 8 CPU cores are 100% loaded turns out to be only 250 Kbps. And that's for a system designed to process a multi-gigabit network stream, that is, thousands of times more.

To exploit this particular signature, the attacker need only send about 10 HTTP requests per second to the protected web server to gradually fill the network packet queue of the IDS. When the buffer fills up, packets start to bypass the IDS, and at that point the attacker can use any tools or carry out arbitrary attacks while remaining unnoticed by the detection system. A constant flow of such traffic can disable the IDS for as long as it keeps bombarding the internal network; for a short-term attack, a brief spike of such packets is enough to blind the detection system for a short period.
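
A quick sanity check ties these two figures together (a rough estimate, not a measurement):

  # ~10 requests per second, each about 3 KB, corresponds to roughly the
  # 250 Kbps threshold measured above for 8 fully loaded cores.
  requests_per_second = 10
  request_size_bytes = 3 * 1024

  bandwidth_kbps = requests_per_second * request_size_bytes * 8 / 1000
  print('%.0f Kbps' % bandwidth_kbps)   # ~246 Kbps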

Current mechanisms are unable to detect such slow signatures: although the IDS has profiling code, it cannot distinguish a signature that is merely slow from one that is catastrophically slow and automatically flag it. Nor does the signature itself fire an alert, since the crafted content never actually matches.

Do you remember the unexplained rise in the number of checks? There was indeed an IDS error that led to an increase in the number of superfluous checks. The vulnerability was assigned CVE-2017-15377 and has now been fixed in Suricata IDS 3.2 and 4.0.

The above approach works well for this particular signature, which is distributed as part of an open signature set and is usually enabled by default. But new signatures keep appearing at the top of the list of hot signatures, while others are simply waiting for their traffic. The signature description language of Snort and Suricata gives the developer many handy tools, such as base64 decoding, content jumps, and mathematical operations, and other combinations of checks can also cause explosive growth in resource consumption. For an attacker, careful monitoring of the performance data can therefore serve as a springboard for exploitation. After the CVE-2017-15377 problem was fixed, we again launched Suricata against our network traffic and saw exactly the same picture: a list of the hottest signatures at the top of the log, only with different numbers. This suggests that such signatures—and ways to exploit them—are numerous.

Not only IDS but also antiviruses, WAFs, and many other systems are based on signature matching, so this approach can be used to search for weaknesses in their operation as well. It can stealthily keep detection systems from doing their job of spotting malicious activity, and the related network activity is not flagged by security tools or anomaly detectors. As an experiment, enable the profiling setting in your detection system and keep an eye on the top of the performance log.

Author: Kirill Shipulin of the @attackdetection team, Twitter | Telegram

How to assemble a GSM phone based on SDR

13 March 2018 - 13:50

The smartphones so familiar to most of us contain an entire communication module separate from the main CPU. This module is what makes a "smartphone" a "phone." Regardless of whether the phone's user-facing operating system is Android or iOS, the module usually runs a proprietary closed-source operating system and handles all voice calls, SMS messages, and mobile Internet traffic.

Of course, open-source projects are more interesting to security researchers than closed-source ones. The ability to look under the hood and see how a particular program component works makes it possible to find and fix errors, plus verify that undocumented functionality is not present. As a pleasant bonus, access to source code helps novice developers to learn from colleagues and make contributions of their own.

OsmocomBB project
These considerations inspired creation of the Open Source Mobile Communications (Osmocom) project all the way back in 2008. Initially, the developers' attention was focused on OpenBSC, a project for running a self-contained cellular network on hefty commercial base stations. OpenBSC was first presented in 2008 at the Chaos Communication Congress annual conference.

Over time, Osmocom branched out into more projects. Today it serves as the umbrella for dozens of initiatives, one of which is OsmocomBB—a free and open-source implementation of the mobile-side GSM protocol stack. Unlike its predecessors, such as TSM30, MADos for Nokia 33XX, and Airprobe, OsmocomBB caught the attention of researchers and developers, and continues to be developed.

OsmocomBB was initially envisioned as a full-fledged firmware for open-source cell phones, including a GUI and other components, focusing on an alternative implementation of the GSM protocol stack. However, this idea did not catch on among potential users, so OsmocomBB today serves as an indispensable set of research tools and learner’s aid for those new to GSM.

With OsmocomBB, researchers can assess the security of GSM networks and investigate how the radio interface (Um interface) functions on cellular networks. What kind of encryption is used, if any? How often are encryption keys and temporary subscriber IDs changed? What is the likelihood that a voice call or SMS message will be intercepted or forged by an attacker? OsmocomBB allows quickly finding the answers to these and many other questions. Some of the many other uses include launching a small GSM base station, investigating the security of SIM cards, and sniffing traffic.

Similar to the case with the Aircrack-ng project and network cards, OsmocomBB's primary hardware platform consists of mobile phones based on the Calypso chipset. Generally speaking, these are Motorola C1XX phones. At the start of OsmocomBB development, it was decided to use the phones in the interest of saving time—otherwise, the process of designing and manufacturing new equipment could drag on indefinitely. Another reason was that some parts of the source code and specifications for the Calypso chipset had been leaked to the Internet, which gave a head start to reverse engineering of the firmware and subsequent development.

However, this expedient came at a price. Phones based on the Calypso chipset are no longer produced, forcing researchers to search for secondhand models. Moreover, some parts of the current implementation of the GSM stack physical layer are heavily based on a Digital Signal Processor (DSP). The code of this processor is proprietary and not fully known to the public. Both factors create roadblocks for OsmocomBB, reducing its potential and increasing the barrier to entry for developers and project users at large. As just one example, implementing GPRS support is impossible without changing the DSP firmware.

Breathing in new life with a new hardware platform
The OsmocomBB software consists of a number of applications, each of which has a specific purpose. Some applications run directly on a computer in any UNIX-like environment. Other applications are provided in the form of firmware that must be loaded on to the phone. The applications interact through the phone serial port, which is combined with the headset connector. In other words, an unremarkable TRS connector (2.5-mm microjack) can be used for transmitting both sound and data! Similar technology is used in smartphones in order to support headphone remotes, selfie sticks, and other accessories.

Lack of other interfaces (such as USB) and the need to use a serial port also impose certain limitations, particularly on the data transfer rate. The low bandwidth of the serial interface limits the ability to sniff traffic and run a base station. Moreover, a ready-made cable for connecting the phone to USB is difficult to find; in most cases this cable must be DIY'ed, raising the barrier to entry higher still.

Eventually, the combination of these difficulties gave rise to the idea of switching to a different hardware platform that would remove these software and hardware limits. Such a platform should be available to everyone, in terms of both physical availability and price. Due to rapid growth in popularity and availability, Software-Defined Radio (SDR) technology perfectly meets these requirements.

The essence of SDR is to develop general-purpose radio equipment not tied to a specific communication standard. Thanks to this, SDR has become very popular both among radio amateurs and manufacturers of commercial equipment. Today, SDR is actively used in cellular communications for deployment of GSM, UMTS, and LTE networks.

That said, the idea of using SDR to develop and launch a GSM mobile phone with OsmocomBB is not new. Osmocom developers worked on this but abandoned their efforts. A Swiss laboratory also attempted to do so, unfortunately never advancing beyond the proof-of-concept stage. Nonetheless, we decided to resume work in this direction by implementing support for a new SDR-based hardware platform for OsmocomBB. The platform is identical to the Calypso chipset in terms of backward compatibility, while also being more open to modifications.

The remainder of this article will describe the process of developing the new platform, the problems encountered, and the solutions we found. In the conclusion, we will share our results, limitations of the current implementation, ideas for further development, and advice for how to get OsmocomBB working on SDR.

Project history
As mentioned already, OsmocomBB includes two types of applications: some run on a computer and others are loaded on to the phone as part of alternative firmware. The two sides interact via osmocon, a small program that connects them to each other through the serial port. Interaction occurs using the simple L1CTL (GSM Layer 1 Control) binary protocol, which supports only three types of messages: request (REQ), confirmation (CONF), and indication (IND).

We decided to preserve this structure, as well as the protocol itself, for transparent compatibility with existing applications. This resulted in a new application, trxcon (short for "transceiver connection"), which serves as a bridge between high-level applications (such as mobile and ccch_scan) and the transceiver (a separate application that manages the SDR).

The transceiver is a separate program that performs low-level tasks of the GSM physical layer, such as time and frequency synchronization with the network, signal detection and demodulation, and modulation and transmission of the outgoing signal. Among ready-made solutions, there are two suitable projects: OsmoTRX and GR-GSM. The first is an improved modification of the transceiver from the OpenBTS project (it is used by Osmocom projects for running base stations), while the second provides a set of GNU Radio blocks for receiving and decoding GSM signals.

Despite the completeness of its implementation and out-of-the-box support for signal transmission, OsmoTRX with its cocktail of C and C++ will not please the developer who values clean, readable source code. For example, altering a few lines of code in OsmoTRX may require studying the entire class hierarchy, while GR-GSM offers incomparable modularity and freedom of modification.

Nevertheless, OsmoTRX still has a number of advantages. The most important of these are performance, low resource requirements, and small size of executable code and dependencies. All this makes the project fairly friendly to embedding on systems with limited resources. By comparison, GNU Radio looks positively gluttonous. Development targeted OsmoTRX exclusively at first, but ultimately the choice was made to use GR-GSM as transceiver.

To ensure backward compatibility, a TRX interface was implemented in trxcon. This interface is also used in the OsmoTRX, OsmoBTS, and OpenBTS projects. The interface uses three UDP sockets for each connection; each socket has a separate purpose. One of them is the CTRL interface, which allows controlling the transceiver (setting frequency, gain, and so on). The second one is called DATA—as the name implies, it exchanges information that needs to be transmitted (Uplink) or that has already been received (Downlink). The last socket, CLCK, is used to pass on timestamps from the transceiver.
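
For illustration, the control side of this interface can be sketched in a few lines; the base port and the text commands below follow OsmoTRX-style conventions and are assumptions on our part, so check the trxcon and grgsm_trx configuration for the values actually used:

  import socket

  TRX_HOST = '127.0.0.1'
  BASE_PORT = 5700                     # assumed base port

  ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # CTRL: tuning, gain, power on/off
  data = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # DATA: Uplink bursts out, Downlink bursts in
  clck = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # CLCK: clock indications from the transceiver

  clck.connect((TRX_HOST, BASE_PORT + 0))
  ctrl.connect((TRX_HOST, BASE_PORT + 1))
  data.connect((TRX_HOST, BASE_PORT + 2))

  ctrl.settimeout(2.0)
  ctrl.send(b'CMD RXTUNE 935000')      # illustrative text command: tune the receiver (kHz)
  try:
      print(ctrl.recv(128))            # e.g. b'RSP RXTUNE 0 935000'
  except socket.timeout:
      print('no transceiver answering on', TRX_HOST)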

We also implemented a new application, grgsm_trx, for GR-GSM. The application initializes the basic set of blocks (flow graph) and provides the TRX interface for an external control application, which in our case is trxcon. The flow graph initially consisted only of blocks for reception, that is, detection and demodulation of bursts—the smallest information pieces of the GSM physical interface. Every burst that is output by the demodulator is a bit sequence consisting mainly of a payload and midamble that allows the receiver to synchronize with the transmitter but (unlike a preamble) is located in the middle.

At this point in development, high-level applications such as ccch_scan were already able to set the SDR to a certain frequency, launch the synchronization process with the base station, and demodulate the received signal. However, these initial successes were accompanied by difficulties. Since most of the implementation of the OsmocomBB physical layer previously relied on the phone DSP, encoding and decoding of packets according to GSM 05.03 specifications was not implemented separately—it was performed with proprietary code.

The newly implemented transceiver passed raw bursts up to the higher layers, while the existing implementation of those layers expected LAPDm byte packets (mostly 23 bytes each) from the physical layer. Moreover, the transceiver needed accurate Time Division Multiple Access (TDMA) synchronization with the base station, even though the high-level applications were completely unaware of this and transmitted outgoing packets whenever they needed.

To fix this, we implemented a TDMA scheduler that accepts LAPDm packets from high-level applications, encodes them into bursts, and passes them to the transceiver, determining the transmission time using frame and timeslot numbers. The scheduler reassembles bursts arriving from the transceiver, decodes them and passes them on to the upper layers. According to GSM 05.03, coding and decoding involve, respectively, adding redundant data to information bits and then recovering LAPDm packets from those padded sequences by using the Viterbi algorithm.

It may sound confusing, but a similar process of encoding and decoding LAPDm packets takes place both on the mobile phone and on the base station. Fortunately, a free open-source implementation already existed, in the form of Osmocom Base Transceiver Station (OsmoBTS). This project's code related to GSM 05.03 was reworked, documented, and moved to libosmocore (the Osmocom project core) as a child library called libosmocoding. Thanks to this, many projects—including OsmocomBB, GR-GSM, and OsmoBTS—can all take advantage of this implementation without duplicating code. The TDMA scheduler itself was also implemented in a way similar to OsmoBTS, but taking mobile phone workings into account.

After this, receiving was successful! But the most important feature for the functioning of a mobile phone—data transmission—was still missing. The problem was that initially there were no blocks in GR-GSM for modulating and transmitting the signal. Fortunately, the project's author, Piotr Krysik, supported the idea of implementing this functionality and joined the collaboration.

To avoid wasting time while data transmission was being worked on, we came up with a temporary workaround. As it turned out later, this workaround was a very useful solution in its own right—a set of tools for emulating the transceiver as a virtual Um interface. Since both OsmocomBB and OsmoBTS support the TRX interface, the two projects can easily be interconnected: each Downlink burst from OsmoBTS is passed on to the trxcon application, while every Uplink burst from OsmocomBB is passed on to OsmoBTS. A simple Python application called FakeTRX allowed running a virtual GSM network without any equipment!

Thanks to this set of tools, a large number of bugs in implementation of the TDMA scheduler were subsequently found and fixed. Support for dedicated channels, such as SDCCH and TCH, was also implemented. The first type of GSM logical channels is mainly used for sending SMS messages, USSD requests and (sometimes) establishing voice calls. The second type is used for voice transmission during a call. The GSM Audio Packet Knife (GAPK) project helped to provide basic support in OsmocomBB for recording and encoding, as well as decoding and reproduction of sound, since this task was previously performed by the phone DSP.

Meanwhile, Piotr Krysik developed and successfully implemented all the missing blocks necessary for signal transmission. Since GSM uses Gaussian Minimum Shift Keying (GMSK) modulation, he used the existing GMSK Modulator block from GNU Radio. However, the main problem was to ensure synchronization with the base station. Each burst must be transmitted on time, that is to say, within the timeslot allocated by the base station. Timing buffers in the TDMA system allow compensating for small discrepancies. The situation was complicated by the lack of an accurate clock generator in most SDR devices, due to which the whole system would tend to drift.

The solution we found involves using the hardware clock of SDR devices, such as USRP devices, based on which incoming bursts are stamped with the current hardware time. By comparing these timestamps with the current frame number decoded from the SCH burst, it is possible to adjust and set the exact time for transmitting outgoing information. The only problem was that the standard GNU Radio blocks designed for interaction with SDR do not support timestamps, so we had to replace them with UHD Source and Sink blocks, restricting support to USRP devices.

As a result, when the transceiver was ready for operation, the time had come to venture beyond the virtual Um interface. But some glitch is always bound to come up during the first trial run of something new—and sure enough, our attempt to run the project on real equipment was unsuccessful. We had not taken into account one aspect of timing in GSM: the time count for the signal transmitted by the phone (Uplink) is specially delayed by three timeslots relative to the received signal (Downlink), which gives phones with a half-duplex communication module the time needed to perform frequency hopping. One small adjustment later, the project was up and running! For the first time, with the help of OsmocomBB and SDR, we could send an SMS message and make a voice call.
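
The numbers behind this timing are worth spelling out (standard GSM figures, shown here purely for reference):

  # A GSM timeslot lasts 3/5200 s (~576.9 us); a TDMA frame is 8 timeslots
  # (~4.615 ms); the Uplink lags the Downlink by 3 timeslots.
  TIMESLOT_S = 3 / 5200
  FRAME_S = 8 * TIMESLOT_S
  UPLINK_DELAY_S = 3 * TIMESLOT_S

  print('timeslot      : %.1f us' % (TIMESLOT_S * 1e6))      # ~576.9 us
  print('TDMA frame    : %.3f ms' % (FRAME_S * 1e3))         # ~4.615 ms
  print('Uplink offset : %.3f ms' % (UPLINK_DELAY_S * 1e3))  # ~1.731 ms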

Thanks to this work, we managed to build a bridge of sorts between OsmocomBB and SDR transceivers functioning through the Universal Hardware Driver (UHD). We implemented the main components of the GSM physical layer that are necessary for high-level applications, such as ccch_scan, cbch_scan, and mobile. All our work was made available to the public in the OsmocomBB main repository.

Now that SDR can be used as a hardware platform for OsmocomBB, it is possible to run a completely transparent GSM protocol stack, free of closed-source proprietary components such as the DSP of Calypso-based phones, while debugging and modifying each component on the fly. In addition, developers and researchers gain a number of new opportunities, for example:
  • Running the network and phone in other frequency bands (such as 2.4 GHz)
  • Integrating alternative audio codecs (for example, Speex or Opus)
  • Implementing the GPRS / EGPRS stack

The tools for creating a virtual Um interface, which we referred to previously, were also published in the project repository. These tools are useful both for experienced developers (for example, for simulating load levels for various components of cellular network infrastructure and testing their stability) and for novice users, who can begin studying GSM in practice without the need to search for and purchase equipment.
However, the current implementation of the new hardware platform for OsmocomBB still contains certain limitations, most of which stem from SDR technology itself. For example, most available SDR devices, such as USRP, UmTRX, and LimeSDR, have relatively low transmit power compared to the maximum transmit power of ordinary phones. Another gap in the implementation is the lack of support for frequency hopping, which makes the phone and the base station switch rapidly between frequencies during a connection, reducing interference and complicating signal interception. Frequency hopping is used on the networks of most operators, and the GSM specifications make supporting it mandatory for every phone. While the problem with signal power can be solved with amplifiers or simply by using a laboratory base station, implementing frequency hopping will require much more effort.

Further development plans include:
  • Supporting physical (non-virtual) SIM cards
  • Introducing support for a wider range of SDR devices
  • Supporting Circuit-Switched Data (CSD)
  • Implementing an embedded transceiver based on OsmoTRX
  • Supporting GPRS / EDGE
The project was also presented at the 34th annual Chaos Communication Congress (34C3):

Conclusion: tips for getting started with your own SDR

Here is our advice for how to run GSM on your own SDR. To start with, we suggest experimenting with the virtual Um interface with the help of our TRX Toolkit:

In addition to OsmocomBB, you will need a complete set of core network infrastructure components from Osmocom: either OsmoNITB (Network In The Box) or all components separately, including BTS, BSC, MSC, MGW, and HLR. Instructions for compiling the source code can be found on the project website, or you can use ready-made packages for Debian, Ubuntu, or OpenSUSE.

To test the implementation on your own network, you can use any available implementation of the GSM network stack, such as Osmocom, OpenBTS, or YateBTS. Launching your own network requires a separate SDR device or a commercial base station, such as nanoBTS. Because of the described limitations and other possible flaws, we highly recommend not testing the project on actual operator networks!

To build the transceiver, you will need to install GNU Radio and compile a separate branch of the GR-GSM project from source code. For details on installing and using the transceiver, visit the Osmocom project website.

Good luck!

Author: Vadim Yanitskiy, Positive Technologies

The First Rule of Mobile World Congress Is: You Do Not Show Anyone Your Mobile World Congress Badge

5 March 2018 - 11:59

The biggest event in the telecom industry attracted particularly wide media coverage this year: the King of Spain personally arrived in Barcelona for the opening of the annual Mobile World Congress (MWC 2018), which caused a wave of protests by supporters of the region's independence from Madrid. As a result, newspaper front pages and prime-time TV are filled with high-tech and telecom innovations against the backdrop of protesting crowds. For security reasons, all participants and visitors to the Congress are advised not to wear their badges outside the venue.

The Mobile World Congress has been held annually for more than 30 years.

Among the participants are mobile operators, manufacturers of all kinds of communication devices, application developers, and even auto giants and international payment systems. Without exaggeration, all industry players try to time their long-awaited announcements for the date of the event. The unspoken motto is: if you have your place in the mobile world, even if a small one, you must be in Barcelona! Even Apple, which is traditionally absent from shows, including the MWC, makes its presence felt here: when journalists describe new versions of gadgets by dozens of Asian vendors, they occasionally allow themselves comparisons—"like Apple," "no worse than Apple," "just like Apple."

What immediately caught the eye this year was the abundance of robots and the dominance of automotive brands, including Mercedes, Audi, and Smart, and even a Bentley on the Visa stand symbolizing the concept of the connected car.

Robots not only attract the attention of visitors but also do a good job at the stands and serve as a reminder of the expanded possibilities of the Internet of Things. With cars, it is a different story: on the one hand, more and more of their components can access the Internet in some way; on the other hand, visitors take little interest in this and use the cars mainly for taking pictures. Yet it is precisely the presence of such high-tech devices at such a high-profile event that should make one think about how it all really works, how safe it is when your car is connected to something out there, and most importantly, what for. And this is exactly where all the horror stories about hacking IoT gadgets come to mind. Examples are plentiful—security threats to connected cars have been detailed time and again. By the way, one of the halls had a Ferrari on display, all strewn with sponsors' logos, including Kaspersky Lab's, which is gratifying—at least such a reminder may make the participants finally think seriously about the security of the mobile solutions they offer.

In general, the main topics of the MWC 2018 and its key words are best summarized on the stand of the French corporation Atos:

Literally everything is mentioned there, including blockchain.

As far as security is concerned, AV vendors were the highlight of the MWC, although the number of information security companies at the show ought to be many times greater, and that realization will inevitably come soon! Among AV vendors, noteworthy are the already mentioned Kaspersky Lab, which devoted its participation in this year's show to the security of the Internet of Things, as well as Avast with its new Smart Life solution for IoT device security.

By the way, one of the walls of Kaspersky Lab's stand is devoted to video graphics of how attackers use holes in IoT security and what they can really do. These are notorious use cases, which should convince vendors of the importance of taking security into account when launching their smart devices.

Given the lack of attention to information security, we, Positive Technologies, could not stay away and decided to fill this gap—in the format of a special event for key experts in the telecom industry. My London colleagues and I told representatives of the largest telecom operators how hackers attack SS7 networks and what operators can do to protect themselves from attackers.

For the past three years, we have not only analyzed possible threats and vectors of attacks via mobile networks but also detected real attacks using PT Telecom Attack Discovery.

It is no secret that today cybercriminals are not only aware of the security flaws of signaling networks but also actively exploit these vulnerabilities. Our monitoring shows that attackers spy on subscribers, intercept calls, bypass billing systems, and block users. A single large operator with several tens of millions of subscribers is attacked more than 4,000 times daily.

We conducted SS7 security monitoring projects for large telecom operators in Europe and the Middle East. Attacks aimed at fraud, disruption of subscriber availability, and interception of subscriber traffic (including calls and text messages) accounted for less than two percent of the total, but these are the most dangerous threats for users.

According to our research, 100 percent of attacks aimed at intercepting text messages are successful. Theft of security codes sent this way risks compromising e-banking and mobile banking systems, online stores, e-government portals, and many other services. Another type of attack—denial of service—threatens IoT devices. Today, not only individual user devices are connected to mobile networks, but also elements of smart city infrastructure and the equipment of modern industrial enterprises, transport, energy, and other companies.

Fraud against the operator or subscribers is also a matter of serious concern. A substantial share of such attacks (81%) involves the unauthorized sending of USSD requests. Such requests can be used to transfer money from a subscriber's account, enable premium-rate services for a subscriber, or send phishing messages on behalf of a trusted service.

We raise this issue year after year: our task is to warn about real threats so that operators pay significantly more attention to security, and so that ordinary subscribers stay alert and do not fall prey to even banal social engineering. It is gratifying to see operators growing aware of the existing risks and drawing conclusions: in 2017, all analyzed networks used SMS Home Routing, and one in three networks had signaling traffic filtering and blocking enabled. But this is not enough. We still see that all the networks we analyzed are prone to vulnerabilities caused both by occasional misconfiguration of equipment and by architectural flaws of SS7 signaling networks that cannot be eliminated with existing tools.

Countering criminals takes a comprehensive approach to security. It is necessary to regularly assess the security of the signaling network in order to discover existing vulnerabilities, develop measures to reduce the risk of threats being realized, and keep security settings up to date afterwards. It is also important to continuously monitor and analyze messages that cross the network perimeter in order to detect potential attacks. This task can be performed by a threat detection and response system, which allows discovering illegitimate activity at an early stage and blocking suspicious requests. This approach provides a high level of protection without disrupting the normal functioning of the mobile network.

Author: Dmitry Kurbatov, Head of Telecommunications Security, Positive Technologies

New bypass and protection techniques for ASLR on Linux

22 February 2018 - 18:06
By Ilya Smith (@blackzert), Positive Technologies researcher

0. Abstract
The Linux kernel is used on systems of all kinds throughout the world: servers, user workstations, mobile platforms (Android), and smart devices. Over the life of Linux, many new protection mechanisms have been added both to the kernel itself and to user applications. These mechanisms include address space layout randomization (ASLR) and stack canaries, which complicate attempts to exploit vulnerabilities in applications.
This whitepaper analyzes ASLR implementation in the current version of the Linux kernel (4.15-rc1). We found problems that allow bypassing this protection partially or in full. Several fixes are proposed. We have also developed and discussed a special tool to demonstrate these issues. Although all issues are considered here in the context of the x86-64 architecture, they are also generally relevant for most Linux-supported architectures.

Many important application functions are implemented in user space. Therefore, when analyzing the ASLR implementation mechanism, we also analyzed part of the GNU Libc (glibc) library, during which we found serious problems with stack canary implementation. We were able to bypass stack canary protection and execute arbitrary code by using ldd.

This whitepaper describes several methods for bypassing ASLR in the context of application exploitation.

Address space layout randomization is a technology designed to impede exploitation of certain vulnerability types. ASLR, found in most modern operating systems, works by randomizing addresses of a process so that an attacker is unable to know their location. For instance, these addresses are used to:

  • Delegate control to executable code.
  • Make a chain of return-oriented programming (ROP) gadgets (1).
  • Read (overwrite) important values in memory.

The technology was first implemented for Linux in 2005. In 2007, it was introduced in Microsoft Windows and macOS as well. For a detailed description of ASLR implementation in Linux, see (2).

Since the appearance of ASLR, attackers have invented various methods of bypassing it, including:

  • Address leak: certain vulnerabilities allow attackers to obtain the addresses required for an attack, which enables bypassing ASLR (3).
  • Relative addressing: some vulnerabilities allow attackers to obtain access to data relative to a particular address, thus bypassing ASLR (4).
  • Implementation weaknesses: some vulnerabilities allow attackers to guess addresses due to low entropy or faults in a particular ASLR implementation (5).
  • Side channels of hardware operation: certain properties of processor operation may allow bypassing ASLR (6).

Note that ASLR is implemented very differently on different operating systems, which continue to evolve in their own directions. The most recent changes in Linux ASLR involved Offset2lib (7), which was released in 2014. Implementation weaknesses allowed bypassing ASLR because all libraries were in close proximity to the binary ELF file image of the program. The solution was to place the ELF file image in a separate, randomly selected region.
In April 2016, the creators of Offset2lib also criticized the current implementation, pointing out its lack of entropy when selecting a region address, and proposed their own implementation, ASLR-NG (8). However, no patch has been published to date.
With that in mind, let's take a look at how ASLR currently works on Linux.

2. ASLR on Linux
First, let us have a look at Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-40-generic x86_64) with the latest updates installed. The workings are largely the same regardless of Linux distribution or kernel version after 3.18-rc7. If we run "less /proc/self/maps" from the Linux command line, we will see something resembling the following:

  • The base address of the binary application (/bin/less, in our case) is 5627a82bf000.
  • The heap start address is 5627aa2d4000, being the address of the end of the binary application plus a random value, which in our case equals 1de7000 (5627aa2d4000 - 5627a84ed000). The address is aligned to 2^12 due to the x86-64 architecture.
  • Address 7f3631293000 is selected as mmap_base. The address will serve as the upper boundary when a random address is selected for any memory allocation via the mmap system call.
  • Libraries ld-2.23.so, libtinfo.so.5.9, and libc-2.23.so are located consecutively.

If subtraction is applied to the neighboring memory regions, we will note the following: there is a substantial difference between the binary file, the heap, the stack, the lowest locale-archive address, and the highest ld address. There is not a single free page between the loaded libraries (files).

If we repeat the procedure several times, the picture will remain practically the same: the difference between pages will vary, while libraries and files will remain identical in location relative to one another. This fact will be crucial for our analysis.
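
This observation is easy to reproduce with a few lines of Python that parse /proc/self/maps and print the gap before each region (Linux only; the parsing here is simplified):

  def read_maps(path='/proc/self/maps'):
      regions = []
      with open(path) as maps:
          for line in maps:
              addresses, _, rest = line.partition(' ')
              start, end = (int(x, 16) for x in addresses.split('-'))
              fields = rest.split()
              name = fields[4] if len(fields) > 4 else ''   # pathname is optional
              regions.append((start, end, name))
      return regions

  prev_end = None
  for start, end, name in read_maps():
      gap = start - prev_end if prev_end is not None else 0
      print('%016x-%016x  gap=%#14x  %s' % (start, end, gap, name))
      prev_end = end
  # Consecutively loaded libraries show gap=0, while the binary, heap,
  # mmap_base region, and stack are separated by large random gaps.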

3. Memory allocation: inner workings
Now we will look at the mechanism used to allocate virtual memory of a process. The logic is stored in the do_mmap kernel function, which implements memory allocation both on the part of the user (mmap syscall) and on the part of the kernel (when executing execve). In the first stage, an available address is selected (get_unmapped_area); in the second stage, pages are mapped to that address (mmap_region). We will start with the first stage.

The following options are possible when selecting an address:

  1. If the MAP_FIXED flag is set, the system will return the value of the addr argument as the address.
  2. If the addr argument value is not zero, this value is used as a hint and, in some cases, will be selected. 
  3. The largest address of an available region will be selected as the address, as long as it is suitable in length and lies within the allowed range of selectable addresses.
  4. The address is checked for security-related restrictions. (For details, see Section 7.3.)

If all is successful, the region of memory at the selected address will be allocated.

Details of address selection algorithm
The structure underlying the manager of process virtual memory is vm_area_struct (or vma, for short):

This structure describes the start of the virtual memory region, the region end, and access flags for pages within the region.

vma structures are organized in a doubly linked list (9), sorted by region start address in ascending order, and also in an augmented red-black tree (10), likewise keyed by region start address. A good rationale for this solution is given by the kernel developers themselves (11).

Example of a vma doubly linked list in the ascending order of addresses

The red-black tree augment is the amount of available memory for a particular node. The amount of available memory for a node is defined as whichever is the highest of:

  • The difference between the start of the current vma and end of the preceding vma in an ascending-ordered doubly linked list 
  • Amount of available memory of the left-hand subtree 
  • Amount of available memory of the right-hand subtree

Example of an augmented vma red-black tree

This structure makes it possible to quickly search (in O(log n) time) for the vma that corresponds to a certain address or select an available range of a certain length.
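
The augment can be sketched as follows; this is a simplified model that ignores the red-black balancing, with field names chosen for readability rather than copied from the kernel:

  class VmaNode(object):
      # Simplified model of a vma node in the augmented tree.
      def __init__(self, vm_start, vm_end, prev_end, left=None, right=None):
          self.vm_start, self.vm_end = vm_start, vm_end
          self.prev_end = prev_end           # end of the preceding vma in the sorted list
          self.left, self.right = left, right
          self.subtree_gap = self.compute_gap()

      def compute_gap(self):
          # Whichever is the highest of the three values listed above.
          own_gap = self.vm_start - self.prev_end
          left_gap = self.left.subtree_gap if self.left else 0
          right_gap = self.right.subtree_gap if self.right else 0
          return max(own_gap, left_gap, right_gap)

  # Leaves are built first, so their cached gaps are available to the parent.
  left = VmaNode(0x400000, 0x401000, prev_end=0x0)
  right = VmaNode(0x7f0000000000, 0x7f0000021000, prev_end=0x601000)
  root = VmaNode(0x600000, 0x601000, prev_end=0x401000, left=left, right=right)
  print(hex(root.subtree_gap))   # the largest usable gap anywhere in the subtree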

During the address selection process, two important boundaries are identified as well: the minimum lower boundary and the maximum upper boundary. The lower boundary is determined by the architecture as the minimum allowable address or as the minimum value permitted by the system administrator. The upper boundary—mmap_base—is selected as stack minus random, where stack is the maximum stack address and random is a random value with 28 to 32 bits of entropy, depending on the relevant kernel parameters. The Linux kernel cannot choose an address higher than mmap_base. In the process address space, addresses above mmap_base either correspond to the stack and special system regions (vvar and vdso) or will never be used, unless explicitly requested with the MAP_FIXED flag.

So in this whole scheme, the following values remain unknown: the address of the start of the main thread stack, the base address for loading the application binary file, the start address of the application heap, and mmap_base, which is the starting address for memory allocation with mmap.

4. Problems with current implementation
The memory allocation algorithm just described has a number of weaknesses.

4.1 Close proximity of memory location
An application uses virtual RAM. Common uses of memory by an application include the heap, code, and data (.rodata, .bss) of loaded modules, thread stacks, and loaded files. Any mistake in processing the data from these pages may affect nearby data as well. As more pages with differing types of contents are located in close proximity, the attack area becomes larger and the probability of successful exploitation rises.

Examples of such mistakes include out-of-bounds (4), overflow (integer (12) or buffer (13)), and type confusion (14).

A specific instance of this problem is that the system remains vulnerable to the Offset2lib attack, as described in (7). In short: the base address for program loading is not allocated separately from the libraries, since the kernel selects it based on mmap_base. If the application contains vulnerabilities, it becomes easier to exploit them, because library images are located in close proximity to the binary application image.

A good example demonstrating this problem is a PHP vulnerability in (15) that allows reading or altering neighboring memory regions.

Section 5 will provide several examples.

4.2 Fixed method of loading libraries
In Linux, dynamic libraries are loaded practically without calling the Linux kernel. The ld library (from GNU Libc) is in charge of this process. The only way the kernel participates is via the mmap function (we will not yet consider open/stat and other file operations): this is required for loading the code and library data into the process address space. An exception is the ld library itself, which is usually written in the executable ELF file as the interpreter for file loading. As for the interpreter, it is loaded by the kernel.

If ld from GNU Libc is used as the interpreter, libraries are loaded in a way resembling the following:

  1. The program ELF file is added to the file queue for processing.
  2. The first ELF file is taken out of the queue (FIFO).
  3. If the file has not been loaded yet into the process address space, it is loaded with the help of mmap.
  4. Each library needed for the file in question is added to the queue of files for processing.
  5. As long as the queue is not empty, repeat step 2.

This algorithm means that the order of loading is always determinate and can be repeated if all the required libraries (their binary files) are known. This allows recovering the addresses of all libraries if the address of any single library is known:
  1. Assume that the address of the libc library is known.
  2. Add the length of the libc library to the libc loading address—this is the loading address of the library that was loaded before libc.
  3. Continuing in the same manner, we obtain mmap_base values and addresses of the libraries that were loaded before libc.
  4. Subtract from the libc address the length of the library loaded after libc. This is the address of the library loaded after libc.
  5. Iterating in the same manner, we obtain the addresses of all libraries that were loaded at program start with the ld interpreter.

If a library is loaded while the program is running (for instance, via the dlopen function), its position in relation to other libraries may be unknown to attackers in some cases. For example, this may happen if there were mmap calls for which the size of allocated memory regions is unknown to attackers.
When it comes to exploiting vulnerabilities, knowledge of library addresses helps significantly: for instance, when searching for gadgets to build ROP chains. What's more, if any library contains a vulnerability that allows reading or writing values relative to the library address, such a vulnerability will be easily exploited, since the libraries are sequential.
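This adjacency is easy to observe on a live system. The following minimal sketch (our illustration, not code from the article) uses dl_iterate_phdr to print the base address of every loaded object; run it several times and the absolute values change, while the pairwise differences between them do not:

    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    /* Called once per loaded object (main binary, ld, libc, ...). */
    static int print_base(struct dl_phdr_info *info, size_t size, void *data)
    {
        (void)size; (void)data;
        printf("%-40s base = %p\n",
               info->dlpi_name[0] ? info->dlpi_name : "(main program)",
               (void *)info->dlpi_addr);
        return 0; /* keep iterating */
    }

    int main(void)
    {
        dl_iterate_phdr(print_base, NULL);
        return 0;
    }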

Most Linux distributions contain compiled packages with the most widespread libraries (such as libc). This means that the length of libraries is known, giving a partial picture of the distribution of virtual address space of a process in such a case.

Theoretically, one could build a large database for this purpose. For Ubuntu, it would contain versions of libraries including ld, libc, libpthread, and libm; for each version of a library, multiple versions of its dependencies could be analyzed. Thus, by knowing the address of one library, one can obtain candidate memory maps describing the layout of part of the process address space.

Examples of such databases are libcdb.com and libc.blukat.me, which are used to identify libc versions based on offsets for known functions.

All this means that a fixed method of loading libraries is an application security problem. The behavior of mmap, described in the previous section, compounds the problem. In Android, this problem is solved in version 7 and later (16) (17).

4.3 Fixed order of execution
Programs have an interesting property: there is a pair of certain points in the execution thread between which the program state is predictable. For example, once a client has connected to a network service, the service allocates some resources to the client. Part of these resources may be allocated from the application heap. In this case, the relative position of objects in the heap is usually predictable.

This property can be used to exploit applications by "building" the program state an attacker requires. Here we will call this property a fixed order of execution.

A special case of this property is the existence of a fixed point in the execution flow at which, from launch to launch, the program state is identical except for a few variables. For example, before the main function is executed, the ld interpreter must load and initialize all the libraries and then initialize the program. As noted in Section 4.2, the relative position of libraries will always be the same. During execution of the main function, the differences will consist in the specific addresses of the program image, libraries, stack, heap, and objects allocated in memory. These differences are due to the randomization described in Section 6.

As a result, an attacker can obtain information on the relative position of program data. This position is not affected by randomization of the process address space.

At this stage, the only possible source of entropy is competition between threads: if the program creates several threads, their competition in working with the data may introduce entropy to the location of objects. In this example, creating threads before executing the main function is possible with the help of the program global constructors or required libraries.

When the program starts using the heap and allocating memory from it (usually with the help of new/malloc), the mutual position of objects in the heap will remain constant for each launch up to a certain moment.

In some cases, the position of thread stacks and heaps will also be predictable relative to library addresses.

If needed, it is possible to obtain these offsets to use in exploitation. One way is to simply execute "strace -e mmap" for this application twice and compare the difference in addresses.

4.4 Holes
If an application allocates memory with mmap and then frees part of that memory, this can cause holes: free memory regions that are surrounded by occupied regions. Problems may arise if this free memory (a hole) is later allocated for a vulnerable object (an object whose processing exposes a vulnerability in the application). This brings us back to the problem of closely located objects in memory.

One illustrative example of such holes was found in the code for ELF file loading in the Linux kernel. When loading an ELF file, the kernel first reads the size of the file and tries to map it in full via do_mmap. Once the file has been fully mapped, the memory after the first segment is freed; all following segments are then loaded at fixed addresses (MAP_FIXED) set relative to the first segment. All this is done in order to load the entire file at the selected address and to lay out the segments, with the proper permissions and offsets, in accordance with their descriptions in the ELF file. This approach can create memory holes if gaps were present between segments in the ELF file.
In the same situation, during loading of an ELF file, the ld interpreter (GNU Libc) does not call munmap but instead changes the permissions of the free pages (holes) to PROT_NONE, which forbids the process from having any access to these pages. This approach is more secure.

To fix the problem of ELF file loading and related holes, the Linux kernel features a patch implementing the same logic as in ld from GNU Libc (see Section 7.1).

4.5 TLS and thread stack
Thread Local Storage (TLS) is a mechanism whereby each thread in a multithreaded process can allocate locations for data storage (18). The mechanism is implemented differently on different architectures and operating systems. In our case, this is the glibc implementation on x86-64; for x86, the differences are not material to the mmap problem in question.

In the case of glibc, mmap is also used to create TLS. This means that TLS is allocated in the way already described here. If TLS is close to a vulnerable object, it can be altered.

What is interesting about TLS? In the glibc implementation, TLS is pointed to by the segment register fs (for the x86-64 architecture). Its structure is described by the tcbhead_t type defined in glibc source files:
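An abridged version of the type for x86-64 is reproduced below as an illustration (the exact set of fields varies between glibc versions):

    typedef struct
    {
      void *tcb;              /* self-pointer to this TCB; %fs points here     */
      dtv_t *dtv;             /* dynamic thread vector (glibc-internal type)   */
      void *self;             /* pointer to the thread descriptor              */
      int multiple_threads;
      int gscope_flag;
      uintptr_t sysinfo;
      uintptr_t stack_guard;  /* the stack canary; lives at %fs:0x28 on x86-64 */
      uintptr_t pointer_guard;
      /* further fields omitted */
    } tcbhead_t;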

This type contains the field stack_guard, which contains a so-called canary—a random or pseudorandom number for protecting an application from stack overflows (19).
This protection works in the following way: when a function is entered, a canary obtained from tcbhead_t.stack_guard is placed on the stack. At the end of the function, the stack value is compared to the reference value in tcbhead_t.stack_guard. If the two values do not match, the application will return an error and terminate.

Canaries can be bypassed in several ways:

  • If an attacker does not need to overwrite this value (20).
  • If an attacker has managed to read or anticipate this value, making it possible to perform a successful attack (20).
  • If an attacker can overwrite this value with a known one, making it possible to cause a stack overflow (20).
  • An attacker can take control before the application terminates (21).
The listed bypasses highlight the importance of protecting TLS from being read or overwritten by an attacker.

Our research revealed that glibc has a problem in its TLS implementation for threads created with pthread_create. Suppose TLS must be set up for a new thread. After allocating memory for the thread stack, glibc initializes TLS in the upper addresses of that memory. On the x86-64 architecture considered here, the stack grows downward, so the TLS ends up at the top of the stack. Subtracting a certain constant value from the TLS address gives the value used by the new thread for its stack register. The distance from the TLS to the stack frame of the function passed to pthread_create is less than one page. Now a would-be attacker does not need to guess or leak the canary value: the attacker can simply overwrite the reference value in TLS together with the copy on the stack, bypassing the protection entirely. A similar problem was found in Intel ME (22).
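The proximity is easy to measure with a short program (our sketch; the worker function and variable names are illustrative, build with gcc -pthread). It starts a thread and prints the distance from a local variable of that thread to the stack_guard slot of its TLS; per the behavior described above, the distance is less than a page:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        unsigned long local = 0;   /* lives in the new thread's stack frame */
        char *tls_base;

        /* On x86-64/glibc, %fs:0 holds a pointer to the TCB itself,
           and stack_guard sits at offset 0x28 inside it. */
        __asm__("mov %%fs:0, %0" : "=r"(tls_base));

        printf("local: %p, TLS canary slot: %p, distance: %#lx\n",
               (void *)&local, (void *)(tls_base + 0x28),
               (unsigned long)(tls_base + 0x28 - (char *)&local));
        return arg;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }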
4.6 malloc and mmap
When the requested size is larger than a certain threshold (128 KB by default in glibc), malloc serves the request with mmap instead of the heap. In such cases, the returned address will be close to libraries and other data allocated with mmap. Attackers pay close attention to mistakes in the handling of heap objects, such as heap overflow, use after free (23), and type confusion (14).

An interesting behavior of glibc was found when a program uses pthread_create. On the first call to malloc from a thread created with pthread_create, glibc calls mmap to create a new heap for that thread. So, in that thread, all addresses returned by malloc will be located close to the stack of this same thread. (For details, see Section 5.7.)

Some programs and libraries use mmap for mapping files to the address space of a process. The files may be used as, for example, cache or for fast saving (altering) of data on the drive.

Here is an abstract example: an application loads an MP3 file with the help of mmap. Let us call the load address mmap_mp3. The application then reads, from the loaded data, the offset to the start of audio data. If the application's routine for verifying that value contains a mistake, an attacker can craft a special MP3 file and gain access to the memory region located after mmap_mp3.

4.7 MAP_FIXED and loading of ET_DYN ELF files
The mmap manual says the following regarding the MAP_FIXED flag:


Don't interpret addr as a hint: place the mapping at exactly that address. addr must be a multiple of the page size. If the memory region specified by addr and len overlaps pages of any existing mapping(s), then the overlapped part of the existing mapping(s) will be discarded. If the specified address cannot be used, mmap() will fail. Because requiring a fixed address for a mapping is less portable, the use of this option is discouraged.

If the requested region with the MAP_FIXED flag overlaps existing regions, successful mmap execution will overwrite existing regions.

Therefore, if a programmer makes a mistake with MAP_FIXED, existing memory regions may be redefined.
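A few standalone lines (our sketch, unrelated to the ELF loader itself) are enough to see this behavior: the second call does not fail, it silently discards the overlapped page of the first mapping:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t page = 4096;

        /* First mapping: two read-write anonymous pages filled with 'A'. */
        char *a = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (a == MAP_FAILED)
            return 1;
        memset(a, 'A', 2 * page);

        /* Re-mapping the first page with MAP_FIXED silently replaces the
           overlapped part of the existing mapping instead of failing. */
        char *b = mmap(a, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

        printf("a = %p, b = %p, a[0] after remap = %#x (was 0x41)\n",
               (void *)a, (void *)b, a[0]);
        return 0;
    }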

An interesting example of such a mistake has been found both in the Linux kernel and in glibc.
As described in (24), ELF files are subject to the requirement that, in the Phdr header, ELF file segments must be arranged in ascending order of vaddr addresses:


The array element specifies a loadable segment, described by p_filesz and p_memsz. The bytes from the file are mapped to the start of the memory segment. If the segment's memory size (p_memsz) is larger than the file size (p_filesz), the "extra" bytes are defined to hold the value 0 and to follow the segment's initialized area. The file size may not be larger than the memory size. Loadable segment entries in the program header table appear in ascending order, sorted on the p_vaddr member.

However, this requirement is not checked. The current code for ELF file loading is as follows:

All segments are processed according to the following algorithm:

  1. Calculate the size of the loaded ELF file: the end address of the last segment minus the start address of the first segment.
  2. With the help of mmap, allocate memory for the entire ELF file with that size, thus obtaining the base address for ELF file loading.
  3. In the case of glibc, change access rights. If loading from the kernel, release regions that create holes. Here the behavior of glibc and the Linux kernel differ, as described in Section 4.4.
  4. With the help of mmap and the MAP_FIXED flag, allocate memory for the remaining segments, using the address obtained when mapping the first segment plus the offset taken from the ELF file header.

This enables an intruder to create an ELF file, one of whose segments can fully overwrite an existing memory region—such as the thread stack, heap, or library code.

An example of a vulnerable application is the ldd tool, which is used to check whether required libraries are present in the system. The tool uses the ld interpreter. Taking advantage of the problem with ELF file loading just discussed, we succeeded in executing arbitrary code with ldd:

The issue of MAP_FIXED has also been raised in the Linux community previously (25). However, no patch has been accepted.

For informational purposes, the source code of this example is located in the folder evil_elf.

4.8 Cache of allocated memory
glibc has many different caches, of which two are interesting in the context of ASLR: the cache of stacks of newly created threads and the cache of thread heaps. The stack cache works as follows: on thread termination, the stack memory is not released but is transferred to the corresponding cache. When creating a new thread stack, glibc first checks the cache; if the cache contains a region of the required length, glibc uses that region. In this case, mmap is not called, and the new thread uses the previously used region at the same addresses. If an attacker has obtained the thread stack address and can control the creation and deletion of program threads, the attacker can use knowledge of that address for vulnerability exploitation. Further, if the application contains uninitialized variables, their values can also come under the attacker's control, which may lead to exploitation in some cases.

The heap cache works as follows: on thread termination, its heap moves to the corresponding cache. When a heap is created again for a new thread, the cache is checked first. If the cache has an available region, this region will be used. In this case, everything about the stack in the previous paragraph applies here as well.

5. Examples
There may be other cases where mmap is used. This means that this problem leads to a whole class of potentially vulnerable applications.
Here are some examples illustrating these problems.

5.1 Stacks of two threads
Using pthread_create, let us create two threads and calculate the difference between local variables of both threads. Source code:
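A minimal sketch of such a program (ours; the article's exact listing may differ) is given below. Both threads are created before either is joined, so the second one cannot reuse the first one's cached stack; build with gcc -pthread:

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *out)
    {
        int var = 0;                  /* lives in this thread's stack */
        *(void **)out = (void *)&var;
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        void *addr1 = NULL, *addr2 = NULL;

        pthread_create(&t1, NULL, worker, &addr1);
        pthread_create(&t2, NULL, worker, &addr2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("thread1 var: %p, thread2 var: %p, Diff: %#lx\n",
               addr1, addr2, (unsigned long)((char *)addr1 - (char *)addr2));
        return 0;
    }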

Output after the first launch:

Output after the second launch:

As we can see, even though the addresses of the variables change from launch to launch, the difference between them remains the same (marked in the output by the word "Diff", next to the address values). The example shows that vulnerable code in the stack of one thread may affect another thread or any neighboring memory region, whether or not ASLR is present.

5.2 Thread stack and large buffer allocated with malloc
Now, in the main thread of an application, let us allocate a large amount of memory with the help of malloc and launch a new thread. We then calculate the difference between the address obtained with malloc and the variable in the stack of the new thread. Here is the source code:
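A minimal sketch (ours; the 1 MB buffer size is an assumption, chosen to be large enough that glibc serves the request via mmap):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BUF_SIZE (1024 * 1024)    /* large enough to be served by mmap */

    static void *worker(void *out)
    {
        int var = 0;                  /* local variable in the new thread's stack */
        *(void **)out = (void *)&var;
        return NULL;
    }

    int main(void)
    {
        void *stack_addr = NULL;
        char *buf = malloc(BUF_SIZE); /* falls back to mmap, not the main heap */

        pthread_t t;
        pthread_create(&t, NULL, worker, &stack_addr);
        pthread_join(t, NULL);

        printf("malloc buffer: %p, thread stack var: %p, Diff: %#lx\n",
               (void *)buf, stack_addr,
               (unsigned long)(buf - (char *)stack_addr));
        free(buf);
        return 0;
    }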

Output after the first launch:

Output after the second launch:

Again, the difference does not change. This example shows that when a large buffer allocated with malloc is processed, vulnerable code can affect the stack of a new thread despite ASLR protections.
5.3 mmap and thread stack
Here we will allocate memory with the help of mmap and launch a new thread with pthread_create. Then we calculate the difference between the address allocated with mmap and the address of the variable in the stack of the new thread. Here is the source code:
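A sketch of such a program (ours; the one-page mapping size is arbitrary):

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/mman.h>

    static void *worker(void *out)
    {
        int var = 0;                  /* local variable in the new thread's stack */
        *(void **)out = (void *)&var;
        return NULL;
    }

    int main(void)
    {
        void *stack_addr = NULL;
        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        pthread_t t;
        pthread_create(&t, NULL, worker, &stack_addr);
        pthread_join(t, NULL);

        printf("mmap region: %p, thread stack var: %p, Diff: %#lx\n",
               (void *)region, stack_addr,
               (unsigned long)(region - (char *)stack_addr));
        return 0;
    }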

Output after the first launch:

Output after the second launch:

The difference remains unchanged. This example shows that when a buffer allocated with mmap is processed, vulnerable code can affect the stack of a new thread, regardless of ASLR.

5.4 mmap and TLS of the main thread
Let us allocate memory with the help of mmap to obtain the TLS address of the main thread. Then we calculate the difference between the two and make sure that the canary value on the main thread stack is the same as the TLS value. Here is the source code:
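The sketch below (ours) reads the main thread's TLS base and the reference canary directly from the TCB through the fs segment rather than comparing it with the copy on the stack; on x86-64/glibc the canary lives at %fs:0x28:

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        char *tls_base;
        unsigned long canary;

        /* %fs:0 points to the main thread's TCB; the canary is at %fs:0x28. */
        __asm__("mov %%fs:0, %0" : "=r"(tls_base));
        __asm__("mov %%fs:0x28, %0" : "=r"(canary));

        printf("mmap region: %p, TLS: %p, Diff: %#lx, canary: %#lx\n",
               (void *)region, (void *)tls_base,
               (unsigned long)(tls_base - region), canary);
        return 0;
    }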

Output after the first launch:

Output after the second launch:

As seen here, the difference remains the same from launch to launch, and the canary values match. So if a suitable vulnerability is present, it is possible to alter the canary and bypass the protection: for example, a buffer overflow vulnerability in the stack combined with a vulnerability that allows overwriting memory at an offset from the region allocated with mmap. In this example, the offset equals 0x85c8700. The example shows a method of bypassing ASLR and the stack canary.

5.5 mmap and glibc
A similar example was discussed in Section 4.2, but here is one with a slightly different twist: let us allocate memory with mmap to obtain the difference between this address and the system and execv functions from the glibc library. Here is the source code:
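A sketch (ours) that resolves the real libc addresses of system and execv with dlsym, so that PLT stubs in the main binary do not distort the picture (link with -ldl on older glibc):

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/mman.h>

    int main(void)
    {
        char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        void *sys = dlsym(RTLD_DEFAULT, "system");
        void *exe = dlsym(RTLD_DEFAULT, "execv");

        printf("region: %p, system: %p, execv: %p\n", (void *)region, sys, exe);
        printf("Diff to system: %#lx, Diff to execv: %#lx\n",
               (unsigned long)((uintptr_t)sys - (uintptr_t)region),
               (unsigned long)((uintptr_t)exe - (uintptr_t)region));
        return 0;
    }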

Output after the first launch:

Output after the second launch:

As we can see, the difference between the allocated region and functions remains the same. This example shows a method of bypassing ASLR when vulnerable code interacts with a buffer allocated with mmap. The distances (in bytes) to library functions and data will remain constant, which can also be used for exploiting the application.
5.6 Buffer overflow in child thread stack
Let us create a new thread and overflow the stack buffer up to the TLS value. If there are no arguments in the command line, we will not overwrite the TLS canary—otherwise, we will. This argument logic is simply a way of showing the difference in the program's behavior.
Overwriting will be done with the 0x41 byte. Here is the source code:

In this example, protection was successful in detecting the stack overflow and terminating the application with an error before an attacker could seize control. Now we overwrite the reference canary value:

In the second example, we successfully overwrite the canary and execute the pwn_payload function, which launches the sh interpreter.

This example shows a method of bypassing stack overflow protections. To carry out exploitation successfully, an attacker needs to overwrite a sufficient number of bytes in order to overwrite the canary reference value. In this example, the attacker needs to overwrite at least 0x7b8+0x30, or 2024, bytes.

5.7 Thread stack and small buffer allocated with malloc
Let us now create a thread, allocate some memory with malloc, and calculate the difference from the local variable in this thread. Source code:

The first launch:

And the second launch:

In this case, the difference was not the same. Nor will it remain the same from launch to launch. Let us consider the reasons for this.

The first thing to note: the malloc-derived pointer address does not correspond to the process heap address.

glibc creates a new heap for each new thread created with the help of pthread_create. The pointer to this heap lies in TLS, so any thread allocates memory from its own heap, which increases performance, since there is no need to sync threads in case of concurrent malloc use.
But why then is the address "random"?

When allocating a new heap, glibc uses mmap; the size depends on the configuration. In this case, the heap size is 64 MB, and the heap start address must be aligned to 64 MB. So glibc first allocates 128 MB and then carves out an aligned 64 MB piece from that range; the unaligned remainders are released and create a "hole" between the heap address and the closest region previously allocated with mmap.
Randomness is introduced by the kernel itself when selecting mmap_base: that address is not aligned to 64 MB, and neither were the mmap allocations performed before the malloc call in question.
Regardless of why address alignment is required, this leads to a very interesting effect: bruteforcing becomes possible.

The Linux kernel defines the process address space for x86-64 as "47 bits minus one guard page", which for simplicity we will round to 2^47 (omitting the one-page subtraction in our size calculations). 64 MB is 2^26, so the significant bits equal 47 – 26 = 21, giving us a total of 2^21 various heaps of secondary threads.

This substantially narrows the bruteforcing range.

Because the mmap address is selected in a known way, we can assume that the heap of the first thread created with pthread_create will be the 64 MB-aligned region closest to the upper address range; to be more precise, closest to all the loaded libraries, loaded files, and so on.
In some cases, it is possible to calculate the total amount of memory allocated before the call to the malloc in question. In our case, we loaded only glibc and ld and created a stack for the thread. So this value is small.

Section 6 will show the way in which the mmap_base address is selected. But for now, here is some additional information: mmap_base is selected with an entropy of 28 to 32 bits, depending on kernel settings at compile time (28 bits by default), so the upper boundary is shifted down by a random value of at most that size.
Thus, in many cases, the upper 7 bits of the address will equal 0x7f, while in rare cases they will be 0x7e. That gives us another 7 bits of certainty. In total, there are 2^14 possible options for the heap of the first thread; the more threads are created, the smaller that number becomes for each subsequent heap.

Let us illustrate this behavior with the following C code:
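A minimal version of such a program (our sketch) simply creates one thread, allocates a small buffer in it, and prints the buffer's address:

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void *worker(void *arg)
    {
        /* First malloc in this thread: glibc creates a fresh, 64 MB-aligned
           per-thread heap with mmap and allocates the buffer from it. */
        void *p = malloc(8);
        printf("%p\n", p);
        free(p);
        return arg;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }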

Then let us launch the program a sufficient number of times with Python code for collecting address statistics:

This code launches the simple './t' program, which creates a new thread, a sufficient number of times. The program allocates a buffer with malloc and prints the buffer's address; the script reads that address and counts how many times each address was encountered across all runs. The script collects a total of 16,385 different addresses, which equals 2^14 + 1. This is the number of attempts an attacker would need, in the worst case, to guess the heap address of the program in question.

There is another option—thread stack and large buffer allocated with malloc—but this is rather similar to the one described above. The only difference is that if the buffer size is too large, mmap is called again, so it is difficult to say where the allocated region will be placed: it may fill a hole or stand in front of the heap.

5.8 Stack cache and thread heaps
In this example, we create a thread and allocate memory with malloc, recording the addresses of the thread stack and of the pointer obtained with malloc, and initializing a certain stack variable with the value 0xdeadbeef. We then terminate the thread and create a new one, again allocating memory with malloc. Finally, we compare the addresses and the value of the variable, which this time is deliberately left uninitialized. Here is the source code:
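A sketch along these lines (ours; build without optimization so the deliberately uninitialized read is not removed by the compiler, and note that the stale value is only likely, not guaranteed, to survive):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct result { void *stack_var; void *heap_ptr; unsigned int value; int do_init; };

    static void *worker(void *arg)
    {
        struct result *r = arg;
        unsigned int var;             /* deliberately not always initialized */

        if (r->do_init)
            var = 0xdeadbeef;         /* first run: plant a recognizable value */

        r->stack_var = &var;
        r->heap_ptr  = malloc(8);     /* from this thread's own heap (arena) */
        r->value     = var;           /* second run: may still see 0xdeadbeef */
        free(r->heap_ptr);            /* only the pointer value is kept */
        return NULL;
    }

    int main(void)
    {
        struct result r1 = { .do_init = 1 }, r2 = { .do_init = 0 };
        pthread_t t;

        pthread_create(&t, NULL, worker, &r1);
        pthread_join(t, NULL);        /* the stack and heap go to glibc caches */

        pthread_create(&t, NULL, worker, &r2);
        pthread_join(t, NULL);

        printf("first : stack var %p, malloc %p, value %#x\n",
               r1.stack_var, r1.heap_ptr, r1.value);
        printf("second: stack var %p, malloc %p, value %#x\n",
               r2.stack_var, r2.heap_ptr, r2.value);
        return 0;
    }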


As clearly seen, the addresses of local variables in the stack for consecutively created threads remain the same. Also the same are the addresses of variables allocated for them via malloc; some values of the first thread's local variables are still accessible to the second thread. An attacker can use this to exploit vulnerabilities of uninitialized variables (26). Although the cache speeds up the application, it also enables attackers to bypass ASLR and carry out exploitation.

6. Mapping process address space
When a new process is created, the kernel follows the algorithm below to determine its address space:

  1. After a call to execve, the virtual memory of the process is completely cleared.
  2. The very first vma is created; it describes the process stack (stack_base). Initially, its address is selected as 2^47 – pagesize (where pagesize is the page size, 4096 on x86-64); it is then adjusted by a certain random value random1 not exceeding 16 GB. (This happens quite late, after the base address of the binary file has been selected, so some interesting effects are possible: if the application binary occupies the entire memory, the stack will end up next to the binary's base address.)
  3. The kernel selects mmap_base, the address relative to which the system will later load all the libraries in the process address space. The address is determined as stack_base – random2 – 128 MB, where random2 is a random value whose upper boundary depends on the kernel configuration and ranges from 1 TB to 16 TB.
  4. The kernel tries to load the program binary file. If the file is a PIE (position-independent executable, loadable at an arbitrary base address), the base address is (2^47 – 1) * 2/3 + random3, where random3 is also determined by the kernel configuration and has an upper boundary of 1 TB to 16 TB.
  5. If the file needs dynamically loaded libraries, the kernel tries to load an interpreter to load all the required libraries and perform all initializations. Usually, the interpreter in ELF files is ld from glibc. The address is selected in relation to mmap_base.
  6. The kernel sets the new process heap as the end of the loaded ELF file plus a certain random4 value with an upper boundary of 32 MB.

After these stages, the process is launched. The start address is either the one from the ELF file of the interpreter (ld) or the one from the ELF file of the program if there is no interpreter (a statically linked ELF).
If ASLR is on and the program can be loaded at an arbitrary address, the process address space will look as follows:

Each library, being loaded with the interpreter, will get control if a list of global constructors is defined in it. In this case, library functions for allocating resources (global constructors) required for this library will be called.

Thanks to the known sequence of library loading, it is possible to obtain a certain point in the program execution thread that allows "building" memory regions in terms of their relative locations to one another, regardless of whether ASLR is present. Increasing knowledge about the libraries, their constructors, and program behavior will push this point further away from the point of process creation.

To determine specific addresses, one still needs a vulnerability that allows obtaining the address of some mmap region or reading (writing) memory relative to a particular mmap region:

  • If an attacker knows the address of some mmap region that was allocated from the start of the process to the constant execution point (Section 4.3), the attacker can successfully calculate mmap_base and the address of any loaded library or any other mmap region.
  • If it is possible to address relative to a certain mmap region from the point of constant execution, it is not necessary to know any additional address.

To prove the feasibility of mapping process memory in this way, we wrote Python code to simulate kernel behavior when searching for new regions. The method of loading ELF files and the order of library loading were recreated as well. To simulate a vulnerability that allows reading library addresses, the /proc file system was used: the script reads the ld address (thus recovering mmap_base) and, given the list of libraries, reconstructs the process memory map, then compares the result with the original. The script fully reproduced the address space of all the processes tested. Script code is available at: https://github.com/blackzert/aslur

6.1 Attack vectors
Let us review some vulnerabilities that have already become "classic" because of their prevalence.
1. Heap buffer overflows: There are various well-known vulnerabilities in application operation with the glibc heap, as well as methods for exploiting them. We can categorize these vulnerabilities under two types: they either allow modifying memory relative to the address of the vulnerable heap, or allow modifying memory addresses known to the attacker. In some cases, it is possible to read arbitrary data from objects on the heap. This fact gives rise to several vectors:

• In case of modifying (reading) memory relative to an object in the heap, we are primarily interested in the heap of the thread created with pthread_create, because the distance from it to any library (stack) of the thread will be less than the distance from the heap of the main thread.
• In case of reading (writing) memory relative to some address, it is first of all necessary to try to read the addresses from the heap itself, as they usually contain pointers to vtable or to libc.main_arena.

Knowledge of the libc.main_arena address yields the glibc address and, subsequently, the mmap_base address. To obtain the vtable address, it is required to know the address of either some library (and hence mmap_base as well) or the program loading address. An attacker who knows the program loading address can read library addresses from the .got.plt section containing addresses for required library functions.

2. Buffer overflow:

  • At the stack, it leads to the canary scenario discussed above.
  • At the heap, it leads to the scenario described in item 1.
  • At an mmap region, it leads to overwriting of neighboring regions, depending on the context.

7. Fixes
In this article, we have reviewed several problems; now we can consider fixes for some of them. Let us start with the simplest solutions and then proceed to more complicated ones.

7.1 Hole in ld.so
As shown in Section 4.4, the ELF interpreter loader in the Linux kernel contains an error and allows releasing part of the interpreter library memory. A relevant fix was proposed to the community but was left without action:


7.2 Order of loading ELF file segments

As noted above, neither the kernel nor the glibc library checks ELF file segments: the code simply trusts that they are in the correct order. Proof-of-concept code, as well as a fix, is enclosed: https://github.com/blackzert/aslur

The fix is quite simple: go through the segments and make sure that the current one does not overlap the next one, and that the segments are sorted in ascending order of vaddr.
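A sketch of the kind of check meant here (our illustration, not the actual patch):

    #include <elf.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Returns true if the PT_LOAD segments are sorted by p_vaddr
       and no loadable segment overlaps the previous one. */
    static bool phdrs_are_sane(const Elf64_Phdr *phdr, size_t phnum)
    {
        const Elf64_Phdr *prev = NULL;

        for (size_t i = 0; i < phnum; i++) {
            if (phdr[i].p_type != PT_LOAD)
                continue;
            if (prev) {
                if (phdr[i].p_vaddr < prev->p_vaddr)
                    return false;   /* not in ascending order of vaddr */
                if (prev->p_vaddr + prev->p_memsz > phdr[i].p_vaddr)
                    return false;   /* overlaps the previous segment   */
            }
            prev = &phdr[i];
        }
        return true;
    }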

7.3 Use of mmap_min_addr when searching for mmap allocation addresses

As soon as a fix was written for mmap so that it returns addresses with sufficient entropy, a problem arose: some mmap calls failed with a permission error. This happened even for root, or when the call was made by the kernel itself while executing execve.

In the address selection algorithm (described earlier in Section 3), one of the steps is checking the selected address against security restrictions. In the current implementation, this check verifies that the selected address is larger than mmap_min_addr, a system variable that an administrator can change through sysctl. The system administrator can set any value, and the process cannot allocate a page at an address lower than this value. The default value is 65536.

The problem was that when the address search function for mmap was called on x86-64, the Linux kernel used 4096 as the minimum lower boundary, which is less than mmap_min_addr. The function cap_mmap_addr forbids the operation if the selected address falls between 4096 and mmap_min_addr.

cap_mmap_addr is called implicitly; the function is registered as a security hook. This architectural solution raises questions: first the address is chosen without any ability to test it against external criteria, and only then is its permissibility checked against the current system parameters. If the address fails the check, then even though the kernel itself selected it, the address can be "forbidden" and the entire operation will end with the EPERM error.

An attacker can use this fact to cause denial of service in the entire system: by setting a very large value of mmap_min_addr, the attacker prevents any user process from starting. Moreover, if the attacker manages to make this value persistent in the system parameters, even rebooting will not help: all created processes will be terminated with the EPERM error.

Currently, the fix is to use the mmap_min_addr value as the lowest allowable address when making a request to the address search function. Such code is already used for all other architectures.
What will happen if the system administrator starts changing this value on a running machine? This question remains unanswered, since all new allocations after the change may end with the EPERM error; no program expects such an error and will not know how to handle it. The mmap documentation states the following:

"EPERM The operation was prevented by a file seal; see fcntl (2)."

That is to say, according to the documentation the kernel should not return EPERM for a MAP_ANONYMOUS mapping, although in fact it can.

7.4 mmap
The main mmap problem discussed here is the lack of entropy in address choice. Ideally, the logical fix would be to select memory randomly. To select it randomly, one must first build a list of all free regions of appropriate size and then, from that list, select a random region and an address from this region that meets the search criteria (the length of the requested region and the allowable lower and upper boundaries).

To implement this logic, the following approaches can be applied:

1. Keep the list of voids in a descending-order array. In this case, the choice of random element is made in a single operation, but maintaining this array requires many operations for releasing (allocating) the memory when the current virtual address space map of the process changes.
2. Keep the list of voids in a tree and a list, in order to find an outer boundary that satisfies the length requirement, and select a random element from the array. If the element does not fit the minimum/maximum address restrictions, select the next one, and so on until one is found (or none remain). This approach involves complex list and tree structures similar to those already existing for vma with regard to change of address space.
3. Use the existing augmented red-black vma tree to traverse the allowed gap voids and select a random address. In the worst case, each selection will have to traverse all the nodes, but rebuilding the tree does not incur any additional slowdown of performance.

Our choice went to the last approach. We can use the existing vma organizational structure without adding redundancy and select an address using the following algorithm:
1. Use the existing algorithm to find a possible gap void with the largest valid address. Also, record the structure of vma following it. If there is no such structure, return ENOMEM.
2. Record the found gap as the result and vma as the maximum upper boundary.
3. Take the first vma structure from the doubly linked list. It will be a leaf in the red-black tree, because it has the smallest address.
4. Traverse the tree from the selected vma in ascending address order, checking the permissibility of the free region between the vma in question and its predecessor. If the free region is allowed by the restrictions, obtain another bit of entropy. If the entropy bit is 1, redefine the current value of the gap void.
5. Return a random address from the selected gap void region.
One way to optimize the fourth step of the algorithm is not to enter subtrees whose gap extension size is smaller than the required length.

This algorithm selects an address with sufficient entropy, although it is slower than the current implementation.
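To make the selection logic concrete, here is a user-space simulation (ours, not kernel code): candidate gaps are modeled as a plain array sorted by address, and rand() stands in for the kernel's entropy source:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    struct gap { unsigned long start, len; };   /* a free region ("gap void") */

    static int random_bit(void) { return rand() & 1; }

    static unsigned long pick_address(const struct gap *gaps, size_t n,
                                      unsigned long want_len)
    {
        const struct gap *chosen = NULL;

        /* Steps 1-2: the suitable gap with the highest address is the
           initial candidate (the array is sorted by address). */
        for (size_t i = n; i-- > 0; )
            if (gaps[i].len >= want_len) { chosen = &gaps[i]; break; }
        if (!chosen)
            return 0;                           /* ENOMEM */

        /* Steps 3-4: walk all gaps from the lowest address upward and
           replace the candidate whenever the entropy bit is 1. */
        for (size_t i = 0; i < n; i++)
            if (gaps[i].len >= want_len && random_bit())
                chosen = &gaps[i];

        /* Step 5: a random page-aligned offset inside the chosen gap. */
        unsigned long slack = chosen->len - want_len;
        unsigned long off = (slack / 4096) ? (rand() % (slack / 4096)) * 4096 : 0;
        return chosen->start + off;
    }

    int main(void)
    {
        struct gap gaps[] = { { 0x10000, 0x200000 },
                              { 0x7f0000000000, 0x100000 },
                              { 0x7f5500000000, 0x4000000 } };
        srand(time(NULL));
        printf("%#lx\n", pick_address(gaps, 3, 0x10000));
        return 0;
    }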

As for obvious drawbacks, it is necessary to traverse all vma structures that have a sufficient gap void length. However, this is offset by the absence of any performance penalty when the address space changes.

8. Testing fixes for ASLR
After applying the described fixes to the kernel, the process /bin/less looks as follows:

As seen in the example:

  1. All the libraries were allocated in random locations and are at a random distance from one another.
  2. The file /usr/lib/locale/locale-archive mapped with mmap is also located at random addresses.
  3. The hole in /lib/x86_64-linux-gnu/ld-2.26.so is not filled with any mmap mapping.

This patch was tested on Ubuntu 17.04 with Google Chrome and Mozilla Firefox running. No problems were found.

9. Conclusion
This research has demonstrated many interesting features of the kernel and glibc in terms of handling of program code. The problem of close memory location was articulated and considered in detail. The following problems were found:

  • The algorithm for choosing the mmap address does not contain entropy.
  • Loading of ELF files in the kernel and interpreter contains a segment processing error.
  • When searching for an address with do_mmap, the kernel does not take into account mmap_min_addr on the x86-64 architecture.
  • Loading an ELF file in the kernel allows creating memory holes in the program ELF file and the ELF file interpreter.
  • When allocating memory for libraries with mmap, the GNU Libc ELF interpreter (ld) loads them at addresses that depend on mmap_base. In addition, libraries are loaded in a fixed order.
  • When allocating thread stacks, heaps, and TLS with mmap, the GNU Libc library likewise places them at addresses that depend on mmap_base.
  • The GNU Libc library places the TLS of threads created with pthread_create at the top of the thread stack, which allows bypassing stack buffer overflow protection by overwriting the canary.
  • The GNU Libc library caches previously allocated heaps (stacks) of threads, which, in some cases, allows successful exploitation of a vulnerable application.
  • The GNU Libc library creates a heap for new threads aligned to 2^26, which substantially narrows the bruteforcing range.

These problems help an attacker to bypass ASLR or protections against stack buffer overflows. For some of these problems, fixes (in the form of kernel patches) have been proposed here. Proof-of-concept code has been presented for all problems mentioned. An algorithm ensuring sufficient entropy for address selection is proposed. The same approach can be used to analyze ASLR on other operating systems such as Windows and macOS.
A number of peculiarities of the GNU Libc implementation were reviewed; in some cases, these peculiarities inadvertently facilitate exploitation of vulnerable applications.
References

1. Erik Buchanan, Ryan Roemer, Stefan Savage, Hovav Shacham. Return-Oriented Programming: Exploits Without Code Injection. [Online] Aug 2008. https://www.blackhat.com/presentations/bh-usa-08/Shacham/BH_US_08_Shacham_Return_Oriented_Programming.pdf.
2. xorl. [Online] https://xorl.wordpress.com/2011/01/16/linux-kernel-aslr-implementation/.
3. Reed Hastings, Bob Joyce. Purify: Fast Detection of Memory Leaks and Access Errors. [Online] December 1992 https://web.stanford.edu/class/cs343/resources/purify.pdf.
4. Improper Restriction of Operations within the Bounds of a Memory Buffer. [Online] https://cwe.mitre.org/data/definitions/119.html.
5. AMD Bulldozer Linux ASLR weakness: Reducing entropy by 87.5%. [Online] http://hmarco.org/bugs/AMD-Bulldozer-linux-ASLR-weakness-reducing-mmaped-files-by-eight.html.
6. Dmitry Evtyushkin, Dmitry Ponomarev, Nael Abu-Ghazaleh. Jump Over ASLR: Attacking Branch Predictors to Bypass ASLR. [Online] http://www.cs.ucr.edu/~nael/pubs/micro16.pdf.
7. Hector Marco-Gisbert, Ismael Ripoll. Offset2lib: bypassing full ASLR on 64bit Linux. [Online] https://cybersecurity.upv.es/attacks/offset2lib/offset2lib.html.
8. Hector Marco-Gisbert, Ismael Ripoll-Ripoll. ASLR-NG: ASLR Next Generation. [Online] 2016 https://cybersecurity.upv.es/solutions/aslr-ng/ASLRNG-BH-white-paper.pdf.
9. Doubly linked list. [Online] https://en.wikipedia.org/wiki/Doubly_linked_list.
10. Bayer, Rudolf. Symmetric binary B-Trees: Data structure and maintenance algorithms. [Online] January 24, 1972 https://link.springer.com/article/10.1007%2FBF00289509.
11. Lespinasse, Michel. mm: use augmented rbtrees for finding unmapped areas. [Online] November 5, 2012 https://lkml.org/lkml/2012/11/5/673.
12. Integer Overflow or Wraparound. [Online] https://cwe.mitre.org/data/definitions/190.html.
13. Classic Buffer Overflow. [Online] https://cwe.mitre.org/data/definitions/120.html.
14. Incorrect Type Conversion or Cast. [Online] https://cwe.mitre.org/data/definitions/704.html.
15. CVE-2014-9427. [Online] https://www.cvedetails.com/cve/CVE-2014-9427/.
16. Security Enhancements in Android 7.0. [Online] https://source.android.com/security/enhancements/enhancements70.
17. Implement Library Load Order Randomization. [Online] https://android-review.googlesource.com/c/platform/bionic/+/178130/2.
18. Thread-Local Storage. [Online] http://gcc.gnu.org/onlinedocs/gcc-3.3/gcc/Thread-Local.html.
19. One, Aleph. Smashing The Stack For Fun And Profit. [Online] http://www.phrack.org/issues/49/14.html#article.
20. Fritsch, Hagen. Buffer overflows on linux-x86-64. [Online] April 16, 2009 http://www.blackhat.com/presentations/bh-europe-09/Fritsch/Blackhat-Europe-2009-Fritsch-Buffer-Overflows-Linux-whitepaper.pdf.
21. Litchfield, David. Defeating the Stack Based Buffer Overflow Prevention. [Online] September 8, 2003. https://crypto.stanford.edu/cs155old/cs155-spring05/litch.pdf.
22. Maxim Goryachy, Mark Ermolov. How to Hack a Turned-Off Computer, or Running Unsigned Code in Intel Management Engine. [Online] https://www.blackhat.com/docs/eu-17/materials/eu-17-Goryachy-How-To-Hack-A-Turned-Off-Computer-Or-Running-Unsigned-Code-In-Intel-Management-Engine-wp.pdf.
23. Use After Free. [Online] https://cwe.mitre.org/data/definitions/416.html.
24. Executable and Linkable Format (ELF). [Online] http://www.skyfree.org/linux/references/ELF_Format.pdf.
25. Hocko, Michal. mm: introduce MAP_FIXED_SAFE. [Online] https://lwn.net/Articles/741335/.
26. Use of Uninitialized Variable. [Online] https://cwe.mitre.org/data/definitions/457.html.