From Chrome renderer code exec to kernel with MSG_OOB
Posted by Jann Horn, Google Project Zero
Introduction

In early June, I was reviewing a new Linux kernel feature when I learned about the MSG_OOB feature supported by stream-oriented UNIX domain sockets. I reviewed the implementation of MSG_OOB, and discovered a security bug (CVE-2025-38236) affecting Linux >=6.9. I reported the bug to Linux, and it got fixed. Interestingly, while the MSG_OOB feature is not used by Chrome, it was exposed in the Chrome renderer sandbox. (Since then, sending MSG_OOB messages has been blocked in Chrome renderers in response to this issue.)
The bug is pretty easy to trigger; the following sequence results in UAF:
char dummy;
int socks[2];
socketpair(AF_UNIX, SOCK_STREAM, 0, socks);
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, MSG_OOB);
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, MSG_OOB);
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, 0);
recv(socks[0], &dummy, 1, MSG_OOB);
I was curious to explore how hard it is to actually exploit such a bug from inside the Chrome Linux Desktop renderer sandbox on an x86-64 Debian Trixie system, escalating privileges directly from native code execution in the renderer to the kernel. Even if the bug is reachable, how hard is it to find useful primitives for heap object reallocation, delay injection, and so on?
The exploit code is posted on our bugtracker; you may want to reference it while following along with this post.
Backstory: The feature

Support for using MSG_OOB with AF_UNIX stream sockets was added in 2021 with commit 314001f0bf92 ("af_unix: Add OOB support", landed in Linux 5.15). With this feature, it is possible to send a single byte of "out-of-band" data that the recipient can read ahead of the rest of the data. The feature is very limited - out-of-band data is always a single byte, and there can only be a single pending byte of out-of-band data at a time. (Sending two out-of-band messages one after another causes the first one to be turned into a normal in-band message.) This feature is used almost nowhere except in Oracle products, as discussed on an email thread from 2024 where removal of the feature was proposed; yet it is enabled by default when AF_UNIX socket support is enabled in the kernel config, and it wasn't even possible to disable MSG_OOB support until commit 5155cbcdbf03 ("af_unix: Add a prompt to CONFIG_AF_UNIX_OOB") landed in December 2024.
Because the Chrome renderer sandbox allows stream-oriented UNIX domain sockets and didn't filter the flags arguments of send()/recv() functions, this esoteric feature was usable inside the sandbox.
When a message (represented by a socket buffer / struct sk_buff, short SKB) is sent between two connected stream-oriented sockets, the message is added to the ->sk_receive_queue of the receiving socket, which is a linked list. An SKB has a length field ->len describing the length of data contained within it (counting both data in the SKB's "head buffer" as well as data indirectly referenced by the SKB in other ways). An SKB also contains some scratch space that can be used by the subsystem currently owning the SKB (char cb[48] in struct sk_buff); UNIX domain sockets access this scratch space with the helper #define UNIXCB(skb) (*(struct unix_skb_parms *)&((skb)->cb)), and one of the things they store in there is a field u32 consumed which stores the number of bytes of the SKB that have already been read from the socket. UNIX domain sockets count the remaining length of an SKB with the helper unix_skb_len(), which returns skb->len - UNIXCB(skb).consumed.
MSG_OOB messages (sent with something like send(sockfd, &message_byte, 1, MSG_OOB), which goes through queue_oob() in the kernel) are also added to the ->sk_receive_queue just like normal messages; but to allow the receiving socket to access the latest out-of-band message ahead of the rest of the queue, the ->oob_skb pointer of the receiving socket is updated to point to this message. When the receiving socket receives an OOB message with something like recv(sockfd, &received_byte, 1, MSG_OOB) (implemented in unix_stream_recv_urg()), the corresponding socket buffer stays on the ->sk_receive_queue, but its consumed field is incremented, causing its remaining length (unix_skb_len()) to become 0, and the ->oob_skb pointer is cleared; the normal receive path will have to deal with this when encountering the remaining-length-0 SKB.
This means that the normal recv() path (unix_stream_read_generic()), which runs when recv() is called without MSG_OOB, must be able to deal with remaining-length-0 SKBs and must take care to clear the ->oob_skb pointer when it deletes an OOB SKB. manage_oob() is supposed to take care of this. Essentially, when the normal receive path obtains an SKB from the ->sk_receive_queue, it calls manage_oob() to take care of all the fixing-up required to deal with the OOB mechanism; manage_oob() will then return the first SKB that contains at least 1 byte of remaining data, and manage_oob() ensures that this SKB is no longer referenced as ->oob_skb. unix_stream_read_generic() can then proceed as if the OOB mechanism didn't exist.
Backstory: The bug, and what led to it

In mid-2024, a userspace API inconsistency was discovered, where recv() could spuriously return 0 (which normally signals end-of-file) when trying to read from a socket with a receive queue that contains a remaining-length-0 SKB left behind by receiving an OOB SKB. The fix for this issue introduced two closely related security issues that can lead to UAF; it was marked as fixing a bug introduced by the original MSG_OOB implementation, but luckily was actually only backported to Linux 6.9.8, so the buggy fix did not land in older LTS kernel branches.
After the buggy fix, manage_oob() looked as follows:
static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
                                  int flags, int copied)
{
    struct unix_sock *u = unix_sk(sk);

    if (!unix_skb_len(skb)) {
        struct sk_buff *unlinked_skb = NULL;

        spin_lock(&sk->sk_receive_queue.lock);

        if (copied) {
            skb = NULL;
        } else if (flags & MSG_PEEK) {
            skb = skb_peek_next(skb, &sk->sk_receive_queue);
        } else {
            unlinked_skb = skb;
            skb = skb_peek_next(skb, &sk->sk_receive_queue);
            __skb_unlink(unlinked_skb, &sk->sk_receive_queue);
        }

        spin_unlock(&sk->sk_receive_queue.lock);

        consume_skb(unlinked_skb);
    } else {
        struct sk_buff *unlinked_skb = NULL;

        spin_lock(&sk->sk_receive_queue.lock);

        if (skb == u->oob_skb) {
            if (copied) {
                skb = NULL;
            } else if (!(flags & MSG_PEEK)) {
                if (sock_flag(sk, SOCK_URGINLINE)) {
                    WRITE_ONCE(u->oob_skb, NULL);
                    consume_skb(skb);
                } else {
                    __skb_unlink(skb, &sk->sk_receive_queue);
                    WRITE_ONCE(u->oob_skb, NULL);
                    unlinked_skb = skb;
                    skb = skb_peek(&sk->sk_receive_queue);
                }
            } else if (!sock_flag(sk, SOCK_URGINLINE)) {
                skb = skb_peek_next(skb, &sk->sk_receive_queue);
            }
        }

        spin_unlock(&sk->sk_receive_queue.lock);

        if (unlinked_skb) {
            WARN_ON_ONCE(skb_unref(unlinked_skb));
            kfree_skb(unlinked_skb);
        }
    }

    return skb;
}
After this change, syzbot (the public syzkaller instance operated by Google) reported that a use-after-free occurs in the following scenario, as described by the fix commit for the syzbot-reported issue:
1. send(MSG_OOB)
2. recv(MSG_OOB)
-> The consumed OOB remains in recv queue
3. send(MSG_OOB)
4. recv()
-> manage_oob() returns the next skb of the consumed OOB
-> This is also OOB, but unix_sk(sk)->oob_skb is not cleared
5. recv(MSG_OOB)
-> unix_sk(sk)->oob_skb is used but already freed
In other words, the issue is that when the receive queue looks like this (shown with the oldest message at the top):
- SKB 1: unix_skb_len()=0
- SKB 2: unix_skb_len()=1 <--OOB pointer
and a normal recv() happens, then manage_oob() takes the !unix_skb_len(skb) branch, which deletes the SKB with remaining length 0 and skips forward to the following SKB; but it then doesn't go through the skb == u->oob_skb check as it otherwise would, which means it doesn't clear out the ->oob_skb pointer before the SKB is consumed by the normal receive path, creating a dangling pointer that will lead to UAF on a subsequent recv(... MSG_OOB).
This issue was fixed, making the checks for remaining-length-0 SKBs and ->oob_skb in manage_oob() independent:
static struct sk_buff *manage_oob(struct sk_buff *skb, struct sock *sk,
                                  int flags, int copied)
{
    struct sk_buff *read_skb = NULL, *unread_skb = NULL;
    struct unix_sock *u = unix_sk(sk);

    if (likely(unix_skb_len(skb) && skb != READ_ONCE(u->oob_skb)))
        return skb;

    spin_lock(&sk->sk_receive_queue.lock);

    if (!unix_skb_len(skb)) {
        if (copied && (!u->oob_skb || skb == u->oob_skb)) {
            skb = NULL;
        } else if (flags & MSG_PEEK) {
            skb = skb_peek_next(skb, &sk->sk_receive_queue);
        } else {
            read_skb = skb;
            skb = skb_peek_next(skb, &sk->sk_receive_queue);
            __skb_unlink(read_skb, &sk->sk_receive_queue);
        }

        if (!skb)
            goto unlock;
    }

    if (skb != u->oob_skb)
        goto unlock;

    if (copied) {
        skb = NULL;
    } else if (!(flags & MSG_PEEK)) {
        WRITE_ONCE(u->oob_skb, NULL);

        if (!sock_flag(sk, SOCK_URGINLINE)) {
            __skb_unlink(skb, &sk->sk_receive_queue);
            unread_skb = skb;
            skb = skb_peek(&sk->sk_receive_queue);
        }
    } else if (!sock_flag(sk, SOCK_URGINLINE)) {
        skb = skb_peek_next(skb, &sk->sk_receive_queue);
    }

unlock:
    spin_unlock(&sk->sk_receive_queue.lock);

    consume_skb(read_skb);
    kfree_skb(unread_skb);

    return skb;
}
But a remaining issue is that when this function discovers a remaining-length-0 SKB left behind by recv(..., MSG_OOB), it skips ahead to the next SKB and assumes that it is not also a remaining-length-0 SKB. If this assumption is broken, manage_oob() can return a pointer to the second remaining-length-0 SKB, which is bad because the caller unix_stream_read_generic() does not expect to see remaining-length-0 SKBs:
static int unix_stream_read_generic(struct unix_stream_read_state *state,
                                    bool freezable)
{
    [...]
    int flags = state->flags;
    [...]
    int skip;
    [...]
    skip = max(sk_peek_offset(sk, flags), 0); // 0 if MSG_PEEK isn't set

    do {
        struct sk_buff *skb, *last;
        [...]
        last = skb = skb_peek(&sk->sk_receive_queue);
        last_len = last ? last->len : 0;

again:
#if IS_ENABLED(CONFIG_AF_UNIX_OOB)
        if (skb) {
            skb = manage_oob(skb, sk, flags, copied);
            if (!skb && copied) {
                unix_state_unlock(sk);
                break;
            }
        }
#endif
        if (skb == NULL) {
            [...]
        }

        while (skip >= unix_skb_len(skb)) {
            skip -= unix_skb_len(skb);
            last = skb;
            last_len = skb->len;
            skb = skb_peek_next(skb, &sk->sk_receive_queue);
            if (!skb)
                goto again;
        }

        [...]

        /* Mark read part of skb as used */
        if (!(flags & MSG_PEEK)) {
            UNIXCB(skb).consumed += chunk;
            [...]
            if (unix_skb_len(skb))
                break;

            skb_unlink(skb, &sk->sk_receive_queue);
            consume_skb(skb); // frees the SKB

            if (scm.fp)
                break;
        } else {
If MSG_PEEK is not set (which is the only case in which SKBs can actually be freed), skip is always 0, and the while (skip >= unix_skb_len(skb)) loop condition should always be false; but when a remaining-length-0 SKB unexpectedly gets here, the condition turns into 0 >= 0, and the loop skips ahead to the first SKB that does not have remaining length 0. That SKB could be the ->oob_skb; in which case this again bypasses the logic in manage_oob() that is supposed to set ->oob_skb to NULL before the current ->oob_skb can be freed.
So the remaining bug can be triggered by first doing the following twice, creating two remaining-length-0 SKBs in the ->sk_receive_queue:
send(socks[1], "A", 1, MSG_OOB);
recv(socks[0], &dummy, 1, MSG_OOB);
If another OOB SKB is then sent with send(socks[1], "A", 1, MSG_OOB), the ->sk_receive_queue will look like this:
- SKB 1: unix_skb_len()=0
- SKB 2: unix_skb_len()=0
- SKB 3: unix_skb_len()=1 <--OOB pointer
Now, recv(socks[0], &dummy, 1, 0) will trigger the bug and free SKB 3 while leaving ->oob_skb pointing to it; making it possible for subsequent recv() syscalls with MSG_OOB to use the dangling pointer.
The initial primitive

This bug yields a dangling ->oob_skb pointer. Pretty much the only way to use that dangling pointer is the recv() syscall with MSG_OOB, either with or without MSG_PEEK, which is implemented in unix_stream_recv_urg(). (There are other codepaths that touch it, but they're mostly just pointer comparisons, with the exception of the unix_ioctl() handler for SIOCATMARK, which is blocked in Chrome's seccomp sandbox.)
unix_stream_recv_urg() does this:
static int unix_stream_recv_urg(struct unix_stream_read_state *state)
{
    struct socket *sock = state->socket;
    struct sock *sk = sock->sk;
    struct unix_sock *u = unix_sk(sk);
    int chunk = 1;
    struct sk_buff *oob_skb;

    mutex_lock(&u->iolock);
    unix_state_lock(sk);
    spin_lock(&sk->sk_receive_queue.lock);

    if (sock_flag(sk, SOCK_URGINLINE) || !u->oob_skb) {
        [...]
    }

    // read dangling pointer
    oob_skb = u->oob_skb;

    if (!(state->flags & MSG_PEEK))
        WRITE_ONCE(u->oob_skb, NULL);

    spin_unlock(&sk->sk_receive_queue.lock);
    unix_state_unlock(sk);

    // read primitive
    // ->recv_actor() is unix_stream_read_actor()
    chunk = state->recv_actor(oob_skb, 0, chunk, state);

    if (!(state->flags & MSG_PEEK))
        UNIXCB(oob_skb).consumed += 1; // write primitive

    mutex_unlock(&u->iolock);

    if (chunk < 0)
        return -EFAULT;

    state->msg->msg_flags |= MSG_OOB;
    return 1;
}
At a high level, the call to state->recv_actor() (which goes down the call path unix_stream_read_actor -> skb_copy_datagram_msg -> skb_copy_datagram_iter -> __skb_datagram_iter(cb=simple_copy_to_iter)) gives a read primitive: it is trying to copy one byte of data referenced by the oob_skb to userspace, so by replacing the memory pointed to by oob_skb with controlled, repeatedly writable data, it is possible to repeatedly cause copy_to_user(<userspace pointer>, <kernel pointer>, 1) with arbitrary kernel pointers. As long as MSG_PEEK is set, this can be repeated; only when MSG_PEEK is clear is the ->oob_skb pointer cleared.
The only write primitive this bug yields is the increment UNIXCB(oob_skb).consumed += 1 that happens when MSG_PEEK is not set. In the build I'm looking at, the consumed field that is incremented is located 0x44 bytes into the oob_skb, an object which is effectively allocated with an alignment of 0x100 bytes. This means that, if the write primitive is applied to a 64-bit length value or a pointer, it would have to do an increment at offset 4 relative to the 8-byte aligned overwrite target, and it would effectively increment the 64-bit pointer/length by 4 GiB.
My exploit for this issue

Discarded strategy for using the write primitive: Pointer increment

It would be possible to free the sk_buff and reallocate it as some structure containing a pointer at offset 0x40. The write primitive would effectively increment this pointer by 4 GiB (because it would increment by 1 at an offset 4 bytes into the pointer). But this would fundamentally rely on the machine having significantly more than 4 GiB of RAM, which feels gross and a bit like cheating.
Overall strategy

Since this issue relatively straightforwardly leads to a semi-arbitrary read (subject to usercopy hardening restrictions), but the write primitive is much more gnarly, I decided to go with the general approach of: first get the read primitive working; then use the read primitive to assist in exploiting the write primitive. This way, ideally everything after the read primitive bootstrapping can be made reliable with enough work.
Dealing with per-cpu state

Lots of things in this exploit rely on per-cpu kernel data structures and will fail if a task is migrated between CPUs at the wrong time. In some places in the exploit, I repeatedly check which CPU the exploit is running on with sched_getcpu(), and retry if the CPU number changed; though I was too lazy to do that everywhere perfectly, and this could be done even better by relying more directly on the "restartable sequences" subsystem.
Note that the Chrome sandbox policy forbids __NR_getcpu; but that has no effect at all on sched_getcpu(), in particular on x86-64, because there are two faster alternatives to the getcpu() syscall that glibc prefers to use instead:
- The kernel's rseq subsystem maintains a struct rseq in userspace for each thread, which contains the cpu_id that the thread is currently running on; if rseq is available, glibc will read from the rseq struct.
- On x86-64, the vDSO contains a pure-userspace implementation of the getcpu() syscall which relies on either the RDPID instruction or, if that is not available, the LSL instruction to determine the ID of the current CPU without having to perform a syscall. (This is implemented in vdso_read_cpunode() in the kernel sources, which is compiled into the vDSO that is mapped into userspace.)
On the targeted Debian kernel, struct sk_buff is in the skbuff_head_cache SLUB cache, which normally uses order-1 unmovable pages. I had trouble finding a good reallocation primitive that also uses order-1 pages (though maple_node might have been an option); so I went for reallocation as a pipe page (order-0 unmovable), though that means that the reallocation will go through the buddy allocator and requires the order-0 unmovable list to become empty so that an order-1 page is split up.
This is not very novel, so I will only describe a few interesting aspects of the strategy here - if you want a better understanding of how to free a SLUB page and reallocate it as something else, there are plenty of existing writeups, including one I wrote a while ago (section "Attack stage: Freeing the object's page to the page allocator"), though that one does not discuss the buddy allocator.
To make it more likely for a reallocation of an order-1 page as an order-0 page to succeed, the exploit starts by allocating a large number of order-0 unmovable pages to drain the order-0 and order-1 unmovable freelists. Most ways of allocating large amounts of kernel memory are limited in the sandbox; in particular, the default file descriptor table size soft limit (RLIMIT_NOFILE) is 4096 on Debian (Chrome leaves this limit as-is), and I can neither use setrlimit() to bump that number up (due to seccomp) nor create subprocesses with separate file descriptor tables. (A real exploit might be able to work around this by exploiting several renderer processes, though that seems like a pain.) The one primitive I have for allocating large amounts of unmovable pages are page tables: by creating a gigantic anonymous VMA (read-only to avoid running into Chrome's RLIMIT_DATA restrictions) and then triggering read faults all over this VMA, an unlimited number of page tables can be allocated. I use this to spam around 10% of total RAM with page tables. (To figure out how much RAM the machine has, I'm testing whether mmap() works with different sizes, relying on the OVERCOMMIT_GUESS behavior of __vm_enough_memory(); though that doesn't actually work precisely in the sandbox due to the RLIMIT_DATA limit. A cleaner and less noisy way might be to actually fill up RAM and use mincore() to figure out how large the working set can get before pages get swapped out or discarded.)
Afterwards, I create 41 UNIX domain sockets and use them to spam 256 SKB allocations each; since each SKB uses 0x100 bytes, this allocates a bit over 2.5 MiB of kernel memory. That is enough to later flush a slab page out of both SLUB's per-cpu partial list as well as the page allocator's per-cpu freelist, all the way into the buddy allocator.
Then I set up a SLUB page containing a dangling pointer, try to flush this page all the way into the buddy allocator, and reallocate it as a pipe page by using 256 pipes to each allocate 2 pages (which is the minimum size that a pipe always has, see PIPE_MIN_DEF_BUFFERS). This allocates 256 * 2 * 4 KiB = 2 MiB worth of order-0 pages.
At this point, I have probably reallocated the SKB as a pipe page; but I don't know in which pipe the SKB is located, or at which offset. To figure that out, I store fake SKBs in the pipe pages that point to different data; then, by triggering the bug with recv(..., MSG_OOB|MSG_PEEK), I can read one byte at the pointed-to location and narrow down where in which pipe the SKB is. I don't know the addresses of any kernel objects yet; but the x86-64 implementation of copy_to_user() is symmetric and also works if you pass a userspace pointer as the source, so I can simply use userspace data pointers in the crafted SKBs for now. (SMAP is not an issue here - SMAP is disabled for all memory accesses in copy_to_user(). On x86-64, copy_to_user() is actually implemented as a wrapper around copy_user_generic(), which is a helper that accepts both kernel and userspace addresses as source and destination.)
Afterwards, I have the ability to call copy_to_user(..., 1) on arbitrary kernel pointers through recv(..., MSG_OOB|MSG_PEEK) using the controlled SKB.
Properties of the read primitive

One really cool aspect of a copy_to_user()-based read primitive on x86-64 is that it doesn't crash even when called on invalid kernel pointers - if the kernel memory access fails, the recv() syscall will simply return an error (-EFAULT).
The main limitation is that usercopy hardening (__check_object_size()) will catch attempts to read from some specific memory ranges:
- Ranges that wrap around - not an issue here, only ranges of length 1 can be used anyway.
- Addresses <=16 - not an issue here.
- The kernel stack of the current process, if some other criteria are met. Not an issue here - even if I want to read from a kernel stack, I'll probably want to read the kernel stack of another thread, which isn't protected.
- The kernel .text section - all of .data and such is accessible, just .text is restricted. When targeting a specific kernel build, that's not really relevant.
- kmap() mappings - those don't exist on x86-64.
- Freed vmalloc allocations, or ranges that straddle the bounds of a vmalloc allocation. Not an issue here.
- Ranges in the direct mapping, or in the kernel image address range, that straddle the bounds of a high-order folio. Not an issue here, only ranges of length 1 can be used anyway.
- Ranges in the direct mapping, or in the kernel image address range, that are used as SLUB pages in non-kmalloc slab caches, at offsets not allowed by usercopy allowlisting (see __check_heap_object()). This is the most annoying part.
(There might be other ways of using this bug to read memory with different constraints, like by using the frag_iter->len read in __skb_datagram_iter() to influence an offset from which known data is subsequently read, but that seems like a pain to work with.)
Locating the kernel image

To break KASLR of the kernel image at this point, there are lots of options, partially thanks to copy_to_user() not crashing on access to invalid addresses; but one nice option is to read an Interrupt Descriptor Table (IDT) entry through the read-only IDT mapping at the fixed address 0xfffffe0000000000 (CPU_ENTRY_AREA_RO_IDT_VADDR), which yields the address of a kernel interrupt handler.
Using the read primitive to observe allocator state and other things

From here on, my goal is to use the read primitive to assist in exploiting the write primitive; I would like to be able to answer questions like:
- What is the mapping between struct page */struct ptdesc */struct slab * and the corresponding region in the direct mapping? (This is easy and just requires reading some global variables out of the .data/.bss sections.)
- At which address will the next sk_buff allocation be?
- What is the current state of this particular page?
- Where are my page tables located, and which physical address does a given virtual address map to?
Because usercopy hardening blocks access to objects in specialized slabs, it is not possible to read the contents of a struct kmem_cache: kmem_cache instances are allocated from such a specialized slab cache, which does not allow usercopy. But there are many important pieces of kernel memory that are readable, so it is possible to work around that:
- The kernel .data/.bss sections, which contain things like pointers to kmem_cache instances.
- The vmemmap region, which contains all instances of struct page/struct folio/struct ptdesc/struct slab (these types all together effectively form a union) which describe the status of each page. These also contain things like a SLUB freelist head pointer; a pointer to the kmem_cache associated with a given SLUB page; or an intrusive linked list element tying together the root page tables of all processes.
- Kernel stacks of other threads (located in vmalloc memory).
- Per-CPU memory allocations (located in vmalloc memory), which are used in particular for memory allocation fastpaths in SLUB and the page allocator; and also the metadata describing where the per-cpu memory ranges are located.
- Page tables.
So to observe the state of the SLUB allocator for a given slab cache, it is possible to first read the corresponding kmem_cache* from the kernel .data/.bss section, then scan through all per-cpu memory for objects that look like a struct kmem_cache_cpu (with a struct slab * and a freelist pointer pointing into the corresponding direct mapping range), and check whether the struct slab's kmem_cache* points back to the right kmem_cache, to determine whether the kmem_cache_cpu belongs to the right slab cache. Afterwards, the read primitive can be used to read the slab cache's per-cpu freelist head pointer out of the struct kmem_cache_cpu.
To observe the state of a struct page/struct slab/..., the read primitive can be used to simply read the page's refcount and mapcount (which contains type information). This makes it possible to observe things like "has this page been freed yet or is it still allocated" and "as what type of page has this page been reallocated".
To locate the page table root of the current process, it is similarly not possible to directly go through the mm_struct because that is allocated from a specialized slab type which does not allow usercopy (except in the saved_auxv field). But one way to work around this is to instead walk the global linked list of all root page tables (pgd_list), which stores its elements inside struct ptdesc, and search for a struct ptdesc which has a pt_mm field that points to the mm_struct of the current process. The address of this mm_struct can be obtained from the per-cpu variable cpu_tlbstate.loaded_mm. Afterwards, the page tables can be walked through the read primitive.
Finding a reallocation target: The magic of CONFIG_RANDOMIZE_KSTACK_OFFSET

Having already discarded the "bump a pointer by 4 GiB" and "reallocate as a maple tree node" strategies, I went looking for some other allocation which would place an object such that incrementing the value at address 0x...44 leads to a nice primitive. It would be nice to have something there like an important flags field, or a length specifying the size of a pointer array, or something like that. I spent a lot of time looking at various object types that can be allocated on the kernel heap from inside the Chrome sandbox, but found nothing great.
Eventually, I realized that I had been going down the wrong path. Clearly trying to target a heap object was foolish, because there is something much better: It is possible to reallocate the target page as the topmost page of a kernel stack!
That might initially sound like a silly idea; but Debian's kernel config enables CONFIG_RANDOMIZE_KSTACK_OFFSET=y and CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y, causing each syscall invocation to randomly shift the stack pointer down by up to 0x3f0 bytes, with 0x10 bytes granularity. That is supposed to be a security mitigation, but works to my advantage when I already have an arbitrary read: instead of having to find an overwrite target that is at a 0x44-byte distance from the preceding 0x100-byte boundary, I effectively just have to find an overwrite target that is at a 0x4-byte distance from the preceding 0x10-byte boundary, and then keep doing syscalls and checking at what stack depth they execute until I randomly get lucky and the stack lands in the right position.
With that in mind, I went looking for an overwrite target on the stack, strongly inspired by Seth's exploit that overwrote a spilled register containing a length used in copy_from_user. Targeting a normal copy_from_user() directly wouldn't work here - if I incremented the 64-bit length used inside copy_from_user() by 4 GiB, then even if the copy failed midway through due to a userspace fault, copy_from_user() would try to memset() the remaining kernel memory to zero.
I discovered that, on the codepath pipe_write -> copy_page_from_iter -> copy_from_iter, the 64-bit length variable bytes of copy_page_from_iter() is stored in register R14, which is spilled to the stack frame of copy_from_iter(); and this stack spill is in a stack location where I can clobber it.
When userspace calls write() on a pipe, the kernel constructs an iterator (struct iov_iter) that encapsulates the userspace memory range passed to write(). (There are different types of iterators that can encapsulate a single userspace range, a set of userspace ranges, or various types of kernel memory.) Then, pipe_write() (which is called anon_pipe_write() in newer kernels) essentially runs a loop which allocates a new pipe_buffer slot in the pipe, places a new page allocation in this pipe buffer slot, and copies up to a page worth of data (PAGE_SIZE bytes) from the iov_iter to the pipe buffer slot's page using copy_page_from_iter(). copy_page_from_iter() effectively receives two length values: The number of bytes that fit into the caller-provided page (bytes, initially set to PAGE_SIZE here) and the number of bytes available in the struct iov_iter encapsulating the userspace memory range (i->count). The amount of data that will actually be copied is limited by both.
If I manage to increment the spilled register R14 which contains bytes by 4 GiB while copy_from_iter() is busy copying data into the kernel, then after copy_from_iter() returns, copy_page_from_iter() will effectively no longer be bounded by bytes, only by i->count (based on the length userspace passed to write()); so it will do a second iteration, which copies into out-of-bounds memory behind the pipe buffer page. If userspace calls write(fd, buf, 0x3000), and the overwrite happens in the middle of copying bytes 0x1000-0x1fff of the userspace buffer into the second pipe buffer page, then bytes 0x2000-0x2fff will be written out-of-bounds behind the second pipe buffer page, at which point i->count will drop to 0, terminating the operation.
Reallocating a SLUB page as a stack page, with arb-read assistance

So to get the ability to increment-after-free a value in a stack page, I again start by draining the low-order page allocator caches. But this time, the arb-read can be used to determine when an object at the right in-page offset is at the top of the SLUB freelist for the sk_buff slub cache; and the arb-read can also determine whether I managed to allocate an entire slab page worth of objects, with no other objects mixed in. Then, when flushing the page out of the SLUB allocator, the arb-read helps to verify that the page really has been freed (its refcount field should drop to 0); and afterwards, the page is flushed out of the page allocator's per-cpu freelist.
Then, to reallocate the page, I run a loop that first allocates a pipe page, then checks the refcount field of the target page. If the refcount of the target page goes up, I probably found the target page, and can exit the loop; otherwise, I free the pipe page again, reallocate it as a page table to drain the page away, and try again. (Directly allocating as a page table would be cumbersome because page tables have RCU lifetime, so once a page has been allocated as a page table, it is hard to reallocate it. Keeping drained pages in pipe buffers might not work well due to the low file descriptor table size, and each pipe FD pair potentially only being able to reference two pages.)
Once I have reallocated the target page as a pipe buffer, I free it again, then free three more pages (from other helper pipes), and then create a new thread with the clone() syscall. If everything goes well, clone() will allocate four pages for the new kernel stack: First the three other pages I freed last, and then the target page as the last page of the stack. By walking the page tables, I can verify that the target page really got reused as the last page of the target stack.
Remaining prerequisites for using the write primitive
At this point, I have the write primitive set up such that I can trigger it on a specific stack memory location. The write primitive essentially first reads some surrounding (stack) memory (in unix_stream_read_actor() and its callees skb_copy_datagram_msg -> skb_copy_datagram_iter) and expects that memory to have a certain structure before incrementing the value at a specific stack location.
I also know what stack allocation I want to overwrite.
The remaining issues are:
- I need to ensure that an OOB copy_from_user() behind a pipe buffer page will overwrite some data that helps in compromising the kernel.
- I need to be able to detect at what stack depth pipe_write() is running, and depending on that either try again or proceed to trigger the bug.
- The UAF reads preceding the UAF increment need to see the right kind of data to avoid crashing.
- copy_from_iter() needs to take enough time to allow me to increment a value in its stack frame.
Page tables have several nice properties here:
- It is easy for me to cause allocation of as many page tables as I want.
- I can easily determine the physical and kernel-virtual addresses of page tables that the kernel has allocated for my process (by walking the page tables with the arb read).
- They are order-0 unmovable allocations, just like pipe buffers, so the page allocator will allocate them in the same 2MiB pageblocks.
So I am choosing to use the OOB copy_from_user() to overwrite a page table.
This requires that I can observe where my pipe buffer pages are located; for that, I again use the SLUB per-cpu freelist observing trick, this time on the kmalloc-cg-192 slab cache, to figure out where a newly created pipe's pipe_inode_info is located. From there, I can walk to the pipe's pipe_buffer array, which contains pointers to the pages used by the pipe.
With the ability to observe both where my page tables are located and where pipe buffer pages are allocated, I can alternate between allocating page tables and pipe buffer pages until I get two that are adjacent.
Detecting pipe_write() stack depth
To run pipe_write() with a write() syscall such that I can reliably determine at which depth the function is running and decide whether to go ahead with the corruption, without having to race, I can prepare a pipe such that it initially only has space for one more pipe_buffer, and then call write() with a length of 0x3000. This will cause pipe_write() to first store 0x1000 bytes in the last free pipe_buffer slot, then wait for space to become available again. From another thread, it is possible to detect when pipe_write() has used the last free pipe_buffer slot by repeatedly calling poll() on the pipe: When poll() stops reporting that the pipe is ready for writing (POLLOUT), pipe_write() must have used up the last free pipe_buffer slot.
At that point, I know that the syscall entry part of the kernel stack is no longer changing. To check whether the syscall is executing at a specific depth, it is enough to check whether the return address for the return from x64_sys_call to do_syscall_64 is at the expected position on the kernel stack using the arb read - it can't be a return address left from a preceding syscall because the same stack location where that return address is stored is always clobbered by a subsequent call to syscall_exit_to_user_mode at the end of a syscall.
If the randomized stack offset is the right one, I can then do more setup and resume pipe_write() by using read() to clear pipe buffer entries; otherwise, I use read() to clear pipe buffer entries, let pipe_write() run to completion, and try again.
Letting the reads in the increment primitive see the right data
The increment primitive happens on this call graph:
unix_stream_recv_urg
    [read dangling pointer from ->oob_skb]
    unix_stream_read_actor [called as state->recv_actor]
        [UAF read UNIXCB(skb).consumed]
        skb_copy_datagram_msg
            skb_copy_datagram_iter
                __skb_datagram_iter
                    skb_headlen
                        [UAF read skb->len]
                        [UAF read skb->data_len]
                    skb_frags_readable
                        [UAF read skb->unreadable]
                    skb_shinfo [for reading nr_frags]
                        skb_end_pointer
                            [UAF read skb->head]
                            [UAF read skb->end]
                    skb_walk_frags
                        skb_shinfo [for reading frag_list]
                        [forward iteration starting at skb_shinfo(skb)->frag_list along ->next pointers]
    [UAF increment of UNIXCB(oob_skb).consumed]
A promising aspect here is that this codepath first does all the reads; then it does a linked list walk through attacker-controlled pointers with skb_walk_frags(); and then it does the write. skb_walk_frags() is defined as follows:
#define skb_walk_frags(skb, iter) \
for (iter = skb_shinfo(skb)->frag_list; iter; iter = iter->next)
and is used like this in __skb_datagram_iter():
skb_walk_frags(skb, frag_iter) {
        int end;

        WARN_ON(start > offset + len);

        end = start + frag_iter->len;
        if ((copy = end - offset) > 0) {
                if (copy > len)
                        copy = len;
                if (__skb_datagram_iter(frag_iter, offset - start,
                                        to, copy, fault_short, cb, data))
                        goto fault;
                if ((len -= copy) == 0)
                        return 0;
                offset += copy;
        }
        start = end;
}
So if I run recv(..., MSG_OOB) on the UNIX domain socket while the dangling ->oob_skb pointer points to data I control, and craft that fake SKB such that its skb_shinfo(skb)->frag_list points to another fake SKB with ->len=0 and a ->next pointer pointing back to itself, I can cause the syscall to get stuck in an infinite loop. It will keep looping until I replace the ->next pointer with NULL, at which point it will perform just the UAF increment.
This is great news: instead of needing to ensure that the stack contains the right data for the UAF reads and the overwrite target for the UAF increment at the same time, I can first place controlled data on the stack, and then afterwards separately place the overwrite target on the stack.
To place controlled data on the stack, I initially considered using select() or poll(), since I know that those syscalls copy large-ish amounts of data from userspace onto the stack; however, those have the disadvantage of immediately validating the supplied data, and it would be hard to make them actually stay in the syscall, rather than immediately returning out of the syscall with an error and often clobbering the on-stack data array in the process.
Eventually I discovered that sendmsg() on a datagram-oriented UNIX domain socket works great for this: ___sys_sendmsg(), which implements the sendmsg() syscall, will import the destination address pointed to by msg->msg_name into a stack buffer (struct sockaddr_storage address), then call into the protocol-specific ->sendmsg handler - in the case of datagram-oriented UNIX domain sockets, unix_dgram_sendmsg(). This function coarsely validates the structure of the destination address (checking that it specifies the AF_UNIX family and is no larger than struct sockaddr_un), then waits for space to become available in the socket's queue before doing anything else with the destination address. This makes it possible to place 108 bytes of controlled data on a kernel stack, and that data will stay there until the syscall can continue or bail out when space becomes available in the socket queue or the socket is shut down.
I actually need a bit more data on the stack, but luckily the struct iovec iovstack[UIO_FASTIOV] is directly in front of the address, and unused elements at the end of the iovstack are guaranteed to be zeroed thanks to CONFIG_INIT_STACK_ALL_ZERO=y, which happens to be exactly what I need.
It would be helpful to be able to reliably wait for the sendmsg() syscall to enter the kernel and copy the destination address onto the kernel stack before inspecting the state of its stack; this is luckily possible by supplying a single-byte "control message" via msg->msg_control and msg->msg_controllen, which will mostly be ignored because it is too small to be a legitimate control message, but will be copied onto the kernel stack in ____sys_sendmsg() after the destination address has been copied onto the stack. It is possible to detect from userspace when this kernel access to msg->msg_control happens by pointing it to a userspace address which is not yet populated with a page table entry, then polling mincore() on this userspace address.
So now my strategy is roughly:
- In a loop, call sendmsg() on the thread whose kernel stack the dangling ->oob_skb pointer points into, placing a fake SKB on the stack, until the fake SKB lands at the right stack offset (which varies because of CONFIG_RANDOMIZE_KSTACK_OFFSET). Have that fake SKB's skb_shinfo(skb)->frag_list point to a second fake SKB with a ->next pointer that refers back to itself. (This second fake SKB can be placed anywhere I want, so I'm putting it in a userspace-owned page, so that userspace can directly write into it.)
- On a second thread, call recv(..., MSG_OOB) on the UNIX domain socket to dereference the dangling ->oob_skb pointer; this will start looping endlessly, following the ->next pointer.
- On the thread that called sendmsg() before, now call write(..., 0x3000) on a pipe with one free pipe_buffer slot in a loop until the syscall handler lands at the right stack offset (again subject to CONFIG_RANDOMIZE_KSTACK_OFFSET).
- Let the pipe write() continue, and wait until it is in the middle of copying data from userspace memory to a pipe buffer page.
- Set the ->next pointer in the second fake SKB to NULL, so that the recv() on the UNIX domain socket stops looping, performs the UAF increment, and returns.
- Wait for the pipe write() to finish, at which point the page table behind the pipe data page should have been overwritten with controlled data.
Slowing down copy_from_iter()
I need to slow down a copy_from_iter() call. There are several strategies for this that don't work (or don't work well) in a Chrome renderer sandbox:
- userfaultfd: not accessible in the Chrome Desktop renderer sandbox, and nowadays usually restricted such that only root can use it to intercept usercopy operations
- FUSE: not accessible in the Chrome Desktop renderer sandbox
- causing lots of major page faults: I'm not sure if there is some indirect way to get a file descriptor to a writable on-disk file; but either way, this seems like it would be a pain from a renderer.
But as long as only a single userspace memory read needs to be delayed, there is another option: I can create a very large anonymous VMA; fill it with mappings of the 4KiB zeropage; ensure that no page is mapped at one specific location in the VMA (for example with madvise(..., MADV_DONTNEED), which zaps page table entries in the specified range); and then have one thread run an mprotect() operation on this large anonymous VMA while another thread tries to access the part of the userspace region where no page is currently mapped. The mprotect() operation will keep the VMA write-locked while it walks through all the associated page table entries, modifies the page table entries as required, and performs TLB flushes if necessary; so a concurrent page fault in this VMA will have to wait until the mprotect() has finished. One limitation of this technique is that the part of the accessed userspace range that causes the slowdown will be filled with zeroes; but that can just be a single byte at the start or end of the range being copied, so it's not a major limitation.
Based on some rough testing on my machine, if mprotect() has to iterate through 128 MiB of page tables populated with zeropage mappings, it takes something like 500-1000ms depending on which way the page table entries are changed.
Page table control
Putting all this together, I can overwrite the contents of a page table with controlled data. I'm using that controlled write to place a new entry in the page table that points back to the page table, effectively creating a userspace mapping of the page table; and then I can use this to map arbitrary kernel memory writably into userspace.
My exploit demonstrates its ability to modify kernel memory with this by using it to overwrite the UTS information printed by uname.
Takeaway: Chrome sandbox attack surface
One thing that stood out to me about this is that I was able to use a somewhat large number of kernel interfaces in this exploit; in particular:
- anonymous VMA creation: page table allocations
- madvise(): fast VMA splitting and merging
- AF_UNIX SOCK_STREAM sockets: triggering the bug; SKB allocation and freeing
- sched_getcpu() (via syscall-less fastpaths): interacting with per-cpu kernel structures
- eventfd(): synchronization between threads
- pipe(): allocation and freeing of order-0 unmovable pages with controlled contents
- pipe(): stack overwrite target
- AF_UNIX SOCK_DGRAM sockets: placing controlled data on the stack
- sendmsg(): placing controlled data on the stack
- mprotect(): slowing down copy_from_user()
- munmap(): TLB flushing
- madvise(..., MADV_DONTNEED): zapping PTEs for slowing down subsequent copy_from_user() or subsequently detecting copy_from_user()
- mincore(): detecting copy_from_user()
- clone(): racing operations on multiple threads; reallocating pages as kernel stack
- poll(): detecting progress of concurrent pipe_write()
Some of these are obviously needed to implement necessary features of the sandboxed renderer; others seem like unnecessary attack surface. I hope to look at this more systematically in the future.
Takeaway: Esoteric kernel features in core interfaces are an issue for browser sandboxes
One thing I've noticed, not just with this issue, but several issues before that, is that core kernel subsystems (which are exposed in renderer sandbox policies and such) sometimes have flags that trigger esoteric ancillary features that are unintentionally exposed by Chrome's renderer sandbox. Such features seem to often be more buggy than the core feature that the policy intended to expose. Examples of this from Chrome's past include:
- futex() was broadly exposed in the sandbox, making it possible to reach a bug in Priority Inheritance futexes from the renderer sandbox.
- memfd_create() was exposed in the sandbox without checking its flags, making it possible to create HugeTLB mappings using the MFD_HUGETLB flag. There have been several bugs in HugeTLB, which is to my knowledge almost exclusively used by some server applications that use large amounts of RAM, such as databases.
- pipe2() was exposed in the sandbox without checking its flags, making it possible to create "notification pipes" using the O_NOTIFICATION_PIPE flag, which behave very differently from normal pipes and are used exclusively for posting notifications from the kernel "keys" subsystem to userspace.
When faced with an attacker who already has an arbitrary read primitive, probabilistic mitigations that randomize something differently on every operation can be ineffective: the attacker can keep retrying until the arbitrary read confirms that the randomization picked a suitable value. Such mitigations can even work to the attacker's advantage, by lining up memory locations that could otherwise never overlap - as done here using the kernel stack randomization feature.
Picking per-syscall random stack offsets at boottime might avoid this issue, since to retry with different offsets, the attacker would have to wait for the machine to reboot or try again on another machine. However, that would break the protection for cases where the attacker wants to line up two syscalls that use the same syscall number (such as different ioctl() calls); and it could also weaken the protection in cases where the attacker just needs to know what the randomization offset for some syscall will be.
Somewhat relatedly, Blindside demonstrated that this style of attack can be pulled off without a normal arbitrary read primitive, by “exploiting” a real kernel memory corruption bug during speculative execution in order to leak information needed for subsequently exploiting the same memory corruption bug for real.
Takeaway: syzkaller fuzzing and complex data structures
The first memory corruption bug described in this post was introduced in late June 2024, and discovered by syzkaller in late August 2024. Hitting that bug required six syscalls: one to set up a socket pair, four send()/recv() calls to set up a dangling pointer, and one more recv() call to actually trigger the UAF by accessing the dangling pointer.
Hitting the second memory corruption bug, which I found by code review, required eight syscalls: one to set up a socket pair, six send()/recv() calls to set up a dangling pointer, and one more recv() to cause the UAF.
This was not a racy bug; in a KASAN build, running the buggy syscall sequence once is enough to get a kernel splat. But when a fuzzer chains together syscalls more or less at random, the chance of running the right sequence of syscalls drops exponentially with each additional syscall required.
The most important takeaway from this is that data structures with complex safety rules (in this case, rules about the ordering of different types of SKBs in the receive queues of UNIX domain stream sockets) don't just make it hard for human programmers to keep track of safety rules, they also make it hard for fuzzers to construct inputs that explore all relevant state patterns. This might be an area for fuzzer improvement - perhaps fuzzers could reach deeper into specific subsystems by generating samples that focus on interaction with a single kernel subsystem, or by monitoring whether additional syscalls chained to the end of a base sample cause additional activity in a particular subsystem.
Takeaway: copy_from_user() delays don't require FUSE or userfaultfd
FUSE and userfaultfd are the most effective and reliable ways to inject delays on copy_from_user() calls because they can set up separate delays for multiple memory regions, provide precise control over the timing of the injected delay, don't require large allocations or slow preparation, and allow placing arbitrary data in the page that is eventually installed. However, applying mprotect() to a large anonymous VMA filled with zeropage mappings (with 128 MiB of page tables) turns out to be sufficient to delay kernel execution by around a second. In the past, I have pushed for restricting userfaultfd because of how it can delay operations like copy_from_user(), but perhaps userfaultfd was not actually significantly more useful in this regard than mprotect().
Takeaway: Usercopy hardening
The hardening checks I encountered when calling copy_to_user() on arbitrary kernel addresses were a major annoyance, but could be worked around, since access to almost anything except type-specific SLUB pages is allowed. That said, I'm not sure how important improving these checks is - trying to protect against an attacker who can pass arbitrary kernel pointers to copy_to_user() might be futile, and guarding against out-of-bounds/use-after-free copy_to_user() or such is the major focus of this hardening.
Conclusions
Even in somewhat constrained environments, it is possible to pull off moderately complex Linux kernel exploits.
Chrome's Linux desktop renderer sandbox exposes kernel attack surface that is never legitimately used in the sandbox. This needless functionality doesn’t just allow attackers to exercise vulnerabilities they otherwise couldn’t; it also exposes kernel interfaces that are useful for exploitation, enabling heap grooming, delay injection and more. The Linux kernel contributes to this issue by exposing esoteric features through the same syscalls as commonly-used core kernel functionality. I hope to do a more in-depth analysis of Chrome's renderer sandbox on Linux in a follow-up blogpost.
33 time-saving tips for the Chrome Android browser
Mobile web browsing is all about finding what you need quickly and with as little hassle as possible — well, in theory, anyway. In the real world, the act of surfing sites from your smartphone is often anything but efficient.
From sites that have not-so-friendly mobile interfaces to browser commands that take far too many steps to execute, hopping around the World Wide Internuts from a handheld device can frequently leave something to be desired.
Fear not, though, my fellow finger-tappers: There are plenty of tricks you can learn to make your mobile web journey more pleasant and productive. Try these next-level tips for Google’s Chrome Android browser and get ready for a much better mobile browsing experience.
1. Switch tabs the simpler way
First things first: Got multiple tabs open? Move between ’em with minimal effort by sliding your finger horizontally across the address bar. You’ll be zapping between sites in seconds.
2. Manage tabs like a pro
For more advanced tab management, swipe down on a tab, starting at the address bar. That’ll take you to Chrome’s tab overview interface, where you can see all of your open tabs as cards.
From there, tap on any tab to jump to it, swipe sideways on it to close it, or touch and hold it to drag it to a different place in the interface. You can even drag a tab on top of another tab to create a group and keep all of your open stuff organized.
Chrome’s tab overview interface — which seems to be in a constant state of flux — is the fastest way to view and manage tabs.
JR Raphael / Foundry
3. Close all of your tabs at once
When you have tons of tabs open and want to clean house quickly, tap the three-dot menu icon within that same tab overview interface — and whaddya know? There’s a handy hidden command there for closing all of your tabs in one fell swoop.
4. Let Chrome close (and open!) tabs for you
If you really want to save time and stop futzing around with all those tabs you’re always leaving open, Chrome has a relatively recent feature that can actually clean house on your behalf — with absolutely no active effort required.
Tap the browser’s three-dot menu icon, select “Settings,” then select “Tabs” followed by “Inactive.” There, you can tell Chrome to automatically archive untouched tabs for you after seven, 14, or 21 days and move ’em into a special separate section of the browser — then close ’em entirely if you still don’t mess with ’em after a few months.
Chrome’s inactive tabs option is an easy way to keep your browser from getting cluttered.
Note, too, the option in that main “Tabs” menu to “Automatically open tab groups from other devices.” Flip that switch into the on and active position, and anytime you create a new group of tabs within Chrome on another device, it’ll automatically appear within the browser on Android as well.
5. Copy a site’s URL in no time
Sure, you can copy a site’s address by opening the main Chrome menu, selecting “Share,” and then tapping the double-box icon (what looks like two overlapping rectangles) next to the site’s name from the menu that pops up — but sweet sassy molassey, that sure seems like a lot of steps.
Snag a URL with less work by tapping the address bar at the top of the screen and then hitting that same exact copy icon directly next to the page’s URL instead.
6. Embrace invisible address bar shortcuts
Speaking of handy hidden commands within Chrome’s address bar area, make yourself a mental note of the following out-of-sight extras lurking within your browser’s buttons:
- Pressing and holding the Home button (assuming you have that button set to be visible) will surface a one-step shortcut to editing the Chrome home page and customizing it to your liking.
- Pressing and holding the tab indicator icon will reveal handy commands for closing and opening tabs without all the usual steps.
- Pressing and holding the shortcut button — the one right next to the browser’s address area (which we’ll go over more in a moment) — pops up an easy way to edit that button’s function.
And one more thing, while we’re thinking about this sort of saucy step-saver…
7. Take the superspeed path to Chrome’s settings
You can always get into the Chrome Android settings via the browser’s main menu, as we’ve already mentioned — but, little-known fact: There’s an even easier way to zap yourself directly into that area, if you know where to look.
So here it is: From the default Chrome new tab page, simply press and hold your finger onto your profile picture for a split second.
And now you know.
8. Share with a single step
Sharing a page is probably the command I use more than any other in Chrome on Android, whether I’m sending something to a friend or colleague, saving it into my notes for later reference, or emailing it to random strangers. (Hey, we all have our quirks.) And yet, that blasted sharing button is never as readily available as it oughta be.
Well, here’s the fix: With one quick adjustment to an out-of-the-way Chrome setting, you can enable a permanently present one-tap button for sharing a page from the browser to anywhere else on your phone. It’ll save you precious time, and there’s absolutely no downside.
Just tap the three-dot menu icon in Chrome’s upper-right corner, select “Settings,” then:
- Look for the line labeled “Toolbar shortcut,” within the “Advanced” section of browser options.
- Tap that, then make sure the toggle at the top of the screen that comes up next is in the on and active position.
- And, last but not least, select “Share this page” from the list of options in that area.
Or, if you’re really feeling fancy, use the trick we just went over to take a shortcut to that same area of the Chrome Android settings — then make the same selection.
However you get there, once you make your way back out of those settings and into the main Chrome interface, you’ll see a spiffy new dedicated sharing button right in the browser’s upper-right corner — directly next to the open-tabs indicator.
Once you activate Chrome’s toolbar shortcut for sharing (left), you’ll see a new one-tap sharing shortcut right within your address bar for especially easy access (right).
Much easier, no?
9. Share a link to specific text within a web page
When it comes to website sharing, a link alone isn’t always enough. Sometimes, you want to point someone to a specific section of text within a page — and typically, there’s no great way to do that.
Or so you’d think. When such a need next arises, press and hold your finger to the text in question within the page in Chrome. Use the selectors to highlight the exact area of text you want, then tap “Share” in the menu right above the words.
Click the button to copy the link or use one of the other available sharing options to send it to another app, and the link will be specially structured so that the page will automatically scroll down to your selected text and highlight it as soon as it’s opened — like this:
When you create a link to specific text within the Chrome Android app, the page will open to that exact area with your text highlighted.
Point made, seconds saved.
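For reference, the specially structured links Chrome generates use the standard text-fragment syntax, appending a fragment directive to the URL (the address below is a placeholder):

```text
https://example.com/article#:~:text=the%20exact%20phrase%20you%20highlighted
```

Browsers that support text fragments will scroll to and highlight the quoted phrase; browsers that don’t will simply ignore the directive.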
10. Send a link to one of your other devices
Forget sharing with other people for a minute: What about when you need to send a link to yourself — from your phone to a computer or maybe even another Android device?
The Chrome Android app has a handy option that’ll handle that for you. All you’ve gotta do is tap the share icon within the main Chrome menu (or at the top of your browser, if you followed our earlier tip!) and then select “Send to devices” from the menu that shows up.
That’ll give you a list of available devices where you’re signed into Chrome, and once you select any of ’em, your current page will pop up on that device as a notification — no wires or self-emailing required.
Who knew?!
11. Edit and expand screenshots with minimal effort
Sometimes, a picture can be worth a thousand words (or at least a couple hundred). If you feel the urge to capture and share a screenshot of something you’re viewing in Chrome, make a mental note: You can do it from right within the browser and even rely on Chrome’s built-in tools for expanding your screenshot without ever having to leave that environment.
Just hit that share command once again, and this time, look for the “Long Screenshot” option in the menu that appears at the bottom of the screen. Tap it, and you’ll find yourself in a fancy editor where you can both crop and extend the area you’re capturing to show as much of the page as you want — even with scrolling, if needed.
Chrome’s built-in screenshot editor makes it easy to capture regular or even expanded screenshots within the browser.
Once you’re finished, it’s just one more tap — of the checkmark icon at the bottom of the screen — to save your creation and either store it locally or share it to any other destination on your device.
12. Save a page for offline viewing
The next time you’re about to head onto a flight, into a tunnel, or into a time machine that’s transporting you back to an era without Wi-Fi, plan ahead and save some articles for your offline reading enjoyment.
You’d probably never know it, but Chrome actually makes that easy to do: While viewing any web page, open the main Chrome menu — by pressing the three-dot icon in the app’s upper-right corner — and tap the downward-facing arrow icon at the top. And that’s it: Chrome will save the entire page offline for you. Whenever you want to find it, just open up that same menu and select “Downloads.”
All of your saved pages will be there and waiting, regardless of what place, year, or dimension you happen to be visiting.
13. Convert a page into a PDF
Maybe you want to make a more permanent and easily shareable offline copy of a web page. Hey, no problem: Just save it as a PDF.
Open Chrome’s main menu while viewing the page, then select “Share” followed by “Print.” Make sure the printer is set to “Save as PDF” — if you see some other printer name at the top of the screen, tap it to change it — and then tap the circular “PDF” icon in the screen’s upper-right corner and hit the “Save” button on the next screen.
(You can also take a moment to clean the page up before saving it, if you really want to get fancy.)
All that’s left is to fire up your favorite Android file manager to find the document.
14. Edit a PDF from right within Chrome
While we’re thinking about PDFs, ever find yourself needing to fill out a form, sign a document, or make other quick ‘n’ simple changes to a PDF you’ve opened on the web? If so, take note: Chrome now has a snazzy new PDF editor built right into the Android browser — and while it may not be enough for advanced PDF editing needs, it can be precisely what the metaphorical doctor ordered for basic document modifications.
Just tap the link to any PDF, anywhere on the web. (The World Wide Web Consortium — an organization responsible for developing global web standards — has a simple dummy PDF you can use for testing, if you want.)
That should instantly open the PDF right within Chrome — and from there, you can tap the pencil-shaped editing icon to mark up, highlight, and erase stuff as you see fit.
You can now perform basic PDF markups and edits right within Chrome on Android.
If you aren’t seeing the new native Chrome PDF editor, don’t fret: It’s a very recent addition, and it may not be fully rolled out to everyone yet. You can force it to appear, though, with a quick ‘n’ easy under-the-hood adjustment.
And if you need even more robust Android PDF editing powers, I’ve got you covered there, too.
15. Turn any page into your own personal podcast
Whenever you’re next trying to catch up on Very Important Business Reading™ whilst driving, walking, or maybe even waltzing around city streets, why not let Chrome save you from distraction-induced dread and read the info aloud?
The Android Chrome app has an excellent reading system that can save you time by letting you ingest info even when your eyes are (or at least should be) otherwise occupied. Tap the browser’s three-dot menu icon and look for the “Listen to this page” option to try it out. (The option will appear only when you’re actively viewing a page with lots of text, like an article, that Chrome thinks it can read.)
Chrome’s out-loud reading feature is a great way to listen to the web on the go — or even just in your office.
JR Raphael / Foundry
Once a page is being read to you, you can tap the playback bar at the bottom of the screen to uncover additional controls and options — including the ability to adjust playback speed and change the voice being used for the reading.
And if this possibility tickles your fancy, you might also enjoy exploring Chrome’s specific text reading capability as well as the excellent (and all too easily overlooked) Android-wide Reading Mode system — which features its own out-loud reading mechanism and works almost anywhere on your device, even outside of the browser.
16. Act on text within a web pageWhy waste energy typing things into Chrome when you can just tap to find what you need? Anytime you see text on a web page that you want to act on, press and hold your finger on the words — then use the sliders that appear to adjust what’s selected.
Chrome will pop up a small menu with options to perform a web search on the phrase or to share it to any other app on your device (like a messaging service or note-taking app, for instance). If you’re using 2017’s Android 8.0 release or higher — and at this point, you’d better be! — the system should also automatically recognize and offer appropriate one-touch suggestions for things like phone numbers, physical addresses, and email addresses.
Tap some text to share it, search for it, or act on it in other context-appropriate ways.
JR Raphael / Foundry
17. Adjust your addressesOne of the most annoying chores around web work — especially on a phone — is filling out forms and plopping in things like your mailing address (or your company’s address) time and time again.
Chrome can eliminate that hassle and make your life meaningfully easier. Look in the “Addresses and more” area of the browser’s settings and see what you find.
If you already have some addresses stored in that area, take a moment to clean ’em up and winnow ’em down so that only addresses you actually need are present — and so that all the info is complete and up to date for easy automatic filling. You can also manually add in new addresses while you’re there.
And if that section isn’t yet activated for you, tap the toggle at the top of the screen to fix that and then take a few minutes to add in pertinent addresses and other form info you might find yourself filling in on sites over time.
Trust me: Your future self will thank you.
18. Search without stoppingThere’s an even simpler way to perform a web search when you only need a quick peek at the information: Highlight the phrase you want to look up, as described in the previous tip — and then look for the Google bar that appears at the bottom of your screen.
Either tap that bar or slide up on it, and you’ll be able to glance at the results for the term right on top of the page you’re already viewing. You can then tap on any result you see to open it in a new tab, tap the icon in the upper-right corner of the panel to open that as a new tab, or slide your finger down on the panel to close it altogether.
Chrome’s built-in quick search option is a convenient way to peek at results without interrupting your workflow.
JR Raphael / Foundry
And if you aren’t seeing that bar when you select text, head back into Chrome’s settings and tap “Google services” followed by “Touch to Search,” then make sure the toggles for that feature are in the on and active position.
19. Get answers right in Chrome’s address barSometimes, you don’t even need to open a thing to get the information you require. The Chrome Android browser is able to serve up instant answers right within its address bar — so if, for instance, you want to know how old Mark Zuckerberg is (the correct answer is always “old enough to know better”) or how much $25 is in euros, just type the question directly into the box at the top of your browser. Chrome will give you the info right then and there, and you can go right back to whatever else you were doing without having to load another page.
20. Preview a link before you commitI don’t know about you, but I tend to open up an awful lot of links while I’m looking around the web. And more often than not, I end up looking at the resulting pages for approximately 2.7 seconds before deciding to close ’em and move on.
The Chrome Android app has an incredibly useful command that saves me tons of time with that manner of browsing. Just open up any web page (heck, even this one!) and press and hold your finger on any link you see.
Select the “Preview page” option from the menu that appears, and there ya have it: You can see the linked page in an overlay panel, just like with the search results in our earlier tip. You can then tap the box-with-an-arrow icon in the panel’s upper-right corner to open it as its own tab and slide it downward (or tap the “x” in its title bar) to dismiss it entirely.
Convenient, wouldn’t ya say?
21. Find what you need fasterChrome has a hidden way to scan a page for a particular term without much effort: Open the browser’s main menu, select “Find in page,” and type in the term you want. Hit the down arrow at the top of the screen once — and then, instead of hitting that same arrow over and over to see every place the term appears, slide your finger down the vertical bar at the right side of the screen.
That’ll move you rapidly through the page, with every instance of your term highlighted for hassle-free viewing.
22. Zoom single-handedlyPinch-to-zoom is, like, so 2013. When you’re using your phone with a single hand, as so many of us tend to do these days, Chrome has two far easier methods of magnifying a specific part of your screen.
First, on many devices, you can simply double-tap anywhere on a page to zoom into that area and have it take up the full width of your display. Double-tapping a second time will then zoom back out.
Second — and especially nifty — you can double-tap and leave your finger down, then drag downward to zoom in or upward to zoom out. It sounds a bit strange, but give it a try; it’ll get you where you need to go without all the awkward finger yoga that comes with one-handed pinching.
(Note that these advanced zooming methods won’t work on all web pages; generally, if a site is already optimized for mobile viewing, you’ll be limited to the regular ol’ pinching action. But more often than not, the need to zoom comes up when a site isn’t properly optimized — or when you’re deliberately viewing the desktop version of a site — and that’s when these techniques are typically available.)
23. Zoom where you wanna zoomFor some inexplicable reason, lots of websites prevent you from zooming in on your mobile device in any manner. And for a variety of reasons — whether you want to make the text larger or get a closer look at something that catches your eye — there are bound to be times when you want to move up close and personal.
Thankfully, Chrome lets you take back control. Head into the app’s settings, open the Accessibility section, and find the option labeled “Force enable zoom.”
Activate the checkbox alongside it and get ready to zoom to your heart’s content — whether the website you’re looking at wants you to or not.
24. Make the web easier to readLet’s face it: Some websites don’t exactly make reading pleasant. Whether it’s an annoying layout or a font that hurts your cerebrum, we’ve all come across a page that could be a little easier on the eyes. (Uh, no need to name any specifics, OK?)
Google has a solution: Chrome’s simplified view mode, which makes any website a bit more mobile-friendly by simplifying the formatting and stripping out extraneous elements such as ads, navigation bars, and boxes with related content.
Look in that aforementioned Accessibility section of Chrome’s settings and make sure the box next to “Simplified view for web pages” is activated. Then, whenever you’re opening an article, watch for an icon that looks like a screen with lines on it — at the right side of Chrome’s address bar, between the box with the current site’s URL and the tab indicator icon.
Tap that, and the entire page will transform right in front of your tired eyes.
Before and after: Chrome’s simplified view.
JR Raphael / Foundry
Need easier reading yet? Go back to that Accessibility section of Chrome’s settings and play with the “Default zoom” slider at the top of the page. It’ll make all the text you encounter across the web larger, independent of your system-level text size setting.
25. Fine-tune your simplified reading settingsNow that you’ve got those tidied-up pages ready and available, take another several seconds to customize exactly how Chrome optimizes the web for you — so that your decluttered view is as comfortable to read as possible.
After activating the “Simplified view” for a page (as described in our previous tip), tap the three-dot icon in the upper-right corner of the screen and select the “Appearance” option.
That’ll pull up a nifty customization panel that lets you change all sorts of stuff about how the page looks — ranging from its color scheme to the font used for its text and even the size of the words on the screen.
You can fine-tune and adjust Chrome’s “Simplified view” to make it suit your style.
JR Raphael / Foundry
Find the setup that’s easiest on your eyes, then know you can enjoy that specific visual every time you flip the “Simplified view” on moving forward.
26. Silence a site in no timeSites that automatically play videos you didn’t ask for are bad enough (insert awkward eye darting here), but sites that include audio in their autoplay videos are absolutely inexcusable.
You’d never know it, but Chrome has a super-fast way to get any site to shut up when it’s barking at you at the wrong time. Just tap the little control panel icon next to the site’s URL in the Chrome Android address bar, and you’ll see a secret on-demand control panel for adjusting all sorts of useful site-specific settings.
One tap in Chrome’s address bar, and you can silence any site that’s playing audio in no time.
JR Raphael / Foundry
If the site is making sound, the “Sound allowed” permission will be there and waiting. And all you’ll have to do is tap it to reveal a toggle that’ll let you muffle that misbehaving web-kitten once and for all without having to dig deep into any out-of-the-way menus.
27. Refresh with a flickNeed to reload a page? Swipe downward from anywhere in the main browser area. (You’ll need to be scrolled all the way to the top of the page in order for it to work.) Once you see a circle with an arrow appear, you can let go, sit back, and say: “Ahh. Isn’t that refreshing?”
28. Slide your way through Chrome’s commandsExcessive tapping is for amateurs. Instead of tapping Chrome’s menu icon, lifting your finger, and then tapping the item you want (pshaw!), slide downward on the button to move right into the menu without ever lifting your precious paw. Just keep swiping down until you reach your desired option, then let go — and Chrome will select it for you.
29. Pick up where you left offOne of Chrome’s most powerful features is something you might not even know exists: The browser always keeps all of your tabs synced and available across devices — which means you can open up Chrome on your Android device and get to the same tabs you left open on your laptop or desktop computer.
All you’ve gotta do to take advantage of it is open up Chrome’s main menu and select “Recent tabs.” There, you’ll find a full list of tabs currently or recently open in Chrome on any devices where you’re signed in. Just tap the tab you want, or press and hold on a device’s name to find an option to open all of its listed tabs at once.
30. Find that site you surfed to earlierMaybe it’s not a recently opened tab you need but one you had open a while ago — say, a page you were viewing from your laptop last night, before you shut it down and put on your favorite pink footie pajamas.
Well, no problemo: Tiptoe your way back to Chrome’s main menu, and this time, select the line labeled “History.”
Here’s the secret about that section: It shows every page you’ve opened in Chrome while signed in on any device — including desktop and laptop computers along with any other phones or tablets — all in a single searchable list. You can browse through the pages chronologically or look for specific keywords using the box at the top of the screen.
This might also be a good time to remind yourself about the existence of Chrome’s Incognito mode for the type of web surfing you don’t want kept on record. And don’t forget, too: You can always clear your full browsing history from Chrome on any device, should the need ever arise. (Don’t worry: I won’t ask for details.)
31. Make a site especially easy to accessFor any site you visit routinely, make your life a little easier by placing a one-tap shortcut directly to it on your device’s home screen. Just look in Chrome’s menu for the “Add to home screen” option. That’ll put the current site’s icon right where you’ll always see it for fast future access.
With some sites, you might see an “Install app” option instead. That indicates the site is available as a progressive web app, and installing it will give you an even more robust app-like experience — sometimes with the benefit of built-in offline access.
A shortcut to a website is at left above the dock area; opposite it is a shortcut for a site turned into a progressive web app.
JR Raphael / Foundry
32. Give your browsing a simple speed boostThese tips all revolve around the notion of saving time — so how ’bout one that quite literally makes web pages load faster, with next to no waiting required on your part?
Chrome’s “Preload Pages” feature is a powerful yet out-of-sight system for doing exactly that: When activated, the feature automatically predicts which links within a page you’re likely to tap and open, then it preemptively preloads those pages for you — using Google’s servers to handle the heavy lifting.
That way, when you actually tap the link, the page is already there and ready and thus pops up almost instantly.
You can try it out by opening the Privacy and Security section of Chrome’s settings and then tapping the “Preload pages” option. Select either “Standard preloading” or “Extended preloading” and see how much of a difference you notice from either of those paths.
33. Swim into a wilder channelYou may know that Chrome offers different release channels for its desktop browser — but did you know you can also opt to be more adventurous with Chrome on your Android device?
If you like trying out new features before they’re released, grab Google’s Chrome Beta app. It gets new elements and interface changes before they’re ready for prime time (which, fair warning, means they might occasionally be a bit unpolished).
If you want to go a step further, try out the Chrome Dev app. It’s described as the “bleeding edge” version of Chrome, with experimental elements that are guaranteed to be “rough around the edges.” (Careful with those fingers!)
And if you’re really feeling bold, give the Chrome Canary app a whirl. It’s the most unstable and frequently updated channel of Chrome, with features so fresh they’re bound to be partially uncooked on occasion.
The best part? All of the Chrome Android channels exist as separate standalone apps. That means you can install any or all of them and run them right alongside the regular Chrome app, with no major commitment and no real risk involved.
And that, my friends, is what we call livin’ on the edge — in the most gentle and hazard-free way imaginable.
For even more time-saving magic, come check out my free Android Shortcut Supercourse to uncover advanced options for zooming around your device, typing out text faster than ever, and all sorts of other buried treasures.
This article was originally published in June 2018 and most recently updated in August 2025.
OpenAI drops GPT-5: smarter, sharper, and built for the real world
More than two years after GPT-4’s release, OpenAI has unveiled GPT-5, boasting sharper reasoning, multimodal input, better math skills, and cleaner task execution, according to the company.
The large language model (LLM) — now rolling out to ChatGPT users and available in the API — is “smarter, more stable, and more versatile” and built to handle real-world tasks more like a human expert, OpenAI said.
In anticipation of OpenAI’s new AI model, Anthropic released the latest version of its own chatbot, Claude, earlier in the week.
Claude Opus 4.1 came with improvements in two key areas: its coding capabilities improved significantly, solving up to 75% of real-world programming tasks on the SWE-bench Verified benchmark; and the model is capable of detailed research and analysis, especially in tasks that require tracking lots of information and intelligently finding answers, according to Anthropic.
For developers, OpenAI claims GPT-5 is its most powerful coding model to date, outperforming its o3 model in benchmarks and real-world tasks. The model is “fine-tuned for agentic tools” like Cursor, Windsurf, Copilot, and Codex CLI, and it set new records in testing, the company stated in a blog.
According to OpenAI, GPT-5 delivers sharper reasoning, handling complex problems and multi-step instructions with greater accuracy and focus. It stays on track, follows directions more precisely, and produces more useful, reliable output, the company said.
Users can also expect to see fewer hallucinations and will have better customization tools, making GPT-5 more dependable and easier to adapt to specific industries and needs, OpenAI said.
It also builds on GPT-4o’s multimodal abilities, offering smoother interactions across text, images, and audio, according to OpenAI.
GPT-5 will be OpenAI’s “most significant do or die moment yet,” according to Nathaniel Whittemore, CEO of Superintelligent, a New York-based AI education platform.
“Ever since the launch of ChatGPT, they’ve been the model state of the art. While competitors like Google and Meta can take advantage of hundreds of millions of existing users to put AI products in front of, OpenAI relies on winning new users by being far ahead of the other AI labs,” Whittemore said.
OpenAI chief operating officer Brad Lightcap said ChatGPT is now in use by more than five million business users — up from three million in June.
Biopharmaceutical company Amgen is one of the early adopters of GPT-5. Sean Bruich, senior vice president of AI & Data at Amgen, said AI only works in science if it meets the highest bar, and “GPT-5 clears it,” delivering sharper accuracy, better context, and faster results across Amgen’s workflows.
“GPT-5… is doing a better job navigating ambiguity where context matters. We are seeing promising early results from deploying GPT-5 across workflows,” he said. He also said the model was faster, more reliable, and had higher quality outputs than GPT-4 and other earlier models.
Ethan Mollick, an associate professor at The Wharton School, had early access to GPT-5. “It is a big deal,” he said in a blog post. He asked the model to do something dramatic to prove that point. The model thought for 24 seconds and then delivered a poetic manifesto of AI capability — specifically, a rhetorical, alliterative showcase of “a multifunctional intelligence system.”
“GPT-5 just does stuff, often extraordinary stuff, sometimes weird stuff, sometimes very AI stuff, on its own. And that is what makes it so interesting,” Mollick said.
After “many AI conversations,” Mollick said he has found two big issues that limit most people’s success in using AI models: First, most people don’t know which model to use — so they get fast, weak results instead of more complete answers from the powerful reasoning models.
“The longer [the models] think, the better the answer, but thinking costs money and takes time. So OpenAI previously made the default ChatGPT use fast, dumb models, hiding the good stuff from most users,” Mollick said. “A surprising number of people have never seen what AI can actually do because they’re stuck on GPT-4o, and don’t know which of the confusingly named models are better.”
Second, most people also don’t know what AI can do or how to ask — especially with newer agentic AIs. GPT-5 fixes both problems by choosing models well and suggesting actions, he said. “It is very proactive, always suggesting things to do.”
GPT-5 is beginning to roll out to ChatGPT Plus, Pro, Team, and Free users, with access for Enterprise and Edu customers coming next week. “Once free users reach their GPT‑5 usage limits, they will transition to GPT‑5 mini,” OpenAI said.
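For developers curious what “available in the API” looks like in practice, here is a minimal sketch of a Chat Completions-style request body. The payload shape follows OpenAI’s long-standing chat endpoint, and the `gpt-5` model identifier is taken from the article; both are assumptions that should be checked against OpenAI’s current API reference before use.

```python
import json


def build_chat_request(model: str, user_prompt: str,
                       system_prompt: str = "You are a helpful assistant.") -> dict:
    """Build a Chat Completions-style request body.

    The structure mirrors OpenAI's long-standing chat endpoint; the
    "gpt-5" model name passed in below comes from the article and
    should be verified against OpenAI's current model list.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }


payload = build_chat_request("gpt-5", "Summarize the GPT-5 launch in one sentence.")
print(json.dumps(payload, indent=2))
```

In real use, this dict would be POSTed with an API key to the chat completions endpoint; building it separately makes the structure easy to inspect without any network access.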
Hybrid Exchange environment vulnerability needs fast action
Administrators with hybrid Exchange Server environments are urged by Microsoft and the US Cybersecurity and Infrastructure Security Agency (CISA) to quickly plug a high-severity vulnerability or risk system compromise.
Hybrid Exchange deployments offer organizations the ability to extend the user features and admin controls of the on-premises version of Exchange within Microsoft 365. A hybrid deployment can serve as an intermediate step toward moving completely to an Exchange Online organization, Microsoft said.
The benefits include secure mail routing between on-premises and Exchange Online organizations, mail routing with a shared domain namespace (for example, both on-premises and Exchange Online organizations use the @contoso.com SMTP domain) and calendar sharing between on-premises and Exchange Online organizations.
Apple pours $600b into Trump’s American manufacturing dream
Leverage is everything in today’s White House, with the administration using tariffs to force business leaders and nations to toe the line.
But leverage doesn’t just run in one direction. While you can achieve a certain amount through strong-arm tactics, there does come a point at which limited concessions must be made or those golden goose eggs will stop appearing with the dawn. It’s all in The Art of the Deal.
Apple promises billions in US investmentAs expected, Apple CEO Tim Cook appeared at the White House to promise an astonishing $100 billion in investment in the US, including pouring cash into the development of next-generation technologies. That investment means Apple will now invest $600 billion in the US across the next four years, he said.
Cook stopped short of bringing iPhone assembly to the US but seems to have made sufficient serious commitments to satisfy the current administration, which has excused Apple from some of the steep tariffs it had threatened to hammer the business with.
Behind the smoke, mirrors, presentation glass, and pure gold of the announcement, a couple of significant strategies have emerged:
- Apple will contribute to the future development of US industry through its new Apple Manufacturing Academy.
- Apple is also investing in the development of new technologies, which will be made in America.
- Agreement has also been reached to make billions of chips used inside Apple’s devices in the USA.
- Apple is also investing in the one big vulnerability the US has when it comes to silicon manufacturing: its lack of rare earth supply. Apple’s recently disclosed $500 million investment in MP Materials represents a big step toward creating an end-to-end rare earth supply chain in America.
- These commitments to a more US-centric future supply line are being supplemented by big investments in the existing US manufacturing supply chain, including in the Apple American Manufacturing Program.
The iPhone won’t be made in America for a while (though we were told “Tim Cook is working on it”), but put together, these announcements mean that Apple is making significant and highly strategic investments to help build the future for US industry. It also means all the glass used in iPhones and Apple Watch will “soon” be made in the US.
Solving big problems, one challenge at a timeThrough these targeted investments, Apple is grappling with the big problems that hold back the expansion of the US tech manufacturing industry: Training, facilities investment, technological optimization of manufacturing processes, raw materials supply, and investment in new tech the US can hope to become a unique supplier for.
All of these challenges have to be solved if the current US government’s vision of a larger manufacturing industry that creates millions of jobs is to be realized.
These are existential challenges that need to be solved before manufacturing jobs have any chance of returning to the US in larger quantities. Take staff, for example: the reality is that if you can’t find trained staff, there’s no point building a factory, which means you need to train the staff first.
Equally, if you can’t get the raw materials locally, then it makes more sense to work with them at factories close to their source. Finally, the best industries are unique industries — unless the US sees wages collapse, then it can’t realistically compete in labor costs against lower-wage nations.
What I’m saying is that Apple’s now $600 billion investment in US manufacturing appears to reflect a realistic attempt to ease some of the pain points that need to be solved if a tech industry manufacturing renaissance is to be unleashed in the US.
Unless these problems are solved, that kind of enlightenment just can’t take place.
We have a long way to reach the mountain topBut, as with any journey to any so-called “Promised Land,” it’s going to take time to get there, and no one presently at the top of the pile appears to have the capacity to travel in time or part the seas.
The big challenges Apple has put its money into solving are real barriers to the overall US manufacturing investment plan. That, presumably, is why the current administration appears to have accepted Apple’s point and given the company more time before imposing those huge but fluctuating tariffs that so threaten its business.
You can argue whether these moves represent compromise or capitulation, but those questions don’t presently seem to figure anywhere in US media. Either way, the “athletic” Tim Cook, who rises at 5am most days to go to the gym, seems to have identified an arrangement that can easily be seen as a victory.
$600 billion to save Apple’s US business while also enabling it to invest in more jobs on its home turf may eventually seem a small price to pay, particularly if you are among the tens of thousands of Americans likely to find employment as a result. Though the ever-present danger when dealing with any authoritarian is that the more you give, the more they demand to take.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.