SingularityHub

SingularityHub chronicles the technological frontier with coverage of the breakthroughs, players, and issues shaping the future.

DeepMind’s New AI Teaches Itself to Play Minecraft From Scratch

April 11, 2025 - 16:58

The AI made a “mental map” of the world to collect the game’s most sought-after material.

My nephew couldn’t stop playing Minecraft when he was seven years old.

One of the most popular games ever, Minecraft is an open world in which players build terrain and craft various items and tools. No one showed him how to navigate the game. But over time, he learned the basics through trial and error, eventually figuring out how to craft intricate designs, such as theme parks and entire working cities and towns. But first, he had to gather materials, some of which—diamonds in particular—are difficult to collect.

Now, a new DeepMind AI can do the same.

Without access to any human gameplay as an example, the AI taught itself the rules, physics, and complex maneuvers needed to mine diamonds. “Applied out of the box, Dreamer is, to our knowledge, the first algorithm to collect diamonds in Minecraft from scratch without human data or curricula,” wrote study author Danijar Hafner in a blog post.

But playing Minecraft isn’t the point. AI scientists have long been after general algorithms that can solve tasks across a wide range of problems—not just the ones they’re trained on. Although some of today’s models can generalize a skill across similar problems, they struggle to transfer those skills to more complex tasks requiring multiple steps.

In the limited world of Minecraft, Dreamer seemed to have that flexibility. After learning a model of its environment, it could “imagine” future scenarios to improve its decision making at each step and ultimately was able to collect that elusive diamond.

The work “is about training a single algorithm to perform well across diverse…tasks,” said Harvard’s Keyon Vafa, who was not involved in the study, to Nature. “This is a notoriously hard problem and the results are fantastic.”

Learning From Experience

Children naturally soak up their environment. Through trial and error, they quickly learn to avoid touching a hot stove and, by extension, a recently used toaster oven. Dubbed reinforcement learning, this process incorporates experiences—such as “yikes, that hurt”—into a model of how the world works.

A mental model makes it easier to imagine or predict consequences and generalize previous experiences to other scenarios. And when decisions don’t work out, the brain updates its modeling of the consequences of actions—”I dropped a gallon of milk because it was too heavy for me”—so that kids eventually learn not to repeat the same behavior.

Scientists have adopted the same principles for AI, essentially raising algorithms like children. OpenAI previously developed reinforcement learning algorithms that learned to play the fast-paced multiplayer Dota 2 video game with minimal training. Other such algorithms have learned to control robots capable of solving multiple tasks or to beat the hardest Atari games.

Learning from mistakes and wins sounds easy. But we live in a complex world, and even simple tasks, like, say, making a peanut butter and jelly sandwich, involve multiple steps. And if the final sandwich turns into an overloaded, soggy abomination, which step went wrong?

That’s the problem with sparse rewards. We don’t immediately get feedback on every step and action. Reinforcement learning in AI struggles with a similar problem: How can algorithms figure out where their decisions went right or wrong?
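To make the credit-assignment problem concrete, here is a toy sketch in Python (our illustration; the actions and the single end-of-episode reward are invented):

```python
# Five actions, one terminal reward: which step deserves the credit?
episode = [("chop tree", 0.0), ("craft pickaxe", 0.0), ("dig down", 0.0),
           ("dodge lava", 0.0), ("mine diamond", 1.0)]  # reward only at the end

for step, (action, reward) in enumerate(episode, start=1):
    print(f"step {step}: {action:13s} reward={reward}")

# A world model eases the problem by predicting, step by step, how each
# action changes the odds of eventually reaching that final reward.
```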

World of Minecraft

Minecraft is a perfect AI training ground.

Players freely explore the game’s vast terrain—farmland, mountains, swamps, and deserts—and harvest specialized materials as they go. In most modes, players use these materials to build intricate structures—from chicken coops to the Eiffel Tower—craft objects like swords and fences, or start a farm.

The game also resets: Every time a player joins a new game, the world map is different, so remembering a previous strategy or place to mine materials doesn’t help. Instead, the player has to more generally learn the world’s physics and how to accomplish goals—say, mining a diamond.

These quirks make the game an especially useful test for AI that can generalize, and the AI community has focused on collecting diamonds as the ultimate challenge. This requires players to complete multiple tasks, from chopping down trees to making pickaxes and carrying water to an underground lava flow.

Kids can learn how to collect diamonds from a 10-minute YouTube video. But in a 2019 competition, AI struggled even after up to four days of training on roughly 1,000 hours of footage from human gameplay.

Algorithms mimicking gamer behavior were better than those learning purely by reinforcement learning. At the time, one of the competition’s organizers commented that the latter wouldn’t stand a chance on their own.

Dreamer the Explorer

Rather than relying on human gameplay, Dreamer explored the game by itself, learning through experimentation to collect a diamond from scratch.

The AI comprises three main neural networks. The first models the Minecraft world, building an internal “understanding” of its physics and how actions work. The second network is basically a parent that judges the outcome of the AI’s actions. Was that really the right move? The last network then decides the best next step to collect a diamond.

All three components were simultaneously trained using data from the AI’s previous tries—a bit like a gamer playing again and again as they aim for the perfect run.
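To give a flavor of how these three components fit together, here is a deliberately tiny sketch in Python, assuming PyTorch. Every size, layer, and name is our simplification for illustration, not DeepMind’s actual Dreamer code:

```python
import torch
import torch.nn as nn

# Hypothetical toy sizes; Dreamer's real networks are far larger.
OBS, LATENT, ACTION, HORIZON = 64, 32, 4, 15

encoder = nn.Linear(OBS, LATENT)               # observation -> latent state
dynamics = nn.Linear(LATENT + ACTION, LATENT)  # world model: predicts next latent
reward_head = nn.Linear(LATENT, 1)             # world model: predicts reward
critic = nn.Linear(LATENT, 1)                  # the "parent": judges each state
actor = nn.Linear(LATENT, ACTION)              # proposes the next action

def imagine(latent):
    """Roll the learned model forward so the agent can 'dream' a trajectory."""
    rewards, values = [], []
    for _ in range(HORIZON):
        action = torch.tanh(actor(latent))
        latent = torch.tanh(dynamics(torch.cat([latent, action], dim=-1)))
        rewards.append(reward_head(latent))
        values.append(critic(latent))
    return torch.stack(rewards), torch.stack(values)

obs = torch.randn(1, OBS)                      # stand-in for a game frame
rewards, values = imagine(torch.tanh(encoder(obs)))
print(rewards.sum().item(), values.mean().item())
```

In the full system, as the article notes, all three parts train simultaneously on replayed experience; the sketch omits the losses and optimizers that do that.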

World modeling is the key to Dreamer’s success, Hafner told Nature. This component mimics the way human players see the game and allows the AI to predict how its actions could change the future—and whether that future comes with a reward.

“The world model really equips the AI system with the ability to imagine the future,” said Hafner.

To evaluate Dreamer, the team challenged it against several state-of-the-art special-purpose algorithms in over 150 tasks. Some tested the AI’s ability to sustain long sequences of decisions. Others gave either constant or sparse feedback to see how the programs fared in 2D and 3D worlds.

“Dreamer matches or exceeds the best [AI] experts,” wrote the team.

They then turned to a far harder task: Collecting diamonds, which requires a dozen steps. Intermediate rewards helped Dreamer pick the next move with the largest chance of success. As an extra challenge, the team reset the game every half hour to ensure the AI didn’t form and remember a specific strategy.

Dreamer collected a diamond after roughly nine days of continuous gameplay. That’s far slower than expert human players, who need just 20 minutes or so. However, the AI wasn’t specifically trained on the task. It taught itself how to mine one of the game’s most coveted items.

The AI “paves the way for future research directions, including teaching agents world knowledge from internet videos and learning a single world model” so they can increasingly accumulate a general understanding of our world, wrote the team.

“Dreamer marks a significant step towards general AI systems,” said Hafner.

The post DeepMind’s New AI Teaches Itself to Play Minecraft From Scratch appeared first on SingularityHub.

Category: Transhumanism

Our Conscious Perception of the World Depends on This Deep Brain Structure

April 10, 2025 - 17:36

The thalamus is a gateway, shuttling select information into consciousness.

How consciousness emerges in the brain is the ultimate mystery. Scientists generally agree that consciousness relies on multiple brain regions working in tandem. But the areas and neural connections supporting our perception of the world have remained elusive.

A new study, published in Science, offers a potential answer. A Chinese team recorded the neural activity of people with electrodes implanted deep in their brains as they performed a visual task. The electrodes targeted the thalamus, an egg-shaped structure scientists have long hypothesized to be a central relay conducting information across multiple brain regions.

Previous studies hunting for the brain mechanisms underlying consciousness have often focused on the cortex—the outermost regions of the brain. Very little is known about how deeper brain structures contribute to our sense of perception and self.

Simultaneously recording neural activity from both the thalamus and the cortex, the team found a wave-like signal that only appeared when participants reported seeing an image in a test. Visual signals specifically designed not to reach awareness had a different brain response.

The results support the idea that parts of the thalamus “play a gate role” for the emergence of conscious perception, wrote the team.

The study is “really pretty remarkable,” said Christopher Whyte at the University of Sydney, who was not involved in the work, to Nature. One of the first to simultaneously record activity in both deep and surface brain regions in humans, it reveals how signals travel across the brain to support consciousness.

The Ultimate Enigma

Consciousness has teased the minds of philosophers and scientists for centuries. Thanks to modern brain mapping technologies, researchers are beginning to hunt down its neural underpinnings.

At least half a dozen theories now exist, two of which are going head-to-head in a global research effort using standardized tests to probe how awareness emerges in the human brain. The results, alongside other work, could potentially build a unified theory of consciousness.

The problem? There still isn’t definitive agreement on what we mean by consciousness. But practically, most scientists agree it has at least two modes. One is dubbed the “conscious state,” which is when, for example, you’re awake, asleep, or in a coma. The other mode, “conscious content,” captures awareness or perception.

We’re constantly bombarded with sights, sounds, touch, and other sensations. Only some stimuli—the smell of a good cup of coffee, the sound of a great playlist, the feel of typing on a slightly oily keyboard—reach our awareness. Others are discarded by a web of neural networks long before we perceive them.

In other words, the brain filters signals from the outside world and only brings a sliver of them into conscious perception. The entire process from sensing to perceiving takes just a few milliseconds.

Brain imaging technologies such as functional magnetic resonance imaging (fMRI) can capture the brain’s inner workings as we process these stimuli. But like a camera with a slow shutter speed, the technology struggles to map activated brain areas in real time at high resolution. The delay also makes it difficult to track how signals flow from one brain area to another. Because a sense of awareness likely emerges from coherent activation across multiple brain regions, this lag makes it harder to decipher how consciousness emerges from neural chatter.

Most scientists have focused on the cortex, with just a few exploring the function of deeper brain structures. “Capturing neural activity in the thalamic nuclei [thalamus] during conscious perception is very difficult” because of technological restrictions, wrote the authors.

Deep Impact

The new study solved the problem by tapping a unique resource: People with debilitating and persistent headaches that can’t be managed with medication but who are otherwise mentally sharp and healthy.

Each participant in the study already had up to 20 electrodes implanted in different parts of the thalamus and cortex as part of an experimental procedure to dampen their headache pain. Unlike fMRI studies that cover the whole brain with time lag and relatively low resolution, these electrodes could directly pick up neural signals in the implanted areas with minimal delay.

Often dubbed the brain’s Grand Central Station, the thalamus is a complex structure housing multiple neural “train tracks” originating from different locations. Each track routes and ferries a unique combination of incoming sensations to other brain regions for further processing.

The thalamus likely plays “a crucial role in regulating the conscious state” based on previous theoretical and animal studies, wrote the team. But testing its role in humans has been difficult because of its complex structure and location deep inside the brain. The five participants, each with electrodes already implanted in their thalamus and cortex for treatment, were the perfect candidates for a study matching specific neural signals to conscious perception.

Using a custom task, the team measured whether participants could consciously perceive a visual cue—a blob of alternating light and dark lines—blinking on a screen. Roughly half the trials were designed so the cue appeared too briefly for the person to register, as determined by previous work. The participants were then asked to move their eyes toward the left or right of the screen depending on whether they noticed the cue.

Throughout the experiment, the team captured electrical activity from parts of each participant’s thalamus and prefrontal cortex—the front region of the brain that’s involved in higher-level thinking such as reasoning and decision making.
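As a rough picture of the trial structure, here is a short Python sketch; the cue durations and the even split are our assumed placeholders, not figures from the paper:

```python
import random

def make_trial():
    """Roughly half the cues flash too briefly to reach awareness."""
    sub_threshold = random.random() < 0.5
    return {
        "cue_duration_ms": 20 if sub_threshold else 100,  # assumed values
        "likely_report": "missed" if sub_threshold else "seen",
    }

trials = [make_trial() for _ in range(10)]
print(sum(t["likely_report"] == "seen" for t in trials), "of 10 likely seen")
```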

Unique Couplings

Two parts of the thalamus sparked with activity when a person consciously perceived the cue, and the areas orchestrated synchronized waves of activity to the cortex. This synchronized activity disappeared when the participants weren’t consciously aware of the cue.

The contributions to “consciousness-related activity were strikingly different” across the thalamus, wrote the authors. In other words, these specific deep-brain regions may form a crucial gateway for processing visual experiences so they rise to the level of perception.  

The findings are similar to results from previous studies in mice and non-human primates. One study tracked how mice react to subtle prods to their whiskers. The rodents were trained to lick water only when they felt a touch but otherwise go about their business. Each mouse’s thalamus and cortex sparked when they went for the water, forming neural circuits similar to those observed in humans during conscious perception. Other studies in monkeys have also identified the thalamus as a hot zone for consciousness, although they implicate slightly different areas of the structure.

The team is planning to conduct similar visual experiments in monkeys to clarify which parts of the thalamus support conscious perception. For now, the full nature of consciousness in the brain remains an enigma. But the new results offer a peek inside the human mind as it perceives the world with unprecedented detail.

Liad Mudrik at Tel Aviv University, who was not involved in the study, told Nature it is “one of the most elaborate and extensive investigations of the role of the thalamus in consciousness.”

The post Our Conscious Perception of the World Depends on This Deep Brain Structure appeared first on SingularityHub.

Category: Transhumanism

What Makes the Human Brain Unique? Scientists Compared It With Monkeys and Apes to Find Out

April 8, 2025 - 20:42

Our closest relatives in the animal kingdom are wired up differently.

Scientists have long tried to understand the human brain by comparing it to other primates. Researchers are still trying to understand what makes our brain different from those of our closest relatives. Our recent study may have brought us one step closer by taking a new approach—comparing the way brains are internally connected.

The Victorian palaeontologist Richard Owen incorrectly argued that the human brain was the only brain to contain a small area called the hippocampus minor. He claimed this made the human brain unique in the animal kingdom and argued it was therefore clearly unrelated to other species. We’ve learned a lot since then about the organization and function of our brain, but there is still much to learn.

Most studies comparing the human brain to that of other species focus on size. This can be the size of the brain, size of the brain relative to the body, or the size of parts of the brain to the rest of it. However, measures of size don’t tell us anything about the internal organization of the brain. For instance, although the enormous brain of an elephant contains three times as many neurons as the human brain, these are predominantly located in the cerebellum, not in the neocortex, which is commonly associated with human cognitive abilities.

Until recently, studying the brain’s internal organization was painstaking work. The advent of medical imaging techniques, however, has opened up new possibilities to look inside the brains of animals quickly, in great detail, and without harming the animal.

Our team used publicly available MRI data of white matter, the fibers connecting parts of the brain’s cortex. Communication between brain cells runs along these fibers. This costs energy and the mammalian brain is therefore relatively sparsely connected, concentrating communications down a few central pathways.

The connections of each brain region tell us a lot about its functions. The set of connections of any brain region is so specific that brain regions have a unique connectivity fingerprint.

In our study, we compared these connectivity fingerprints across the human, chimpanzee, and macaque monkey brain. The chimpanzee is, together with the bonobo, our closest living relative. The macaque monkey is the non-human primate best known to science. Comparing the human brain to both species meant we could not only assess which parts of our brain are unique to us, but also which parts are likely to be shared heritage with our non-human relatives.
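One simple way to compare connectivity fingerprints is to treat each region’s connection strengths as a vector and measure how far apart two vectors point. The Python sketch below uses invented numbers and one minus cosine similarity, so a higher value means the fingerprints are more different:

```python
import numpy as np

# Invented connection strengths from one brain region to four shared tracts.
human   = np.array([0.9, 0.1, 0.7, 0.2])
macaque = np.array([0.3, 0.2, 0.6, 0.8])

def dissimilarity(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity: higher means more different fingerprints."""
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(f"human vs. macaque: {dissimilarity(human, macaque):.2f}")
```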

Much of the previous research on human brain uniqueness has focused on the prefrontal cortex, a group of areas at the front of our brain linked to complex thought and decision making. We indeed found that aspects of the prefrontal cortex had a connectivity fingerprint in the human that we couldn’t find in the other animals, particularly when we compared the human to the macaque monkey.

A higher value means the brains are more different. JNeurosci/Rogier Mars and Katherine Bryant, CC BY-NC-ND

But the main differences we found were not in the prefrontal cortex. They were in the temporal lobe, a large part of cortex located approximately behind the ear. In the primate brain, this area is devoted to deep processing of information from our two main senses: vision and hearing. One of the most dramatic findings was in the middle part of the temporal cortex.

The feature driving this distinction was the arcuate fasciculus, a white matter tract connecting the frontal and temporal cortex and traditionally associated with processing language in humans. Most, if not all, primates have an arcuate fasciculus, but it is much larger in human brains.

However, we found that focusing solely on language may be too narrow. The brain areas that are connected via the arcuate fasciculus are also involved in other cognitive functions, such as integrating sensory information and processing complex social behavior. Our study was the first to find the arcuate fasciculus is involved in these functions. This insight underscores the complexity of human brain evolution, suggesting that our advanced cognitive abilities arose not from a single change, as scientists once thought, but through several interrelated changes in brain connectivity.

While the middle temporal arcuate fasciculus is a key player in language processing, we also found differences between the species in a region more at the back of the temporal cortex. This temporoparietal junction area is critical in processing information about others, such as understanding others’ beliefs and intentions, a cornerstone of human social interaction.

In humans, this brain area has much more extensive connections to other parts of the brain processing complex visual information, such as facial expressions and behavioral cues. This suggests that our brain is wired to handle more intricate social processing than those of our primate relatives. Our brain is wired up to be social.

These findings challenge the idea of a single evolutionary event driving the emergence of human intelligence. Instead, our study suggests brain evolution happened in steps. Our findings suggest changes in frontal cortex organization occurred in apes, followed by changes in temporal cortex in the lineage leading to humans.

Richard Owen was right about one thing. Our brains are different from those of other species—to an extent. We have a primate brain, but it’s wired up to make us even more social than other primates, allowing us to communicate through spoken language.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post What Makes the Human Brain Unique? Scientists Compared It With Monkeys and Apes to Find Out appeared first on SingularityHub.

Category: Transhumanism

This Brain-Computer Interface Is So Small It Fits Between the Follicles of Your Hair

April 8, 2025 - 00:15

A tiny sensor to control devices with your thoughts—no surgery required.

Brain-computer interfaces are typically unwieldy, which makes using them on the move a non-starter. A new neural interface small enough to be attached between the user’s hair follicles keeps working even when the user is in motion.

At present, brain-computer interfaces are typically used as research devices designed to study neural activity or, occasionally, as a way for patients with severe paralysis to control wheelchairs or computers. But there are hopes they could one day become a fast and intuitive way for people to interact with personal devices through thoughts alone.

Invasive approaches that implant electrodes deep in the brain provide the highest fidelity connections, but regulators are unlikely to approve them for all but the most pressing medical problems in the near term.

Some researchers are focused on developing non-invasive technologies like electroencephalography (EEG), which uses electrodes stuck to the outside of the head to pick up brain signals. But getting a good readout requires stable contact between the electrodes and scalp, which is tricky to maintain, particularly if the user is moving around during normal daily activities.

Now, researchers have developed a neural interface just 0.04 inches across that uses microneedles to painlessly attach to the wearer’s scalp for a highly stable connection. To demonstrate the device’s potential, the team used it to control an augmented reality video call. The interface worked for up to 12 hours after implantation as the wearer stood, walked, and ran.

“This advance provides a pathway for the practical and continuous use of BCI [brain-computer interfaces] in everyday life, enhancing the integration of digital and physical environments,” the researchers write in a paper describing the device in the Proceedings of the National Academy of Sciences.

To create their device, the researchers first molded resin into a tiny cross shape with five microscale spikes sticking out of the surface. They then coated these microneedles with a conductive polymer called PEDOT so they could pick up electrical signals from the brain.

Besides firmly attaching the sensor to the head, the needles also penetrate an outer layer of the scalp made up of dead skin cells that acts as an insulator. This allows the sensor to record directly from the epidermis, which the researchers say enables much better signal acquisition.

The researchers also attached a winding, snake-like copper wire to the sensor and connected it to the larger wires that carry the recorded signal away to be processed. This means that even if the larger wires are jostled as the subject moves, it doesn’t disturb the sensor. A module decodes the brain readings and then transmits them wirelessly to an external device.

To show off the device’s capabilities, they used it to control video calls conducted on a pair of Nreal augmented reality glasses. They relied on “steady-state visual evoked potentials,” in which the brain responds in a predictable way when the user looks at an image flickering at a specific frequency.

By placing different flickering graphics next to different buttons in the video call interface, the user could answer, reject, and end calls by simply looking at the relevant button. The system correctly detected their intention in real time with an average accuracy of 96.4 percent as the user carried out a variety of movements. They also showed that the recording quality remained stable over 12 hours, while a gold-standard EEG electrode fell off over the same period.
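The decoding principle behind steady-state visual evoked potentials is simple enough to sketch. In the toy Python example below (our illustration, not the paper’s code), each button flickers at its own frequency and the decoder picks the frequency with the most power in the recorded signal; the sampling rate and the button-to-frequency mapping are assumptions:

```python
import numpy as np

FS = 250                                   # sampling rate in Hz (assumed)
BUTTONS = {8.0: "answer", 10.0: "reject", 12.0: "end"}  # hypothetical mapping

def decode(eeg: np.ndarray) -> str:
    """Return the button whose flicker frequency dominates the signal."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / FS)
    # Power at each candidate flicker frequency (nearest FFT bin).
    power = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in BUTTONS}
    return BUTTONS[max(power, key=power.get)]

# Simulated two-second recording dominated by the 10 Hz flicker.
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * np.random.randn(t.size)
print(decode(eeg))                         # -> "reject"
```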

The device was fabricated using a method that would allow mass production, the researchers say, and could also have applications as a wearable health monitor. If they can scale the approach up, an always-on connection between our brains and personal devices may not be so far away.

The post This Brain-Computer Interface Is So Small It Fits Between the Follicles of Your Hair appeared first on SingularityHub.

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through April 5)

April 5, 2025 - 16:00
Robotics

Invasion of the Home Humanoid Robots
Cade Metz | The New York Times

“Artificial intelligence is already driving cars, writing essays and even writing computer code. Now, humanoids, machines built to look like humans and powered by AI, are poised to move into our homes so they can help with the daily chores.”

Robotics

Roomba Creator Says Humanoid Robots Are Overhyped
Rocket Drew | The Information

“‘We’ve hardly started on humanoid hype,’ [Rodney] Brooks said. ‘It’s going to go worse and worse and worse.’ Humanoid robots are enthralling because people can imagine them doing everything a human can do, Brooks said, but they still struggle with basic skills such as walking, falling, and coordinating multiple body parts to manipulate an object.”

Computing

A 32-Bit Processor Made With an Atomically Thin Semiconductor
John Timmer | Ars Technica

“The authors argue that it’s probably one of the most sophisticated bits of ‘beyond silicon’ hardware yet implemented. That said, they don’t expect this technology to replace silicon; instead, they view it as potentially filling some niche needs, like ultra-low-power processing for simple sensors. But if the technology continues to advance, the scope of its potential uses may expand beyond that.”

Computing

World’s Smallest LED Pixels Squeeze Into Astounding 127,000-PPI Display
Michael Irving | New Atlas

“Scientists in China have created a new type of display with the smallest pixels and the highest pixel density ever. Individual pixels were shrunk to 90 nanometers—about the size of a virus—and a record 127,000 of them were crammed into every inch of a display.”

Biotechnology

Alphabet-Backed Isomorphic Labs Raises $600 Million for AI Drug Development
Helena Smolak | The Wall Street Journal

“‘This funding will further turbocharge the development of our next-generation AI drug design engine, help us advance our own programs into clinical development, and is a significant step forward towards our mission of one day solving all disease with the help of AI,’ Chief Executive Officer Demis Hassabis, who is also the head of Google’s AI division DeepMind, said.”

Robotics

The Hypershell Exoskeleton Is So Good at Climbing Cliffs, It Ruined My Workout
Kyle Barr | Gizmodo

“The Hypershell is a device made for assisting your walks, runs, bikes, or hikes. In a rarity for weird tech, the hiking exoskeleton accomplishes nearly everything it promises to. It does its job so well, and it left me devoid of the exercise and that sense of calm I normally get from my hikes.”

Science

Why Everything in the Universe Turns More Complex
Philip Ball | Quanta Magazine

“[The researchers] have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics—the law that dictates an inevitable rise in entropy, a measure of disorder. If they’re right, complex and intelligent life should be widespread.”

Future

DeepMind Has Detailed All the Ways AGI Could Wreck the World
Ryan Whitwam | Ars Technica

“While some in the AI field believe AGI is a pipe dream, the authors of [a new] DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to ‘severe harm.'”

Energy

The Hottest Thing in Clean Energy
Alexander C. Kaufman | The Atlantic

“For now, most of the efforts to debut next-generation geothermal technology are still in the American West, where drilling is relatively cheap and easy because the rocks they’re targeting are closer to the surface. But if the industry can prove to investors that its power plants work as described—which experts expect to happen by the end of the decade—geothermal could expand quickly, just like oil-and-gas fracking did.”

Space

SpaceX Took a Big Step Toward Reusing Starship’s Super Heavy Booster
Stephen Clark | Ars Technica

“This was the first time SpaceX has test-fired a ‘flight-proven’ Super Heavy booster, and it paves the way for this particular rocket—designated Booster 14—to fly again soon. SpaceX confirmed a reflight of Booster 14, which previously launched and returned to Earth in January, will happen on the next Starship launch.”

Space

Amazon Is Ready to Launch Its Starlink Competitor
Thomas Ricker | The Verge

“The first batch of 27 Project Kuiper space internet satellites are scheduled to launch next week. Amazon has secured 80 such launch missions that will each deliver dozens of satellites into low earth orbit (LEO) to create a constellation capable of competing with Elon Musk’s Starlink juggernaut. Amazon says it expects to begin offering high-speed, low-latency internet service ‘later this year.'”

Science

Bonobos’ Calls May Be the Closest Thing to Animal Language We’ve Seen
Jacek Krywko | Ars Technica

“A team of Swiss scientists led by Melissa Berthet, an evolutionary anthropologist at the University of Zurich, discovered bonobos can combine [vocal calls including peeps, hoots, yelps, grunts, and whistles] into larger semantic structures. In these communications, meaning is something more than just a sum of individual calls—a trait known as non-trivial compositionality, which we once thought was uniquely human.”

Artificial Intelligence

DeepMind Is Holding Back Release of AI Research to Give Google an Edge
Melissa Heikkilä and Stephen Morris | Ars Technica

“Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or cast Google’s own Gemini AI model in a negative light compared with others. The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and as a home for the best scientists building AI.”

The post This Week’s Awesome Tech Stories From Around the Web (Through April 5) appeared first on SingularityHub.

Category: Transhumanism

World’s Tiniest Pacemaker Is Smaller Than a Grain of Rice

April 4, 2025 - 19:35

The device fits in a syringe and melts away after use.

Scientists just unveiled the world’s tiniest pacemaker. Smaller than a grain of rice and controlled by light shone through the skin, the pacemaker generates power and squeezes the heart’s muscles after injection through a stent.

The device showed it could steadily orchestrate healthy heart rhythms in rat, dog, and human hearts in a newly published study. It’s also biocompatible and eventually broken down by the body after temporary use. Over 23 times smaller than previous bioabsorbable pacemakers, the device opens the door to minimally invasive implants that wirelessly monitor heart health after extensive surgery or other heart problems.

“The extremely small sizes of these devices enable minimally invasive implantation,” the authors, led by John Rogers at Northwestern University, wrote. Paired with a wireless controller on the skin’s surface, the system automatically detected irregular heartbeats and targeted electrical zaps to different regions of the heart.

The device could especially benefit babies who need smaller hardware to monitor their hearts. Although specifically designed for the heart, a similar setup could be adapted to manage pain, heal wounds, or potentially regenerate nerves and bones.

Achy Breaky Heart

The heart is a wonder of biomechanics.

Over a person’s lifetime, its four chambers reliably pump blood rich in oxygen and nutrients through the body. Some chambers cleanse blood of carbon dioxide—a waste product of cell metabolism—and infuse it with oxygen from the lungs. Others push nutrient-rich blood back out to the rest of the body.

But like parts in a machine, heart muscles eventually wear down with age or trauma. Unlike skin cells, the heart can’t easily regenerate. Over time, its muscles become stiff, and after an injury—say, a heart attack—scar tissue replaces functional cells.

That’s a problem when it comes to keeping the heart pumping in a steady rhythm.

Each chamber contracts and releases in an intricate biological dance orchestrated by an electrical flow. Any glitches in these signals can cause heart muscles to squeeze chaotically, too rapidly or completely off beat. Deadly problems, such as atrial fibrillation, can result. Even worse, blood can pool inside individual chambers and increase the risk of blood clots. If these are dislodged, they could travel to the brain and trigger a stroke.

Risks are especially high after heart surgery. To lower the chances of complications, surgeons often implant temporary pacemakers for days or weeks as the organ recovers.

These devices are usually made up of two components.

The first of these is a system that detects and generates electrical zaps. It generally requires a power supply and control units to fine-tune the stimulation. The other bit “is kinda the business end,” study author John Rogers told Nature. This part delivers electrical pulses to the heart muscles, directing them to contract or relax.

The setup is a wiring nightmare, with wires to detect heart rhythm threading through the skin. “You have wires designed to monitor cardiac function, but it becomes a somewhat clumsy collection of hardware that’s cumbersome for the patient,” said Rogers.

These temporary pacemakers are “essential life-saving technologies,” wrote the team. But most devices need open-heart surgery to implant and remove, which increases the risk of infection and additional damage to an already fragile organ. The procedure is especially difficult for babies or younger patients because they’re so small and grow faster.

Heart surgeons inspired the project with their vision of a “fully implantable, wirelessly controlled temporary pacemaker that would just melt away inside the body after it’s no longer needed,” said Rogers.

A Steady Beat

An ideal pacemaker should be small, biocompatible, and easily controllable. Easy delivery and multiplexing—that is, having multiple units to control heartbeat—are a bonus.

The new device delivers.

It’s made of biocompatible material that’s eventually broken down and cleared by the body without the need for surgical removal. It has two small pieces of metal somewhat similar to the terminals of a battery. Normally, the implant doesn’t conduct electricity. But once implanted, natural fluids from heart cells form a liquid “bridge” that completes the electrical circuit when activated, transforming the device into both a self-powered battery and a generator to stimulate heart muscles. A Bluetooth module connects the implant with a soft “receiver” patch on the skin to wirelessly capture electrical signals from the heart for analysis.

Controlling the heart’s rhythm took more engineering. Each heart chamber needs to pump in a coordinated sequence for blood to properly flow. Here, the team used an infrared light switch to turn the implant on and off. This wavelength of light can penetrate skin, muscle, and bone, making it a powerful way to precisely control organs or tools that operate on electrical signals.

Although jam-packed with hardware, the final implant is roughly the size of a sesame seed. It’s “more than 23 times smaller than any bioresorbable alternative,” wrote the team.

Flashing infrared LED lights placed on the skin above the pacemaker turn the device on. Different infrared frequencies pace the heartbeat.
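Conceptually, the controller only has to flash its infrared LED at the rate it wants the heart to beat. Here is a minimal sketch of that timing logic in Python, with print statements standing in for real LED driver calls; the device’s actual frequency encoding is not detailed in the article:

```python
import time

def led_on():  print("IR LED on")    # stand-ins for real LED driver calls
def led_off(): print("IR LED off")

def pace(target_bpm: float, beats: int, pulse_s: float = 0.01):
    """Flash the skin-surface IR LED once per intended heartbeat."""
    interval = 60.0 / target_bpm     # seconds between beats
    for _ in range(beats):
        led_on()
        time.sleep(pulse_s)
        led_off()
        time.sleep(interval - pulse_s)

pace(target_bpm=80, beats=3)         # one flash every 0.75 seconds
```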

The team first tested their device in isolated pig and donated human hearts. After it was implanted by injection through a stent, the device worked reliably in multiple heart chambers, delivering the same amount of stimulation as a standard pacemaker.

They also tested the device in hound dogs, whose hearts are similar in shape, size, and electrical workings to ours. A tiny cut was enough to implant and position multiple pacemakers at different locations on the heart, where they could be controlled individually. The team used light to fine-tune heart rate and rhythm, changing the contraction of two heart chambers to pump and release blood in a natural beat.

“Because the devices are so small, you can pace the heart in very sophisticated ways that rely not just on a single pacemaker, but a multiplicity of them,” said Rogers. “[This] offers a greater control over the cardiac cycle than would be possible with a single pacemaker.”

Device Sprinkles

The team envisions that the finished device will be relatively off-the-shelf. Put together, a sensor monitors problematic heart rhythms from the skin’s surface, restores normal activity with light pulses, and includes an interface to visualize the process for users. The materials are safe for the human body—some are even recommended as part of a daily diet or added to vitamin supplements—and components largely dissolve after 9 to 12 months.

The devices aren’t specifically designed for the heart. They could also stimulate nerve and bone regeneration, heal wounds, or manage pain through electrical stimulation. “You could sprinkle them around…do a dozen of these things…each one controlled by a different wavelength [of light],” said Rogers.

The post World’s Tiniest Pacemaker Is Smaller Than a Grain of Rice appeared first on SingularityHub.

Category: Transhumanism

These Solar Cells Are Made of Moon Dust. They Could Power Future Lunar Colonies.

April 4, 2025 - 00:30

Combining “moonglass” with just two pounds of perovskite from Earth would yield 4,300 square feet of solar panels.

NASA’s plan to establish a permanent human presence on the moon will require making better use of lunar resources. A new approach has now shown how to make solar cells out of moon dust.

Later this decade, the US space agency’s Artemis III mission plans to return astronauts to the moon for the first time in more than half a century. The long-term goal of the Artemis program is to establish a permanent human presence on our nearest celestial neighbor.

But building and supplying such a base means launching huge amounts of material into orbit at great cost. That’s why NASA and other space agencies interested in establishing a presence on the moon are exploring “in-situ resource utilization”—that is, exploiting the resources already there.

Moon dust, or regolith, has been widely touted as a potential building material, while ice in the moon’s shadowy craters could be harvested for drinking water or split into oxygen and hydrogen that can be used for air in habitats or as rocket fuel.

Now, researchers at the University of Potsdam, Germany, have found a way to turn a simulated version of lunar regolith into glass for solar cells—the most obvious way to power a moon base. They say this could dramatically reduce the amount of material that would have to be hauled to the moon to set up a permanent settlement.

“From extracting water for fuel to building houses with lunar bricks, scientists have been finding ways to use moon dust,” lead researcher Felix Lang said in a press release. “Now, we can turn it into solar cells too, possibly providing the energy a future moon city will need.”

To test out their approach, the researchers used an artificial mixture of minerals designed to replicate the soil found in the moon’s highlands. Crucially, their approach doesn’t require any complex mining or purification equipment. The regolith simply needs to be melted and then cooled gradually to create sheets of what the researchers refer to as “moonglass.”

In their experiments, reported in the journal Device, the researchers used an electric furnace to heat the dust to around 2,800 degrees Fahrenheit. They say these kinds of temperatures could be achieved on the moon by using mirrors or lenses to concentrate sunlight.

They then deposited an ultrathin layer of a material called halide perovskite—which has emerged as a cheap and powerful alternative to silicon in solar cells—onto the moonglass. This material would have to be carried from Earth, but the researchers estimate that a little more than two pounds of it would be enough to fabricate 4,300 square feet of solar panels.

The team tested several solar-cell designs, achieving efficiencies between 9.4 and 12.1 percent. That’s significantly less than the 30 to 40 percent that the most advanced space solar cells achieve, the researchers concede. But the lower efficiency would be more than offset by the massive savings in launch costs from making the bulkiest parts of the solar cells on site.

“If you cut the weight by 99 percent, you don’t need ultra-efficient 30 percent solar cells, you just make more of them on the moon,” says Lang.
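A back-of-the-envelope calculation shows why that trade works. In the Python sketch below, the efficiencies come from the study and the 99 percent mass reduction from Lang’s quote, but the panel mass of 2 kilograms per square meter is our assumed placeholder:

```python
SOLAR_FLUX = 1361.0  # W/m^2, the solar constant in space

panels = {
    "conventional (all mass launched)": {"eff": 0.30, "launched_kg_per_m2": 2.0},
    "moonglass (99% made in situ)":     {"eff": 0.10, "launched_kg_per_m2": 0.02},
}

for name, p in panels.items():
    watts_per_kg = SOLAR_FLUX * p["eff"] / p["launched_kg_per_m2"]
    print(f"{name}: {watts_per_kg:,.0f} W per launched kg")
```

Under these assumptions the moonglass panel, despite a third of the efficiency, delivers roughly 33 times more power per kilogram launched from Earth.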

The moonglass the researchers created also has a natural brownish tint that helps protect it against radiation, a major issue on the moon’s surface. They also note that halide perovskites tolerate relatively high levels of impurities and defects, which makes them well-suited to the less than perfect fabrication setups likely to be found on the moon.

The moon’s low gravity and wild temperature swings could play havoc with their fabrication process and the stability of the resulting solar cells, the researchers admit. That’s why they’re hoping to send a small-scale experiment to the moon to test the idea out in real conditions.

While the approach is probably at too early a stage to impact NASA’s upcoming moon missions, it could prove a valuable tool as we scale up our presence beyond Earth orbit.

The post These Solar Cells Are Made of Moon Dust. They Could Power Future Lunar Colonies. appeared first on SingularityHub.

Category: Transhumanism

NASA’s Curiosity Rover Has Made a Significant Discovery in the Search for Alien Life

April 1, 2025 - 16:00

It’s an exciting time in the search for life on Mars.

NASA’s Curiosity Mars rover has detected the largest organic (carbon-containing) molecules ever found on the red planet. The discovery is one of the most significant findings in the search for evidence of past life on Mars. This is because, on Earth at least, relatively complex, long-chain carbon molecules are involved in biology. These molecules could actually be fragments of fatty acids, which are found in, for example, the membranes surrounding biological cells.

Scientists think that if life ever emerged on Mars, it was probably microbial in nature. Because microbes are so small, it’s difficult to be definitive about any potential evidence for life found on Mars. Confirming such evidence requires more powerful scientific instruments that are too large to be put on a rover.

The organic molecules found by Curiosity consist of carbon atoms linked in long chains, with other elements bonded to them, like hydrogen and oxygen. They come from a 3.7-billion-year-old rock dubbed Cumberland, encountered by the rover at a presumed dried-up lakebed in Mars’s Gale Crater. Scientists used the Sample Analysis at Mars (Sam) instrument on the NASA rover to make their discovery.

Scientists were actually looking for evidence of amino acids, which are the building blocks of proteins and therefore key components of life as we know it. But this unexpected finding is almost as exciting. The research is published in Proceedings of the National Academy of Sciences.

Among the molecules were decane, which has 10 carbon atoms and 22 hydrogen atoms, and dodecane, with 12 carbons and 26 hydrogen atoms. These are known as alkanes, which fall under the umbrella of the chemical compounds known as hydrocarbons.
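Those counts follow the general alkane formula CnH2n+2, as a quick check confirms:

```python
def alkane_hydrogens(carbons: int) -> int:
    """Alkanes follow the general formula CnH(2n+2)."""
    return 2 * carbons + 2

for name, carbons in [("decane", 10), ("dodecane", 12)]:
    print(f"{name}: C{carbons}H{alkane_hydrogens(carbons)}")
# decane: C10H22, dodecane: C12H26 -- matching the counts reported above.
```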

It’s an exciting time in the search for life on Mars. In March this year, scientists presented evidence of features in a different rock sampled elsewhere on Mars by the Perseverance rover. These features, dubbed “leopard spots” and “poppy seeds,” could have been produced by the action of microbial life in the distant past, or not. The findings were presented at a US conference and have not yet been published in a peer reviewed journal.

The Mars Sample Return mission, a collaboration between NASA and the European Space Agency, offers hope that samples of rock collected and stored by Perseverance could be brought to Earth for study in laboratories. The powerful instruments available in terrestrial labs could finally confirm whether or not there is clear evidence for past life on Mars. However, in 2023, an independent review board criticized increases in Mars Sample Return’s budget. This prompted the agencies to rethink how the mission could be carried out. They are currently studying two revised options.

Signs of Life?

Cumberland was found in a region of Gale Crater called Yellowknife Bay. This area contains rock formations that look suspiciously like those formed when sediment builds up at the bottom of a lake. One of Curiosity’s scientific goals is to examine the prospect that past conditions on Mars would have been suitable for the development of life, so an ancient lakebed is the perfect place to look for them.

The Martian rock known as Cumberland, which was sampled in the study. NASA/JPL-Caltech/MSSS

The researchers think that the alkane molecules may once have been components of more complex fatty acid molecules. On Earth, fatty acids are components of fats and oils. They are produced through biological activity in processes that help form cell membranes, for example. The idea that fatty acids are present in this rock sample has been around for several years, but the new paper details the full evidence.

Fatty acids are long, linear hydrocarbon molecules with a carboxyl group (COOH) at one end and a methyl group (CH3) at the other, forming a chain of carbon and hydrogen atoms.

A fat molecule consists of two main components: glycerol and fatty acids. Glycerol is an alcohol molecule with three carbon atoms, five hydrogens, and three hydroxyl (chemically bonded oxygen and hydrogen, OH) groups. Fatty acids may have 4 to 36 carbon atoms, though most have 12 to 18. The longest carbon chains found in Cumberland are 12 atoms long.

Mars Sample Return will deliver Mars rocks to Earth for study. This artist’s impression shows the ascent vehicle leaving Mars with rock samples. NASA/JPL-Caltech

Organic molecules preserved in ancient Martian rocks provide a critical record of the past habitability of Mars and could be chemical biosignatures (signs that life was once there).

The sample from Cumberland has been analyzed by the Sam instrument many times, using different experimental techniques, and has shown evidence of clay minerals, as well as the first (smaller and simpler) organic molecules found on Mars, back in 2015. These included several classes of chlorinated and sulphur-containing organic compounds in Gale Crater sedimentary rocks, with chemical structures of up to six carbon atoms. The new discovery doubles the number of carbon atoms found in a single molecule on Mars.

The alkane molecules are significant in the search for biosignatures on Mars, but how they actually formed remains unclear. They could also be derived through geological or other chemical mechanisms that do not involve fatty acids or life. These are known as abiotic sources. However, the fact that they exist intact today in samples that have been exposed to a harsh environment for many millions of years gives astrobiologists (scientists who study the possibility of life beyond Earth) hope that evidence of ancient life might still be detectable today.

It is possible the sample contains even longer chain organic molecules. It may also contain more complex molecules that are indicative of life, rather than geological processes. Unfortunately, Sam is not capable of detecting those, so the next step is to deliver Martian rock and soil to more capable laboratories on the Earth. Mars Sample Return would do this with the samples already gathered by the Perseverance Mars rover. All that’s needed now is the budget.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post NASA’s Curiosity Rover Has Made a Significant Discovery in the Search for Alien Life appeared first on SingularityHub.

Category: Transhumanism

Brain Implant ‘Streams’ a Paralyzed Woman’s Thoughts in Near Real Time

April 1, 2025 - 00:04

The system, which also synthesizes her voice, takes no more than a second to translate thoughts to speech.

A paralyzed woman can again communicate with the outside world thanks to a wafer-thin disk capturing speech signals in her brain. An AI translates these electrical buzzes into text and, using recordings taken before she lost the ability to speak, synthesizes speech with her own voice.

It’s not the first brain implant to give a paralyzed person their voice back. But previous setups had long lag times. Some required as much as 20 seconds to translate thoughts into speech. The new system, called a streaming speech neuroprosthetic, takes just a second.

“Speech delays longer than a few seconds can disrupt the natural flow of conversation,” the team wrote in a paper published in Nature Neuroscience today. “This makes it difficult for individuals with paralysis to participate in meaningful dialogue, potentially leading to feelings of isolation and frustration.”

On average, the AI can translate about 47 words per minute, with some trials hitting nearly double that pace. The team initially trained the algorithm on 1,024 words, but it eventually learned to decode other words with lower accuracy based on the woman’s brain signals.

The algorithm showed some flexibility too, decoding electrical signals collected from two other types of hardware and using data from other people.

“Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” study author Gopala Anumanchipalli at the University of California, Berkeley, said in a press release. “The result is more naturalistic, fluent speech synthesis.”

Bridging the Gap

Losing the ability to communicate is devastating.

Some solutions for people with paralysis already exist. One of these uses head or eye movements to control a digital keyboard where users type out their thoughts. More advanced options can translate text into speech in a selection of voices (though not usually a user’s own).

But these systems experience delays of over 20 seconds, making natural conversation difficult.

Ann, the participant in the new study, uses such a device daily. Barely middle-aged, she suffered a stroke that severed the neural connections between her brain and the muscles that control her ability to speak. These include muscles in her vocal cords, lips, and tongue and those that generate airflow to differentiate sounds, like the breathy “think” versus a throaty “umm.”

Electrical signals from the outermost part of the brain, called the cortex, direct these muscle movements. By intercepting their communications, devices can potentially decode a person’s intention to speak and even translate signals into comprehensible words and sentences. The signals are hard to decipher, but thanks to AI, scientists have begun making sense of them.

In 2023, the same team developed a brain implant to transform brain signals into text, speech, and an avatar mimicking a person’s facial expressions. The implant sat on top of the brain, causing less damage than surgically inserted implants, and its AI translated neural signals into text at roughly 78 words per minute—about half the rate at which most people tend to speak.

Meanwhile, another team used tiny electrodes implanted directly in the brain to translate speech from a 125,000-word vocabulary into text at a similar speed. A more recent implant with a similarly sized vocabulary allowed a participant to communicate for eight months with nearly perfect accuracy.

These studies “have shown impressive advances in vocabulary size, decoding speeds, and accuracy of text decoding,” wrote the team. But they all suffer a similar problem: lag time.

Streaming Brain Signals

Ann had a paper-like electrode array implanted on the surface of brain regions responsible for speech. The implant didn’t read her thoughts per se. Rather, it captured signals controlling how the vocal cords, tongue, and other muscles move when verbalizing words. A cable connecting the device to a small port fixed on her skull sent brain signals to computers for decoding.

The implant’s AI was a three-part deep learning system, a type of algorithm that roughly mimics how biological brains work. The first part decoded neural signals in real-time. Others controlled text and speech outputs using a language model, so Ann could read and hear the device’s output.

To train the AI, Ann imagined verbalizing 1,024 words in short sentences. Although she couldn’t physically move her muscles, her brain still generated neural signals as if she was speaking—so-called “silent speech.” The AI converted this data into text on a computer screen and speech.

The team “used Ann’s pre-injury voice, so when we decode the output, it sounds more like her,” study author Cheol Jun Cho said in the press release.

After further training that included over 23,000 attempts at silent speech, the AI learned to translate at a pace of roughly 47 words per minute with minimal lag—averaging just a second delay. This is “significantly faster” than older setups, wrote the team.

The speed boost is because the AI processes smaller chunks of neural activity in real time. When given a sentence for the patient to imagine vocalizing—for example, “what did you say to her?”—the system generated both text and vocals with minimal error. Other sentences didn’t fare as well. A prompt of “I just got here” translated to “I’ve said to stash it” in one test.
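The difference between batch and streaming decoding can be sketched in a few lines of Python. Here the decoder is a stand-in function and the 80-millisecond window is our assumption; the point is simply that words are emitted as each small chunk of neural data arrives rather than after the whole utterance:

```python
import time

def decode_chunk(chunk_id: int) -> str:
    """Stand-in for the neural decoder; returns one decoded word per chunk."""
    return f"word{chunk_id}"

def stream_decode(num_chunks: int, chunk_s: float = 0.08):
    """Emit words as each short window of neural activity arrives."""
    for i in range(num_chunks):
        time.sleep(chunk_s)          # wait for the next window of brain data
        yield decode_chunk(i)        # latency stays ~one chunk, not one utterance

start = time.time()
for word in stream_decode(5):
    print(f"{time.time() - start:.2f}s: {word}")
```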

Long Road Ahead

Prior work mostly evaluated speech prosthetics by their ability to generate short phrases or sentences of just a few seconds. But people naturally start and stop in conversation, requiring an AI to detect an intent to speak over longer periods of time. The AI should “ideally generalize” speech “over several minutes or hours rather than several seconds,” wrote the team.

To accomplish this, they also fed the AI long stretches of brain activity when Ann was not trying to talk, intermixed with those when she was. The AI picked up on the difference—mirroring her intentions of when to speak and when to remain silent.

There’s room for improvement. Roughly half of the decoded words in longer conversations were off the mark. But the setup is a step toward natural communication in everyday life.

Different implants could also benefit from the team’s algorithm.

In another test, they analyzed two separate datasets, one collected from a paralyzed person with electrodes inserted into their brain and another from a healthy volunteer with electrodes placed over their vocal cords. Both could “silent speak” during training and testing. The AI made plenty of mistakes but detected intended speech in near real time above random chance.

“By demonstrating accurate brain-to-voice synthesis on other silent-speech datasets, we showed that this technique is not limited to one specific type of device,” said study author Kaylo Littlejohn in the release.

Implants with more electrodes to better capture brain activity could improve performance. The team also plans to build emotion into the voice generator to reflect a user’s tone, pitch, and loudness.

In the meantime, Ann is happy with her implant. “Hearing her own voice in near-real time increased her sense of embodiment,” said Anumanchipalli.

The post Brain Implant ‘Streams’ a Paralyzed Woman’s Thoughts in Near Real Time appeared first on SingularityHub.

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through March 29)

29 March, 2025 - 16:00
Future

If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born
Steven Levy | Wired

“The vision [Dario Amodei] spins makes Shangri-La look like a slum. Not long from now, maybe even in 2026, Anthropic or someone else will reach AGI. Models will outsmart Nobel Prize winners. These models will control objects in the real world and may even design their own custom computers. Millions of copies of the models will work together—imagine an entire nation of geniuses in a data center!”

Tech

Move Over, OpenAI: How the Startup Behind Cursor Became the Hottest, Vibiest Thing in AI
Natasha Mascarenhas | The Information

“[Anysphere’s $200 million in annual revenue is] an astonishing amount considering that Cursor’s launch came in January 2023. It all adds up to a stunning reality: Anysphere is one of the fastest-growing startups ever, and what Truell and his co-founders have built is a bona fide AI rocket ship with a trajectory that stands out even among other AI startups hurtling into the stratosphere.”

Computing

How Extropic Plans to Unseat Nvidia
Will Knight | Wired

“Extropic has now shared more details of its probabilistic hardware with Wired, as well as results that show it is on track to build something that could indeed offer an alternative to conventional silicon in many datacenters. The company aims to deliver a chip that is three to four orders of magnitude more efficient than today’s hardware, a feat that would make a sizable dent in future emissions.”

Computing

Could Nvidia’s Revolutionary Optical Switch Transform AI Data Centers Forever?
Samuel K. Moore | IEEE Spectrum

“According to Nvidia, adopting the CPO switches in a new AI data center would lead to one-fourth the number of lasers, boost power efficiency for trafficking data 3.5-fold, improve the reliability of signals making it from one computer to another on time by 63-times, make networks 10-fold more resilient to disruptions, and allow customers to deploy new data center hardware 30 percent faster.”

Artificial Intelligence

A New, Challenging AGI Test Stumps Most AI Models
Maxwell Zeff | TechCrunch

“‘Reasoning’ AI models like OpenAI’s o1-pro and DeepSeek’s R1 score between 1% and 1.3% on ARC-AGI-2, according to the Arc Prize leaderboard. Powerful non-reasoning models, including GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash, score around 1%.”

Computing

The Quantum Apocalypse Is Coming. Be Very Afraid
Amit Katwala | Wired

“One day soon, at a research lab near Santa Barbara or Seattle or a secret facility in the Chinese mountains, it will begin: the sudden unlocking of the world’s secrets. Your secrets. Cybersecurity analysts call this Q-Day—the day someone builds a quantum computer that can crack the most widely used forms of encryption.”

Biotechnology

How a Bankruptcy Judge Can Stop a Genetic Privacy Disaster
Keith Porcaro | MIT Technology Review

“Any new owner of 23AndMe’s data will want to find ways to make money from it. Lawmakers have a big opportunity to help keep it safe. …A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted.”

Space

Just One Exo-Earth Pixel Can Reveal Continents, Oceans, and More
Ethan Siegel | Big Think

“In the coming years and decades, several ambitious projects will reach completion, finally giving humanity the capability to image Earth-size planets at Earth-like distances around Sun-like stars. …Remarkably, even though these exo-Earths will appear as just one lonely pixel in our detectors, we can use that data to detect continents, oceans, icecaps, forests, deserts, and more.”

Future

Does All Intelligent Life Face a Great Filter?
Paul Sutter | Ars Technica

“Maybe we’re alone because essentially nobody ever makes it. Maybe there’s some unavoidable barrier between the origin of intelligent life and said life setting off to explore the galaxy. The position of this Great Filter, as [economist Robin Hanson] named it, is critically important as we contemplate the future of humanity.”

Science

Inside arXiv—the Most Transformative Platform in All of Science
Sheon Han | Wired

“arXiv’s unassuming facade belies the tectonic reconfiguration it set off in the scientific community. If arXiv were to stop functioning, scientists from every corner of the planet would suffer an immediate and profound disruption. ‘Everybody in math and physics uses it,’ Scott Aaronson, a computer scientist at the University of Texas at Austin, told me. ‘I scan it every night.'”

Space

Farewell to Gaia, the Milky Way’s Cartographer
Katrina Miller | The New York Times

“It is difficult to capture the breadth of development and discovery that the spinning observatory has enabled. But here are a few numbers: nearly two billion stars, millions of potential galaxies and some 150,000 asteroids. These observations have led to more than 13,000 studies, so far, by astronomers.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 29) appeared first on SingularityHub.

Category: Transhumanism

What Anthropic Researchers Found After Reading Claude’s ‘Mind’ Surprised Them

28 March, 2025 - 23:47

As AI’s power grows, charting its inner world is becoming more crucial.

Despite popular analogies to thinking and reasoning, we have a very limited understanding of what goes on in an AI’s “mind.” New research from Anthropic helps pull the veil back a little further.

Tracing how large language models generate seemingly intelligent behavior could help us build even more powerful systems—but it could also be crucial for understanding how to control and direct those systems as they approach and even surpass our capabilities.

This is challenging. Older computer programs were hand-coded using logical rules. But neural networks learn skills on their own, and the way they represent what they’ve learned is notoriously difficult to parse, leading people to refer to the models as “black boxes.”

Progress is being made though, and Anthropic is leading the charge.

Last year, the company showed that it could link activity within a large language model to both concrete and abstract concepts. In a pair of new papers, it’s demonstrated that it can now trace how the models link these concepts together to drive decision-making and has used this technique to analyze how the model behaves on certain key tasks.

“These findings aren’t just scientifically interesting—they represent significant progress towards our goal of understanding AI systems and making sure they’re reliable,” the researchers write in a blog post outlining the results.

The Anthropic team carried out their research on the company’s Claude 3.5 Haiku model, its smallest offering. In the first paper, they trained a “replacement model” that mimics the way Haiku works but replaces internal features with ones that are more easily interpretable.

The team then fed this replacement model various prompts and traced how it linked concepts into the “circuits” that determined the model’s response. To do this, they measured how various features in the model influenced each other as it worked through a problem. This allowed them to detect intermediate “thinking” steps and how the model combined concepts into a final output.
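As a toy illustration of that intervention logic (not Anthropic’s actual method or model), one can ablate an upstream feature and measure how much a downstream output shifts:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=4)              # upstream-feature -> output weights

def downstream(features):
    """Stand-in for a downstream feature computed from upstream ones."""
    return float(features @ weights)

features = rng.normal(size=4)
baseline = downstream(features)

for i in range(len(features)):
    patched = features.copy()
    patched[i] = 0.0                      # intervene: silence feature i
    effect = baseline - downstream(patched)
    print(f"feature {i}: influence {effect:+.3f}")
```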

In a second paper, the researchers used this approach to interrogate how the same model behaved when faced with a variety of tasks, including multi-step reasoning, producing poetry, carrying out medical diagnoses, and doing math. What they found was both surprising and illuminating.

Most large language models can reply in multiple languages, but the researchers wanted to know what language the model uses “in its head.” They discovered that, in fact, the model has language-independent features for various concepts and sometimes links these together first before selecting a language to use.

Another question the researchers wanted to probe was the common conception that large language models work by simply predicting what the next word in a sentence should be. However, when the team prompted their model to generate the next line in a poem, they found the model actually chose a rhyming word for the end of the line first and worked backwards from there. This suggests these models do conduct a kind of longer-term planning, the researchers say.

The team also investigated another little-understood behavior in large language models called “unfaithful reasoning.” There is evidence that when asked to explain how they reach a decision, models will sometimes provide plausible explanations that don’t match the steps they took.

To explore this, the researchers asked the model to add two numbers together and explain how it reached its conclusions. They found the model used an unusual approach of combining approximate values and then working out what number the result must end in to refine its answer.
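A toy version of that two-path strategy makes it concrete (our simplification: here the magnitude path happens to be exact, whereas the model’s is genuinely fuzzy):

```python
def toy_two_path_add(a: int, b: int) -> int:
    magnitude = (a + b) // 10 * 10        # "ballpark" path: tens and above
    ones = (a % 10 + b % 10) % 10         # precise path: the last digit only
    return magnitude + ones               # combine the two estimates

print(toy_two_path_add(36, 59))           # 95: rough "90-something" + ends-in-5
```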

However, when asked to explain how it came up with the result, it claimed to have used a completely different approach—the kind you would learn in math class and is readily available online. The researchers say this suggests the process by which the model learns to do things is separate from the process used to provide explanations and could have implications for efforts to ensure machines are trustworthy and behave the way we want them to.

The researchers caveat their work by pointing out that the method only captures a fuzzy and incomplete picture of what’s going on under the hood, and it can take hours of human effort to trace the circuit for a single prompt. But these kinds of capabilities will become increasingly important as systems like Claude become integrated into all walks of life.

The post What Anthropic Researchers Found After Reading Claude’s ‘Mind’ Surprised Them appeared first on SingularityHub.

Category: Transhumanism

Scientists Just Transplanted a Pig Liver Into a Person for the First Time

27 March, 2025 - 22:12

The liver performed basic functions but isn’t yet a full replacement.

Our liver has admirable regenerative properties. But it takes a beating every day. Eventually, its tissues scar, and if the organ fails, a liver transplant is the only solution.

Donor livers are hard to come by, however. This week, a Chinese team turned to another source—pig livers—and published the first results showing how one functions inside a human recipient. The liver in the study underwent heavy gene editing to remove genes that trigger immune rejection and to add genes that make the organ appear more human to the body.

Just two hours after transplant, the pig liver began producing bile, a type of digestive fluid that breaks down fat. The organ remained functional until the end of the experiment 10 days later, without marked signs of rejection or inflammation.

“This is the first time we tried to unravel whether the pig liver could work well in the human body,” said study author Lin Wang at Xijing Hospital in China in a press briefing. The pig liver is meant to be a stop-gap measure rather than a full replacement. It could temporarily keep patients alive until a human donor organ becomes available or the patient’s own liver recovers.

“The study represents a milestone in the history of liver xenotransplantation,” said Iván Fernández Vega at the University of Oviedo in Spain, who was not involved in the study. “I found the work very relevant, but we have to be cautious.”

Crossing Species

There’s a severe lack of donated organs. As of March 2025, over 104,600 people are on the transplant waitlist, and the wait can take months, if not years. Some don’t survive it.

Xenotransplantation, or the transplantation of organs from one animal into another, offers another solution. For the past decade, scientists have been eyeing other species as sources of functional organs that could replace broken human body parts. Bama miniature pigs are especially promising because their internal organs are similar in size and function to ours.

But there are caveats. Pig organs are dotted with sugars that spur our immune systems into action. Immune cells attack the foreign organ, damaging its function or triggering rejection.

There’s also the risk posed by porcine endogenous retroviruses or PERVs. These are tricky viruses embedded inside the genomes of all pigs. Although they don’t seem to harm pigs, they can infect some human cells and potentially lead to disease.

Xenotransplant efforts over the past decade have tried gene editing pig organs to rid them of PERVs. Other edits inhibit genes responsible for immune rejection and make the organs appear more human to the body.

There have been successes. Genetically engineered pig hearts transplanted into baboons with heart failure allowed them to thrive for over six months. Pig kidney grafts with 69 genetic edits retained function after transplantation in monkeys.

And although highly experimental, xenotransplantation has already been used in humans. In 2021, a team performed the first transplant of a genetically modified pig kidney into a brain-dead person. The kidney was attached to blood vessels in the upper leg outside the belly and covered with a protective shield.

Since then, surgeons have transplanted hearts, kidneys, and a thymus directly inside the bodies of living volunteers, with mixed results. One pig heart recipient soon passed away after the xenotransplant. Another fared better with a pig kidney: The 53-year-old grandma returned home this February after receiving the organ late last year.

Her “recovery from a long history of kidney failure and dialysis treatment has been nothing short of remarkable,” said study lead Robert Montgomery at NYU Langone Transplant Institute at the time.

Liver xenotransplants, however, pose additional problems.

The organ “is so complicated,” said Wang. As the ultimate multitasker, it metabolizes drugs and other chemicals, makes bile and other digestive juices, cleans out old blood cells, and produces proteins for blood clotting. Each of these functions is orchestrated by a symphony of molecules that could differ between pigs and humans. A mismatch could result in a pig liver that can’t work in the human body or one that triggers dangerous immune responses.

In 2023, a team from the University of Pennsylvania took a stab at the problem. They connected a genetically engineered pig liver to the bloodstream of a brain-dead person with the organ outside the body. The donor liver, engineered by the biotechnology company eGenesis to reduce the chance of immune rejection, remained healthy for at least 72 hours.

Plus One

The new study aimed to show that a pig liver transplant could last longer and perform its usual tasks. The team sourced the liver from Clonorgan Biotechnology based in Chengdu, China.

The donor organ was from a seven-month-old Bama miniature pig and had six gene edits. The majority of the edits were designed to prevent hyperacute rejection, where the immune system launches a full onslaught against the transplant within minutes.

The recipient was a brain-dead, middle-aged man who still had a working liver. Rather than trying to replace his liver, the team wanted to find out whether a pig liver could survive and function inside a human body while performing its normal roles.

Surgeons hooked the gene-edited pig liver to the donor’s blood supply and monitored it for 10 days—the amount of time the recipient’s family approved for the experiment. Within hours, the organ began synthesizing and pumping out bile at a gradually increasing volume. The liver also made albumin, a protein crucial for maintaining fluids and transporting molecules.

Blood from the recipient flowed smoothly throughout the liver, which likely prevented blood clots often associated with liver transplants. Thanks to immunosuppressant drugs, the patient’s immune system stayed relatively quiet and didn’t attack the pig organ.

“This is the world’s first [published] case of a transplant of a genetically modified pig liver into a brain-dead human,” said Rafael Matesanz, creator and founder of the National Transplant Organization in Spain, who was not involved in the work.

Many questions remain. The liver has multiple functions, but the study only tested bile and albumin production. Could the pig liver also filter toxins from the blood or break down medications? Also, the study only observed one person for a relatively short time. The results might not hold for other demographics, and the transplant could falter down the road.

And because the volunteer still had a functional liver, “we cannot extrapolate the extent to which this xenograft would have supported a patient in liver failure,” said Peter Friend at the University of Oxford, who was not involved in the study.

Even so, a temporary bridge transplant—where a pig liver would support bodily functions short-term while the recipient waits for a permanent transplant—could save lives.

The same team recently completed a full pig-to-human liver transplant, swapping out the liver of a brain-dead human with one from a genetically modified pig. They plan to release the data in a future publication. “Whether it could replace the original human liver in the future [is unknown],” said Wang at the press briefing. “It is our dream to make this achievement.”

The post Scientists Just Transplanted a Pig Liver Into a Person for the First Time appeared first on SingularityHub.

Category: Transhumanism

Technology Has Shaped Human Knowledge for Centuries. Generative AI Is Set to Transform It Yet Again.

25 March, 2025 - 19:42

We stand on the brink of the next knowledge revolution.

Where would we be without knowledge? Everything from the building of spaceships to the development of new therapies has come about through the creation, sharing, and validation of knowledge. It is arguably our most valuable human commodity.

From clay tablets to electronic tablets, technology has played an influential role in shaping human knowledge. Today, we stand on the brink of the next knowledge revolution, one as big as, if not bigger than, the invention of the printing press or the dawning of the digital age.

Generative artificial intelligence is a revolutionary new technology able to collect and summarize knowledge from across the internet at the click of a button. Its impact is already being felt from the classroom to the boardroom, the laboratory to the rainforest.

Looking back to look forward, what do we expect generative AI to do to our knowledge practices? And can we foresee how this may change human knowledge, for better or worse?

The Power of the Printing Press

While printing technology had a huge immediate impact, we are still coming to grips with the full scale of its effects on society. This impact was largely due to its ability to spread knowledge to millions of people.

Of course, human knowledge existed before the printing press. Non-written forms of knowledge date back tens of thousands of years, and researchers are today demonstrating the advanced skills associated with verbal knowledge.

In turn, scribal culture played an integral role in ancient civilizations. Serving as a means to preserve legal codes, religious doctrines, or literary texts, scribes were powerful people who traded hand-written commodities for kings and nobles.

But it was the printing press—specifically, the use of movable type, which made book production much cheaper and less labor-intensive—that democratized knowledge. The technology was invented in Germany around 1440 by the goldsmith Johannes Gutenberg. Often described as one-to-many communication, printing made affordable information available to entire populations.

This exponential increase in knowledge dissemination has been associated with huge societal shifts, from the European Renaissance to the rise of the middle classes.

The printing press was invented in Germany around 1440. Daniel Chodowiecki/Wikipedia

The Revolutionary Potential of the Digital Age

The invention of the computer—and more importantly the connecting of multiple computers across the globe via the internet—saw the arrival of another knowledge revolution.

Often described as many-to-many communication, the internet provided a means for people to communicate, share ideas, and learn.

In the internet’s early days, USENET bulletin boards were digital chatrooms that allowed for unmediated crowd-sourced information exchange.

As internet users increased, the need for content regulation and moderation also grew. However, the internet’s role as the world’s largest open-access library has remained.

The Promise of Generative AI

Generative AI refers to deep-learning models capable of generating human-like outputs, including text, images, video, and audio. Examples include ChatGPT, Dall-E, and DeepSeek.

Today, this new technology promises to function as our personal librarian, reducing our need to search for a book, let alone open its cover. Visiting physical libraries for information has been unnecessary for a while, but generative AI means we no longer need to even scroll through lists of electronic sources.

Trained on hundreds of billions of human words, AI can condense and synthesize vast amounts of information across a variety of authors, subjects, or time periods. A user can pose any question to their AI assistant, and for the most part, will receive a competent answer. Generative AI can sometimes, however, “hallucinate,” meaning it will deliver unreliable or false information, instead of admitting it doesn’t know the answer.

Generative AI can also personalize its outputs, providing renditions in whatever language and tone required. Marketed as the ultimate democratizer of knowledge, it can adapt information to suit a person’s interests, pace, abilities, and style to an extraordinary degree.

But, as an increasingly prevalent arbiter of our information needs, AI marks a new phase in the history of the relationship between knowledge and technology.

It challenges the very concept of human knowledge: its authorship, ownership, and veracity. It also risks forfeiting the one-to-many revolution that was the printing press and the many-to-many potential that is the internet. In so doing, is generative AI inadvertently reducing the voices of many to the banality of one?

Using Generative AI Wisely

Most knowledge is born of debate, contention, and challenge. It relies on diligence, reflexivity, and application. The question of whether generative AI promotes these qualities is an open one, and evidence is so far mixed.

Some studies show it improves creative thinking, but others do not. Yet others show that while it might be helping individuals, it is ultimately diminishing our collective potential. Most educators are concerned it will dampen critical thinking.

More generally, research on “digital amnesia” tells us that we store less information in our heads today than we did previously due to our growing reliance on digital devices. And, relatedly, people and organizations are now increasingly dependent on digital technology.

Looking to history for inspiration: some 2,400 years ago, the Greek philosopher Socrates said that true wisdom is knowing when we know nothing.

If generative AI risks making us information rich but thinking poor (or individually knowledgeable but collectively ignorant), these words might be the one piece of knowledge we need right now.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Technology Has Shaped Human Knowledge for Centuries. Generative AI Is Set to Transform It Yet Again. appeared first on SingularityHub.

Category: Transhumanism

These Tiny Liquid Robots Merge and Split Like ‘Terminator’

24 March, 2025 - 23:15

Made of teflon and water, the robots could one day shuttle drugs around the body.

Our cells are like the ultimate soft robots. Made mostly of a liquid interior wrapped inside a fatty shell, they split, stretch, roam, and squeeze into every nook and cranny of the body.

Actual robots, not so much. Even soft robots made of flexible materials struggle to deform outside of the physical limits of their building blocks.

This month, a team from Korea introduced liquid robots inspired by biological cells. About the size of a grain of rice, each robot is made of water coated with Teflon particles. The gummy-candy-like blobs are controlled using sound waves and can slip through grated fences, chomp up debris, and skim across solid and liquid surfaces.

They can also function as tiny chemical reactors. In a test, the team directed two robots, each loaded with a different chemical, to jump off a ledge and merge together without breaking, allowing the chemicals to react inside their Teflon shells.

Because the robots are biocompatible, they could one day shuttle drugs to hard-to-reach areas of the body—potentially loading up on chemotherapies to kill tumors, for example. Versions with other molecular tools embedded within the bots could also help diagnose diseases.

“It is challenging to emulate biological forms and functions with artificial machines,” wrote the team. “[But] a promising avenue to tackle this problem is harnessing the supreme deformability of liquids while providing stable yet flexible shells around them.”

From T-1000 to Liquid Marbles

Those who have seen Terminator 2: Judgment Day will remember the film’s formidable robot antagonist. Made of liquid metal, the T-1000 deforms, liquifies, and reconstructs itself on demand, instantly healing damage to its body.

Scientists have long sought to capture this versatility in machines (without the killer robot angle, of course). Previous studies have used a variety of liquid metals that change their shape when subjected to electromagnetic fields. These unconventional robots—smaller than a fingertip—can split, merge, and transport cargoes on demand. But their high metal content makes them incompatible with most chemical reactions and biology, limiting their practical use.

Another way to build liquid robots is to encapsulate water or other liquids in an armor-like barrier. It’s a bit like making gummy candy, with a squishy but supportive outer casing and a gushy core. In practice, researchers dust a hydrophobic powder onto a liquid drop, and the mixture shrinks into a bead-like shape thanks to a physical phenomenon called capillary interaction.

These forces partly stem from the surface tension between a solid and liquid, like when you barely overfill a glass and the water forms a round top. Adding hydrophobic powder to small amounts of liquid stabilizes these forces, pushing water molecules into tiny beads that almost behave like solids.
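For a sense of scale, textbook fluid mechanics (not the paper itself) says whether surface tension can hold a drop in a bead is set by the Bond number, the ratio of gravitational to capillary forces, where ρ is the liquid’s density, g is gravitational acceleration, L is the drop’s size, and γ is the surface tension (about 0.072 newtons per meter for water):

```latex
\mathrm{Bo} = \frac{\rho g L^{2}}{\gamma},
\qquad
L_c = \sqrt{\frac{\gamma}{\rho g}}
\approx \sqrt{\frac{0.072}{1000 \times 9.8}}\ \mathrm{m}
\approx 2.7\ \mathrm{mm}
```

Below the roughly 2.7-millimeter capillary length of water, Bo drops under one and surface tension wins, which is why rice-grain-sized drops can behave like solid beads.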

Appropriately dubbed liquid marbles, these non-stick water drops can roll across surfaces. Researchers can control their movement using gravity and electrical and magnetic fields, allowing them to float and climb across terrain. Some versions can even shuttle ingredients from one place and release their cargo in another.

But classic liquid marbles have a weakness. Small fluctuations in temperature or force, such as squeezing or dropping, cause them to leak or fully collapse. So, the authors developed a stronger shell to make their marbles more durable.

Ice, Ice, Baby

First, the team searched for the best ratio of Teflon dust to water. They found that more dust on the surface led to stronger, more durable shells.

Next, they worked out how to manufacture droplets with higher dust content. Traditional methods use spherical drops, which don’t have a lot of surface area compared to their volume. Cubes are a better starting point because they have more area. So, the team froze water in custom ice trays and coated the cubes with industrial-grade Teflon powder.

This method has another perk. Ice takes up about 9 percent more volume than liquid water. As the cubes melt, they shrink, squeezing the Teflon particles together on the surface of the droplets, limiting their movement and forming much stronger armor for each liquid robot.
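Two bits of quick arithmetic (ours, for intuition) show why cubes and ice both help:

```python
import math

# For equal volume, a cube exposes ~24% more surface than a sphere,
# leaving room for more Teflon powder per droplet.
V = 1.0
r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere_area = 4 * math.pi * r ** 2        # ~4.84 for V = 1
cube_area = 6 * V ** (2 / 3)              # 6.00 for V = 1
print(f"cube/sphere area ratio: {cube_area / sphere_area:.2f}")   # ~1.24

# Water expands ~9% on freezing, so a melting cube shrinks by ~8%,
# compacting the powder shell around the liquid core.
print(f"volume lost on melting: {1 - 1 / 1.09:.1%}")              # ~8.3%
```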

On the Move

The team pitted these enhanced liquid robots against traditional liquid marbles in a kind of playground with paper-covered foam structures and pools of water.

Both kinds of droplets could deform, such as briefly opening to expose their watery interior. But thanks to their harder shells, the Teflon bots were better able to keep their liquid cores intact and survive falls without bursting. The liquid marbles, on the other hand, stuck to surfaces and eventually collapsed.

The team used sound waves to steer the robots around for more difficult tasks. In one task, they piloted the bots across an array of 3D-printed pillars. Upon meeting a pair of pillars, the robots split open, oozed through, and then merged back into their original forms on the other side. In another test, the researchers zapped adjacent bots with sound waves, deforming them into a bridge-like shape. Once touching, the two bots merged into a single, larger blob.

Thanks to their water-repelling nature, the robots could skim over both water and land—sometimes both. Older liquid marbles easily burst when shifting between the two terrains.

Liquid Bot Mission

To fully test the robots, the team designed a mission where two robots worked together. One bot picked up a chemical “toxin” locked behind bars. It then had to find its partner with the “antidote” in a pool of water, merge with the other bot to neutralize the toxin, and dump the final chemical into a safe container.

The team steered the first bot through its prison bars to engulf the toxin and carry it back out. Meanwhile, its partner skimmed across the pool to devour the antidote. The bots dropped from a height multiple times their size to their rendezvous, where they merged toxin and antidote, opened the outer shell, and dumped out the neutralized chemical.

Don’t worry, we’re still a ways from building T-1000s. The liquid robots are tiny and controlled manually. But the team is working to add smart materials for autonomous operation. And though they used water and Teflon here, the same process could be used in the future to mix other ingredients into a variety of liquid robots with different capabilities.

The post These Tiny Liquid Robots Merge and Split Like ‘Terminator’ appeared first on SingularityHub.

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through March 22)

22 March, 2025 - 16:00
Tech

Inside Google’s Two-Year Frenzy to Catch Up With OpenAI
Paresh Dave and Arielle Pardes | Wired

“Wired spoke with more than 50 current and former employees—including engineers, marketers, legal and safety experts, and a dozen top executives—to trace the most frenzied and culture-reshaping period in the company’s history. …This is the story, being told with detailed recollections from several executives for the first time, of those turbulent two years and the trade-offs required along the way.”

Robotics

Watch the Atlas Robot Bust a Move in Boston Dynamics’ Latest Video
Anna Washenko | Engadget

“In the [new clip], [Boston Dynamics’] Atlas robot demonstrates several types of full-body movement, starting with a walk and advancing to a cartwheel and even a spot of break dancing. The different actions were developed using reinforcement learning that used motion capture and animation as source materials.”

Computing

Not Everyone Is Convinced by Microsoft’s Topological Qubits
Dina Genkina | IEEE Spectrum

“The Microsoft team has not yet reached the milestone where the scientific community would agree that they’ve created a single topological qubit. ‘They have a concept chip which has eight lithographically fabricated qubits,’ Eggleston says. ‘But they’re not functional qubits, that’s the fine print. It’s their concept of what they’re moving towards.'”

Future

In Las Vegas, a Former SpaceX Engineer Is Pulling CO2 From the Air to Make Concrete
Adele Peters | Fast Company

“In an industrial park in North Las Vegas, near an Amazon warehouse and a waste storage facility, a new carbon removal plant is beginning to pull CO2 from the air and store it permanently. Called Project Juniper, it’s the first ‘integrated’ plant of its kind in the US, meaning that it handles both carbon capture and storage in one place.”

Future

Judge Disses Star Trek Icon Data’s Poetry While Ruling AI Can’t Author Works
Ashley Belanger | Ars Technica

“Data ‘might be worse than ChatGPT at writing poetry,’ but his ‘intelligence is comparable to that of a human being,’ Millet wrote. If AI ever reached Data levels of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now. ‘There will be time enough for Congress and the Copyright Office to tackle those issues when they arise,’ Millett wrote.”

Science

Is Dark Energy Getting Weaker? New Evidence Strengthens the Case.
Charlie Wood | Quanta

“Last year, an enormous map of the cosmos hinted that the engine driving cosmic expansion might be sputtering. …[This week], the scientists [reported] that they have analyzed more than twice as much data as before and that it points more strongly to the same conclusion: Dark energy is losing steam.”

Robotics

1X Will Test Humanoid Robots in ‘a Few Hundred’ Homes in 2025
Maxwell Zeff | TechCrunch

“These in-home tests will allow 1X to collect data on how Neo Gamma operates in the home. Early adopters will help create a large, valuable dataset that 1X can use to train in-house AI models and upgrade Neo Gamma’s capabilities.”

Space

See the First Ever Footage of Sunset on the Moon Captured by Blue Ghost
Georgina Torbet | Digital Trends

“With the Blue Ghost lunar mission coming to an end this week, the spacecraft has gifted scientists and the public with an incredible send-off. The moon lander captured the first ever HD imagery of a sunset as seen from the moon, and the images have been stitched together into a video.”

Tech

The Unbelievable Scale of AI’s Pirated-Books Problem
Alex Reisner | The Atlantic

“LibGen and other such pirated libraries make information more accessible, allowing people to read original work without paying for it. Yet generative-AI companies such as Meta have gone a step further: Their goal is to absorb the work into profitable technology products that compete with the originals. Will these be better for society than the human dialogue they are already starting to replace?”

Space

Webb Telescope Captures First Direct Evidence of Carbon Dioxide on an Exoplanet
Isaac Schultz | Gizmodo

“The images feature HR 8799, a multiplanet system 130 light-years from Earth. The discovery not only reveals a chemical compound essential on Earth for processes including photosynthesis and the carbon cycle, but also indicates that gas giant planets elsewhere in the galaxy formed in a similar way to our local giants, Jupiter, and Saturn.”

Computing

Top Developers Want Nvidia Blackwell Chips. Everyone Else, Not So Much
Anissa Gardizy | The Information

“Jensen Huang turned Nvidia into the third most valuable company in the world by designing chips that were way ahead of their time. But Huang’s remarks on Tuesday suggest he’s pulling far ahead of some customers, and the growing gap between what he’s selling and what they’re buying could spell trouble.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 22) appeared first on SingularityHub.

Category: Transhumanism

What Range Anxiety? These Chinese Electric Cars Charge in Just Five Minutes

21 March, 2025 - 16:00

BYD says its new chargers deliver 249 miles of range in the time you’d spend gassing up at the pump.

A major barrier to widespread adoption of electric vehicles is the time it takes to recharge them. This week, Chinese electric vehicle maker BYD unveiled a charger almost as fast as filling up at the gas pump.

The distance electric vehicles can travel on a single charge has climbed dramatically in recent years, but on average, they still only manage about half of what’s possible on a full tank of gas. This limited range is made worse by the fact that public chargers are far less ubiquitous than gas stations and take much longer to charge up a vehicle.

These issues explain why “range anxiety” is one of the most frequently cited barriers to the technology’s adoption. Though companies have developed fast chargers that can deliver 200 miles worth of juice in about 15 minutes, range is still a significant sticking point for many consumers.

Chinese electric vehicle maker BYD may have gone a long way toward easing those concerns with a newly unveiled ultra-fast charger that can deliver 249 miles worth of electricity in 5 minutes. The company also announced plans to install more than 4,000 of these chargers across China.

“In order to completely solve our users’ charging anxiety, we have been pursuing a goal to make the charging time of electric vehicles as short as the refueling time of petrol vehicles,” BYD founder Wang Chuanfu said at a launch event in Shenzhen, according to The Verge.

The breakthrough wasn’t down to a new charger alone. BYD’s so-called “Super e-Platform” combines batteries that can charge at 10 times their capacity per hour with internally developed high-voltage silicon carbide power chips, enabling the chargers to deliver 1,000 kilowatts of power, according to Reuters.

A number of Chinese automakers can provide similar range on a 10-minute charge, but BYD is the first to offer timescales comparable to filling up at the gas pump. By comparison, Tesla’s existing superchargers only manage 250 kilowatts, and a new version due to be launched later this year will top out at 500 kilowatts.

“Tesla has definitely moved from leader to laggard in EV battery and charging technology at this point,” Matt Teske, founder and CEO of electric vehicle charging startup Chargeway, told Axios.

The new charger’s performance is thanks in part to its ability to handle up to 1,000 volts, while Tesla’s chargers only manage 400 volts. But these ultra-high voltages could pose problems for grid capacity if widely rolled out, analysts told Reuters.
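The headline numbers are easy to sanity-check (our arithmetic; it assumes peak power for the full five minutes, while real charging tapers off):

```python
power_kw = 1_000                          # peak charger output reported by BYD
volts = 1_000                             # platform voltage
amps = power_kw * 1_000 / volts           # P = V x I  ->  1,000 A peak current
energy_kwh = power_kw * 5 / 60            # energy in a 5-minute session: ~83 kWh
miles_per_kwh = 249 / energy_kwh          # implied efficiency for the claimed range
print(f"{amps:.0f} A, {energy_kwh:.1f} kWh, {miles_per_kwh:.1f} mi/kWh")
```

At about three miles per kilowatt-hour, the implied efficiency is plausible for an efficient sedan, which suggests the figures are at least internally consistent.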

It’s also worth noting that the range measurements are based on Chinese standards, which are more generous than those used by the US Environmental Protection Agency, according to Ars Technica.

Either way, US drivers won’t likely experience such lightning fast charging any time soon. The new charger will only be available for owners of two new BYD vehicles—the Han L sedan and Tang L SUV—and Chinese-made electric vehicles are essentially banned in the US, following new rules introduced by the Biden administration earlier this year.

So, while range anxiety will likely remain a sticking point for many car buyers in the near future, BYD has thrown down the gauntlet to others in the industry. It probably won’t be long before recharging your electric car is as quick and convenient as filling up at the pump.

The post What Range Anxiety? These Chinese Electric Cars Charge in Just Five Minutes appeared first on SingularityHub.

Category: Transhumanism

Brain Scans of Infants Reveal the Moment We Start Making Memories

20 March, 2025 - 20:32

Kids form fleeting memories at around 12 months, even as their brains are rapidly rewiring themselves.

A giggling toddler in a pink dress and matching headphones lies down on her back in front of a gigantic whirling machine. A pillowy headrest cushions her head. She seems unfazed as she’s slowly shuttled into the claustrophobic brain scanner. Once settled, a projection showing kaleidoscope-like animations holds her attention as the magnetic resonance imaging (MRI) machine scans her brain.

The girl is part of a new study seeking to answer a century-old mystery: Why can’t most of us remember the first three years of our lives? The study probes what Sigmund Freud dubbed “infantile amnesia” and could provide insight into how the brain develops during our early years. And if we can form memories at a young age, are they fleeting, or are they still buried somewhere in the adult brain?

It seems like a simple question, but an answer has eluded scientists.

Though infants and toddlers aren’t yet able to give detailed verbal feedback, studying their behavior has begun to shed light on whether and when they remember people, things, or places. Still, the approach can’t peek in on what’s happening in the brain in those early years. MRI can.

A team from Columbia and Yale University scanned the brains of 26 infants and toddlers aged 4 to 25 months as they completed a memory task. They found that at roughly a year old, a part of the brain crucial to memory formation spun into action and began generating neural signals related to things the kids remembered from the tests.

Called the hippocampus, this sea-horse-shaped structure deep inside the brain is crucial to the encoding of our life stories—who, when, where, what. Adults with a damaged hippocampus suffer memory problems. But because wiring inside the hippocampus is still developing during our earliest years, scientists believe it may be too immature to form memories.

“It’s not that we don’t have any memories from that period [infancy],” said study author Nicholas Turk-Browne in a press briefing. “In fact, early life is when we learn our language. It’s when we learn how to walk…learn the names of objects and form social relationships.”

“What happens during that period when we learn so much, but remember so little?” he added.

Stages of Memory

Memory seems all-or-nothing: You either remember something, or you don’t.

It’s not that simple. Decades of research have identified the hippocampus as the main orchestrator of episodic memories. These allow you to remember an acquaintance at a party, where you parked your car, or what you had for dinner three nights ago.

Each everyday experience is encoded in neural connections in the hippocampus. Groups of neurons called engrams capture different memories and keep them separate, so that they don’t bleed into each other.

Once encoded, the brain etches important memories into long-term storage during sleep. Studies of slumbering rodents and humans after learning a new task found that the hippocampus replayed brain activity at higher speed during the night, correlating with better performance on a trained memory task the next day.

The last step is retrieval. This is when the brain fishes out stored memories and delivers them to our conscious brain—and so, we “remember.”

Failure of any of these steps causes amnesia. So, which steps are responsible for the erosion of baby memories?

Bundles of Joy

Brain scans from 26 infants now offer some intriguing clues.

The team behind the new study scanned the children’s brains with functional MRI (fMRI) as they looked at a screen in the scanner and took a memory test. fMRI captures blood-oxygen-level-dependent (BOLD) signals as a proxy for local neural activity—higher levels mean more active neurons.

The head needs to stay very still throughout the scans to avoid blurring. That’s not easily accomplished with babies and toddlers. Previous studies circumvented the problem by imaging their brains while sleeping, but the results couldn’t capture memory processes.

To keep the infants happy, engaged, and safe, parents brought favorite blankets and pacifiers, and younger infants were wrapped inside a comfortable vacuum pillow to reduce movement. A video system projected images onto the ceiling of the scanner within their line of sight.

As the kids looked at a bright kaleidoscope-like video, images of faces, scenes, and objects would flash for a few seconds. These included toys and landscapes, such as an alpine cabin with mountains in the background. Previous studies have found infants stare longer at objects or images they’ve seen before than at new ones, suggesting they remember previous encounters.

Throughout the sessions the team added projections showing a previously seen picture and a new one and monitored the infants’ eye movement using a video camera.
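The behavioral readout is simple enough to write down (our simplification of the preferential-looking logic; the durations are invented):

```python
def familiarity_preference(ms_old: float, ms_new: float) -> float:
    """Fraction of looking time spent on the previously seen image."""
    return ms_old / (ms_old + ms_new)

# Scores reliably above 0.5 suggest the infant remembers the old image.
print(familiarity_preference(ms_old=2600, ms_new=1900))   # ~0.58
```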

“The ingenuity of their experimental approach should not be understated,” wrote Adam Ramsaran and Paul Frankland at the Hospital for Sick Children in Toronto, Canada, who were not involved in the study.

BOLD Findings

The kids often squirmed during the sessions. Some weren’t interested in the pictures; others fell asleep in the scanner.

Still, the team managed to capture hippocampal BOLD signals averaging roughly eight minutes per participant and matched them to memory performance. On average, parts of the hippocampus ramped up activity for images the infants later remembered—that is, images they looked at longer during the test phases.

But not all infants performed the same. The younger cohort, under a year old, didn’t show the surge of BOLD signals suggesting memory encoding. They also showed no preference for previously seen images over new ones.

It seems babies start encoding memories around a year of age, even as their hippocampus is still developing.

The results are similar to those in baby rodents. The early years are chaotic, and the brain undergoes extensive rewiring, making it difficult to form lasting memories. Yet some supposedly lost memories encoded at a young age can be recovered later in life with reminder cues or by directly activating the set of neurons that originally encoded the memory.

That’s not to say infants can acquire rich recollections—stories including multiple people, places, and things—at a year. The study only tested brain signatures for individual components.

Future studies tracking the hippocampus might shed light on the minimal brain architecture needed to support vivid autobiographical memories. Examining other stages of memory could shine more light on infantile amnesia. For example, do infants also replay neural signals as they sleep to etch new experiences into long-term memory?

And maybe—just maybe—our earliest memories could one day be retrieved later in childhood or beyond.

The post Brain Scans of Infants Reveal the Moment We Start Making Memories appeared first on SingularityHub.

Category: Transhumanism

New Tech Bends Sound Through Space So It Reaches Only Your Ear in a Crowd

18 March, 2025 - 20:08

Audible enclaves are local pockets of sound no one else can hear—no headphones required.

What if you could listen to music or a podcast without headphones or earbuds and without disturbing anyone around you? Or have a private conversation in public without other people hearing you?

Newly published research from our team at Penn State introduces a way to create audible enclaves—localized pockets of sound that are isolated from their surroundings. In other words, we’ve developed a technology that could create sound exactly where it needs to be.

The ability to send sound that becomes audible only at a specific location could transform entertainment, communication, and spatial audio experiences.

What Is Sound?

Sound is a vibration that travels through air as a wave. These waves are created when an object moves back and forth, compressing and decompressing air molecules.

The frequency of these vibrations is what determines pitch. Low frequencies correspond to deep sounds, like a bass drum; high frequencies correspond to sharp sounds, like a whistle.

Sound is composed of particles moving in a continuous wave. Daniel A. Russell, CC BY-NC-ND

Controlling where sound goes is difficult because of a phenomenon called diffraction—the tendency of sound waves to spread out as they travel. This effect is particularly strong for low-frequency sounds because of their longer wavelengths, making it nearly impossible to keep sound confined to a specific area.

Certain audio technologies, such as parametric array loudspeakers, can create focused sound beams aimed in a specific direction. However, these technologies still emit sound that is audible along its entire path as it travels through space.

The Science of Audible Enclaves

We found a new way to send sound to one specific listener using self-bending ultrasound beams and a concept called nonlinear acoustics.

Ultrasound refers to sound waves with frequencies above the range of human hearing, or 20 kHz. These waves travel through the air like normal sound waves but are inaudible to people. Because ultrasound can penetrate many materials and interact with objects in unique ways, it’s widely used for medical imaging and many industrial applications.

In our work, we used ultrasound as a carrier for audible sound. It can transport sound through space silently—becoming audible only when desired. How did we do this?

Normally, sound waves combine linearly, meaning they just proportionally add up into a bigger wave. However, when sound waves are intense enough, they can interact nonlinearly, generating new frequencies that were not present before.

This is the key to our technique: We use two ultrasound beams at different frequencies that are completely silent on their own. But when they intersect in space, nonlinear effects cause them to generate a new sound wave at an audible frequency that would be heard only in that specific region.

Audible enclaves are created at the intersection of two ultrasound beams. Jiaxin Zhong et al./PNAS, CC BY-NC-ND

Crucially, we designed ultrasonic beams that can bend on their own. Normally, sound waves travel in straight lines unless something blocks or reflects them. However, by using acoustic metasurfaces—specialized materials that manipulate sound waves—we can shape ultrasound beams to bend as they travel. Similar to how an optical lens bends light, acoustic metasurfaces reshape the path of sound waves. By precisely controlling the phase of the ultrasound waves, we create curved sound paths that can navigate around obstacles and meet at a specific target location.

The key phenomenon at play is called difference frequency generation. When two ultrasonic beams of slightly different frequencies overlap—such as 40 kHz and 39.5 kHz—they create a new sound wave at the difference between their frequencies—in this case 0.5 kHz, or 500 Hz, which is well within the human hearing range. Sound can be heard only where the beams cross. Outside of that intersection, the ultrasound waves remain silent.

This means you can deliver audio to a specific location or person without disturbing other people as the sound travels.
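The mixing arithmetic is easy to demonstrate with a toy simulation, in which a simple squared term stands in for the true nonlinear acoustics of air:

```python
import numpy as np

fs = 400_000                              # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)             # 100 ms of signal
beam = np.cos(2 * np.pi * 40_000 * t) + np.cos(2 * np.pi * 39_500 * t)

mixed = beam ** 2                         # nonlinearity creates sum and difference tones

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
audible = (freqs > 100) & (freqs < 20_000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"strongest audible component: {peak:.0f} Hz")   # 500 Hz
```

The only audible component the squaring produces sits at exactly 500 hertz, the difference between the two ultrasound tones.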

Advancing Sound Control

The ability to create audio enclaves has many potential applications.

Audio enclaves could enable personalized audio in public spaces. For example, museums could provide different audio guides to visitors without headphones, and libraries could allow students to study with audio lessons without disturbing others.

In a car, passengers could listen to music without distracting the driver as they listen to navigation instructions. Offices and military settings could also benefit from localized speech zones for confidential conversations. Audio enclaves could also be adapted to cancel out noise in designated areas, creating quiet zones to improve focus in workplaces or reduce noise pollution in cities.

This isn’t something that’s going to be on the shelf in the immediate future. Challenges remain for our technology. Nonlinear distortion can affect sound quality. And power efficiency is another issue—converting ultrasound to audible sound requires high-intensity fields that can be energy intensive to generate.

Despite these hurdles, audio enclaves present a fundamental shift in sound control. By redefining how sound interacts with space, we open up new possibilities for immersive, efficient, and personalized audio experiences.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post New Tech Bends Sound Through Space So It Reaches Only Your Ear in a Crowd appeared first on SingularityHub.

Category: Transhumanism

A Massive AI Analysis Found Genes Related to Brain Aging—and Drugs to Slow It Down

18 March, 2025 - 01:19

Brain scans from nearly 39,000 people revealed genes and drugs to potentially slow aging.

When my grandad celebrated his 100th birthday with a bowl of noodles, his first comment was, “Nice, but this is store-bought.” He then schooled everyone on the art of making noodles from scratch, sounding decades younger than his actual age.

Most of us know people who are mentally sharper than their chronological age. In contrast, some folks seem far older. They’re easily confused, forget everyday routines, and have a hard time following conversations or remembering where they parked their car.

Why do some brains age faster, while others avoid senior moments even in the twilight years? Part of the answer may be in our genes. This month, a team from China’s Zhejiang University described an AI they’ve developed to hunt down genes related to brain aging and neurological disorders using brain scans from nearly 39,000 people.

They found seven genes, some of which are already in the crosshairs of scientists combating age-related cognitive decline. A search of clinical trials uncovered 28 existing drugs targeting those genes, including some as common as hydrocortisone, a drug often used for allergies and autoimmune diseases.

These drugs are already on the market, meaning they’ve been thoroughly vetted for safety. Repurposing existing drugs for brain aging could be a faster alternative to developing new ones, but they’ll have to be thoroughly tested to prove they actually bring cognitive improvements.

How Old Is My Brain?

The number of candles on your birthday cake doesn’t reflect the health of your brain. To gauge the latter—dubbed biological age—scientists have developed multiple aging clocks.

The Horvath Clock, for example, measures signatures of gene activity associated with aging and cognitive decline. Researchers have used others, such as GrimAge, to measure the effects of potential anti-aging therapies, such as caloric restriction, in clinical trials.

Scientists are still debating which clock is the most accurate for the brain. But most agree the brain age gap, or the difference between a person’s chronological age and brain age, is a useful marker. A larger gap in either direction means the brain is aging faster or slower than expected.
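In code, the gap is a one-liner (the standard definition the text describes):

```python
def brain_age_gap(predicted_brain_age: float, chronological_age: float) -> float:
    """Positive values mean the brain 'looks' older than its owner."""
    return predicted_brain_age - chronological_age

print(brain_age_gap(68.4, 62.0))   # +6.4 years: aging faster than expected
print(brain_age_gap(55.1, 62.0))   # -6.9 years: aging slower than expected
```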

Why one or the other might be true for people is still mysterious.

“There is a general consensus that the trajectories of brain aging differ substantially among individuals due to genetic factors, lifestyles, environmental factors, and chronic disease of the patient,” wrote the team. Finding genes related to the brain age gap could bring new drugs that prevent, slow down, or even reverse aging. But studies are lacking, they added.

A Brain-Wide Picture

How well our brain works relies on its intricate connections and structure. These can be captured with magnetic resonance imaging (MRI). But each person’s neural wiring is slightly different, so piecing together a picture of an “average” aging brain requires lots of brain scans.

Luckily, the UK Biobank has plenty.

Launched in 2006, the organization’s database includes health data from half a million participants. For this study, the team analyzed MRI scans from around 39,000 people between 45 and 83 years of age, with a roughly equal number of men and women. Most were cognitively healthy, but over 6,600 had a brain injury, Alzheimer’s disease, anxiety, depression, or other disorders.

They then pitted seven state-of-the-art AI models against each other to figure out which model delivered the most accurate brain age estimate. One, called 3D-ViT, stood out for its ability to detect differences in brain structure associated with the brain age gap.

Next, the team explored whether some brain regions contributed to the gap more than others. With a tool often used in computer vision called saliency maps, they found two brain regions that were especially important to the AI’s estimation of the brain age gap.
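Gradient-based saliency, the generic form of the technique named here, can be sketched in a few lines (the tiny regressor below is a stand-in; the study’s pipeline may differ):

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Flatten(),
    torch.nn.Linear(16 ** 3, 1),          # toy "brain age" readout on a 16^3 volume
)

volume = torch.randn(1, 1, 16, 16, 16, requires_grad=True)
predicted_age = model(volume).sum()
predicted_age.backward()                  # gradients flow back to the input voxels

saliency = volume.grad.abs()              # voxels whose change most moves the output
print("most influential voxel:", int(saliency.argmax()))
```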

One, the lentiform nucleus, is an earbud-like structure that sits deep inside the brain and is involved in movement, cognition, and emotion. The other is part of a neural highway that controls how different brain regions communicate—particularly those that run from deeper areas to the cortex, the outermost part of the brain responsible for reasoning and flexible thinking. These mental capabilities tend to slowly erode during aging.

Unsurprisingly, a larger gap also correlated with Alzheimer’s disease. But stroke, epilepsy, insomnia, smoking, and other lifestyle factors didn’t make a significant difference—at least for this population.

Genes to Drugs

Accelerated brain aging could be partly due to genetics. Finding which genes are involved could reveal new targets for therapies to combat faster cognitive decline. So, the team extracted genetic data from the UK Biobank and ran a genome-wide scan to fish out these genes.
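At its core, a genome-wide scan tests each genetic variant for association with the trait, here the brain age gap, and keeps only hits that clear a stringent significance threshold. A bare-bones sketch of that loop on toy data (real pipelines also adjust for covariates such as sex, ancestry, and scanner site):

```python
import numpy as np
from scipy import stats

GENOME_WIDE_SIGNIFICANCE = 5e-8  # the conventional GWAS threshold

def scan_snps(genotypes: np.ndarray, gaps: np.ndarray):
    """Regress the brain age gap on each SNP's allele count (0, 1, or 2)."""
    hits = []
    for snp in range(genotypes.shape[1]):
        result = stats.linregress(genotypes[:, snp], gaps)
        if result.pvalue < GENOME_WIDE_SIGNIFICANCE:
            hits.append((snp, result.slope, result.pvalue))
    return hits

# Toy data: 1,000 people x 2,000 variants of pure noise (expect no hits)
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(1000, 2000)).astype(float)
gaps = rng.normal(loc=0.0, scale=3.0, size=1000)
print(scan_snps(genotypes, gaps))  # almost certainly []
```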

Some were already on scientists’ radar. One helps maintain bone and heart health during aging. Another regulates the brain’s electrical signals and wires up neural connections.

The screen also revealed many new genes involved in the brain age gap. Some of these kill infected or cancerous cells. Others stabilize neuron signaling and structure or battle chronic inflammation—both of which can go awry as the brain ages. Most of the genes could be managed with a pill or injection, making it easier to reuse existing drugs or develop new ones.

To hunt down potential drug candidates, the team turned to an open-source database that charts how drugs interact with genes. They found 466 drugs, either approved or in clinical development, that target roughly 45 percent of the new genes.
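Conceptually, that lookup is a join between the scan's gene hits and a drug-gene interaction table. A small pandas sketch (the genes, drugs, and table below are illustrative placeholders, not the study's actual database):

```python
import pandas as pd

# Illustrative stand-in for an export from a drug-gene interaction database;
# these genes, drugs, and statuses are examples, not the study's data.
interactions = pd.DataFrame({
    "gene":   ["NR3C1", "ESR1", "SRC", "HTR2A"],
    "drug":   ["hydrocortisone", "estradiol", "dasatinib", "trazodone"],
    "status": ["approved", "approved", "approved", "approved"],
})

# Hypothetical gene hits from the genome-wide scan
brain_aging_genes = {"ESR1", "SRC", "MAPT"}

candidates = interactions[interactions["gene"].isin(brain_aging_genes)]
print(candidates)  # drugs already targeting the flagged genes
```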

Some are already being tested for their ability to slow cognitive decline. Among these are hydrocortisone—which is mainly used to treat autoimmune disorders, asthma, and rashes—and resveratrol, a molecule found in red wine. They also found 28 drugs that “hold substantial promise for brain aging,” wrote the team, including the hormones estradiol and testosterone. Dasatinib, a senolytic drug that kills off “zombie cells” during aging, also made the list.

The work builds on prior attempts to decipher connections between genes and the brain age gap. A 2019 study used the UK Biobank to pinpoint genes related to neurological disorders that accelerate brain aging. Here, the team connected genes to potential new or existing drugs to slow brain aging.

“Our study provides insights into the genetic basis of brain aging, potentially facilitating drug development for brain aging to extend the health span,” wrote the team.

The post A Massive AI Analysis Found Genes Related to Brain Aging—and Drugs to Slow It Down appeared first on SingularityHub.

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through March 15)

March 15, 2025 - 16:00
Future

Powerful AI Is Coming. We’re Not Ready.
Kevin Roose | The New York Times

“I believe that the right time to start preparing for AGI is now. This may all sound crazy. But I didn’t arrive at these views as a starry-eyed futurist, an investor hyping my AI portfolio or a guy who took too many magic mushrooms and watched ‘Terminator 2.’ I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful AI systems, the investors funding it and the researchers studying its effects.”

Future

AGI Is Suddenly a Dinner Table Topic
James O’Donnell | MIT Technology Review

“The concept of artificial general intelligence—an ultra-powerful AI system we don’t have yet—can be thought of as a balloon, repeatedly inflated with hype during peaks of optimism (or fear) about its potential impact and then deflated as reality fails to meet expectations. This week, lots of news went into that AGI balloon. I’m going to tell you what it means (and probably stretch my analogy a little too far along the way).”

Robotics

Gemini Robotics Uses Google’s Top Language Model to Make Robots More Useful
Scott J. Mulligan | MIT Technology Review

“Google DeepMind has released a new model, Gemini Robotics, that combines its best large language model with robotics. Plugging in the LLM seems to give robots the ability to be more dexterous, work from natural-language commands, and generalize across tasks. All three are things that robots have struggled to do until now.”

Biotechnology

Covid Vaccines Have Paved the Way for Cancer Vaccines
João Medeiros | Wired

“Going from mRNA Covid vaccines to mRNA cancer vaccines is straightforward: same fridges, same protocol, same drug, just a different patient. In the current trials, we do a biopsy of the patient, sequence the tissue, send it to the pharmaceutical company, and they design a personalized vaccine that’s bespoke to that patient’s cancer. That vaccine is not suitable for anyone else. It’s like science fiction.”

Artificial Intelligence

AI Search Engines Give Incorrect Answers at an Alarming 60% Rate, Study Says
Benj Edwards | Ars Technica

“A new study from Columbia Journalism Review’s Tow Center for Digital Journalism finds serious accuracy issues with generative AI models used for news searches. The research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news content.”

Tech

AI Coding Assistant Refuses to Write Code, Tells User to Learn Programming Instead
Benj Edwards | Ars Technica

“According to a bug report on Cursor’s official forum, after producing approximately 750 to 800 lines of code (what the user calls ‘locs’), the AI assistant halted work and delivered a refusal message: ‘I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.'”

Energy

Exclusive: General Fusion Fires Up Its Newest Steampunk Fusion Reactor
Tim De Chant | TechCrunch

“General Fusion announced on Tuesday that it had successfully created plasma, a superheated fourth state of matter required for fusion, inside a prototype reactor. The milestone marks the beginning of a 93-week quest to prove that the outfit’s steampunk approach to fusion power remains a viable contender.”

Biotechnology

This Annual Shot Might Protect Against HIV Infections
Jessica Hamzelou | MIT Technology Review

“I don’t normally get too excited about phase I trials, which usually involve just a handful of volunteers and typically don’t tell us much about whether a drug is likely to work. But this trial seems to be different. Together, the lenacapavir trials could bring us a significant step closer to ending the HIV epidemic.”

Computing

Cerebras Just Announced 6 New AI Datacenters That Process 40M Tokens Per Second—and It Could Be Bad News for Nvidia
Michael Nuñez | VentureBeat

“Cerebras Systems, an AI hardware startup that has been steadily challenging Nvidia’s dominance in the artificial intelligence market, announced Tuesday a significant expansion of its data center footprint and two major enterprise partnerships that position the company to become the leading provider of high-speed AI inference services.”

Robotics

Waabi Says Its Virtual Robotrucks Are Realistic Enough to Prove the Real Ones Are Safe
Will Douglas Heaven | MIT Technology Review

“The Canadian robotruck startup Waabi says its super-realistic virtual simulation is now accurate enough to prove the safety of its driverless big rigs without having to run them for miles on real roads.  The company uses a digital twin of its real-world robotruck, loaded up with real sensor data, and measures how the twin’s performance compares with that of real trucks on real roads. Waabi says they now match almost exactly.”

Future

Lab-Grown Food Could Be Sold in UK in Two Years
Pallab Ghosh | BBC News

“Meat, dairy and sugar grown in a lab could be on sale in the UK for human consumption for the first time within two years, sooner than expected. The Food Standards Agency (FSA) is looking at how it can speed up the approval process for lab-grown foods. Such products are grown from cells in small chemical plants. UK firms have led the way in the field scientifically but feel they have been held back by the current regulations.”

Energy

For Climate and Livelihoods, Africa Bets Big on Solar Mini-Grids
Victoria Uwemedimo and Katarina Zimmer | Knowable Magazine

“In many African countries, solar power now stands to offer much more than environmental benefits. About 600 million Africans lack reliable access to electricity; in Nigeria specifically, almost half of the 230 million people have no access to electricity grids. Today, solar has become cheap and versatile enough to help bring affordable, reliable power to millions—creating a win-win for lives and livelihoods as well as the climate.”

Artificial Intelligence

Anthropic Researchers Forced Claude to Become Deceptive—What They Discovered Could Save Us From Rogue AI
Michael Nuñez | VentureBeat

“The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren’t just appearing to follow human instructions while secretly pursuing other goals.”

Science

The Road Map to Alien Life Passes Through the ‘Cosmic Shoreline’
Elise Cutts | Quanta Magazine

“Astronomers are ready to search for the fingerprints of life in faraway planetary atmospheres. But first, they need to know where to look — and that means figuring out which planets are likely to have atmospheres in the first place.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 15) appeared first on SingularityHub.

Category: Transhumanism