Transhumanism

New Form of Dark Matter Could Solve Decades-Old Milky Way Mystery

Singularity HUB - 22 April 2025 - 19:55

Dark matter with less mass could be driving a mysterious glow emanating from the Milky Way’s core.

Astronomers have long been puzzled by two strange phenomena at the heart of our galaxy. First, the gas in the central molecular zone (CMZ), a dense and chaotic region near the Milky Way’s core, appears to be ionized (meaning it is electrically charged because it has lost electrons) at a surprisingly high rate.

Second, telescopes have detected a mysterious glow of gamma rays with an energy of 511 kilo-electronvolts (keV), which corresponds to the rest-mass energy of an electron.

Interestingly, such gamma rays are produced when an electron collides with its antimatter counterpart, the positron, and the two annihilate in a flash of light. (All fundamental charged particles have antimatter versions of themselves that are nearly identical but carry the opposite charge.)
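
As a quick check (our arithmetic, not from the article), 511 keV is simply the electron's rest-mass energy:

E = m_e c^2 \approx (9.11\times 10^{-31}\ \mathrm{kg})\,(3.00\times 10^{8}\ \mathrm{m/s})^{2} \approx 8.2\times 10^{-14}\ \mathrm{J} \approx 511\ \mathrm{keV}

When an electron and a positron annihilate essentially at rest, conservation of energy and momentum requires two photons emitted back to back, each carrying this 511 keV.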

The causes of both effects have remained unclear, despite decades of observation. But in a new study, published in Physical Review Letters, my colleagues and I show that both could be linked to one of the most elusive ingredients in the universe: dark matter. In particular, we propose that a new form of dark matter, less massive than the types astronomers typically look for, could be the culprit.

Hidden Process

The CMZ spans almost 700 light years and contains some of the densest molecular gas in the galaxy. Over the years, scientists have found that this region is unusually ionized, meaning the hydrogen molecules there are being split into charged particles (electrons and nuclei) at a much faster rate than expected.

This could be the result of sources such as cosmic rays and starlight that bombard the gas. However, these alone don’t seem to be able to account for the observed levels.

The other mystery, the 511-keV emission, was first observed in the 1970s, but still has no clearly identified source. Several candidates have been proposed, including supernovas, massive stars, black holes, and neutron stars. However, none fully explain the pattern or intensity of the emission.

We asked a simple question: Could both phenomena be caused by the same hidden process?

Dark matter makes up around 85 percent of the matter in the universe, but it does not emit or absorb light. While its gravitational effects are clear, scientists do not yet know what it is made of.

One possibility, often overlooked, is that dark matter particles could be very light, with masses of just a few million electronvolts, far lighter than a proton, and still play a cosmic role. These light dark matter candidates are generally called sub-GeV (below a giga-electronvolt) dark matter particles.

Such dark matter particles may interact with their antiparticles. In our work, we studied what would happen if these light dark matter particles come in contact with their own antiparticles in the galactic center and annihilate each other, producing electrons and positrons.

In the dense gas of the CMZ, these low-energy particles would quickly lose energy and ionize the surrounding hydrogen molecules very efficiently by knocking off their electrons. Because the region is so dense, the particles would not travel far. Instead, they would deposit most of their energy locally, which matches the observed ionization profile quite well.
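
Schematically (this is a back-of-the-envelope scaling, not the paper's detailed calculation, and it ignores order-one factors that depend on whether dark matter is its own antiparticle), the annihilation rate per unit volume and the resulting ionization rate per hydrogen molecule behave as

\Gamma_{\mathrm{ann}} \sim \langle \sigma v \rangle \left( \frac{\rho_{\mathrm{DM}}}{m_{\mathrm{DM}}} \right)^{2},
\qquad
\zeta_{\mathrm{H_2}} \sim \frac{2\, m_{\mathrm{DM}} c^{2}\, \Gamma_{\mathrm{ann}}}{W_{\mathrm{ion}}\, n_{\mathrm{H_2}}}

where \langle \sigma v \rangle is the annihilation cross-section, \rho_{\mathrm{DM}} the local dark matter density, n_{\mathrm{H_2}} the gas density, and W_{\mathrm{ion}} the average energy spent per ionization (roughly 30-40 eV per ion pair, a standard value assumed here rather than quoted from the study). Because the rate grows as 1/m_{\mathrm{DM}} for a fixed density and cross-section, lighter particles are far more efficient ionizers, which is why sub-GeV candidates can do what heavier particles cannot.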

Using detailed simulations, we found that this simple process, dark matter particles annihilating into electrons and positrons, can naturally explain the ionization rates observed in the CMZ.

Even better, the required properties of the dark matter, such as its mass and interaction strength, do not conflict with any known constraints from the early universe. Dark matter of this kind appears to be a serious option.

The Positron Puzzle

If dark matter is creating positrons in the CMZ, those particles will gradually slow down and eventually annihilate with electrons in the environment, producing gamma rays at exactly 511 keV. This would provide a direct link between the ionization and the mysterious glow.

We found that while dark matter can explain the ionization, it may also be able to reproduce some of the 511-keV radiation. This striking finding suggests that the two signals may originate from the same source: light dark matter.

The exact brightness of the 511-keV line depends on several factors, including how efficiently positrons form bound states with electrons and where exactly they annihilate. These details are still uncertain.

A New Way to Test the Invisible

Regardless of whether the 511-keV emission and CMZ ionization share a common source, the ionization rate in the CMZ is emerging as a valuable new observation to study dark matter. In particular, it provides a way to test models involving light dark matter particles, which are difficult to detect using traditional laboratory experiments.

More observations of the Milky Way could help test theories of dark matter. ESO/Y. Beletsky, CC BY-SA

In our study, we showed that the predicted ionization profile from dark matter is remarkably flat across the CMZ. This is important, because the observed ionization is indeed spread relatively evenly.

Point sources such as the black hole at the center of the galaxy or cosmic ray sources like supernovas (exploding stars) cannot easily explain this. But a smoothly distributed dark matter halo can.

Our findings suggest that the center of the Milky Way may offer new clues about the fundamental nature of dark matter.

Future telescopes with better resolution will be able to provide more information on the spatial distribution and relationships between the 511-keV line and the CMZ ionization rate. Meanwhile, continued observations of the CMZ may help rule out, or strengthen, the dark matter explanation.

Either way, these strange signals from the heart of the galaxy remind us that the universe is still full of surprises. Sometimes, looking inward, to the dynamic, glowing center of our own galaxy, reveals the most unexpected hints of what lies beyond.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Each of the Brain’s Neurons Is Like Multiple Computers Running in Parallel

Singularity HUB - 21 April 2025 - 23:50

The cables connecting neurons act like independent ‘mini-computers’ to store different types of information.

The brain’s rules seem simple: Fire together, wire together.

When groups of neurons activate, they become interconnected. This networking is how we learn, reason, form memories, and adapt to our world, and it’s made possible by synapses, tiny junctions dotting a neuron’s branches that receive and transmit input from other neurons.

Neurons have often been called the computational units of the brain. But more recent studies suggest that’s not the case. Their input cables, called dendrites, seem to run their own computations, and these alter the way neurons—and their associated networks—function.

A new study in Science sheds light on how these “mini-computers” work. A team from the University of California, San Diego watched as synapses lit up in a mouse’s brain while it learned a new motor skill. Depending on their location on a neuron’s dendrites, the synapses followed different rules. Some were keen to make local connections. Others formed longer circuits.

“Our research provides a clearer understanding of how synapses are being modified during learning,” said study author William “Jake” Wright in a press release.

The work offers a glimpse into how each neuron functions as it encodes memories. “The constant acquisition, storage, and retrieval of memories are among the most essential and fascinating features of the brain,” wrote Ayelén I. Groisman and Johannes J. Letzkus at the University of Freiburg in Germany, who were not involved in the study.

The results could provide insight into “offline learning,” such as when the brain etches fleeting memories into more permanent ones during sleep, a process we still don’t fully understand.

They could also inspire new AI methods. Most current brain-based algorithms treat each artificial neuron as a single entity with synapses following the same set of rules. Tweaking these rules could drive more sophisticated computation in mechanical brains.

A Neural Forest

Flip open a neuroscience textbook, and you’ll see a drawing of a neuron. The receiving end, the dendrite, looks like the dense branches of a tree. These branches funnel electrical signals into the body of the cell. Another branch relays outgoing messages to neighboring cells.

But neurons come in multiple shapes and sizes. Some stubby ones create local circuits using very short branches. Others, for example pyramidal cells, have long, sinewy dendrites that reach toward the top of the brain like broccolini. At the other end, they sprout bushes to gather input from deeper brain regions.

Dotted along all these branches are little hubs called synapses. Scientists have long known that synapses rewire during learning: they fine-tune their molecular docks so they become more or less willing to network with neighboring synapses.

But how do synapses know what adjustments best contribute to the neuron’s overall activity? Most only capture local information, yet somehow, they unite to tweak the cell’s output. “When people talk about synaptic plasticity, it’s typically regarded as uniform within the brain,” said Wright. But learning initially occurs inside single synapses, each with its own personality.

Scientists have sought answers to this question—known as the credit assignment problem—by watching a handful of neurons in a dish or running simulations. But the neurons in these studies aren’t part of the brain-wide networks we use to learn, encode, and store memories, so they can’t capture how individual synapses contribute.

Double-Team

In the new study, researchers added genes to mice so they could monitor single synapses in the brain region involved in movement. They then trained the mice to press a lever for a watery treat.

Over two weeks, the team captured activity from pyramidal cells—the ones with long branches on one end and bushes on the other. Rather than only observing each neuron’s activity as a whole, the team also watched individual synapses along each dendrite.

They didn’t behave the same way. Synapses on the longer branch closer to the top of the brain—known as the apical dendrite—rapidly synced with neighbors. Their connections strengthened and formed a tighter network.

“This indicates that learning-related plasticity is governed by local interactions between nearby synaptic inputs in apical dendrites,” wrote Groisman and Letzkus.

By contrast, synapses on the bush-like basal dendrites mostly strengthened or weakened their connections in step with the neuron’s overall activity.

A neuron’s cell body—from which dendrites sprout—is also a computing machine. In another experiment, blocking the cell body’s activity slashed signals from basal dendrites but not from apical dendrites. In other words, the neuron’s synapses functioned differently depending on where they were. Some followed the cell’s global activity; others cared more about local issues.
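
To make the distinction concrete, here is a deliberately crude toy model (our illustration, not the study's analysis): one set of synapses updates from purely local co-activity on its branch, while the other updates only when the whole cell fires. All numbers and rules below are invented for the sketch.

import numpy as np

# Toy model: one neuron with "apical" synapses that follow a local rule and
# "basal" synapses that follow a global, output-gated rule. Purely illustrative.
rng = np.random.default_rng(0)
n_apical, n_basal, steps, lr = 20, 20, 1000, 0.01
w_apical = np.full(n_apical, 0.5)
w_basal = np.full(n_basal, 0.5)

for _ in range(steps):
    a_in = rng.random(n_apical) < 0.2   # presynaptic spikes on the apical branch
    b_in = rng.random(n_basal) < 0.2    # presynaptic spikes on the basal branch
    soma_fires = (w_apical @ a_in + w_basal @ b_in) > 3.0  # crude somatic threshold

    # Local rule: an active apical synapse strengthens when many of its
    # neighbors on the same branch are active at the same time.
    neighborhood_activity = a_in.mean()
    w_apical += lr * a_in * (neighborhood_activity - 0.2)

    # Global rule: basal synapses strengthen only when the soma actually fires,
    # and drift down slightly otherwise.
    w_basal += lr * b_in * (1.0 if soma_fires else -0.1)

    np.clip(w_apical, 0.0, 1.0, out=w_apical)
    np.clip(w_basal, 0.0, 1.0, out=w_basal)

print("apical weights:", np.round(w_apical, 2))
print("basal weights: ", np.round(w_basal, 2))

In the real data, the analogue of the soma_fires gate is the neuron's overall activity, and blocking it removed the basal, but not the apical, plasticity.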

“This discovery fundamentally changes the way we understand how the brain solves the credit assignment problem, with the concept that individual neurons perform distinct computations in parallel in different subcellular compartments,” study senior author Takaki Komiyama said in the press release.

The work joins other efforts showcasing the brain’s complexity. Far from a unit of computation, a neuron’s branches can flexibly employ rules to encode memories.

This raises yet more questions.

The two dendrites—apical and basal—receive different types of information from different areas of the brain. The study’s techniques could help scientists hunt down and tease apart these differing network connections and, in turn, learn more about how we form new memories. Also mysterious are apical dendrites’ rogue synapses that are unaffected by signals from the cell body.

One theory suggests that independence from central control could allow “each dendritic branch to operate as an independent memory unit, greatly increasing the information storage capacity of single neurons,” wrote Groisman and Letzkus. These synapses could also be critical for “offline learning,” such as during sleep, when we build long-lasting memories.

The team is now studying how neurons use these different rules, and whether they change in Alzheimer’s, autism, addiction, or post-traumatic disorders. The work could help us better understand what “goes wrong in these different diseases,” Wright said.


This Week’s Awesome Tech Stories From Around the Web (Through April 19)

Singularity HUB - 19 April 2025 - 16:00
Artificial Intelligence

Google’s New AI Is Trying to Talk to Dolphins—Seriously
Isaac Schultz | Gizmodo

“The model is DolphinGemma, a cutting-edge LLM trained to recognize, predict, and eventually generate dolphin vocalizations, in an effort to not only crack the code on how the cetaceans communicate with each other—but also how we might be able to communicate with them ourselves.”

Artificial Intelligence

Microsoft Researchers Say They’ve Developed a Hyper-Efficient AI Model That Can Run on CPUs
Kyle Wiggers | TechCrunch

“Microsoft researchers claim they’ve developed the largest-scale 1-bit AI model, also known as a ‘bitnet,’ to date. Called BitNet b1.58 2B4T, it’s openly available under an MIT license and can run on CPUs, including Apple’s M2.”

Artificial Intelligence

To Make Language Models Work Better, Researchers Sidestep Language
Anil Ananthaswamy | Quanta Magazine

“We insist that large language models repeatedly translate their mathematical processes into words. There may be a better way. …In [two recent papers], researchers introduce deep neural networks that allow language models to continue thinking in mathematical spaces before producing any text. While still fairly basic, these models are more efficient and reason better than their standard alternatives.”

Future

Airbus Is Working on a Superconducting Electric Aircraft
Glenn Zorpette | IEEE Spectrum

“Glenn Llewellyn, Airbus’s vice president in charge of the ZEROe program, described the project in detail, indicating an effort of breathtaking technological ambition. The envisioned aircraft would seat at least 100 people and have a range of 1,000 nautical miles (1,850 kilometers). It would be powered by four fuel-cell ‘engines’ (two on each wing), each with a power output of 2 megawatts.”

Space

Skepticism Greets Claims of a Possible Biosignature on a Distant World
John Timmer | Ars Technica

“So why are many astronomers unconvinced? To be compelling, a biosignature from an exoplanet has to clear several hurdles that can be broken down into three key questions: Is the planet what we think it is? Is the signal real? Are there other ways to produce that signal? At present, none of those questions can be answered with a definitive yes.”

Energy

Scientists Made a Stretchable Lithium Battery You Can Bend, Cut, or Stab
Jacek Krywko | Ars Technica

“It’s hard to use [standard lithium-ion batteries] in soft robots or wearables, so a team of scientists at the University of California, Berkeley built a flexible, non-toxic, jelly-like battery that could survive bending, twisting, and even cutting with a razor.”

Energy

These Four Charts Sum Up the State of AI and Energy
Casey Crownhart | MIT Technology Review

“Sure, you’ve probably read that AI will drive an increase in electricity demand. But how that fits into the context of the current and future grid can feel less clear from the headlines. …A new report from the International Energy Agency digs into the details of energy and AI, and I think it’s worth looking at some of the data to help clear things up.”

Future

What ‘Ex Machina’ Got Right (and Wrong) About AI, 10 Years Later
Joe Berkowitz | Fast Company

“‘One day AI’s are gonna look back on us the way we look at fossils and skeletons in the plains of Africa,’ Bateman says at one point. ‘An upright ape living in dust, with crude language and tools, all set for extinction.’ …Has humanity officially entered its extinction era in the decade since Ex Machina won a Best Visual Effects Oscar and a Best Screenplay nomination for Garland?”

Space

Looking at the Universe’s Dark Ages From the Far Side of the Moon
Paul Sutter | Ars Technica

“It will take humanity several generations, if not more, to develop the capabilities needed to finally build far-side observatories. But it will be worth it, as those facilities will open up the unseen Universe for our hungry eyes, allowing us to pierce the ancient fog of our Universe’s past, revealing the machinations of hydrogen in the dark ages, the birth of the first stars, and the emergence of the first galaxies.”

Artificial Intelligence

Researchers Claim Breakthrough in Fight Against AI’s Frustrating Security Hole
Benj Edwards | Ars Technica

“In the AI world, a vulnerability called a ‘prompt injection’ has haunted developers since chatbots went mainstream in 2022. Despite numerous attempts to solve this fundamental vulnerability—the digital equivalent of whispering secret instructions to override a system’s intended behavior—no one has found a reliable solution. Until now, perhaps.”

Energy

Cosmic Robotics’ Robots Could Speed Up Solar Panel Deployments
Tim De Chant | TechCrunch

“Cosmic’s robot can place a panel within a few millimeters of where it needs to be. Workers spot the robot, ensuring everything looks right before fastening the panel to the rack. The goal is not just to lighten the load, but to speed things along, too. Emerick said that Cosmic’s robot could allow a standard crew to be split in two, doubling the amount of solar panels that can be installed in one day.”

Biotechnology

Jurassic Patent: How Colossal Biosciences Is Attempting to Own the ‘Woolly Mammoth’
Antonio Regalado | MIT Technology Review

“Colossal Biosciences not only wants to bring back the woolly mammoth—it wants to patent it, too. MIT Technology Review has learned the Texas startup is seeking a patent that would give it exclusive legal rights to create and sell gene-edited elephants containing ancient mammoth DNA.”


This Living Building Material Forms Bone-Like Structures and Could One Day Repair Itself

Singularity HUB - 18 April 2025 - 21:09

Scientists combined fungus and bacteria into a “living material” that stays alive for up to a month.

Fungi are master engineers capable of building vast networks underground. Now, researchers have harnessed their capabilities to create a living building material that could be a sustainable alternative to cement and one day even repair itself.

Nature has developed some impressive building materials that can often go toe-to-toe with the best human-made ones. Wood, coral, and bone have excellent strength-to-weight ratios, and they form at room temperature from readily available supplies.

It’s no wonder engineers have long dreamed of harnessing these powers in human-made structures. Now, scientists have combined fungus and bacteria to create a living material that stays alive for up to a month and can form bone-like structures. The researchers say this approach could one day be used to create structural components that repair themselves.

“We are excited about our results and look forward to engineering more complex and larger structures,” Chelsea Heveran at Montana State University, who led the study, told New Scientist. “When viability is sufficiently high, we could start really imparting lasting biological characteristics to the material that we care about, such as self-healing, sensing, or environmental remediation.”

The new material relies on a process called biomineralization. In this process, cells turn calcium in their environment into calcium carbonate deposits that harden underlying tissues or structures, as in the formation of bone or coral. But certain microbes can also produce calcium carbonate. Engineers have used the process to create “biocement” to seal cracks in oil-and-gas wells or produce masonry.

However, the microbes typically only live for a few days, leaving the final materials inert. Increasingly, scientists are working to create “engineered living materials” where the cells remain viable. These materials could repair themselves, photosynthesize, or sense their environment.

The Montana State researchers created their new material by combining the structural engineering capabilities of fungus (Neurospora crassa) with the biomineralization capabilities of bacteria (Sporosarcina pasteurii). They described the work in a recent paper in Cell Reports Physical Science.

First, they coaxed the fungus’s mycelium—the network of root-like filaments that make up the bulk of most fungi—to grow into a mesh-like scaffold. They then added the bacteria to these scaffolds and placed them in a calcium-rich growth formula, which the microbes converted into calcium carbonate in just 24 hours.
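
The article doesn't spell out the chemistry, but Sporosarcina pasteurii is best known for urease-driven biomineralization, so assuming the growth formula supplied urea along with calcium (an assumption on our part), the reactions would run roughly as follows:

\mathrm{CO(NH_2)_2 + 2\,H_2O \xrightarrow{\ \text{urease}\ } 2\,NH_4^{+} + CO_3^{2-}}

\mathrm{Ca^{2+} + CO_3^{2-} \rightarrow CaCO_3\,(s)}

The ammonium raises the local pH, pushing the carbonate equilibrium toward precipitation, and the resulting calcium carbonate cements the fungal scaffold.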

They found the material’s microbes remained viable for up to four weeks after removal from the formula, when kept at 86 degrees Fahrenheit. The researchers didn’t test whether the material could repair itself, but they say keeping cells alive longer is a crucial first step toward this goal.

The team also created beam-like scaffolds that mimic the structure of cortical bone, the dense outer layer that gives bone its strength, and then effectively mineralized them. Controlling the internal shape of the scaffolds like this could significantly broaden the types of structures and uses these materials might target.

One limitation is that the researchers couldn’t culture the two species together. They had to kill the fungus after it had grown the scaffold before adding the bacteria. This means the material is only partially living, which could limit what it can do down the line.

But the work opens new possibilities for the growth of high-performance building materials that are both more sustainable and include smart features like self-repair or power generation.


Parkinson’s Patients Say Their Symptoms Eased After Receiving Millions of New Brain Cells

Singularity HUB - 17 April 2025 - 21:18

Two studies found stem-cell therapies were safe and reduced symptoms, but bigger trials are still needed.

Grabbing a coffee cup seems easy. But you need to be able to move your hand, stretch it out, and keep it steady.

These movements are difficult for people with Parkinson’s disease. The disorder eats away at brain cells—called dopamine neurons—that control movement and emotion. Symptoms begin with tremors. Then muscles lock up. Eventually, the disease makes walking and sleeping difficult. Thinking gets harder, and as neurons die, people lose their concentration and memory.

Medications can keep some symptoms at bay, but eventually, their effects wear off. For nearly half a century, scientists have been exploring an alternative solution: Replacing dying dopamine neurons with new ones.

This month, two studies of nearly two dozen people with Parkinson’s showed the strategy is safe. A single transplant boosted dopamine levels for 18 months without notable side effects. Patients had fewer motor symptoms even when they stopped taking their regular medications.

The work stands out because instead of being tailored to each patient, the cells were ready-made. The teams grew new dopamine neurons from donors in the lab. These cells can multiply easily in petri dishes, forming a large supply of replacement cells for patients.

Malin Parmar at Lund University, who was not involved in the study, told Nature the results are “a big leap in the field.”

A Deteriorating Brain

Parkinson’s is the world’s second most common neurodegenerative disease, with up to 90,000 new cases a year in the US. Michael J. Fox, who played Marty McFly in Back to the Future and launched a foundation to find a Parkinson’s cure, is perhaps the most famous person living with the disease.

In Parkinson’s, neurons in the middle of the brain gradually die. Called the substantia nigra, the region is intricately connected with surrounding areas and is critical for movement and emotions. Although the entire area eventually deteriorates, neurons that pump out dopamine—a chemical that fine-tunes neural networks and functions—are first to go. This means the brain gradually loses dopamine as the disease progresses.

There are treatments but no cures.

One common medication, Levodopa, tackles symptoms. Neurons slurp up the drug and transform it into dopamine. But as brain cells gradually die, the medication becomes less effective. Levodopa also has side effects. Because midbrain wiring influences both addictive behaviors and motor control, flooding it with dopamine can change how people act, like increasing the risk of gambling addiction and other obsessive behaviors. Long-term use can also trigger random movements of the face, arms, and legs—a symptom called dyskinesia.

Brain implants that bridge broken connections in the midbrain are another treatment. Deep brain stimulation, for example, mimics natural brain signals to ease motor symptoms. Some implants are already approved for use, but they require surgery and monitoring and aren’t widely accessible.

Rather than patching a broken circuit with a temporary fix, what if we could replace broken dopamine neurons with fresh ones?

Stem-Cell Marathon

Stem cells offer a solution. These special cells can grow into any other type of cell under the right conditions, making them the perfect replacement for dying neurons.

Back in the 1980s, one team transplanted brain tissue rich in dopamine neurons into people with Parkinson’s. These patients experienced a boost of dopamine and improved motor control for years after the surgery. But the source was highly controversial: fetal brain tissue.

Although a “first proof-of-concept for cell transplantation therapy,” the trial raised “ethical concerns,” according to Hideyuki Okano at the Keio University Regenerative Medicine Research Center in Japan, who was not involved in the new studies.

As an alternative, scientists have learned to create stem cells in the lab. One method produces stem cell lines that can grow almost forever under the right conditions. In another, scientists chemically transform adult cells, often taken from the skin, into a stem-cell-like state. These are called induced pluripotent stem cells (iPSCs). Five years ago, a team converted iPSCs into dopamine neurons and transplanted them into a patient, improving symptoms for up to two years.

Getting enough of the cells is difficult. Fetal brains are hard to come by and ethically problematic. And making iPSCs for each patient is time-consuming, potentially limiting widespread adoption.

Off-the-Shelf Treatment

The new studies took a different approach: They gathered two types of widely available stem cells, turned them into young dopamine neurons, and implanted them into the brain.

In one trial, researchers injected cells from a human embryonic-stem-cell line into the midbrains of 12 middle-aged people with Parkinson’s. Once a line is established, these lab-grown cells can reproduce indefinitely, essentially making them an unlimited resource.

Participants received nearly three million cells spread across 18 areas in the midbrain. Some 300,000 of these—roughly the number of dopamine cells that naturally inhabit the region—survived transplantation. The patients took immunosuppressant drugs for a year to prevent rejection.

Follow-up brain scans found higher levels of dopamine, even after patients stopped medication 18 months later. No one showed signs of cancer—a serious risk associated with stem-cell therapy—wrote Okano. Symptoms improved 50 percent. Pain went down. And patients reported improved sleep, appetite, and daily movement.

In a second study, scientists created an iPSC cell line from a donor’s skin cells and coaxed them into fresh dopamine neurons. Transplanted into seven Parkinson’s patients, the cells were shown to be safe and in working order. They pumped out dopamine and eased motor symptoms for over two years.

These studies stand out because they used donor cells, as opposed to cells tailored to each patient. “The results are encouraging because they show that the use of allogeneic (non-self) transplants for the treatment of Parkinson’s disease is likely to be safe,” wrote Okano.

Long Road Ahead

Though promising, both studies have limitations, especially the large number of cells involved. It’s possible to grow the cells in a normal lab setting, but quality control and other special measures are crucial. Scientists are still debating if off-the-shelf cell therapies—which require immunosuppressants—are better than personalized therapies.

The new approach also needs to undergo larger trials. Both studies were open label, meaning participants knew they were being treated, potentially triggering placebo effects. Still, the therapies are moving forward. Both teams are working with biotechnology firms to test them in larger groups.

“Transplanting dopamine-releasing neurons into the brain is a promising regenerative therapy for Parkinson’s disease,” wrote Okano. But “more evidence is needed to prove its effectiveness.”


Mystery Objects From Other Stars Are Visiting Our Solar System. These Missions Will Study Them Up Close

Singularity HUB - 15 April 2025 - 20:15

Intercepting interstellar objects could transform fleeting encounters into profound scientific opportunities.

In late 2017, a mysterious object tore through our solar system at breakneck speed. Astronomers scrambled to observe the fast moving body using the world’s most powerful telescopes. It was found to be one quarter mile (400 meters) long and very elongated—perhaps 10 times as long as it was wide. Researchers named it ‘Oumuamua, Hawaiian for “scout.”

‘Oumuamua was later confirmed to be the first object from another star known to have visited our solar system. While these interstellar objects (ISOs) originate around a star, they end up as cosmic nomads, wandering through space. They are essentially planetary shrapnel, having been blasted out of their parent star systems by catastrophic events, such as giant collisions between planetary objects.

Astronomers say that ‘Oumuamua could have been traveling through the Milky Way for hundreds of millions of years before its encounter with our solar system. Just two years after this unexpected visit, a second ISO—the Borisov Comet—was spotted, this time by an amateur astronomer in Crimea. These celestial interlopers have given us tantalizing glimpses of material from far beyond our solar system.

But what if we could do more than just watch them fly by?

Studying ISOs up close would offer scientists the rare opportunity to learn more about far off star systems, which are too distant to send missions to.

There may be over 10 septillion (or ten with 24 zeros) ISOs in the Milky Way alone. But if there are so many of them, why have we only seen two? Put simply, we cannot accurately predict when they will arrive. Large ISOs like ‘Oumuamua, which are more easily detected, do not seem to visit the solar system very often, and they travel incredibly fast.

Ground- and space-based telescopes struggle to respond quickly to incoming ISOs, meaning that we are mostly looking at them after they pass through our cosmic neighborhood. However, innovative space missions could get us closer to objects like ‘Oumuamua, by using breakthroughs in artificial intelligence (AI) to guide spacecraft safely to future visitors. Getting closer means we can get a better understanding of their composition, geology, and activity—gaining insights into the conditions around other stars.

Emerging technologies already being used to approach space debris could help us reach other unpredictable objects, transforming these fleeting encounters into profound scientific opportunities.

So how do we get close? Speeding past Earth at an average of 32 kilometers per second, ISOs give our spacecraft less than a year to intercept them after detection. Catching up is not impossible—for example, it could be done via gravitational slingshot maneuvers. However, it would be difficult, costly, and take years to execute.
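
To see why the window is so tight, a rough bit of arithmetic (ours, not from the article) helps:

32\ \mathrm{km/s} \times 3.2\times 10^{7}\ \mathrm{s/yr} \approx 1.0\times 10^{9}\ \mathrm{km} \approx 7\ \mathrm{AU\ per\ year}

In other words, within a year of detection a typical ISO has already crossed a distance comparable to the span from the Sun to beyond Jupiter's orbit, so any interceptor has to be ready to go almost immediately.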

The good news is that the first wave of ISO-hunting missions is already in motion: NASA’s mission concept is called Bridge and the European Space Agency (ESA) has a mission called Comet Interceptor. Once an incoming ISO is identified, Bridge would depart Earth to intercept it. However, launching from Earth currently requires a 30-day launch window after detection, which would cost valuable time.

The Comet Interceptor mission is scheduled to launch in 2029. ESA / Work performed by ATG under contract to ESA, CC BY-SA

Comet Interceptor is scheduled for launch in 2029 and comprises a larger spacecraft and two smaller robotic probes. Once launched, it will lie in wait a million miles from Earth, poised to ambush a long period comet (slower comets that come from further away)—or potentially an ISO. Placing spacecraft in a “storage orbit” allows for rapid deployment when a suitable ISO is detected.

Another proposal from the Institute for Interstellar Studies, Project Lyra, assessed the feasibility of chasing down ‘Oumuamua, which has already sped far beyond Neptune’s orbit. They found that it would be possible in theory to catch up with the object, but this would also be very technically challenging.

The Fast and the Curious

These missions are a start, but as described, their biggest limitation is speed. To chase down ISOs like ‘Oumuamua, we’ll need to move a lot faster—and think smarter.

Future missions may rely on cutting-edge AI and related fields such as deep learning—which seeks to emulate the decision-making power of the human brain—to identify and respond to incoming objects in real time. Researchers are already testing small spacecraft that operate in coordinated “swarms,” allowing them to image targets from multiple angles and adapt mid-flight.

At the Vera C. Rubin Observatory in Chile, a 10-year survey of the night sky is due to begin soon. This astronomical survey is expected to find dozens of ISOs each year. Simulations suggest we may be on the cusp of a detection boom.

Any spacecraft would need to reach high speeds once an object is spotted and ensure that its energy source doesn’t degrade, potentially after years waiting in “storage orbit.” A number of missions have already utilized a form of propulsion called a solar sail.

These use sunlight on the lightweight, reflective sail to push the spacecraft through space. This would dispense with the need for heavy fuel tanks. The next generation of solar sail spacecraft could use lasers on the sails to reach even higher speeds, which would offer a nimble and low-cost solution compared to other futuristic fuels, such as nuclear propulsion.
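
For a sense of scale (our numbers, not the article's): sunlight exerts a pressure of roughly 2I/c on a perfectly reflecting sail, where I ≈ 1360 W/m² is the solar intensity at Earth's distance,

P_{\mathrm{rad}} \approx \frac{2I}{c} \approx \frac{2 \times 1360\ \mathrm{W/m^2}}{3.0\times 10^{8}\ \mathrm{m/s}} \approx 9\ \mu\mathrm{N/m^2}

so a hypothetical 100 m by 100 m sail would feel only about 0.09 newtons. The appeal is that this tiny push costs no propellant and never runs out, and shining lasers on the sail simply raises the intensity far above what sunlight alone provides.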

The Vera Rubin Observatory in Chile should discover more interstellar objects. RubinObs/NOIRLab/SLAC/NSF/DOE/AURA/Y. AlSayyad

A spacecraft approaching an ISO will also need to withstand high temperatures and possibly erosion from dust being ejected from the object as it moves. While traditional shielding materials can protect spacecraft, they add weight and may slow them down.

To address this, researchers are exploring novel technologies for lightweight, more durable and resistant materials, such as advanced carbon fibers. Some could even be 3D printed. They are also looking at innovative uses of traditional materials such as cork and ceramics.

A suite of different approaches is needed that involve ground-based telescopes and space-based missions, working together to anticipate, chase down, and observe ISOs.

New technology could allow the spacecraft itself to identify and predict the trajectories of incoming objects. However, potential cuts to space science in the US, including to observatories like the James Webb Space Telescope, threaten such progress.

Emerging technologies must be embraced to make an approach and rendezvous with an ISO a real possibility. Otherwise, we will be left scrabbling, taking pictures from afar as yet another cosmic wanderer speeds away.

Disclosure statement:

Billy Bryan works on projects at RAND Europe that are funded by the UK Space Agency and DG DEFIS. He is affiliated with RAND Europe’s Space Hub, where he leads the civil space theme, with the University of Sussex Students’ Union as a trustee, and with Rocket Science Ltd. as an advisor.

Chris Carter works on projects at RAND Europe that are funded by the UK Space Agency and DG DEFIS. He is affiliated with RAND Europe’s Space Hub and is a researcher in the civil space theme.

Theodora (Teddy) Ogden is a Senior Analyst at RAND Europe, where she works on defense and security issues in space. She was previously a fellow at Arizona State University, and before that was briefly at NATO.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Largest Brain Map Ever Reveals Hidden Algorithms of the Mammalian Brain

Singularity HUB - 14 April 2025 - 22:48

In a first, the 3D reconstruction of a mouse brain links structure to activity.

Let a mouse nose around a house, and it will rapidly find food and form a strategy to return to it without getting caught. Given the same task, an AI would require millions of training examples and consume a boatload of energy and time.

Evolution has crafted the brain to quickly learn and adapt to an ever-changing world. Detailing its algorithms—the ways it processes information as revealed by its structure and wiring—could inspire more advanced AI.

This month, the Machine Intelligence From Cortical Networks (MICrONS) consortium released the most comprehensive map ever assembled of a mammalian brain. The years-long effort painstakingly charted a cubic millimeter of mouse brain—all its cells and connections—and linked this wiring diagram to how the animal sees its world.

Although just the size of a poppyseed, the brain chunk was packed with an “astonishing” 84,000 neurons, half a billion synapses—these are the hubs connecting brain cells—and over 3 miles of neural wiring, wrote Harvard’s Mariela Petkova and Gregor Schuhknecht, who were not involved in the project.

Brain maps are nothing new. Some capture basic anatomy. Others highlight which genes activate as neurons spark with activity. The new dataset, imaged at nanoscale resolution and reconstructed with AI, differs in that it connects the brain’s hardware to how it works.

The project “created the most comprehensive dataset ever assembled that links mammalian brain structure to neuronal functions in an active animal,” wrote Petkova and Schuhknecht.

The new resource could help scientists crack the neural code—the brain’s computational framework. Distilling seemingly random electrical activity into algorithms could illuminate how our brains form memories, perceive the outside world, and make calculated decisions. Similar principles could also inspire future generations of more flexible AI.

“Looking at it [the map] really gives you an awe about the sense of complexity in the brain that is very much akin to looking up at the stars of night,” Forrest Collman at the Allen Institute for Brain Science, who was part of MICrONS, told Nature. The results are “really stunningly beautiful.”

An Enigmatic Machine

The brain is nature’s most prized computational engine.

Although recent AI advances allow algorithms to learn faster or adapt, the squishy three-pound blob in our heads somehow perceives, learns, and memorizes encounters in a flash using far less energy. It then stores important information to guide decision-making in the future.

The brain’s internal wiring is the heart of its computational abilities. Neurons and other brain cells dynamically connect to one another through multiple synapses. New learning alters the wiring by tweaking synaptic strength to form memories and generate thoughts.

Scientists have already found molecules and genes that connect and change these networks across large brain chunks (albeit with low resolution). But a deeper dive into the brain’s neural connections could yield new insights.

Mapping a whole mouse brain at nanoscale resolution is still technologically challenging. Here, the MICrONS team zeroed in on a poppyseed-sized chunk of the visual cortex. Often dubbed “the seat of higher cognition,” the cortex is the most recently evolved brain structure. It supports some of our most treasured abilities: Logical thinking, decision-making, and perception.

Despite the cortex’s seemingly different functions, previous theoretical studies have suggested there’s a common wiring “blueprint” embedded across the region.

Deciphering this blueprint is like “working out the principles of a combustion engine by looking at many cars—there are different engine models, but the same fundamental mechanics apply,” wrote Petkova and Schuhknecht. For the brain, we’ll need a cellular parts list and an idea of how they work together.

Rebuilding the Brain

The project analyzed a tiny chunk of a mouse’s visual cortex sliced into over 28,000 pieces, each more than a thousand times thinner than a human hair.

The sections were imaged with an electron beam to capture nanoscale structures. AI-based software then stitched individual sections into a 3D recreation of the original brain region, with brain cells, wiring, and synapses each highlighted in different colors.

The map contains over 200,000 brain cells, half a billion synapses, and more than 5.4 kilometers of neural wiring—roughly one and a half times the length of New York City’s Central Park.

Although it’s just a tiny speck of mouse brain, the map pushes the technological limits for mapping brain connections at scale. Previous landmark maps from a roundworm and fruit fly contained a fraction of the total neurons and synapses included in the new release. The only study comparable in volume mapped the human cortex, but with far fewer identified brain cells and synapses.

Into the Looking Glass

The dataset is unusual because it recorded specific activity from the mouse’s brain before imaging it.

The team showed a mouse multiple videos on a screen, including scenes from The Matrix, as it ran on a treadmill. The mouse’s brain had been genetically altered so that any activated neurons emitted a fluorescent light to mark those cells. Almost 76,000 neurons in the visual cortex sparked to life over multiple sessions. This information was then precisely mapped onto the connectome, highlighting individual activated neurons and charting their networks.

“This is where the study truly breaks new ground,” wrote Petkova and Schuhknecht. Rather than compiling a list of brain components, which only maps anatomy, the dataset also decodes functional connections at unprecedented scale.

Other projects have already made use of the dataset. A few showed how the reconstruction can identify different types of neurons. Mapping structural wiring to activity also revealed a recurring circuit—a generic pattern of brain activity—that occurs throughout the cortex. In AI terms, the connections formed a sort of “foundation model” of the brain that can generalize, with the ability to predict neural activity in other mice.

The database isn’t perfect. Most of the wiring was reconstructed using AI, a process that leaned heavily on human editing to find errors. Reconstructing larger samples will need further technological improvements to speed up the process.

Then there are fundamental mysteries of the brain that the new brain map can’t solve. Though it offers a way to tally neural components and their wiring, higher-level computations—for example, comprehending what you’re seeing—may involve neural activity beyond what was captured in the study. And cortex circuits have vast reach, which means the neural connections in the sample are incomplete.

The consortium is releasing the database, along with a new set of AI-based computational tools to link wiring diagrams to neural activity. Meanwhile, they’re planning to use the technology to map larger portions of the brain.

The release “marks a major leap forwards and offers an invaluable community resource for future discoveries in neuroscience,” such as the basic rules of cognition and memory, wrote Petkova and Schuhknecht.


Dark Energy Discovery Could Undermine Our Entire Model of Cosmological History

Singularity HUB - 12 April 2025 - 18:27

A vast galactic survey suggests dark energy may not be constant after all.

The great Russian physicist and Nobel laureate Lev Landau once remarked that “cosmologists are often in error, but never in doubt.” In studying the history of the universe itself, there is always a chance that we have got it all wrong, but we never let this stand in the way of our inquiries.

Last month, a press release announced groundbreaking findings from the Dark Energy Spectroscopic Instrument (DESI), which is installed on the Mayall Telescope in Arizona. This vast survey, containing the positions of 15 million galaxies, constitutes the largest three-dimensional mapping of the universe to date. For context, the light from the most remote galaxies recorded in the DESI catalogue was emitted 11 billion years ago, when the universe was about a fifth of its current age.

DESI researchers studied a feature in the distribution of galaxies that astronomers call “baryon acoustic oscillations.” By comparing it to observations of the very early universe and supernovae, they have been able to suggest that dark energy—the mysterious force propelling our universe’s expansion—is not constant throughout the history of the universe.

An optimistic take on the situation is that sooner or later the nature of dark matter and dark energy will be discovered. The first glimpses of DESI’s results offer at least a small sliver of hope of achieving this.

The Cosmic Inventory: the different components of the universe derived from the Planck Satellite observations of the CMB. Image from Jones, Martínez and Trimble, ‘The Reinvention of Science.’, CC BY-SA

However, that might not happen. We might search and make no headway in understanding the situation. If that happens, we would need to rethink not just our research, but the study of cosmology itself. We would need to find an entirely new cosmological model, one that works as well as our current one but that also explains this discrepancy. Needless to say, it would be a tall order.

To many who are interested in science this is an exciting, potentially revolutionary prospect. However, this kind of reinvention of cosmology, and indeed all of science, is not new, as argued in the 2023 book The Reinvention of Science.

The Search for Two Numbers

Back in 1970, Allan Sandage wrote a much-quoted paper pointing to two numbers that bring us closer to answers about the nature of cosmic expansion. His goal was to measure them and discover how they change with cosmic time. Those numbers are the Hubble constant, H₀, and the deceleration parameter, q₀.

The first of these two numbers tells us how fast the universe is expanding. The second is the signature of gravity: as an attractive force, gravity should be pulling against cosmic expansion. Some data has shown a deviation from the Hubble-Lemaître Law, of which Sandage’s second number, q₀, is a measure.

No significant deviation from Hubble’s straight line could be found until breakthroughs were made in the late 1990s by Saul Perlmutter’s Supernova Cosmology Project and the High-Z SN Search Team led by Adam Riess and Brian Schmidt. The goal of these projects was to search for and follow supernovae exploding in very distant galaxies.

These projects found a clear deviation from the simple straight line of the Hubble-Lemaître Law, but with one important difference: the universe’s expansion is accelerating, not decelerating. Perlmutter, Riess, and Schmidt attributed this deviation to Einstein’s cosmological constant, which is represented by the Greek letter Lambda, Λ, and is related to the deceleration parameter.
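
For readers who want the definitions behind Sandage's two numbers, in standard notation (not spelled out in the article), with a(t) the cosmic scale factor:

H_0 = \left.\frac{\dot{a}}{a}\right|_{t_0}, \qquad q_0 = -\left.\frac{\ddot{a}\, a}{\dot{a}^{2}}\right|_{t_0}

and for a flat universe with matter density \Omega_m and cosmological-constant density \Omega_\Lambda (radiation neglected),

q_0 = \frac{\Omega_m}{2} - \Omega_\Lambda

so an accelerating expansion (q_0 < 0) requires the Lambda term to outweigh half the matter density, which is exactly what the supernova data implied.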

Their work earned them the 2011 Nobel Prize in Physics.

Dark Energy: 70% of the Universe

Astonishingly, this Lambda term, also known as dark energy, is the dominant component of the universe. It has been speeding up the universe’s expansion to the point where the force of gravity is overridden, and it accounts for almost 70 percent of the total density of the universe.

We know little or nothing about the cosmological constant, Λ. In fact, we do not even know that it is a constant. Einstein first said there was a constant energy field when he created his first cosmological model derived from General Relativity in 1917, but his solution was neither expanding nor contracting. It was static and unchanging, and so the field had to be constant.

Constructing more sophisticated models that contained this constant field was an easier task: they were derived by the Belgian physicist Georges Lemaître, a friend of Einstein’s. The standard cosmology models today are based on Lemaître’s work and are referred to as Λ Cold Dark Matter (ΛCDM) models.

The DESI measurements on their own are completely consistent with this model. However, by combining them with observations of the cosmic microwave background and supernovae, the best fitting model is one involving a dark energy that evolved over cosmic time and that will (potentially) no longer be dominant in the future. In short, this would mean the cosmological constant does not explain dark energy.
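
In analyses of this kind, "dark energy that evolves" is usually expressed through an equation-of-state parameter w relating dark energy's pressure to its density, often with the simple two-parameter form (a standard convention, not a detail given in this article):

w(a) = w_0 + w_a\,(1 - a), \qquad p_{\mathrm{DE}} = w\,\rho_{\mathrm{DE}}\,c^2

A true cosmological constant corresponds to w_0 = -1 and w_a = 0; a combined fit that prefers other values is what "dark energy may not be constant after all" means in practice.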

The Big Crunch

In 1988, the 2019 physics Nobel laureate P.J.E. Peebles wrote a paper with Bharat Ratra on the possibility that there is a cosmological constant that varies with time. Back when they published this paper, there was no serious body of opinion about Λ.

This is an attractive suggestion. In this case the current phase of accelerated expansion would be transient and would end at some point in the future. Other phases in cosmic history have had a beginning and an end: inflation, the radiation-dominated era, the matter-dominated era, and so on.

The present dominance of dark energy may therefore decline over cosmic time, meaning it would not be a cosmological constant. The new paradigm would imply that the current expansion of the universe could eventually reverse into a “Big Crunch.”

Other cosmologists are more cautious, recalling Carl Sagan’s dictum that “extraordinary claims require extraordinary evidence.” It is crucial to have multiple, independent lines of evidence pointing to the same conclusion. We are not there yet.

Answers may come from one of today’s ongoing projects—not just DESI but also Euclid and J-PAS—which aim to explore the nature of dark energy through large-scale galaxy mapping.

While the workings of the cosmos itself are up for debate, one thing is for sure—a fascinating time for cosmology is on the horizon.

Licia Verde receives funding from the AEI (Spanish State Research Agency) project number PID2022-141125NB-I00, and has previously received funding from the European Research Council. Licia Verde is a member of the DESI collaboration team.

Vicent J. Martínez receives funding from the European Union NextGenerationEU and the Generalitat Valenciana in the 2022 call “Programa de Planes Complementarios de I+D+i”, Project (VAL-JPAS), reference ASFAE/2022/025, the research Project PID2023-149420NB-I00 funded by MICIU/AEI/10.13039/501100011033 and ERDF/EU, and the project of excellence PROMETEO CIPROM/2023/21 of the Conselleria de Educación, Universidades y Empleo (Generalitat Valenciana). He is a member of the Spanish Astronomy Society, the Spanish Royal Physics Society and the Royal Spanish Mathematical Society.

Bernard J.T. Jones and Virginia L Trimble do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


DeepMind’s New AI Teaches Itself to Play Minecraft From Scratch

Singularity HUB - 11 April 2025 - 16:58

The AI made a “mental map” of the world to collect the game’s most sought-after material.

My nephew couldn’t stop playing Minecraft when he was seven years old.

One of the most popular games ever, Minecraft is an open world in which players build terrain and craft various items and tools. No one showed him how to navigate the game. But over time, he learned the basics through trial and error, eventually figuring out how to craft intricate designs, such as theme parks and entire working cities and towns. But first, he had to gather materials, some of which—diamonds in particular—are difficult to collect.

Now, a new DeepMind AI can do the same.

Without access to any human gameplay as an example, the AI taught itself the rules, physics, and complex maneuvers needed to mine diamonds. “Applied out of the box, Dreamer is, to our knowledge, the first algorithm to collect diamonds in Minecraft from scratch without human data or curricula,” wrote study author Danijar Hafner in a blog post.

But playing Minecraft isn’t the point. AI scientists have long been after general algorithms that can solve tasks across a wide range of problems—not just the ones they’re trained on. Although some of today’s models can generalize a skill across similar problems, they struggle to transfer those skills to more complex tasks requiring multiple steps.

In the limited world of Minecraft, Dreamer seemed to have that flexibility. After learning a model of its environment, it could “imagine” future scenarios to improve its decision making at each step and ultimately was able to collect that elusive diamond.

The work “is about training a single algorithm to perform well across diverse…tasks,” said Harvard’s Keyon Vafa, who was not involved in the study, to Nature. “This is a notoriously hard problem and the results are fantastic.”

Learning From Experience

Children naturally soak up their environment. Through trial and error, they quickly learn to avoid touching a hot stove and, by extension, a recently used toaster oven. Dubbed reinforcement learning, this process incorporates experiences—such as “yikes, that hurt”—into a model of how the world works.

A mental model makes it easier to imagine or predict consequences and generalize previous experiences to other scenarios. And when decisions don’t work out, the brain updates its modeling of the consequences of actions—”I dropped a gallon of milk because it was too heavy for me”—so that kids eventually learn not to repeat the same behavior.

Scientists have adopted the same principles for AI, essentially raising algorithms like children. OpenAI previously developed reinforcement learning algorithms that learned to play the fast-paced multiplayer Dota 2 video game with minimal training. Other such algorithms have learned to control robots capable of solving multiple tasks or beat the hardest Atari games.

Learning from mistakes and wins sounds easy. But we live in a complex world, and even simple tasks, like, say, making a peanut butter and jelly sandwich, involve multiple steps. And if the final sandwich turns into an overloaded, soggy abomination, which step went wrong?

That’s the problem with sparse rewards. We don’t immediately get feedback on every step and action. Reinforcement learning in AI struggles with a similar problem: How can algorithms figure out where their decisions went right or wrong?

World of Minecraft

Minecraft is a perfect AI training ground.

Players freely explore the game’s vast terrain—farmland, mountains, swamps, and deserts—and harvest specialized materials as they go. In most modes, players use these materials to build intricate structures—from chicken coops to the Eiffel Tower—craft objects like swords and fences, or start a farm.

The game also resets: Every time a player joins a new game the world map is different, so remembering a previous strategy or place to mine materials doesn’t help. Instead, the player has to more generally learn the world’s physics and how to accomplish goals—say, mining a diamond.

These quirks make the game an especially useful test for AI that can generalize, and the AI community has focused on collecting diamonds as the ultimate challenge. This requires players to complete multiple tasks, from chopping down trees to making pickaxes and carrying water to an underground lava flow.

Kids can learn how to collect diamonds from a 10-minute YouTube video. But in a 2019 competition, AI struggled even after up to four days of training on roughly 1,000 hours of footage from human gameplay.

Algorithms mimicking gamer behavior fared better than those relying purely on reinforcement learning. At the time, one of the competition’s organizers commented that the latter wouldn’t stand a chance on their own.

Dreamer the Explorer

Rather than relying on human gameplay, Dreamer explored the game by itself, learning through experimentation to collect a diamond from scratch.

The AI comprises three main neural networks. The first of these models the Minecraft world, building an internal “understanding” of its physics and how actions work. The second network is basically a parent that judges the outcome of the AI’s actions. Was that really the right move? The last network then decides the best next step to collect a diamond.

All three components were simultaneously trained using data from the AI’s previous tries—a bit like a gamer playing again and again as they aim for the perfect run.

World modeling is the key to Dreamer’s success, Hafner told Nature. This component mimics the way human players see the game and allows the AI to predict how its actions could change the future—and whether that future comes with a reward.

“The world model really equips the AI system with the ability to imagine the future,” said Hafner.
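To make the world-model idea concrete, here is a toy sketch in Python: an agent learns a model of a tiny environment from its own experience, learns a value estimate, and picks actions by rolling imagined futures through that learned model. Everything in it, from the corridor environment to the tabular stand-ins for the three networks and the hyperparameters, is invented for illustration and is not Dreamer’s actual architecture.

```python
# Toy sketch of model-based RL with "imagined" rollouts, loosely in the spirit
# of world-model agents. The 1D corridor environment, tabular stand-ins for the
# three networks, and all hyperparameters are invented for illustration.
import numpy as np

N_STATES, N_ACTIONS, GOAL = 10, 2, 9      # walk right along a corridor to state 9
rng = np.random.default_rng(0)

def env_step(state, action):
    """The *real* environment: move left or right, reward 1 at the goal."""
    nxt = min(max(state + (1 if action == 1 else -1), 0), N_STATES - 1)
    return nxt, float(nxt == GOAL)

# 1) "World model": transition counts learned from experience
transitions = np.zeros((N_STATES, N_ACTIONS, N_STATES))
# 2) "Critic": learned value of each state
values = np.zeros(N_STATES)

def imagined_return(state, action, depth=3, discount=0.9):
    """3) "Actor": score an action by rolling the learned model forward."""
    total = 0.0
    for d in range(depth):
        counts = transitions[state, action]
        if counts.sum() == 0:                 # never tried: nothing to imagine
            break
        state = int(counts.argmax())          # model's predicted next state
        total += discount ** d * (float(state == GOAL) + values[state])
        action = rng.integers(N_ACTIONS)      # imagined follow-up action
    return total

for episode in range(200):
    state = 0
    for _ in range(50):
        if rng.random() < 0.2:                # occasional exploration
            action = rng.integers(N_ACTIONS)
        else:                                 # otherwise act on imagined futures
            action = int(np.argmax([imagined_return(state, a) for a in range(N_ACTIONS)]))
        nxt, reward = env_step(state, action)
        transitions[state, action, nxt] += 1  # update the world model
        values[state] += 0.1 * (reward + 0.9 * values[nxt] - values[state])  # update critic
        state = nxt
        if reward > 0:
            break

print("learned state values:", np.round(values, 2))
```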

To evaluate Dreamer, the team challenged it against several state-of-the-art single-purpose algorithms on over 150 tasks. Some tested the AI’s ability to sustain longer sequences of decisions. Others gave either constant or sparse feedback to see how the programs fared in 2D and 3D worlds.

“Dreamer matches or exceeds the best [AI] experts,” wrote the team.

They then turned to a far harder task: Collecting diamonds, which requires a dozen steps. Intermediate rewards helped Dreamer pick the next move with the largest chance of success. As an extra challenge, the team reset the game every half hour to ensure the AI didn’t form and remember a specific strategy.

Dreamer collected a diamond after roughly nine days of continuous gameplay. That’s far slower than expert human players, who need just 20 minutes or so. However, the AI wasn’t specifically trained on the task. It taught itself how to mine one of the game’s most coveted items.

The AI “paves the way for future research directions, including teaching agents world knowledge from internet videos and learning a single world model” so they can increasingly accumulate a general understanding of our world, wrote the team.

“Dreamer marks a significant step towards general AI systems,” said Hafner.

The post DeepMind’s New AI Teaches Itself to Play Minecraft From Scratch appeared first on SingularityHub.

Kategorie: Transhumanismus

Our Conscious Perception of the World Depends on This Deep Brain Structure

Singularity HUB - 10 Duben, 2025 - 17:36

The thalamus is a gateway, shuttling select information into consciousness.

How consciousness emerges in the brain is the ultimate mystery. Scientists generally agree that consciousness relies on multiple brain regions working in tandem. But the areas and neural connections supporting our perception of the world have remained elusive.

A new study, published in Science, offers a potential answer. A Chinese team recorded the neural activity of people with electrodes implanted deep in their brains as they performed a visual task. Scientists have long hypothesized that one of these deep regions, an egg-shaped structure called the thalamus, acts as a central relay conducting information across multiple brain regions.

Previous studies hunting for the brain mechanisms underlying consciousness have often focused on the cortex—the outermost regions of the brain. Very little is known about how deeper brain structures contribute to our sense of perception and self.

Simultaneously recording neural activity from both the thalamus and the cortex, the team found a wave-like signal that only appeared when participants reported seeing an image in a test. Visual signals specifically designed not to reach awareness had a different brain response.

The results support the idea that parts of the thalamus “play a gate role” for the emergence of conscious perception, wrote the team.

The study is “really pretty remarkable,” said Christopher Whyte at the University of Sydney, who was not involved in the work, to Nature. One of the first to simultaneously record activity in both deep and surface brain regions in humans, it reveals how signals travel across the brain to support consciousness.

The Ultimate Enigma

Consciousness has teased the minds of philosophers and scientists for centuries. Thanks to modern brain mapping technologies, researchers are beginning to hunt down its neural underpinnings.

At least half a dozen theories now exist, two of which are going head-to-head in a global research effort using standardized tests to probe how awareness emerges in the human brain. The results, alongside other work, could potentially build a unified theory of consciousness.

The problem? There still isn’t definitive agreement on what we mean by consciousness. But practically, most scientists agree it has at least two modes. One is dubbed the “conscious state,” which is when, for example, you’re awake, asleep, or in a coma. The other mode, “conscious content,” captures awareness or perception.

We’re constantly bombarded with sights, sounds, touch, and other sensations. Only some stimuli—the smell of a good cup of coffee, the sound of a great playlist, the feel of typing on a slightly oily keyboard—reach our awareness. Others are discarded by a web of neural networks long before we perceive them.

In other words, the brain filters signals from the outside world and only brings a sliver of them into conscious perception. The entire process from sensing to perceiving takes just a few milliseconds.

Brain imaging technologies such as functional magnetic resonance imaging (fMRI) can capture the brain’s inner workings as we process these stimuli. But like a camera with slow shutter speed, the technology struggles to map activated brain areas in real time at high resolution. The delay also makes it difficult to track how signals flow from one brain area to another. And because a sense of awareness likely emerges from coherent activation across multiple brain regions, that delay makes it all the harder to decipher how consciousness emerges from neural chatter.

Most scientists have focused on the cortex, with just a few exploring the function of deeper brain structures. “Capturing neural activity in the thalamic nuclei [thalamus] during conscious perception is very difficult” because of technological restrictions, wrote the authors.

Deep Impact

The new study solved the problem by tapping a unique resource: People with debilitating and persistent headaches that can’t be managed with medication but who are otherwise mentally sharp and healthy.

Each participant in the study already had up to 20 electrodes implanted in different parts of the thalamus and cortex as part of an experimental procedure to dampen their headache pain. Unlike fMRI studies that cover the whole brain with time lag and relatively low resolution, these electrodes could directly pick up neural signals in the implanted areas with minimal delay.

Often dubbed the brain’s Grand Central Station, the thalamus is a complex structure housing multiple neural “train tracks” originating from different locations. Each track routes and ferries a unique combination of incoming sensations to other brain regions for further processing.

The thalamus likely plays “a crucial role in regulating the conscious state” based on previous theoretical and animal studies, wrote the team. But testing its role in humans has been difficult because of its complex structure and location deep inside the brain. The five participants, each with electrodes already implanted in their thalamus and cortex for treatment, were the perfect candidates for a study matching specific neural signals to conscious perception.

Using a custom task, the team measured if participants could consciously perceive a visual cue—a blob of alternating light and dark lines—blinking on a screen. Roughly half the trials were designed so the cue appeared too briefly for the person to register, as determined by previous work. The participants were then asked to move their eyes towards the left or right of the screen depending on whether they noticed the cue.

Throughout the experiment the team captured electrical activity from parts of each participant’s thalamus and prefrontal cortex—the front region of the brain that’s involved in higher level thinking such as reasoning and decision making.

Unique Couplings

Two parts of the thalamus sparked with activity when a person consciously perceived the cue, and the areas orchestrated synchronized waves of activity to the cortex. This synchronized activity disappeared when the participants weren’t consciously aware of the cue.
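As a rough illustration of what “synchronized activity” means in practice, neuroscientists often quantify it with spectral coherence, a measure of how consistently two signals share a rhythm at a given frequency. The sketch below applies a standard SciPy coherence routine to synthetic signals; the sampling rate, the 25-hertz rhythm, and the “aware” versus “unaware” traces are invented stand-ins, not the study’s recordings or its analysis pipeline.

```python
# Minimal sketch of quantifying "synchronized activity" between two recording
# sites using spectral coherence. The signals below are synthetic stand-ins,
# not thalamic or cortical data from the study.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)

shared_wave = np.sin(2 * np.pi * 25 * t)   # a common 25 Hz rhythm
thalamus = shared_wave + 0.7 * rng.standard_normal(t.size)
cortex_aware = shared_wave + 0.7 * rng.standard_normal(t.size)   # shares the rhythm
cortex_unaware = rng.standard_normal(t.size)                     # no shared rhythm

f, coh_aware = coherence(thalamus, cortex_aware, fs=fs, nperseg=512)
_, coh_unaware = coherence(thalamus, cortex_unaware, fs=fs, nperseg=512)

band = (f >= 20) & (f <= 30)
print(f"coherence near 25 Hz, aware:   {coh_aware[band].mean():.2f}")
print(f"coherence near 25 Hz, unaware: {coh_unaware[band].mean():.2f}")
```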

The contributions to “consciousness-related activity were strikingly different” across the thalamus, wrote the authors. In other words, these specific deep-brain regions may form a crucial gateway for processing visual experiences so they rise to the level of perception.  

The findings are similar to results from previous studies in mice and non-human primates. One study tracked how mice react to subtle prods to their whiskers. The rodents were trained to only lick water when they felt a touch but otherwise go about their business. Each mouse’s thalamus and cortex sparked when they went for the water, forming neural circuits similar to those observed in humans during conscious perception. Other studies in monkeys have also identified the thalamus as a hot zone for consciousness, although they implicate slightly different areas of the structure.

The team is planning to conduct similar visual experiments in monkeys to clarify which parts of the thalamus support conscious perception. For now, the full nature of consciousness in the brain remains an enigma. But the new results offer a peek inside the human mind as it perceives the world with unprecedented detail.

Liad Mudrik at Tel Aviv University, who was not involved in the study, told Nature it is “one of the most elaborate and extensive investigations of the role of the thalamus in consciousness.”

The post Our Conscious Perception of the World Depends on This Deep Brain Structure appeared first on SingularityHub.

Kategorie: Transhumanismus

What Makes the Human Brain Unique? Scientists Compared It With Monkeys and Apes to Find Out

Singularity HUB - 8 Duben, 2025 - 20:42

Our closest relatives in the animal kingdom are wired up differently.

Scientists have long tried to understand the human brain by comparing it to other primates, yet what sets ours apart from the brains of our closest relatives is still unclear. Our recent study may have brought us one step closer by taking a new approach—comparing the way brains are internally connected.

The Victorian palaeontologist Richard Owen incorrectly argued that the human brain was the only brain to contain a small area called the Hippocampus minor. This, he claimed, made the human brain unique in the animal kingdom and therefore, he argued, clearly unrelated to that of any other species. We’ve learned a lot since then about the organization and function of our brain, but there is still much to learn.

Most studies comparing the human brain to that of other species focus on size. This can be the size of the brain, size of the brain relative to the body, or the size of parts of the brain to the rest of it. However, measures of size don’t tell us anything about the internal organization of the brain. For instance, although the enormous brain of an elephant contains three times as many neurons as the human brain, these are predominantly located in the cerebellum, not in the neocortex, which is commonly associated with human cognitive abilities.

Until recently, studying the brain’s internal organization was painstaking work. The advent of medical imaging techniques, however, has opened up new possibilities to look inside the brains of animals quickly, in great detail, and without harming the animal.

Our team used publicly available MRI data of white matter, the fibers connecting parts of the brain’s cortex. Communication between brain cells runs along these fibers. This costs energy and the mammalian brain is therefore relatively sparsely connected, concentrating communications down a few central pathways.

The connections of each brain region tell us a lot about its functions. The set of connections of any brain region is so specific that brain regions have a unique connectivity fingerprint.

In our study, we compared these connectivity fingerprints across the human, chimpanzee, and macaque monkey brain. The chimpanzee is, together with the bonobo, our closest living relative. The macaque monkey is the non-human primate best known to science. Comparing the human brain to both species meant we could not only assess which parts of our brain are unique to us, but also which parts are likely to be shared heritage with our non-human relatives.
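To make the idea of a connectivity fingerprint concrete, imagine summarizing a brain region as a vector of connection strengths to a shared set of white-matter tracts, then measuring how far apart two species’ vectors lie. The minimal sketch below does this with made-up numbers; the tract list and values are illustrative assumptions, not the study’s data or analysis pipeline.

```python
# Minimal sketch of comparing "connectivity fingerprints" across species.
# Each region is summarized as a vector of connection strengths to a shared
# set of tracts; the names and numbers below are illustrative only.
import numpy as np

tracts = ["arcuate", "cingulum", "uncinate", "ifof", "slf"]

# Hypothetical fingerprints for one temporal-lobe region in each species
human   = np.array([0.9, 0.3, 0.2, 0.4, 0.5])
chimp   = np.array([0.5, 0.3, 0.3, 0.4, 0.5])
macaque = np.array([0.2, 0.4, 0.3, 0.3, 0.4])

def fingerprint_distance(a, b):
    """Cosine distance: 0 means identical connection profiles, higher means more different."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print("human vs chimp:  ", round(fingerprint_distance(human, chimp), 3))
print("human vs macaque:", round(fingerprint_distance(human, macaque), 3))
```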

Much of the previous research on human brain uniqueness has focused on the prefrontal cortex, a group of areas at the front of our brain linked to complex thought and decision making. We indeed found that aspects of the prefrontal cortex had a connectivity fingerprint in the human that we couldn’t find in the other animals, particularly when we compared the human to the macaque monkey.

A higher value means the brains are more different. JNeurosci/Rogier Mars and Katherine Bryant, CC BY-NC-ND

But the main differences we found were not in the prefrontal cortex. They were in the temporal lobe, a large part of cortex located approximately behind the ear. In the primate brain, this area is devoted to deep processing of information from our two main senses: vision and hearing. One of the most dramatic findings was in the middle part of the temporal cortex.

The feature driving this distinction was the arcuate fasciculus, a white matter tract connecting the frontal and temporal cortex and traditionally associated with processing language in humans. Most if not all primates have an arcuate fasciculus but it is much larger in human brains.

However, we found that focusing solely on language may be too narrow. The brain areas that are connected via the arcuate fasciculus are also involved in other cognitive functions, such as integrating sensory information and processing complex social behavior. Our study was the first to find the arcuate fasciculus is involved in these functions. This insight underscores the complexity of human brain evolution, suggesting that our advanced cognitive abilities arose not from a single change, as scientists thought, but through several, interrelated changes in brain connectivity.

While the middle temporal arcuate fasciculus is a key player in language processing, we also found differences between the species in a region more at the back of the temporal cortex. This temporoparietal junction area is critical in processing information about others, such as understanding others’ beliefs and intentions, a cornerstone of human social interaction.

In humans, this brain area has much more extensive connections to other parts of the brain processing complex visual information, such as facial expressions and behavioral cues. This suggests that our brain is wired to handle more intricate social processing than those of our primate relatives. Our brain is wired up to be social.

These findings challenge the idea of a single evolutionary event driving the emergence of human intelligence. Instead, our study suggests brain evolution happened in steps. Our findings suggest changes in frontal cortex organization occurred in apes, followed by changes in temporal cortex in the lineage leading to humans.

Richard Owen was right about one thing. Our brains are different from those of other species—to an extent. We have a primate brain, but it’s wired up to make us even more social than other primates, allowing us to communicate through spoken language.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post What Makes the Human Brain Unique? Scientists Compared It With Monkeys and Apes to Find Out appeared first on SingularityHub.

Kategorie: Transhumanismus

This Brain-Computer Interface Is So Small It Fits Between the Follicles of Your Hair

Singularity HUB - 8 Duben, 2025 - 00:15

A tiny sensor to control devices with your thoughts—no surgery required.

Brain-computer interfaces are typically unwieldy, which makes using them on the move a non-starter. A new neural interface small enough to be attached between the user’s hair follicles keeps working even when the user is in motion.

At present, brain-computer interfaces are typically used as research devices designed to study neural activity or, occasionally, as a way for patients with severe paralysis to control wheelchairs or computers. But there are hopes they could one day become a fast and intuitive way for people to interact with personal devices through thoughts alone.

Invasive approaches that implant electrodes deep in the brain provide the highest fidelity connections, but regulators are unlikely to approve them for all but the most pressing medical problems in the near term.

Some researchers are focused on developing non-invasive technologies like electroencephalography (EEG), which uses electrodes stuck to the outside of the head to pick up brain signals. But getting a good readout requires stable contact between the electrodes and scalp, which is tricky to maintain, particularly if the user is moving around during normal daily activities.

Now, researchers have developed a neural interface just 0.04 inches across that uses microneedles to painlessly attach to the wearer’s scalp for a highly stable connection. To demonstrate the device’s potential, the team used it to control an augmented reality video call. The interface worked for up to 12 hours after implantation as the wearer stood, walked, and ran.

“This advance provides a pathway for the practical and continuous use of BCI [brain-computer interfaces] in everyday life, enhancing the integration of digital and physical environments,” the researchers write in a paper describing the device in the Proceedings of the National Academy of Sciences.

To create their device, the researchers first molded resin into a tiny cross shape with five microscale spikes sticking out of the surface. They then coated these microneedles with a conductive polymer called PEDOT so they could pick up electrical signals from the brain.

Besides firmly attaching the sensor to the head, the needles also penetrate an outer layer of the scalp made up of dead skin cells that acts as an insulator. This allows the sensor to record directly from the epidermis, which the researchers say enables much better signal acquisition.

The researchers also attached a winding, snake-like copper wire to the sensor and connected it to the larger wires that carry the recorded signal away to be processed. This means that even if the larger wires are jostled as the subject moves, it doesn’t disturb the sensor. A module decodes the brain readings and then transmits them wirelessly to an external device.

To show off the device’s capabilities, they used it to control video calls conducted on a pair of Nreal augmented reality glasses. They relied on “steady-state visual evoked potentials,” in which the brain responds in a predictable way when the user looks at an image flickering at a specific frequency.

By placing different flickering graphics next to different buttons in the video call interface, the user could answer, reject, and end calls by simply looking at the relevant button. The system correctly detected their intention in real-time with an average accuracy of 96.4 percent as the user carried out a variety of movements. They also showed that the recording quality remained stable over 12 hours, while a gold-standard EEG electrode fell off over the same period.
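The decoding logic can be illustrated simply: because visual brain activity “follows” the flicker of whatever the user is looking at, the EEG power spectrum peaks near the flicker frequency of the attended button. Below is a minimal sketch of that idea on a synthetic signal; the sampling rate, flicker frequencies, and button labels are assumptions for illustration, not the paper’s actual pipeline.

```python
# Minimal sketch of SSVEP-style target detection: the attended button's flicker
# frequency shows up as a peak in the EEG power spectrum. The signal, sampling
# rate, and button frequencies below are synthetic, not the study's data.
import numpy as np

fs = 250.0                                   # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / fs)                  # 4 seconds of signal
button_freqs = {"answer": 8.0, "reject": 10.0, "end": 12.0}  # flicker rates (Hz)

# Fake an EEG trace: the user is attending the 10 Hz target, plus noise
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.8 * rng.standard_normal(t.size)

def band_power(signal, freq, bandwidth=0.5):
    """Power in a narrow band around `freq`, from the FFT of the signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    mask = np.abs(freqs - freq) <= bandwidth
    return spectrum[mask].sum()

scores = {name: band_power(eeg, f) for name, f in button_freqs.items()}
print("selected button:", max(scores, key=scores.get))   # -> "reject"
```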

The device was fabricated using a method that would allow mass production, the researchers say, and could also have applications as a wearable health monitor. If they can scale the approach up, an always-on connection between our brains and personal devices may not be so far away.

The post This Brain-Computer Interface Is So Small It Fits Between the Follicles of Your Hair appeared first on SingularityHub.

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through April 5)

Singularity HUB - 5 Duben, 2025 - 16:00
Robotics

Invasion of the Home Humanoid RobotsCade Metz | The New York Times

“Artificial intelligence is already driving cars, writing essays and even writing computer code. Now, humanoids, machines built to look like humans and powered by AI, are poised to move into our homes so they can help with the daily chores.”

Robotics

Roomba Creator Says Humanoid Robots Are OverhypedRocket Drew | The Information

“‘We’ve hardly started on humanoid hype,’ [Rodney] Brooks said. ‘It’s going to go worse and worse and worse.’ Humanoid robots are enthralling because people can imagine them doing everything a human can do, Brooks said, but they still struggle with basic skills such as walking, falling, and coordinating multiple body parts to manipulate an object.”

Computing

A 32-Bit Processor Made With an Atomically Thin SemiconductorJohn Timmer | Ars Technica

“The authors argue that it’s probably one of the most sophisticated bits of ‘beyond silicon’ hardware yet implemented. That said, they don’t expect this technology to replace silicon; instead, they view it as potentially filling some niche needs, like ultra-low-power processing for simple sensors. But if the technology continues to advance, the scope of its potential uses may expand beyond that.”

Computing

World’s Smallest LED Pixels Squeeze Into Astounding 127,000-PPI DisplayMichael Irving | New Atlas

“Scientists in China have created a new type of display with the smallest pixels and the highest pixel density ever. Individual pixels were shrunk to 90 nanometers—about the size of a virus—and a record 127,000 of them were crammed into every inch of a display.”

Biotechnology

Alphabet-Backed Isomorphic Labs Raises $600 Million for AI Drug DevelopmentHelena Smolak | The Wall Street Journal

“‘This funding will further turbocharge the development of our next-generation AI drug design engine, help us advance our own programs into clinical development, and is a significant step forward towards our mission of one day solving all disease with the help of AI,’ Chief Executive Officer Demis Hassabis, who is also the head of Google’s AI division DeepMind, said.”

Robotics

The Hypershell Exoskeleton Is So Good at Climbing Cliffs, It Ruined My WorkoutKyle Barr | Gizmodo

“The Hypershell is a device made for assisting your walks, runs, bikes, or hikes. In a rarity for weird tech, the hiking exoskeleton accomplishes nearly everything it promises to. It does its job so well, and it left me devoid of the exercise and that sense of calm I normally get from my hikes.”

Science

Why Everything in the Universe Turns More ComplexPhilip Ball | Quanta Magazine

“[The researchers] have proposed nothing less than a new law of nature, according to which the complexity of entities in the universe increases over time with an inexorability comparable to the second law of thermodynamics—the law that dictates an inevitable rise in entropy, a measure of disorder. If they’re right, complex and intelligent life should be widespread.”

Future

DeepMind Has Detailed All the Ways AGI Could Wreck the WorldRyan Whitwam | Ars Technica

“While some in the AI field believe AGI is a pipe dream, the authors of [a new] DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to ‘severe harm.'”

Energy

The Hottest Thing in Clean EnergyAlexander C. Kaufman | The Atlantic

“For now, most of the efforts to debut next-generation geothermal technology are still in the American West, where drilling is relatively cheap and easy because the rocks they’re targeting are closer to the surface. But if the industry can prove to investors that its power plants work as described—which experts expect to happen by the end of the decade—geothermal could expand quickly, just like oil-and-gas fracking did.”

Space

SpaceX Took a Big Step Toward Reusing Starship’s Super Heavy BoosterStephen Clark | Ars Technica

“This was the first time SpaceX has test-fired a ‘flight-proven’ Super Heavy booster, and it paves the way for this particular rocket—designated Booster 14—to fly again soon. SpaceX confirmed a reflight of Booster 14, which previously launched and returned to Earth in January, will happen on next Starship launch.”

Space

Amazon Is Ready to Launch Its Starlink CompetitorThomas Ricker | The Verge

“The first batch of 27 Project Kuiper space internet satellites are scheduled to launch next week. Amazon has secured 80 such launch missions that will each deliver dozens of satellites into low earth orbit (LEO) to create a constellation capable of competing with Elon Musk’s Starlink juggernaut. Amazon says it expects to begin offering high-speed, low-latency internet service ‘later this year.'”

Science

Bonobos’ Calls May Be the Closest Thing to Animal Language We’ve SeenJacek Krywko | Ars Technica

“A team of Swiss scientists led by Melissa Berthet, an evolutionary anthropologist at the University of Zurich, discovered bonobos can combine [vocal calls including peeps, hoots, yelps, grunts, and whistles] into larger semantic structures. In these communications, meaning is something more than just a sum of individual calls—a trait known as non-trivial compositionality, which we once thought was uniquely human.”

Artificial Intelligence

DeepMind Is Holding Back Release of AI Research to Give Google an EdgeMelissa Heikkilä and Stephen Morris | Ars Technica

“Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or cast Google’s own Gemini AI model in a negative light compared with others. The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and as a home for the best scientists building AI.”

The post This Week’s Awesome Tech Stories From Around the Web (Through April 5) appeared first on SingularityHub.

Kategorie: Transhumanismus

World’s Tiniest Pacemaker Is Smaller Than a Grain of Rice

Singularity HUB - 4 Duben, 2025 - 19:35

The device fits in a syringe and melts away after use.

Scientists just unveiled the world’s tiniest pacemaker. Smaller than a grain of rice and controlled by light shone through the skin, the pacemaker generates power and squeezes the heart’s muscles after injection through a stent.

The device showed it could steadily orchestrate healthy heart rhythms in rat, dog, and human hearts in a newly published study. It’s also biocompatible and eventually broken down by the body after temporary use. Over 23 times smaller than previous bioabsorbable pacemakers, the device opens the door to minimally invasive implants that wirelessly monitor heart health after extensive surgery or other heart problems.

“The extremely small sizes of these devices enable minimally invasive implantation,” the authors, led by John Rogers at Northwestern University, wrote. Paired with a wireless controller on the skin’s surface, the system automatically detected irregular heartbeats and targeted electrical zaps to different regions of the heart.

The device could especially benefit babies who need smaller hardware to monitor their hearts. Although specifically designed for the heart, a similar setup could be adapted to manage pain, heal wounds, or potentially regenerate nerves and bones.

Achy Breaky Heart

The heart is a wonder of biomechanics.

Over a person’s lifetime, its four chambers reliably pump blood rich in oxygen and nutrients through the body. Some chambers cleanse blood of carbon dioxide—a waste product of cell metabolism—and infuse it with oxygen from the lungs. Others push nutrient-rich blood back out to the rest of the body.

But like parts in a machine, heart muscles eventually wear down with age or trauma. Unlike skin cells, the heart can’t easily regenerate. Over time, its muscles become stiff, and after an injury—say, a heart attack—scar tissue replaces functional cells.

That’s a problem when it comes to keeping the heart pumping in a steady rhythm.

Each chamber contracts and releases in an intricate biological dance orchestrated by an electrical flow. Any glitches in these signals can cause heart muscles to squeeze chaotically, too rapidly or completely off beat. Deadly problems, such as atrial fibrillation, can result. Even worse, blood can pool inside individual chambers and increase the risk of blood clots. If these are dislodged, they could travel to the brain and trigger a stroke.

Risks are especially high after heart surgery. To lower the chances of complications, surgeons often implant temporary pacemakers for days or weeks as the organ recovers.

These devices are usually made up of two components.

The first of these is a system that detects and generates electrical zaps. It generally requires a power supply and control units to fine-tune the stimulation. The other bit “is kinda the business end,” study author John Rogers told Nature. This part delivers electrical pulses to the heart muscles, directing them to contract or relax.

The setup is a wiring nightmare, with wires to detect heart rhythm threading through the skin. “You have wires designed to monitor cardiac function, but it becomes a somewhat clumsy collection of hardware that’s cumbersome for the patient,” said Rogers.

These temporary pacemakers are “essential life-saving technologies,” wrote the team. But most devices need open-heart surgery to implant and remove, which increases the risk of infection and additional damage to an already fragile organ. The procedure is especially difficult for babies or younger patients because they’re so small and grow faster.

Heart surgeons inspired the project with their vision of a “fully implantable, wirelessly controlled temporary pacemaker that would just melt away inside the body after it’s no longer needed,” said Rogers.

A Steady Beat

An ideal pacemaker should be small, biocompatible, and easily controllable. Easy delivery and multiplexing—that is, having multiple units to control heartbeat—are a bonus.

The new device delivers.

It’s made of biocompatible material that’s eventually broken down and dispelled by the body without the need for surgical removal. It has two small pieces of metal somewhat similar to the terminals of a battery. Normally, the implant doesn’t conduct electricity. But once implanted, natural fluids from heart cells form a liquid “bridge” that completes the electrical circuit when activated, transforming the device into both a self-powered battery and a generator to stimulate heart muscles. A Bluetooth module connects the implant with a soft “receiver” patch on the skin to wirelessly capture electrical signals from the heart for analysis.

Controlling the heart’s rhythm took more engineering. Each heart chamber needs to pump in a coordinated sequence for blood to properly flow. Here, the team used an infrared light switch to turn the implant on and off. This wavelength of light can penetrate skin, muscle, and bone, making it a powerful way to precisely control organs or tools that operate on electrical signals.

Although jam-packed with hardware, the final implant is roughly the size of a sesame seed. It’s “more than 23 times smaller than any bioresorbable alternative,” wrote the team.

Flashing infrared LED lights placed on the skin above the pacemaker turn the device on. Different infrared frequencies pace the heartbeat.

The team first tested their device in isolated pig and donated human hearts. After it was implanted by injection through a stent, the device worked reliably in multiple heart chambers, delivering the same amount of stimulation as a standard pacemaker.

They also tested the device in hound dogs, whose hearts are similar in shape, size, and electrical workings to ours. A tiny cut was enough to implant and position multiple pacemakers at different locations on the heart, where they could be controlled individually. The team used light to fine-tune heart rate and rhythm, changing the contraction of two heart chambers to pump and release blood in a natural beat.

“Because the devices are so small, you can pace the heart in very sophisticated ways that rely not just on a single pacemaker, but a multiplicity of them,” said Rogers. “[This] offers a greater control over the cardiac cycle than would be possible with a single pacemaker.”

Device Sprinkles

The team envisions that the finished device will be relatively off-the-shelf. Put together, a sensor monitors problematic heart rhythms from the skin’s surface, restores normal activity with light pulses, and includes an interface to visualize the process for users. The materials are safe for the human body—some are even recommended as part of a daily diet or added to vitamin supplements—and components largely dissolve after 9 to 12 months.

The devices aren’t specifically designed for the heart. They could also stimulate nerve and bone regeneration, heal wounds, or manage pain through electrical stimulation. “You could sprinkle them around…do a dozen of these things…each one controlled by a different wavelength [of light],” said Rogers.

The post World’s Tiniest Pacemaker Is Smaller Than a Grain of Rice appeared first on SingularityHub.

Kategorie: Transhumanismus

These Solar Cells Are Made of Moon Dust. They Could Power Future Lunar Colonies.

Singularity HUB - 4 Duben, 2025 - 00:30

Combining “moonglass” with just two pounds of perovskite from Earth would yield 4,300 square feet of solar panels.

NASA’s plan to establish a permanent human presence on the moon will require making better use of lunar resources. A new approach has now shown how to make solar cells out of moon dust.

Later this decade, the US space agency’s Artemis III mission plans to return astronauts to the moon for the first time in more than half a century. The long-term goal of the Artemis program is to establish a permanent human presence on our nearest celestial neighbor.

But building and supplying such a base means launching huge amounts of material into orbit at great cost. That’s why NASA and other space agencies interested in establishing a presence on the moon are exploring “in-situ resource utilization”—that is, exploiting the resources already there.

Moon dust, or regolith, has been widely touted as a potential building material, while ice in the moon’s shadowy craters could be harvested for drinking water or split into oxygen and hydrogen that can be used for air in habitats or as rocket fuel.

Now, researchers at the University of Potsdam, Germany, have found a way to turn a simulated version of lunar regolith into glass for solar cells—the most obvious way to power a moon base. They say this could dramatically reduce the amount of material that would have to be hauled to the moon to set up a permanent settlement.

“From extracting water for fuel to building houses with lunar bricks, scientists have been finding ways to use moon dust,” lead researcher Felix Lang said in a press release. “Now, we can turn it into solar cells too, possibly providing the energy a future moon city will need.”

To test out their approach, the researchers used an artificial mixture of minerals designed to replicate the soil found in the moon’s highlands. Crucially, their approach doesn’t require any complex mining or purification equipment. The regolith simply needs to be melted and then cooled gradually to create sheets of what the researchers refer to as “moonglass.”

In their experiments, reported in the journal Device, the researchers used an electric furnace to heat the dust to around 2,800 degrees Fahrenheit. They say these kinds of temperatures could be achieved on the moon by using mirrors or lenses to concentrate sunlight.

They then deposited an ultrathin layer of a material called halide perovskite—which has emerged as a cheap and powerful alternative to silicon in solar cells—onto the moonglass. This material would have to be carried from Earth, but the researchers estimate that a little more than two pounds of it would be enough to fabricate 4,300 square feet of solar panels.

The team tested out several solar-cell designs, achieving efficiencies between 9.4 and 12.1 percent. That’s significantly less than the 30 to 40 percent that the most advanced space solar cells achieve, the researchers concede. But the lower efficiency would be more than offset by the massive savings in launch costs missions might realize by making the bulkiest parts of the solar cell on site.

“If you cut the weight by 99 percent, you don’t need ultra-efficient 30 percent solar cells, you just make more of them on the moon,” says Lang.
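A rough back-of-the-envelope calculation shows why that trade can pay off. The sketch below assumes illustrative panel masses per square meter and a solar flux of roughly 1,361 watts per square meter; only the efficiencies and the roughly 99 percent launch-mass reduction come from the article.

```python
# Back-of-the-envelope sketch of the trade-off described above: lower-efficiency
# moonglass cells vs. high-efficiency cells hauled from Earth. The per-square-meter
# launched masses are illustrative assumptions; the efficiencies and ~99 percent
# mass reduction are the figures quoted in the article.
SOLAR_FLUX = 1361.0                # W per square meter above the atmosphere (approx.)

def launch_mass_per_kw(efficiency, launched_kg_per_m2):
    """Kilograms that must be launched from Earth per kilowatt generated."""
    watts_per_m2 = SOLAR_FLUX * efficiency
    return launched_kg_per_m2 / (watts_per_m2 / 1000.0)

earth_made = launch_mass_per_kw(efficiency=0.35, launched_kg_per_m2=2.0)   # assumed panel mass
moon_made  = launch_mass_per_kw(efficiency=0.12, launched_kg_per_m2=0.02)  # ~99% lighter to launch

print(f"launch mass per kW, Earth-made panels: {earth_made:.1f} kg")
print(f"launch mass per kW, moonglass panels:  {moon_made:.2f} kg")
```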

The moonglass the researchers created also has a natural brownish tint that helps protect it against radiation, a major issue on the moon’s surface. They also note that halide perovskites tolerate relatively high levels of impurities and defects, which makes them well-suited to the less than perfect fabrication setups likely to be found on the moon.

The moon’s low gravity and wild temperature swings could play havoc with their fabrication process and the stability of the resulting solar cells, the researchers admit. That’s why they’re hoping to send a small-scale experiment to the moon to test the idea out in real conditions.

While the approach is probably at too early a stage to impact NASA’s upcoming moon missions, it could prove a valuable tool as we scale up our presence beyond Earth orbit.

The post These Solar Cells Are Made of Moon Dust. They Could Power Future Lunar Colonies. appeared first on SingularityHub.

Kategorie: Transhumanismus

NASA’s Curiosity Rover Has Made a Significant Discovery in the Search for Alien Life

Singularity HUB - 1 Duben, 2025 - 16:00

It’s an exciting time in the search for life on Mars.

NASA’s Curiosity Mars rover has detected the largest organic (carbon-containing) molecules ever found on the red planet. The discovery is one of the most significant findings in the search for evidence of past life on Mars. This is because, on Earth at least, relatively complex, long-chain carbon molecules are involved in biology. These molecules could actually be fragments of fatty acids, which are found in, for example, the membranes surrounding biological cells.

Scientists think that if life ever emerged on Mars it was probably microbial in nature. Because microbes are so small, it’s difficult to be definitive about any potential evidence for life found on Mars. Such evidence needs more powerful scientific instruments that are too large to be put on a rover.

The organic molecules found by Curiosity consist of carbon atoms linked in long chains, with other elements bonded to them, like hydrogen and oxygen. They come from a 3.7-billion-year-old rock dubbed Cumberland, encountered by the rover at a presumed dried-up lakebed in Mars’s Gale Crater. Scientists used the Sample Analysis at Mars (Sam) instrument on the NASA rover to make their discovery.

Scientists were actually looking for evidence of amino acids, which are the building blocks of proteins and therefore key components of life as we know it. But this unexpected finding is almost as exciting. The research is published in the Proceedings of the National Academy of Sciences.

Among the molecules were decane, which has 10 carbon atoms and 22 hydrogen atoms, and dodecane, with 12 carbons and 26 hydrogen atoms. These are known as alkanes, which fall under the umbrella of the chemical compounds known as hydrocarbons.
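Those hydrogen counts follow the general formula for straight-chain alkanes, CnH2n+2, as this quick check confirms:

```python
# Quick check of the alkane formula CnH(2n+2) against the counts quoted above.
def alkane_hydrogens(n_carbons: int) -> int:
    """Hydrogen count for a straight-chain alkane with n carbon atoms."""
    return 2 * n_carbons + 2

for name, carbons in [("decane", 10), ("dodecane", 12)]:
    print(f"{name}: C{carbons}H{alkane_hydrogens(carbons)}")  # C10H22, C12H26
```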

It’s an exciting time in the search for life on Mars. In March this year, scientists presented evidence of features in a different rock sampled elsewhere on Mars by the Perseverance rover. These features, dubbed “leopard spots” and “poppy seeds,” could have been produced by the action of microbial life in the distant past, or not. The findings were presented at a US conference and have not yet been published in a peer reviewed journal.

The Mars Sample Return mission, a collaboration between NASA and the European Space Agency, offers hope that samples of rock collected and stored by Perseverance could be brought to Earth for study in laboratories. The powerful instruments available in terrestrial labs could finally confirm whether or not there is clear evidence for past life on Mars. However, in 2023, an independent review board criticized increases in Mars Sample Return’s budget. This prompted the agencies to rethink how the mission could be carried out. They are currently studying two revised options.

Signs of Life?

Cumberland was found in a region of Gale Crater called Yellowknife Bay. This area contains rock formations that look suspiciously like those formed when sediment builds up at the bottom of a lake. One of Curiosity’s scientific goals is to examine the prospect that past conditions on Mars would have been suitable for the development of life, so an ancient lakebed is the perfect place to look for them.

The Martian rock known as Cumberland, which was sampled in the study. NASA/JPL-Caltech/MSSS

The researchers think that the alkane molecules may once have been components of more complex fatty acid molecules. On Earth, fatty acids are components of fats and oils. They are produced through biological activity in processes that help form cell membranes, for example. The suggested presence of fatty acids in this rock sample has been around for several years, but the new paper details the full evidence.

Fatty acids are long, linear hydrocarbon molecules with a carboxyl group (COOH) at one end and a methyl group (CH3) at the other, forming a chain of carbon and hydrogen atoms.

A fat molecule consists of two main components: glycerol and fatty acids. Glycerol is an alcohol molecule with three carbon atoms, five hydrogens, and three hydroxyl (chemically bonded oxygen and hydrogen, OH) groups. Fatty acids may have 4 to 36 carbon atoms; however, most of them have 12-18. The longest carbon chains found in Cumberland are 12 atoms long.

Mars Sample Return will deliver Mars rocks to Earth for study. This artist’s impression shows the ascent vehicle leaving Mars with rock samples. NASA/JPL-Caltech

Organic molecules preserved in ancient Martian rocks provide a critical record of the past habitability of Mars and could be chemical biosignatures (signs that life was once there).

The sample from Cumberland has been analyzed by the Sam instrument many times, using different experimental techniques, and has shown evidence of clay minerals, as well as the first (smaller and simpler) organic molecules found on Mars, back in 2015. These included several classes of chlorinated and sulphur-containing organic compounds in Gale Crater sedimentary rocks, with chemical structures of up to six carbon atoms. The new discovery doubles the number of carbon atoms found in a single molecule on Mars.

The alkane molecules are significant in the search for biosignatures on Mars, but how they actually formed remains unclear. They could also be derived through geological or other chemical mechanisms that do not involve fatty acids or life. These are known as abiotic sources. However, the fact that they exist intact today in samples that have been exposed to a harsh environment for many millions of years gives astrobiologists (scientists who study the possibility of life beyond Earth) hope that evidence of ancient life might still be detectable today.

It is possible the sample contains even longer chain organic molecules. It may also contain more complex molecules that are indicative of life, rather than geological processes. Unfortunately, Sam is not capable of detecting those, so the next step is to deliver Martian rock and soil to more capable laboratories on the Earth. Mars Sample Return would do this with the samples already gathered by the Perseverance Mars rover. All that’s needed now is the budget.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post NASA’s Curiosity Rover Has Made a Significant Discovery in the Search for Alien Life appeared first on SingularityHub.

Kategorie: Transhumanismus

Brain Implant ‘Streams’ a Paralyzed Woman’s Thoughts in Near Real Time

Singularity HUB - 1 Duben, 2025 - 00:04

The system, which also synthesizes her voice, takes no more than a second to translate thoughts to speech.

A paralyzed woman can again communicate with the outside world thanks to a wafer-thin disk capturing speech signals in her brain. An AI translates these electrical buzzes into text and, using recordings taken before she lost the ability to speak, synthesizes speech with her own voice.

It’s not the first brain implant to give a paralyzed person their voice back. But previous setups had long lag times. Some required as much as 20 seconds to translate thoughts into speech. The new system, called a streaming speech neuroprosthetic, takes just a second.

“Speech delays longer than a few seconds can disrupt the natural flow of conversation,” the team wrote in a paper published in Nature Neuroscience today. “This makes it difficult for individuals with paralysis to participate in meaningful dialogue, potentially leading to feelings of isolation and frustration.”

On average, the AI can translate about 47 words per minute, with some trials hitting nearly double that pace. The team initially trained the algorithm on 1,024 words, but it eventually learned to decode other words with lower accuracy based on the woman’s brain signals.

The algorithm showed some flexibility too, decoding electrical signals collected from two other types of hardware and using data from other people.

“Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” study author Gopala Anumanchipalli at the University of California, Berkeley, said in a press release. “The result is more naturalistic, fluent speech synthesis.”

Bridging the Gap

Losing the ability to communicate is devastating.

Some solutions for people with paralysis already exist. One of these uses head or eye movements to control a digital keyboard where users type out their thoughts. More advanced options can translate text into speech in a selection of voices (though not usually a user’s own).

But these systems experience delays of over 20 seconds, making natural conversation difficult.

Ann, the participant in the new study, uses such a device daily. She was barely middle-aged when a stroke severed the neural connections between her brain and the muscles that control her ability to speak. These include muscles in her vocal cords, lips, and tongue and those that generate airflow to differentiate sounds, like the breathy “think” versus a throaty “umm.”

Electrical signals from the outermost part of the brain, called the cortex, direct these muscle movements. By intercepting their communications, devices can potentially decode a person’s intention to speak and even translate signals into comprehensible words and sentences. The signals are hard to decipher, but thanks to AI, scientists have begun making sense of them.

In 2023, the same team developed a brain implant to transform brain signals into text, speech, and an avatar mimicking a person’s facial expressions. The implant sat on top of the brain, causing less damage than surgically inserted implants, and its AI translated neural signals into text at roughly 78 words per minute—about half the rate at which most people tend to speak.

Meanwhile, another team used tiny electrodes implanted directly in the brain to translate 125,000 words into text at a similar speed. A more recent implant with a similarly sized vocabulary allowed a participant to communicate for eight months with nearly perfect accuracy.

These studies “have shown impressive advances in vocabulary size, decoding speeds, and accuracy of text decoding,” wrote the team. But they all suffer a similar problem: Lag time.

Streaming Brain Signals

Ann had a paper-like electrode array implanted on the surface of brain regions responsible for speech. The implant didn’t read her thoughts per se. Rather, it captured signals controlling how vocal cords, the tongue, and other muscles move when verbalizing words. A cable connecting the device to a small port fixed on her skull sent brain signals to computers for decoding.

The implant’s AI was a three-part deep learning system, a type of algorithm that roughly mimics how biological brains work. The first part decoded neural signals in real-time. Others controlled text and speech outputs using a language model, so Ann could read and hear the device’s output.

To train the AI, Ann imagined verbalizing 1,024 words in short sentences. Although she couldn’t physically move her muscles, her brain still generated neural signals as if she was speaking—so-called “silent speech.” The AI converted this data into text on a computer screen and speech.

The team “used Ann’s pre-injury voice, so when we decode the output, it sounds more like her,” study author Cheol Jun Cho said in the press release.

After further training that included over 23,000 attempts at silent speech, the AI learned to translate at a pace of roughly 47 words per minute with minimal lag—averaging just a second delay. This is “significantly faster” than older setups, wrote the team.

The speed boost is because the AI processes smaller chunks of neural activity in real time. When given a sentence for the patient to imagine vocalizing—for example, “what did you say to her?”—the system generated both text and vocals with minimal error. Other sentences didn’t fare as well. A prompt of “I just got here” translated to “I’ve said to stash it” in one test.
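The streaming control flow itself is simple to sketch: rather than buffering a whole sentence of neural data, the decoder consumes short chunks as they arrive and emits words as soon as they can be read out. In the toy example below, the chunk length, feature count, and the stand-in “decoder” are invented placeholders, not the study’s model.

```python
# Minimal sketch of the streaming idea: decode small, fixed-size chunks of
# neural activity as they arrive and emit words incrementally, instead of
# waiting for a full sentence. All sizes and the "decoder" are placeholders.
from collections import deque
from typing import Iterable, List

import numpy as np

CHUNK_STEPS = 80       # assumed time steps of neural activity decoded per chunk
FEATURES = 253         # assumed number of electrode features per time step

def fake_neural_stream(n_chunks: int) -> Iterable[np.ndarray]:
    """Stand-in for the implant: yields one chunk of activity at a time."""
    rng = np.random.default_rng(0)
    for _ in range(n_chunks):
        yield rng.standard_normal((CHUNK_STEPS, FEATURES))

def decode_chunk(chunk: np.ndarray, context: deque) -> List[str]:
    """Stand-in for the neural decoder plus language model.

    A real system would map each chunk to phoneme or word probabilities and let
    a language model pick the likeliest continuation; here we just emit a token
    now and then so the streaming control flow is visible.
    """
    context.append(chunk.mean())               # keep a short rolling context
    return ["word"] if len(context) % 5 == 0 else []

context: deque = deque(maxlen=50)
transcript: List[str] = []
for chunk in fake_neural_stream(n_chunks=25):   # chunks arrive in near real time
    transcript.extend(decode_chunk(chunk, context))
    # text (and synthesized audio) can be presented as soon as each word appears

print(" ".join(transcript))
```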

Long Road Ahead

Prior work mostly evaluated speech prosthetics by their ability to generate short phrases or sentences of just a few seconds. But people naturally start and stop in conversation, requiring an AI to detect an intent to speak over longer periods of time. The AI should “ideally generalize” speech “over several minutes or hours rather than several seconds,” wrote the team.

To accomplish this, they also fed the AI long stretches of brain activity when Ann was not trying to talk, intermixed with those when she was. The AI picked up on the difference—mirroring her intentions of when to speak and when to remain silent.

There’s room for improvement. Roughly half of the decoded words in longer conversations were off the mark. But the setup is a step toward natural communication in everyday life.

Different implants could also benefit from the team’s algorithm.

In another test, they analyzed two separate datasets, one collected from a paralyzed person with electrodes inserted into their brain and another from a healthy volunteer with electrodes placed over their vocal cords. Both could “silent speak” during training and testing. The AI made plenty of mistakes but detected intended speech in near real-time above random chance.

“By demonstrating accurate brain-to-voice synthesis on other silent-speech datasets, we showed that this technique is not limited to one specific type of device,” said study author Kaylo Littlejohn in the release.

Implants with more electrodes to better capture brain activity could improve performance. The team also plans to build emotion into the voice generator to reflect a user’s tone, pitch, and loudness.

In the meantime, Ann is happy with her implant. “Hearing her own voice in near-real time increased her sense of embodiment,” said Anumanchipalli.

The post Brain Implant ‘Streams’ a Paralyzed Woman’s Thoughts in Near Real Time appeared first on SingularityHub.

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through March 29)

Singularity HUB - 29 Březen, 2025 - 16:00
Future

If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be BornSteven Levy | Wired

“The vision [Dario Amodei] spins makes Shangri-La look like a slum. Not long from now, maybe even in 2026, Anthropic or someone else will reach AGI. Models will outsmart Nobel Prize winners. These models will control objects in the real world and may even design their own custom computers. Millions of copies of the models will work together—imagine an entire nation of geniuses in a data center!”

Tech

Move Over, OpenAI: How the Startup Behind Cursor Became the Hottest, Vibiest Thing in AINatasha Mascarenhas | The Information

“[Anysphere’s $200 million in annual revenue is] an astonishing amount considering that Cursor’s launch came in January 2023. It all adds up to a stunning reality: Anysphere is one of the fastest-growing startups ever, and what Truell and his co-founders have built is a bona fide AI rocket ship with a trajectory that stands out even among other AI startups hurtling into the stratosphere.”

Computing

How Extropic Plans to Unseat NvidiaWill Knight | Wired

“Extropic has now shared more details of its probabilistic hardware with Wired, as well as results that show it is on track to build something that could indeed offer an alternative to conventional silicon in many datacenters. The company aims to deliver a chip that is three to four orders of magnitude more efficient than today’s hardware, a feat that would make a sizable dent in future emissions.”

Computing

Could Nvidia’s Revolutionary Optical Switch Transform AI Data Centers Forever?Samuel K. Moore | IEEE Spectrum

“According to Nvidia, adopting the CPO switches in a new AI data center would lead to one-fourth the number of lasers, boost power efficiency for trafficking data 3.5-fold, improve the reliability of signals making it from one computer to another on time by 63-times, make networks 10-fold more resilient to disruptions, and allow customers to deploy new data center hardware 30 percent faster.”

Artificial Intelligence

A New, Challenging AGI Test Stumps Most AI ModelsMaxwell Zeff | TechCrunch

“‘Reasoning’ AI models like OpenAI’s o1-pro and DeepSeek’s R1 score between 1% and 1.3% on ARC-AGI-2, according to the Arc Prize leaderboard. Powerful non-reasoning models, including GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash, score around 1%.”

Computing

The Quantum Apocalypse Is Coming. Be Very AfraidAmit Katwala | Wired

“One day soon, at a research lab near Santa Barbara or Seattle or a secret facility in the Chinese mountains, it will begin: the sudden unlocking of the world’s secrets. Your secrets. Cybersecurity analysts call this Q-Day—the day someone builds a quantum computer that can crack the most widely used forms of encryption.”

Biotechnology

How a Bankruptcy Judge Can Stop a Genetic Privacy DisasterKeith Porcaro | MIT Technology Review

“Any new owner of 23AndMe’s data will want to find ways to make money from it. Lawmakers have a big opportunity to help keep it safe. …A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted.”

Space

Just One Exo-Earth Pixel Can Reveal Continents, Oceans, and More
Ethan Siegel | Big Think

“In the coming years and decades, several ambitious projects will reach completion, finally giving humanity the capability to image Earth-size planets at Earth-like distances around Sun-like stars. …Remarkably, even though these exo-Earths will appear as just one lonely pixel in our detectors, we can use that data to detect continents, oceans, icecaps, forests, deserts, and more.”

Future

Does All Intelligent Life Face a Great Filter?
Paul Sutter | Ars Technica

“Maybe we’re alone because essentially nobody ever makes it. Maybe there’s some unavoidable barrier between the origin of intelligent life and said life setting off to explore the galaxy. The position of this Great Filter, as [economist Robin Hanson] named it, is critically important as we contemplate the future of humanity.”

Science

Inside arXiv—the Most Transformative Platform in All of Science
Sheon Han | Wired

“arXiv’s unassuming facade belies the tectonic reconfiguration it set off in the scientific community. If arXiv were to stop functioning, scientists from every corner of the planet would suffer an immediate and profound disruption. ‘Everybody in math and physics uses it,’ Scott Aaronson, a computer scientist at the University of Texas at Austin, told me. ‘I scan it every night.’”

Space

Farewell to Gaia, the Milky Way’s Cartographer
Katrina Miller | The New York Times

“It is difficult to capture the breadth of development and discovery that the spinning observatory has enabled. But here are a few numbers: nearly two billion stars, millions of potential galaxies and some 150,000 asteroids. These observations have led to more than 13,000 studies, so far, by astronomers.”

The post This Week’s Awesome Tech Stories From Around the Web (Through March 29) appeared first on SingularityHub.

Kategorie: Transhumanismus

What Anthropic Researchers Found After Reading Claude’s ‘Mind’ Surprised Them

Singularity HUB - 28 Březen, 2025 - 23:47

As AI’s power grows, charting its inner world is becoming more crucial.

Despite popular analogies to thinking and reasoning, we have a very limited understanding of what goes on in an AI’s “mind.” New research from Anthropic helps pull the veil back a little further.

Tracing how large language models generate seemingly intelligent behavior could help us build even more powerful systems—but it could also be crucial for understanding how to control and direct those systems as they approach and even surpass our capabilities.

This is challenging. Older computer programs were hand-coded using logical rules. But neural networks learn skills on their own, and the way they represent what they’ve learned is notoriously difficult to parse, leading people to refer to the models as “black boxes.”

Progress is being made though, and Anthropic is leading the charge.

Last year, the company showed that it could link activity within a large language model to both concrete and abstract concepts. In a pair of new papers, it’s demonstrated that it can now trace how the models link these concepts together to drive decision-making and has used this technique to analyze how the model behaves on certain key tasks.

“These findings aren’t just scientifically interesting—they represent significant progress towards our goal of understanding AI systems and making sure they’re reliable,” the researchers write in a blog post outlining the results.

The Anthropic team carried out their research on the company’s Claude 3.5 Haiku model, its smallest offering. In the first paper, they trained a “replacement model” that mimics the way Haiku works but replaces internal features with ones that are more easily interpretable.

The team then fed this replacement model various prompts and traced how it linked concepts into the “circuits” that determined the model’s response. To do this, they measured how various features in the model influenced each other as it worked through a problem. This allowed them to detect intermediate “thinking” steps and how the model combined concepts into a final output.
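
To make that idea concrete, here is a minimal, purely illustrative Python sketch of tracing influence between interpretable features. It is not Anthropic’s actual attribution method: the feature labels, activations, and weights are invented, and a real replacement model involves vast numbers of automatically learned sparse features rather than a handful of hand-picked ones.

```python
# Illustrative toy only: estimate how strongly one interpretable feature drives
# another by multiplying the upstream feature's activation with the weight
# linking the two in a simplified "replacement model." Everything here is invented.
import numpy as np

upstream   = ["concept: Texas", "concept: state capital", "concept: population"]
downstream = ["output: 'Austin'", "output: 'Houston'"]

# Hypothetical sparse activations of the upstream features for a single prompt.
acts = np.array([1.8, 2.1, 0.0])

# Hypothetical linear weights connecting upstream to downstream features.
weights = np.array([
    [0.9,  0.4],   # "Texas" pushes toward both Texan city outputs
    [1.1, -0.3],   # "state capital" favors "Austin" and suppresses "Houston"
    [0.2,  0.8],   # "population" would favor "Houston", but it is inactive here
])

# Influence of upstream feature i on downstream feature j ~ activation_i * weight_ij.
influence = acts[:, None] * weights

# Keep only the strongest edges; these form the prompt's small "circuit."
for i, up in enumerate(upstream):
    for j, down in enumerate(downstream):
        if abs(influence[i, j]) > 1.0:
            print(f"{up} --({influence[i, j]:+.2f})--> {down}")
```

Edges like these, assembled across many features and layers, are what the researchers read as the intermediate “thinking” steps behind a single response.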

In a second paper, the researchers used this approach to interrogate how the same model behaved when faced with a variety of tasks, including multi-step reasoning, producing poetry, carrying out medical diagnoses, and doing math. What they found was both surprising and illuminating.

Most large language models can reply in multiple languages, but the researchers wanted to know what language the model uses “in its head.” They discovered that, in fact, the model has language-independent features for various concepts and sometimes links these together first before selecting a language to use.

Another question the researchers wanted to probe was the common conception that large language models work by simply predicting what the next word in a sentence should be. However, when the team prompted their model to generate the next line in a poem, they found the model actually chose a rhyming word for the end of the line first and worked backwards from there. This suggests these models do conduct a kind of longer-term planning, the researchers say.
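
As a loose illustration of what planning the ending first means, the toy function below (invented for illustration, not drawn from the research) commits to a rhyme for the previous line’s last word before writing anything else, then fills in a line that leads up to it.

```python
# Toy sketch of "plan the ending first" (invented; not Claude's internals).
RHYMES = {"light": "night", "day": "way", "moon": "soon"}

def next_line(prev_line: str) -> str:
    last_word = prev_line.rstrip(".,!?").rsplit(maxsplit=1)[-1].lower()
    ending = RHYMES.get(last_word, "again")   # step 1: commit to the rhyme word
    # Step 2: write the rest of the line so it arrives at that ending.
    return f"and she wandered home through the {ending}"

print(next_line("The lantern flickered in the fading light"))
# -> and she wandered home through the night
```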

The team also investigated another little-understood behavior in large language models called “unfaithful reasoning.” There is evidence that when asked to explain how they reach a decision, models will sometimes provide plausible explanations that don’t match the steps they actually took.

To explore this, the researchers asked the model to add two numbers together and explain how it reached its conclusions. They found the model used an unusual approach of combining approximate values and then working out what number the result must end in to refine its answer.

However, when asked to explain how it came up with the result, it claimed to have used a completely different approach—the kind you would learn in math class and that is readily available online. The researchers say this suggests the process by which the model learns to do things is separate from the process it uses to explain itself, which could have implications for efforts to ensure machines are trustworthy and behave the way we want them to.
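
For intuition, here is a toy Python re-creation of the two-track strategy described above. It is not the model’s actual computation: the fuzzy estimate is simulated with random noise, and the point is only that a rough sense of the total plus an exact final digit is enough to pin down the answer.

```python
import random

def fuzzy_estimate(a: int, b: int) -> float:
    # Stand-in for the imprecise "magnitude" pathway: the true sum plus a small error.
    return a + b + random.uniform(-4, 4)

def two_track_add(a: int, b: int) -> int:
    estimate = fuzzy_estimate(a, b)          # track 1: roughly how big the sum is
    last_digit = (a % 10 + b % 10) % 10      # track 2: exactly what digit it ends in
    # Combine: of the numbers ending in last_digit near the estimate, take the closest.
    base = int(estimate) // 10 * 10
    candidates = [base - 10 + last_digit, base + last_digit, base + 10 + last_digit]
    return min(candidates, key=lambda c: abs(c - estimate))

print(two_track_add(36, 59))  # 95, recovered from a fuzzy total and an exact ones digit
```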

The researchers caution that the method captures only a fuzzy and incomplete picture of what’s going on under the hood, and that it can take hours of human effort to trace the circuit for a single prompt. But these kinds of capabilities will become increasingly important as systems like Claude are integrated into all walks of life.

The post What Anthropic Researchers Found After Reading Claude’s ‘Mind’ Surprised Them appeared first on SingularityHub.

Kategorie: Transhumanismus

Scientists Just Transplanted a Pig Liver Into a Person for the First Time

Singularity HUB - 27 Březen, 2025 - 22:12

The liver performed basic functions but isn’t yet a full replacement.

Our liver has admirable regenerative properties. But it takes a beating every day. Eventually, its tissues scar, and if the organ fails, a liver transplant is the only solution.

Donor livers are hard to come by, however. This week, a Chinese team turned to another source—pig livers—and published the first results showing how they function inside a human recipient. The liver in the study underwent heavy gene editing to remove genes that trigger immune rejection and to add genes that make the organ appear more human to the body.

Just two hours after transplant, the pig liver began producing bile, a type of digestive fluid that breaks down fat. The organ remained functional until the end of the experiment 10 days later, without marked signs of rejection or inflammation.

“This is the first time we tried to unravel whether the pig liver could work well in the human body,” said study author Lin Wang at Xijing Hospital in China in a press briefing. The pig liver is meant to be a stop-gap measure rather than a full replacement. It could temporarily keep patients alive until a human donor organ becomes available or the patient’s own liver recovers.

“The study represents a milestone in the history of liver xenotransplantation,” said Iván Fernández Vega at the University of Oviedo in Spain, who was not involved in the study. “I found the work very relevant, but we have to be cautious.”

Crossing Species

There’s a severe lack of donated organs. As of March 2025, over 104,600 people are on transplant waitlists, and the wait can take months, if not years. Some don’t survive it.

Xenotransplantation, or the transplantation of organs from one animal into another, offers another solution. For the past decade, scientists have been eyeing other species as sources of functional organs that could replace broken human body parts. Bama miniature pigs are especially promising because their internal organs are similar in size and function to ours.

But there are caveats. Pig organs are dotted with sugars that spur our immune systems into action. Immune cells attack the foreign organ, damaging its function or triggering rejection.

There’s also the risk posed by porcine endogenous retroviruses, or PERVs. These are tricky viruses embedded in the genomes of all pigs. Although they don’t seem to harm pigs, they can infect some human cells and potentially lead to disease.

Xenotransplant efforts over the past decade have tried gene editing pig organs to rid them of PERVs. Other edits inhibit genes responsible for immune rejection and make the organs appear more human to the body.

There have been successes. Genetically engineered pig hearts transplanted into baboons with heart failure allowed them to thrive for over six months. Pig kidney grafts with 69 genetic edits retained function after transplantation in monkeys.

And although highly experimental, xenotransplantation has already been used in humans. In 2021, a team performed the first transplant of a genetically modified pig kidney into a brain-dead person. The kidney was attached to blood vessels in the upper leg outside the belly and covered with a protective shield.

Since then, surgeons have transplanted hearts, kidneys, and a thymus directly into the bodies of living volunteers, with mixed results. One pig heart recipient passed away soon after the xenotransplant. Another fared better with a pig kidney: The 53-year-old grandma returned home this February after receiving the organ late last year.

Her “recovery from a long history of kidney failure and dialysis treatment has been nothing short of remarkable,” said study lead Robert Montgomery at NYU Langone Transplant Institute at the time.

Liver xenotransplants, however, pose additional problems.

The organ “is so complicated,” said Wang. As the ultimate multitasker, it metabolizes drugs and other chemicals, makes bile and other digestive juices, cleans out old blood cells, and produces proteins for blood clotting. Each of these functions is orchestrated by a symphony of molecules that could differ between pigs and humans. A mismatch could result in a pig liver that can’t work in the human body or one that triggers dangerous immune responses.

In 2023, a team from the University of Pennsylvania took a stab at the problem. They connected a genetically engineered pig liver to the bloodstream of a brain-dead person with the organ outside the body. The donor liver, engineered by the biotechnology company eGenesis to reduce the chance of immune rejection, remained healthy for at least 72 hours.

Plus One

The new study aimed to show that a pig liver transplant could last longer and perform its usual tasks. The team sourced the liver from Clonorgan Biotechnology, based in Chengdu, China.

The donor organ was from a seven-month-old Bama miniature pig and had six gene edits. The majority of the edits were designed to prevent hyperacute rejection, where the immune system launches a full onslaught against the transplant within minutes.

The recipient was a brain-dead, middle-aged man who still had a working liver. Rather than trying to replace his liver, the team wanted to find out whether a pig liver could survive and function inside a human body while performing its normal roles.

Surgeons hooked the gene-edited pig liver to the recipient’s blood supply and monitored it for 10 days—the amount of time the recipient’s family approved for the experiment. Within hours, the organ began synthesizing and pumping out bile at a gradually increasing volume. The liver also made albumin, a protein crucial for maintaining fluid balance and transporting molecules.

Blood from the recipient flowed smoothly throughout the liver, which likely prevented blood clots often associated with liver transplants. Thanks to immunosuppressant drugs, the patient’s immune system stayed relatively quiet and didn’t attack the pig organ.

“This is the world’s first [published] case of a transplant of a genetically modified pig liver into a brain-dead human,” said Rafael Matesanz, creator and founder of the National Transplant Organization in Spain, who was not involved in the work.

Many questions remain. The liver has multiple functions, but the study only tested bile and albumin production. Could the pig liver also filter toxins from the blood or break down medications? Also, the study only observed one person for a relatively short time. The results might not hold for other demographics, and the transplant could falter down the road.

And because the volunteer still had a functional liver, “we cannot extrapolate the extent to which this xenograft would have supported a patient in liver failure,” said Peter Friend at the University of Oxford, who was not involved in the study.

Even so, a temporary bridge transplant—where a pig liver would support bodily functions short-term while the recipient waits for a permanent transplant—could save lives.

The same team recently completed a full pig-to-human liver transplant, replacing the liver of a brain-dead human with one from a genetically modified pig. They plan to release the data in a future publication. “Whether it could replace the original human liver in the future [is unknown],” said Wang at the press briefing. “It is our dream to make this achievement.”

The post Scientists Just Transplanted a Pig Liver Into a Person for the First Time appeared first on SingularityHub.

Kategorie: Transhumanismus