SingularityHub
Will AI Revolutionize Drug Development? These Are the Root Causes of Drug Failure It Must Address
Ninety percent of drugs fail clinical trials. Can AI help?
The potential of using artificial intelligence in drug discovery and development has sparked both excitement and skepticism among scientists, investors, and the general public.
“Artificial intelligence is taking over drug development,” claim some companies and researchers. Over the past few years, interest in using AI to design drugs and optimize clinical trials has driven a surge in research and investment. AI-driven platforms like AlphaFold, whose developers shared the 2024 Nobel Prize in Chemistry for predicting protein structures, showcase AI’s potential to accelerate drug development.
AI in drug discovery is “nonsense,” warn some industry veterans. They caution that “AI’s potential to accelerate drug discovery needs a reality check,” as AI-generated drugs have yet to demonstrate an ability to address the 90 percent failure rate of new drugs in clinical trials. Unlike AI’s clear successes in image analysis, its effect on drug development remains unclear.
We have been following the use of AI in drug development in our work as a pharmaceutical scientist in both academia and the pharmaceutical industry and as a former program manager in the Defense Advanced Research Projects Agency, or DARPA. We argue that AI in drug development is not yet a game-changer, nor is it complete nonsense. AI is not a black box that can turn any idea into gold. Rather, we see it as a tool that, when used wisely and competently, could help address the root causes of drug failure and streamline the process.
Most work using AI in drug development intends to reduce the time and money it takes to bring one drug to market—currently 10 to 15 years and $1 billion to $2 billion. But can AI truly revolutionize drug development and improve success rates?
AI in Drug Development
Researchers have applied AI and machine learning to every stage of the drug development process. This includes identifying targets in the body, screening potential candidates, designing drug molecules, predicting toxicity, and selecting patients who might respond best to the drugs in clinical trials, among others.
Between 2010 and 2022, 20 AI-focused startups discovered 158 drug candidates, 15 of which advanced to clinical trials. Some of these drug candidates were able to complete preclinical testing in the lab and enter human trials in just 30 months, compared with the typical 3 to 6 years. This accomplishment demonstrates AI’s potential to accelerate drug development.
On the other hand, while AI platforms may rapidly identify compounds that work on cells in a petri dish or in animal models, the success of these candidates in clinical trials—where the majority of drug failures occur—remains highly uncertain.
Unlike fields such as image analysis and language processing, which have large, high-quality datasets available to train AI models, AI in drug development is constrained by small, low-quality datasets. It is difficult to generate drug-related datasets on cells, animals, or humans for millions to billions of compounds. While AlphaFold is a breakthrough in predicting protein structures, how precise it can be for drug design remains uncertain. Minor changes to a drug’s structure can greatly affect its activity in the body and thus how effective it is in treating disease.
Survivorship Bias
Past innovations in drug development, such as computer-aided drug design, the Human Genome Project, and high-throughput screening, have improved individual steps of the process over the past 40 years, yet drug failure rates haven’t improved.
Most AI researchers can tackle specific tasks in the drug development process when provided high-quality data and particular questions to answer. But they are often unfamiliar with the full scope of drug development, reducing challenges into pattern recognition problems and refinement of individual steps of the process. Meanwhile, many scientists with expertise in drug development lack training in AI and machine learning. These communication barriers can hinder scientists from moving beyond the mechanics of current development processes and identifying the root causes of drug failures.
Current approaches to drug development, including those using AI, may have fallen into a survivorship bias trap: they focus on less critical aspects of the process while overlooking the major problems that contribute most to failure. This is analogous to repairing damage to the wings of aircraft returning from the battlefields of World War II while neglecting the fatal vulnerabilities in the engines or cockpits of the planes that never made it back. Researchers often focus on improving a drug’s individual properties rather than addressing the root causes of failure.
While returning planes might survive hits to the wings, those with damage to the engines or cockpits are less likely to make it back. Image credit: Martin Grandjean, McGeddon, US Air Force/Wikimedia Commons, CC BY-SA
The current drug development process operates like an assembly line, relying on a checkbox approach with extensive testing at each step of the process. While AI may be able to reduce the time and cost of the lab-based preclinical stages of this assembly line, it is unlikely to boost success rates in the more costly clinical stages that involve testing in people. The persistent 90 percent failure rate of drugs in clinical trials, despite 40 years of process improvements, underscores this limitation.
Addressing Root Causes
Drug failures in clinical trials are not solely due to how these studies are designed; selecting the wrong drug candidates to test in clinical trials is also a major factor. New AI-guided strategies could help address both of these challenges.
Currently, three interdependent factors drive most drug failures: dosage, safety, and efficacy. Some drugs fail because they’re too toxic, or unsafe. Other drugs fail because they’re deemed ineffective, often because the dose can’t be increased any further without causing harm.
We and our colleagues propose a machine learning system to help select drug candidates by predicting dosage, safety, and efficacy based on five previously overlooked features of drugs. Specifically, researchers could use AI models to determine how specifically and potently the drug binds to known and unknown targets, the levels of these targets in the body, how concentrated the drug becomes in healthy and diseased tissues, and the drug’s structural properties.
These features of AI-generated drugs could be tested in what we call phase 0+ trials, using ultra-low doses in patients with severe and mild disease. This could help researchers identify optimal drugs while reducing the costs of the current “test-and-see” approach to clinical trials.
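To make this concrete, below is a minimal sketch of what such a candidate-ranking model could look like in Python with scikit-learn. The feature names, the synthetic training data, and the choice of model are illustrative assumptions, not the actual system described above, which would be trained on real preclinical and clinical data.

```python
# Minimal sketch of a feature-based candidate-selection model (illustrative).
# The five features mirror those described above; data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per past drug candidate. Hypothetical columns: on-target potency,
# off-target binding, target expression level, diseased-to-healthy tissue
# concentration ratio, and a structural-property score.
X = rng.normal(size=(500, 5))
# Label: 1 if the candidate succeeded in clinical trials, 0 if it failed
# (a synthetic rule standing in for real trial outcomes).
y = ((X[:, 0] - X[:, 1] + 0.5 * X[:, 3]
      + rng.normal(scale=0.5, size=500)) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Rank new candidates by predicted probability of clinical success.
model.fit(X, y)
candidates = rng.normal(size=(10, 5))
print("Predicted success probabilities:", model.predict_proba(candidates)[:, 1])
```

Any such model would inherit the small-dataset problem noted earlier: with only hundreds of well-characterized past candidates to learn from, its predictions can only be as good as its training data.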
While AI alone might not revolutionize drug development, it can help address the root causes of why drugs fail and streamline the lengthy process to approval.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Scientists Target Incurable Mitochondrial Diseases With New Gene Editing Tools
A new study swapped DNA letters inside mitochondria, paving the way for new gene therapies.
The energy factories in our cells contain their own genes, and genetic mutations in them can cause deadly inherited diseases.
These oblong-shaped organelles, or mitochondria, translate genes into proteins, which together form a kind of production chain that supplies cells with energy. Mutations in mitochondrial DNA, or mtDNA, torpedo the process, leading to sluggish cells that eventually wither away.
Some mitochondrial DNA mutations have been linked to age-related diseases, metabolic problems, and stroke-like symptoms. Others are involved in epilepsy, eye diseases, cancer, and cognitive troubles. Many of the diseases are inherited. But none are treatable.
“Mitochondrial disorders are incredibly diverse in their manifestation and progression… [and] therapeutic options for these pathologies are rarely available and only moderately effective,” wrote Alessandro Bitto at the University of Washington last year.
As a workaround, some countries have already approved mitochondrial transfer therapy, which replaces defective mitochondria with healthy ones in reproductive cells. The resulting “three-parent” kids are generally healthy. But the procedure remains controversial because it involves tinkering with human reproductive cells, with potentially unknown repercussions down the line.
The new study, published in Science Translational Medicine, took an alternative approach—gene therapy. Using a genetic tool called base editing to target mitochondrial DNA, the team successfully rewrote damaged sections to overcome deadly mutations in mice.
“This approach could be potentially used to treat human diseases,” wrote the team.
Double Trouble
Our genetic blueprints are housed in two places. The main set is inside the nucleus. But there’s another set in our mitochondria, the organelles that produce over 90 percent of a cell’s energy.
These pill-shaped structures are enveloped in two membranes. The outer membrane is structural. The inner membrane is like an energy factory, containing teams of protein “workers” strategically placed to convert food and oxygen into fuel.
Mitochondria are strange creatures. According to the leading theory, they were once independent critters that sheltered inside larger cells on early Earth. Eventually, the two merged into one. Mitochondria offered protocells a more efficient way to generate energy in exchange for safe haven. Over time, the team-up led to all the modern cells that make up our bodies.
This is likely why mitochondria have their own DNA. Though it’s separate, it works the same way: Genes are translated into messenger RNA and shuttled to the mitochondria’s own protein-making factories. These local factories recruit “transporters,” or mitochondrial transfer RNA, to supply protein building blocks, which are stitched into the final protein product.
These processes happen in solitude. In a way, mitochondria reign over their own territory inside each cell. But their DNA has a disadvantage. Compared to our central genetic blueprint, it’s more prone to mutations because mitochondria evolved fewer DNA repair abilities.
“There are about 1,000 copies of mtDNA in most cells,” the authors wrote, and mutated copies can coexist with healthy ones. Mitochondrial diseases only develop when mutations overrun healthy DNA. Even a small amount of normal mitochondrial DNA can protect against disease, suggesting gene editing could be a way to tackle these conditions.
Into the Unknown
Current treatments for people with mitochondrial mutations ease symptoms but don’t tackle the root cause.
One potential therapy under development would help cells destroy damaged mitochondria. Here, scientists design “scissors” that snip mutated mitochondrial DNA in cells also containing healthy copies. By cutting away damaged DNA, it’s hoped healthy mitochondria repopulate and regain their role.
In 2020, a team led by David Liu at the Broad Institute of MIT and Harvard unleashed a gene editing tool tailored to mitochondria. Liu is well known for his role in developing CRISPR base editing—a precision tool for swapping one genetic letter for another—but his lab targeted mitochondrial DNA with a different method.
They broke a bacterial toxin into two halves—both are inactive and non-toxic until they join together at a targeted DNA site. When activated, the editor turns the DNA letter “C” to “T” inside mitochondria, with minimal changes to other genetic material.
In the new study, the team focused on a mitochondrial defect that damages the organelles’ “transporter” molecules. Without this transfer RNA, mitochondria can’t make the proteins that are essential for generating energy.
The transporter molecules look like four-leaf clovers with sturdy stems. Each leaf is made of a pair of genetic letters that grab onto each other. But in some mutations, pairs can’t hook together, so the leaves no longer connect, and they wreck the transporter’s function.
Powering Up
These early results suggest that DNA mutations in mitochondria damage the cell’s ability to provide energy. Correcting the mutations may help.
As a test, the team used their tool to transform genetic letters in cultured cells. After several rounds of treatment, 75 percent of the cells had reprogrammed mitochondria.
The team then combined the editor with a safe delivery virus. When injected into the bloodstreams of young adult mice, the editor rapidly reached cells in their hearts and muscles. In hearts, the treatment upped normal transfer RNA levels by 50 percent.
It’s not a perfect fix though. The injection didn’t reach the brain or kidneys, and they found very few signs of editing in the liver. This is surprising, wrote the authors, because the liver is usually the first organ to absorb gene editors.
When the team upped the dose, off-target edits in healthy mitochondria skyrocketed. On the plus side, the edits didn’t notably alter the main genetic blueprints contained in nuclear DNA.
It’ll be a while before mitochondrial gene editors can be tested in humans. The current system uses TALE, an older gene editing method that’s regained some steam. Off-target edits, especially at higher doses, could also potentially cause problems in unexpected tissues or organs.
“Specific tissues may respond differently to editing, so optimization should also consider the possibility of the target tissue being more sensitive to undesirable off-target changes,” wrote the team.
Overall, there’s more work to do. But new mitochondrial base editors “should help improve the precision of mitochondrial gene therapy,” the team wrote.
This Week’s Awesome Tech Stories From Around the Web (Through February 1)
These were our favorite articles in science and tech this week.
OpenAI in Talks for New Funding at Up to $300 Billion Value Shirin Ghaffary, Rachel Metz, and Kate Clark | Bloomberg
“The ChatGPT maker is in discussions to raise funds at a pre-money valuation of $260 billion, said one of the people, who spoke on condition of anonymity to discuss private information. The post-money valuation would be $300 billion, assuming OpenAI raises the full amount. The company was valued at $157 billion in October.”
Cerebras Becomes the World’s Fastest Host for DeepSeek R1, Outpacing Nvidia GPUs by 57x Michael Nuñez | VentureBeat
“The AI chip startup will deploy a 70-billion-parameter version of DeepSeek-R1 running on its proprietary wafer-scale hardware, delivering 1,600 tokens per second—a dramatic improvement over traditional GPU implementations that have struggled with newer ‘reasoning’ AI models.”
Stem Cells Used to Partially Repair Damaged Hearts John Timmer | Ars Technica
“Although the Nobel Prize for induced stem cells was handed out over a decade ago, the therapies have been slow to follow. In a new paper published in the journal Nature, however, a group of German researchers is now describing tests in primates of a method of repairing the heart using new muscle generated from stem cells.”
DeepSeek Mania Shakes AI Industry to Its Core Emanuel Maiberg | 404 Media
“If these new methods give DeepSeek great results with limited compute, the same methods will give OpenAI and other, more well-resourced AI companies even greater results on their huge training clusters, and it is possible that American companies will adapt to these new methods very quickly. Even if scaling laws really have hit the ceiling and giant training clusters don’t need to be that giant, there’s no reason I can see why other companies can’t be competitive under this new paradigm.”
Boom’s XB-1 Becomes First Civil Aircraft to Go Supersonic Sean O’Kane | TechCrunch
“It cleared Mach 1 and stayed supersonic for around four minutes, reaching Mach 1.1. Test pilot Tristan Brandenburg broke the sound barrier two more times before receiving the call to bring the XB-1 back to the Mojave Air & Space Port. The supersonic flight comes eight years after Boom first revealed the XB-1. It’s a small version of the 64-passenger airliner Boom eventually wants to build, which it calls Overture.”
Waymo to Test in 10 New Cities in 2025, Starting With Las Vegas and San Diego Andrew J. Hawkins | The Verge
“This year, the theme is ‘generalizability’: how well the vehicles adapt to new cities after having driven tens of millions of miles in its core markets of San Francisco, Phoenix, and Los Angeles. Ideally, the company is trying to get to a point where it can bring its vehicles to a new city and launch a robotaxi with a minimal amount of testing as a preamble.”
DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot Matt Burgess | Wired
“[On Friday], security researchers from Cisco and the University of Pennsylvania [published] findings showing that, when tested with 50 malicious prompts designed to elicit toxic content, DeepSeek’s model did not detect or block a single one. In other words, the researchers say they were shocked to achieve a ‘100 percent attack success rate.'”
Useful Quantum Computing Is Inevitable—and Increasingly Imminent Peter Barrett | MIT Technology Review
“Nvidia CEO Jensen Huang jolted the stock market by saying that practical quantum computing is still 15 to 30 years away, at the same time suggesting those computers will need Nvidia GPUs in order to implement the necessary error correction. However, history shows that brilliant people are not immune to making mistakes. Huang’s predictions miss the mark, both on the timeline for useful quantum computing and on the role his company’s technology will play in that future.”
With Successful New Glenn Flight, Blue Origin May Finally Be Turning the Corner Eric Berger | Ars Technica
“‘I would say, “Stay tuned,”‘ [Bezos] said. ‘This is the very beginning of the Space Age. When the history is finally written hundreds of years from now, the 1960s will be a certain kind of beginning, and [there were] certainly incredible accomplishments. But now we’re really getting started. That was kind of pulled forward from its natural time, the space race with the Soviets. And now is the time when the real movement, the kind of golden age of space, is going to happen. It’s still absolutely day one.'”
JWST Shocks the World With Colliding Neutron Star Discovery Ethan Siegel | Big Think
“When we examined the remnant of [a 2017 neutron star collision] spectrally, we discovered an enormous number of heavy elements, indicating that the heaviest elements were likely produced by these cataclysms. In all the time since, we’ve never seen another such event directly, throwing the idea that neutron star collisions make the heaviest elements into doubt. But thanks to JWST, the idea is back on the table as our #1 option.”
Chatbot Software Begins to Face Fundamental Limitations Anil Ananthaswamy | Quanta Magazine
“Scientists have had some successes pushing transformers past these limits, but those increasingly look like short-term fixes. If so, it means there are fundamental computational caps on the abilities of these forms of artificial intelligence—which may mean it’s time to consider other approaches.”
This Autonomous Drone Can Track Humans Through Dense Forests at High Speed
Drones that fly themselves, and don’t crash, are improving fast.
Autonomous drones could revolutionize a wide range of industries. Now, scientists have designed a drone that can weave through dense forests, dodge thin power lines in dim lighting, and even track a jogging human.
Rapid improvements in sensor technology and artificial intelligence are making it increasingly feasible for drones to fly themselves. But autonomous drones remain far from foolproof, which has restricted their use to low-risk situations such as delivering food in well-organized cities.
If the technology is ever to have an impact in domains like search and rescue, sports, or even warfare, small drones need to become both more maneuverable and more reliable. That prompted researchers from the University of Hong Kong to develop a new micro air vehicle, or MAV, that can navigate challenging environments at speed.
The new drone, named SUPER, combines lidar technology with a unique two-trajectory navigation system to balance safety and speed. In real-world tests, it outperformed commercial drones in both tracking and collision avoidance, while flying at more than 20 meters per second (45 miles per hour).
“SUPER represents a milestone in transitioning high-speed autonomous navigation from laboratory settings to real-world applications,” the researchers wrote in a paper in Science Robotics introducing the new drone.
According to the authors, the inspiration for the project came from birds’ ability to nimbly navigate cluttered forest environments. To replicate this capability, they first designed a drone just 11 inches across with a thrust-to-weight ratio of more than five, which allowed it to carry out aggressive high-speed maneuvers.
They then fitted it with a lightweight lidar device capable of detecting obstacles at up to 70 meters. Given they were targeting high-speed flight, the researchers say they were keen to avoid the kind of motion blur that camera-based systems suffer from.
Most important though, was the navigation system they designed for the drone. At each route-planning cycle, SUPER’s flight controller generates two flight trajectories towards its goal. The first is designed to be a high-speed route and assumes that some of the areas ahead with limited lidar data are free of obstacles. The second is a backup trajectory that focuses on safety, only passing through areas known to be free of obstacles.
The drone starts by following the high-speed trajectory but switches to the backup if the real-time lidar data detects anything in the way. To test out the approach, the researchers pitted it against two other research drones and a commercial drone in a series of trials, which involved flying at high speed, dodging thin electrical wires, navigating a dense forest, and flying at night.
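As a rough illustration of this two-trajectory idea, here is a self-contained toy version in Python on a 2D occupancy grid. Everything in it, from the grid representation to the breadth-first search, is a simplified stand-in for the planner described in the paper, not the authors’ implementation.

```python
# Toy version of a two-trajectory planning cycle on a 2D occupancy grid.
# Cell values: 0 = known free, 1 = known obstacle, -1 = unknown (unscanned).
# Names and details are illustrative simplifications, not the paper's code.
import numpy as np
from collections import deque

def bfs_path(grid, start, goal, unknown_is_free):
    """Shortest 4-connected path. Unknown cells are passable only when
    unknown_is_free is True (the aggressive, high-speed assumption)."""
    def passable(value):
        return value == 0 or (unknown_is_free and value == -1)
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and nxt not in prev and passable(grid[nxt])):
                prev[nxt] = cell
                queue.append(nxt)
    return None  # a real planner would fall back to a safe stop here

grid = np.zeros((5, 7), dtype=int)
grid[0:4, 3] = -1                # a patch the lidar hasn't scanned yet

start, goal = (2, 0), (2, 6)
fast = bfs_path(grid, start, goal, unknown_is_free=True)   # cuts through
safe = bfs_path(grid, start, goal, unknown_is_free=False)  # detours around

# Fly the fast route by default; switch to the backup the moment a new
# scan reveals an obstacle along it.
print("fast route:", fast)
print("safe route:", safe)
```

The key design point survives the simplification: the fast route is allowed to gamble on unscanned space, while the backup never leaves territory the sensor has already certified as free.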
The SUPER drone achieved a nearly perfect success rate of 99.63 percent across all the trials, equivalent to a failure rate nearly 36 times lower than that of the best alternative the researchers tested. This was all while achieving faster flight speeds and significantly reduced planning times.
The drone also demonstrated excellent object tracking, successfully tailing someone jogging through dense forest. In contrast, the commercial drone, which used vision-based sensors, ultimately lost track of the target.
The researchers suggest that the development of smaller, lighter lidar systems and aerodynamic optimizations could enable even higher speeds. Imbuing SUPER with the ability to detect moving objects and predict their motion could also improve its ability to operate in highly dynamic environments.
Given its already impressive performance though, it seems like it won’t be long before fast, agile drones are buzzing over our heads in all kinds of places.
Mice With Two Dads Reach Adulthood Thanks to CRISPR
It’s a new way to create same-sex biological offspring—but the approach is not ready for humans.
At first glance, the seven mice skittering around their cages look like other mice. But they have an unusual lineage: They were born with DNA from two dads. The mice join an elite group of critters born from same-sex parents, paving the way for testing in larger animals, such as monkeys.
Led by veteran reproductive researchers Wei Li and Qi Zhou at the Chinese Academy of Sciences, the results “blew us away,” wrote Lluís Montoliu at the National Biotechnology Center in Madrid, who was not involved in the study.
Although mice with two dads have been born before, scientists used a completely different strategy in this study, which also provided insights into a reproductive mystery. In a process called “imprinting,” some genes in embryos are switched on or off depending on whether they come from the biological mom or dad. Problems with imprinting often damage embryos, halting their growth.
In the new study, the team hunted down imprinted genes in embryos made from same-sex parents, drawing an intricate “fingerprint” of their patterns. They then zeroed in on 20 genes and tinkered with them using the gene-editing tool CRISPR. Hundreds of experiments later, the edited embryos—made from two male donors—led to the birth of seven pups that grew to adulthood.
Imprinting doesn’t just affect reproduction. Hiccups in the process can also impair biomedical technologies relying on embryonic stem cells, animal cloning, or induced pluripotent stem cells (iPSCs). Changes in imprinting are complex and hard to predict, with “no universal correction methods,” wrote the team.
“This work will help to address a number of limitations in stem cell and regenerative medicine research,” said Li in a press release.
Genetic Civil War
The cardinal rule of reproduction in mammals is still sperm meets egg. But there are now more options, beyond nature’s design, for where these reproductive cells come from. Thanks to iPSC technology, which returns skin cells to a stem cell-like state, lab-made egg and sperm cells are now possible.
Scientists have engineered functional eggs and ovaries and created mouse pups born from same-sex parents. Li’s team created the first mice born from two mothers in 2018. Compared to their peers, those mice were smaller, but they lived longer and were able to become moms.
The key was unlocking a snippet of the imprinting code.
Egg and sperm each carry half of our DNA. However, when the two sources of DNA meet, they can butt heads. For example, similar sections of the genetic code from mom could encode smaller babies for easier birth, whereas those from dad may encode larger, stronger offspring for better survival once born. In other words, balancing both sides is key.
Embryos made from same-sex gametes don’t “survive naturally,” wrote Montoliu.
Evolution has a solution: Shut off some DNA so that offspring only have one active copy of a gene, either from mom or dad. This trade-off prevents a DNA “civil war” in early embryos, allowing them to grow. Li’s team hunted down three essential DNA regions involved in imprinting and used CRISPR to delete those letters in one mom’s DNA. The edit wiped out the marks, essentially transforming the cell into a pseudo-sperm that, when injected into an egg, led to healthy baby mice.
But the process didn’t work for two dads. Here, the goal was to erase imprinted marks from male donor cells and turn them into pseudo-eggs. Despite editing up to seven genes that control imprinting, only roughly two percent of the efforts led to live births. None of the pups survived until adulthood.
Double Dad
Making offspring from two males is notoriously difficult, often triggering failure far sooner than in embryos with DNA from two mothers.
Scientists have used skin cell-derived iPSCs to make egg cells from male donors. But in previous studies, when fertilized with donor sperm, the lab-made eggs led to early embryos with severe imprinting problems. After being transferred to surrogate mothers, the embryos developed defects that ended the pregnancies. The results suggested that the normal imprinting that balances gene expression from both mom and dad is critical for embryos to flourish.
There are about 200 imprinted genes currently linked to embryo development. Here, the team targeted 20 for genetic editing.
In a complicated series of experiments, they first made “haploid cells.” These cells only contain half the genetic material from a male donor. Using CRISPR, the team then individually modified each imprinting site to shut down the related gene’s activity. Some edits deleted the gene altogether; others added mutations to inhibit its function. More genetic edits to “regulatory” DNA further dampened their activity.
The result was a Frankenstein cell similar to a gamete, but carrying half the genome and with parental imprints wiped out. Next, the scientists injected the edited cells along with normal sperm—the “parental donor”—into an egg with its nucleus and DNA removed. The resulting fertilized egg now had a full set of DNA, with each half coming from male parents.
The approach worked—to a point. When transplanted into surrogate mothers, a fraction of the early embryos grew into mouse pups. Seven eventually reached adulthood. The genetic tweaks also improved placental health, a prior roadblock in the study of mice with same-sex parents.
“These findings provide strong evidence that imprinting abnormalities are the main barrier to mammalian unisexual reproduction,” said study author Guan-Zheng Luo at Sun Yat-sen University.
The work adds to a previous study that created pups from two dads. Helmed by Katsuhiko Hayashi at Osaka University, a team of scientists leveraged a curious quirk of iPSC transformation at the chromosome level—a completely different method than that pursued in the current study. Those mice grew into adults and went on to have pups of their own.
When those results were first shared at a conference, the audience was left “gasping and breathless,” wrote Montoliu.
The new study’s mice had health struggles. They had larger frames, squished noses, and wider heads—traits often associated with imprinting defects. They were also less anxious when roaming a large, open field than would normally be expected. Each mouse’s hippocampus, a brain area related to learning, memory, and emotions, was smaller than usual. And they were infertile, with far shorter lifespans.
Given these problems, the method is hardly ready for clinical use. Tampering with genes in human reproductive cells is currently banned in many countries.
That said, the work is “impressive in its technical complexity,” Martin Leeb at Max Perutz Labs Vienna, who was not involved in the study, told Chemical and Engineering News. “I would have personally thought it probably requires even more genetic engineering to get these bi-paternal mice born.”
The team is exploring other genetic tweaks to further improve the process and learn more about imprinting. Meanwhile, they’re planning to extend the method to monkeys, whose reproduction is far more similar to ours.
Does Extraterrestrial Life Exist? Here’s What Scientists Really Think
There’s a solid consensus among scientists on the question, according to a new survey.
News stories about the likely existence of extraterrestrial life, and our chances of detecting it, tend to be positive. We are often told that we might discover it any time now. Finding life beyond Earth is “only a matter of time,” we were told in September 2023. “We are close” was a headline from September 2024.
It’s easy to see why. Headlines such as “We’re probably not close” or “Nobody knows” aren’t very clickable. But what does the relevant community of experts actually think when considered as a whole? Are optimistic predictions common or rare? Is there even a consensus? In our new paper, published in Nature Astronomy, we’ve found out.
From February to June 2024, we carried out four surveys regarding the likely existence of basic, complex, and intelligent extraterrestrial life. We sent emails to astrobiologists (scientists who study extraterrestrial life), as well as to scientists in other areas, including biologists and physicists.
In total, 521 astrobiologists responded, and we received 534 non-astrobiologist responses. The results reveal that 86.6 percent of the surveyed astrobiologists responded either “agree” or “strongly agree” that it’s likely that extraterrestrial life (of at least a basic kind) exists somewhere in the universe.
Less than 2 percent disagreed, with 12 percent staying neutral. So, based on this, we might say that there’s a solid consensus that extraterrestrial life, of some form, exists somewhere out there.
Scientists who weren’t astrobiologists essentially concurred, with an overall agreement score of 88.4 percent. In other words, one cannot say that astrobiologists are biased toward believing in extraterrestrial life, compared with other scientists.
When we turn to “complex” extraterrestrial life or “intelligent” aliens, our results were 67.4 percent agreement and 58.2 percent agreement, respectively, for astrobiologists and other scientists. So, scientists tend to think that alien life exists, even in more advanced forms.
These results are made even more significant by the fact that disagreement for all categories was low. For example, only 10.2 percent of astrobiologists disagreed with the claim that intelligent aliens likely exist.
Optimists and Pessimists
Are scientists merely speculating? Usually, we should only take notice of a scientific consensus when it is based on evidence (and lots of it). As there is no proper evidence, scientists may be guessing. However, scientists did have the option of voting “neutral,” an option that was chosen by some scientists who felt that they would be speculating.
Only 12 percent chose this option. There is actually a lot of “indirect” or “theoretical” evidence that alien life exists. For example, we do now know that habitable environments are very common in the universe.
We have several in our own solar system, including the sub-surface oceans of the moons Europa and Enceladus and arguably also the environment a few meters below the surface of Mars. It also seems relevant that Mars used to be highly habitable, with lakes and rivers of liquid water on its surface and a substantial atmosphere.
It is reasonable to generalize from here to a truly gargantuan number of habitable environments across the galaxy and wider universe. We also know (since we’re here) that life can get started from non-life—it happened on Earth, after all. Although the origin of the first, simple forms of life is poorly understood, there is no compelling reason to think that it requires astronomically rare conditions. And even if it does, the probability of life getting started (abiogenesis) is clearly non-zero.
This can help us to see the 86.6 percent agreement in a new light. Perhaps it is not, actually, a surprisingly strong consensus. Perhaps it is a surprisingly weak consensus. Consider the numbers: there are more than 100 billion galaxies. And we know that habitable environments are everywhere.
Let’s say there are 100 billion billion habitable worlds (planets or moons) in the universe. Suppose we are such pessimists that we think life’s chances of getting started on any given habitable world are one in a billion billion. Even then, the expected number of life-bearing worlds would be about 100, so we would still answer “agree” to the statement that it is likely that alien life exists in the universe.
Thus, optimists and pessimists should all have answered “agree” or “strongly agree” to our survey, with only the most radical pessimists about the origin of life disagreeing.
Bearing this in mind, we could present our data another way. Suppose we discount the 60 neutral votes we received. Perhaps these scientists felt that they would be speculating and didn’t want to take a stance, in which case it makes sense to ignore their votes. This leaves 461 votes in total, of which 451 were for agree or strongly agree. Now, we have an overall agreement of 97.8 percent.
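For readers who want to check the arithmetic, the agreement percentages (with and without the neutral votes) and the earlier pessimist’s estimate take only a few lines of Python, using the vote counts reported above:

```python
# Recompute the agreement figures for astrobiologists reported above.
total = 521     # astrobiologist respondents
neutral = 60    # "neutral" votes
agree = 451     # "agree" or "strongly agree" votes

print(f"Agreement, neutrals included: {agree / total:.1%}")              # 86.6%
print(f"Agreement, neutrals excluded: {agree / (total - neutral):.1%}")  # 97.8%

# The pessimist's back-of-the-envelope estimate from earlier:
habitable_worlds = 1e20   # 100 billion billion habitable worlds
p_life = 1e-18            # one-in-a-billion-billion chance per world
print("Expected life-bearing worlds:", habitable_worlds * p_life)        # 100.0
```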
This move is not as illegitimate as it looks. Scientists know that if they choose “neutral” they can’t possibly be wrong. Thus, this is the “safe” choice. In research, it is often called “satisficing.”
As the geophysicist Edward Bullard wrote back in 1975 while debating whether all continents were once joined together, instead of making a choice “it is more prudent to keep quiet, … sit on the fence, and wait in statesmanlike ambiguity for more data.” Not only is keeping quiet a safe choice for scientists, it means the scientist doesn’t need to think too hard —it is the easy choice.
Getting the Balance Right
What we probably want is balance. On one side, we have the lack of direct empirical evidence and the reluctance of responsible scientists to speculate. On the other side, we have evidence of other kinds, including the truly gargantuan number of habitable environments in the universe.
We know that the probability of life getting started is non-zero. Perhaps 86.6 percent agreement, with 12 percent neutral and less than 2 percent disagreement, is a sensible compromise, all things considered.
Perhaps—given the problem of satisficing—whenever we present such results, we should present two results for overall agreement: one with neutral votes included (86.6 percent) and one with neutral votes disregarded (97.8 percent). Neither result is the single, correct result.
Each perspective speaks to different analytical needs and helps prevent oversimplification of the data. Ultimately, reporting both numbers—and being transparent about their contexts—is the most honest way to represent the true complexity of responses.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The Brain on Microplastics: A Study in Mice Finds the Brain’s Immune Cells Gorging on Plastic
The study sheds light on one way these pesky particles may be detrimental to brain health.
We’re not Barbie girls, but we live in a plastic world.
Microplastics, tiny specks of broken-down plastic, are all around us. They hover in the air, float in our water, and are sprinkled in the food we eat. These particles have even been detected in relatively pristine ice sheets in Antarctica—a continent with minimal human presence.
They’re also inside our bodies. Microplastic dust lingers in our liver, kidney, blood, and reproductive cells. As their levels build up, microplastics stress normal cell functions, triggering inflammation and hormonal problems. In a small number of people, they’re linked to an increased risk of heart attack, neurological problems, and stroke. A recent preprint analyzing donated brain tissue from deceased people detected large amounts of microplastics in their brains, especially around their blood vessels.
Now, a new study sheds light on one way these pesky particles may be detrimental to brain health. By tracking microplastics in the brains of mice, the team found they damaged the brain’s immune cells. These protective cells accumulated microplastics, instead of digesting them, and then the damaged cells clumped up in the brain’s blood vessels, eventually blocking normal blood circulation—with consequences. Mice given a small dose of microplastics struggled to walk and had a slightly harder time remembering places, even a month later.
Food aside, many current medical devices are made of plastic, which ultimately wears down and could leak particles directly into a patient’s bloodstream. Though the findings need replication in humans—our blood vessels are larger than mice’s—they do offer “a focused direction for understanding the potential health risks associated with microplastics,” wrote the authors.
Friend or Foe?
Picture your daily morning routine. Now, mentally scan for all the plastic involved.
It’s everywhere. There’s the coffee pot collecting a drip brew or a Keurig pod to get the day going, the shampoo and conditioner container as you shower, the jug that holds orange juice or milk, and the leftovers in a plastic container, ready for a quick zap in the microwave.
Plastic is so prevalent it’s difficult to imagine a world without the material. But its large-scale production only ramped up in the 1950s, after World War II. During the war, the innovative material was used to craft lightweight yet durable radar and radio devices, ammunition, and disposable medical tools. From there, it trickled down into everyday use.
This came at an environmental cost. Made of synthetic molecules—often derived from fossil fuels—plastics are notoriously difficult to break down. As of 2015, humans had generated approximately 6.3 billion metric tons of plastic waste, just nine percent of which had been recycled. By 2050, roughly double that amount will load up landfills. Despite efforts at recycling or making biodegradable plastics, most products end up in landfills or our environment—either on land or in waterways and oceans.
The latter is especially concerning. As plastics wear down, they shed tiny specks that marine life ingests. Roughly the size of a sesame seed, these floating toxins are gulped up by plankton—which larger marine animals feed on—oysters, scallops, and other ocean creatures. The contamination eventually moves up the food chain and reaches seafood lovers across the world. Combined with other daily sources of microplastics, we’re inhaling and ingesting these materials far more than ever before.
Roughly a decade ago, multiple countries banned exfoliating plastic “beads” from face scrubs, toothpaste, and hand cleaners to reduce microplastic waste. Meanwhile, scientists also started investigating potential health concerns of ingesting microplastics in full force.
Early red flags related to reproductive health. More evidence suggested microplastics are especially harmful to blood vessels. One study in 2024, for example, followed people with blood vessel disease caused by a blockage. The researchers analyzed the offending clumps and found they were made up of tiny microplastic particles combined with broken-down cells. Polluted by microplastics, the cells hung around inside the patients’ fatty tissues, spurring inflammation and increasing the chance of heart disease and stroke.
Even the brain was vulnerable to these toxins. Usually, our noggin is guarded by a cellular fortress dubbed the “blood-brain barrier.” Only sanctioned chemicals and some larger proteins can pass through this barrier.
However, it didn’t evolve to block microplastics. Previous studies found these particles could drift into brain tissue, causing some proteins to clump up and trigger or worsen neurodegenerative diseases—conditions in which neurons break down—such as Parkinson’s disease. Microplastics have also been linked to anxiety and depression, though it’s still unknown why.
Scientists generally agree that microplastics floating across the blood-brain barrier can cause damage or spark inflammation in the body affecting neuron function, explained the team. But seeing is believing—which is where the new study comes in.
The Fast Lane
Rather than analyzing microplastic particles inside brain tissue, the team used a method called two-photon microscopy to track their journey inside a mouse’s brain. The method is particularly useful for visualizing changes inside the brain at high resolution.
They first laced the mice’s drinking water with a glow-in-the-dark version of a microplastic called polystyrene. The bubble-shaped material is prevalent in toys, appliances, and all sorts of packaging. Within two and a half hours, they noticed the particles flowing through blood vessels in the brain. Some particles looked like comets trailing tails, wrote the authors.
If, as previously suggested, microplastics flowed into the brain unimpeded, they would likely spread across the entire brain. Surprisingly, however, the particles instead concentrated inside cells.
After isolating the cells containing microplastics, the team realized they were the brain’s immune cells. These cellular warriors readily “eat up” invaders, such as bacteria or viruses. But microplastics gave them indigestion. After consuming the particles, the cells became bloated, turning into oblong shapes that clustered inside blood vessels and blocked blood flow.
The bloated cells were just the right size to jam blood vessels in the brain—especially those connecting deeper brain regions to the cortex, a neural highway that controls movement, learning, and memory. In several tests, mice given a dose of microplastics struggled to run around a playground or grab onto a “monkey bar.” They also failed to remember places.
The good news? Most of these cognitive problems went away within a month. The team is still trying to figure out how the brain eventually cleaned out the microplastics, and whether the blockages—like blood clots—have lingering health problems.
To be clear, although the research adds to increasing evidence that microplastics could enter and potentially harm the brain, the results are only in mice. The conclusions will need to be verified in people, who have far larger blood vessels in the brain that could potentially thwart the negative effects of microplastics.
However, studying the impact of microplastics on the brain could inform how we manufacture the next generation of medical devices—for example, swapping out plastic casings for other biocompatible materials.
If those devices aren’t “rapidly and thoroughly improved,” then they could “become a persistent and potentially recurrent issue,” wrote the authors. “Increased investment in this area of research is urgent and essential to fully comprehend the health risks posed by [microplastics] in human blood.”
This Week’s Awesome Tech Stories From Around the Web (Through January 25)
These were our favorite articles in science and tech this week.
Open-Source DeepSeek-R1 Uses Pure Reinforcement Learning to Match OpenAI o1—at 95% Less Cost Shubham Sharma | VentureBeat
“Based on the recently introduced DeepSeek V3 mixture-of-experts model, DeepSeek-R1 matches the performance of o1, OpenAI’s frontier reasoning LLM, across math, coding, and reasoning tasks. The best part? It does this at a much more tempting cost, proving to be 90-95% more affordable than the latter.”
OpenAI’s Operator Lets ChatGPT Use the Web for You Will Knight | Wired
“The new tool, called Operator, is an AI agent: It relies on an AI model trained on both text and images to interpret commands and figure out how to use a web browser to execute them. OpenAI claims it has the potential to automate many day-to-day tasks and workday errands.”
Sam Altman’s World Now Wants to Link AI Agents to Your Digital Identity Maxwell Zeff | TechCrunch
“Altman’s World project now wants to create tools that link certain AI agents to people’s online personas, letting other users verify that an agent is acting on a person’s behalf, according to its chief product officer, Tiago Sada. World, a web3 project by Altman and Alex Blania’s Tools for Humanity that was formerly known as Worldcoin, is based on the idea that it will eventually be impossible to distinguish humans from AI agents on the internet.”
The Second Wave of AI Coding Is Here Will Douglas Heaven | MIT Technology Review
“Instead of providing developers with a kind of supercharged autocomplete, like most existing tools, this next generation can prototype, test, and debug code for you. …But there’s more. Many of the people building generative coding assistants think that they could be a fast track to artificial general intelligence (AGI), the hypothetical superhuman technology that a number of top firms claim to have in their sights.”
Tech Leaders Pledge Up to $500 Billion in AI Investment in US Deepa Seetharaman | The Wall Street Journal
“The joint venture, known as Stargate, is led by the ChatGPT maker OpenAI and the global tech investor SoftBank Group. It will build data centers for OpenAI. The database company Oracle and MGX, an investor backed by the United Arab Emirates, are also equity partners in the venture. The companies are committing $100 billion to the venture and plan to invest up to $500 billion over the next four years.”
China’s WeRide Wants to Build Global Robotaxi Empire Jiahui Huang | The Wall Street Journal
“China is on the verge of large-scale robotaxi commercialization, Daiwa analysts said in a recent note. They expect the market for robotaxi-related car manufacturing and auto components to reach 160 billion yuan, equivalent to about $22 billion, by 2026. Eventually, robotaxis are likely to completely replace traditional ride-hailing vehicles, they said.”
Researchers Optimize Simulations of Molecules on Quantum Computers John Timmer | Ars Technica
“On Wednesday, Nature Physics published a paper that describes the simulation of some aspects of simple catalysts on quantum computers and provides a way to dramatically simplify the calculations. The resulting algorithmic improvements mean that we may not need to wait for error correction to run useful simulations.”
When AI Passes This Test, Look Out Kevin Roose | The New York Times
“[Humanity’s Last Exam] consists of roughly 3,000 multiple-choice and short answer questions designed to test AI systems’ abilities in areas ranging from analytic philosophy to rocket engineering. Questions were submitted by experts in these fields, including college professors and prizewinning mathematicians, who were asked to come up with extremely difficult questions they knew the answers to.”
This Company Wants to Build a Space Station That Has Artificial Gravity Emilio Cozzi | Wired
“The company is aiming to launch a commercial space station, the Haven-2, into low Earth orbit by 2028, which would allow astronauts to stay in space after the decommissioning of the International Space Station (ISS) in 2030. In doing so, it is attempting to muscle in on NASA’s plans to develop commercial low-orbit space stations with partner organizations—but most ambitious of all are Vast Space’s goals for what it will eventually put into space: a station that has its own artificial gravity.”
Why the Next Energy Race Is for Underground Hydrogen Casey Crownhart | MIT Technology Review
“It might sound like something straight out of the 19th century, but one of the most cutting-edge areas in energy today involves drilling deep underground to hunt for materials that can be burned for energy. The difference is that this time, instead of looking for fossil fuels, the race is on to find natural deposits of hydrogen.”
What’s Next for Robots James O’Donnell | MIT Technology Review
“We’ve been sold lots of promises that robots will transform society ever since the first robotic arm was installed on an assembly line at a General Motors plant in New Jersey in 1961. Few of those promises have panned out so far. But this year, there’s reason to think that even those staunchly in the ‘bored’ camp will be intrigued by what’s happening in the robot races. Here’s a glimpse at what to keep an eye on.”
The Surprising Longevity of Electric Vehicles: They Now Live as Long as Gas-Powered Cars
Analysis of 29.8 million cars found the median EV lasts 124,000 miles—8,000 more than a gasoline car.
One of the biggest barriers to widespread adoption of electric vehicles is concern about how long they last. New research suggests the latest models are just as long-lived as their gas-powered cousins.
Any smartphone owner will be well aware that battery capacity slowly degrades over time, as repeated charging and discharging cycles take their toll. The same is true for the batteries in electric vehicles, but exactly how quickly performance degrades has been unclear.
Most manufacturers provide a warranty that guarantees the power packs will retain 70 percent of their capacity after eight years of use. But this still compares unfavorably to gas-powered cars, impacting both the lifetime value and prospects for the resale of electric vehicles.
However, there’s growing evidence that electric vehicles are lasting much longer than expected. And now, researchers have carried out a comprehensive analysis using data from the UK Ministry of Transport, which shows they can match or even exceed the lifespans of conventional vehicles.
“Our findings provide critical insights into the lifespan and environmental impact of electric vehicles,” Viet Nguyen-Tien, from the London School of Economics and Political Science, said in a press release. “No longer just a niche option, [electric vehicles] are a viable and sustainable alternative to traditional vehicles—a significant step towards achieving a net-zero carbon future.”
In the UK, all vehicles more than three years old are required to undergo an annual roadworthiness test. In a paper in Nature Energy, the researchers analyzed data from 264 million of these tests to estimate the lifespans of different kinds of vehicles.
The records, which covered 29.8 million unique vehicles, include details on the type of vehicle, its initial registration date, and its mileage at the time of the test. The researchers identified cars that had been taken off the road by singling out those that had gone at least 18 months without taking the roadworthiness test.
Using this data, the team was able to estimate the median lifespans of gasoline, diesel, and electric vehicles. They found that electric vehicles now have a lifespan of 18.4 years, nearly a year and a half more than diesel cars and only slightly less than gasoline ones. And their median lifetime mileage was 124,000 miles, about 8,000 more than a gasoline car’s.
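As a sketch of the retirement-flagging step described above, the pandas snippet below shows the general shape of the analysis. The column names and toy records are assumptions for illustration; the real MOT dataset and the authors’ survival modeling are considerably more involved.

```python
# Illustrative sketch: flag vehicles as retired if 18+ months have passed
# since their last roadworthiness test, then compare lifespans by fuel type.
# Column names and records are hypothetical, not the actual MOT schema.
import pandas as pd

CUTOFF = pd.Timestamp("2022-01-01")   # assumed end date of the records

tests = pd.DataFrame({
    "vehicle_id": [1, 1, 2, 2, 3],
    "fuel_type": ["petrol", "petrol", "electric", "electric", "diesel"],
    "test_date": pd.to_datetime([
        "2019-03-01", "2020-03-05", "2020-06-10", "2021-06-12", "2018-01-20",
    ]),
    "first_registered": pd.to_datetime([
        "2005-01-01", "2005-01-01", "2008-05-01", "2008-05-01", "2001-07-01",
    ]),
})

# Latest test on record for each vehicle.
last = tests.sort_values("test_date").groupby("vehicle_id").last()

# Assume a vehicle left the road if 18+ months passed without a new test.
last["retired"] = (CUTOFF - last["test_date"]) > pd.Timedelta(days=548)

# Age at the final test serves as a rough stand-in for lifespan.
last["lifespan_years"] = (
    (last["test_date"] - last["first_registered"]).dt.days / 365.25
)

print(last[last["retired"]].groupby("fuel_type")["lifespan_years"].median())
```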
Longer lifespans aren’t just a positive for owners. A major reason for the switch to battery-powered vehicles is the desire to cut out greenhouse gas emissions. Building electric vehicles actually produces more CO2 than conventional cars, but if they last long enough, they can still be a net positive for the environment.
“Despite higher initial emissions from production, a long-lasting electric vehicle can quickly offset its carbon footprint, contributing to the fight against climate change,” said study co-author Robert Elliott, from the University of Birmingham.
The new research is the latest in a line of studies showing that electric vehicles appear to be lasting longer than people expected. According to Wired, a report from consulting firm P3 showed that on average, electric vehicle batteries still retain 90 percent of their capacity after 100,000 miles. Another study from fleet telematics company Geotab found that batteries in the newest vehicles only degrade by 1.8 percent a year.
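Those figures are broadly consistent with the typical warranty. As a quick compounding check (assuming, purely for illustration, that the 1.8 percent annual loss holds steady, which real packs may not):

```python
# Does 1.8% capacity loss per year clear a typical 8-year / 70% warranty?
# Assumes a constant annual loss rate, which real battery packs may not follow.
annual_loss = 0.018
years = 8
remaining = (1 - annual_loss) ** years
print(f"Capacity after {years} years: {remaining:.1%}")  # ~86.5%, above 70%
```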
Given that maintenance costs for electric vehicles are considerably lower than for conventional ones, these findings suggest that going battery-powered is quickly becoming an economical option for most drivers. That will go a long way toward weaning our transportation systems off fossil fuels.
Scientists Say They’ve Discovered How Cancer Hijacks and Corrupts Immune Cells
Cancer cells steal from and poison the cells tasked with fighting them off.
Cancers are sneaky infiltrators highly adept at cellular warfare. As they expand, the malignant cells chip away at the body’s immune defenses.
How cancer cells learn to dodge immune attacks has puzzled scientists for decades. A study in Nature this week has a surprising answer: They steal healthy mitochondria—the cell’s energy powerhouses—from the immune cells that hunt them down. In turn, cancers pump their own damaged mitochondria into healthy immune cells, gradually destroying them from the inside.
Scientists have always assumed mitochondria are produced inside cells and live out their lives there. The new findings challenge this dogma, suggesting mitochondria are mobile—at least in cancers and the tumor-infiltrating immune cells working to fight them off.
Analyzing both types of cells from three cancer patients, the Japanese team found that cancer-fighting immune cells poisoned with damaged mitochondria eventually lose their ability to resist. Without a healthy energy source, the cells languish in a state called senescence, where they can no longer function or divide.
Meanwhile, cancer cells pilfer healthy mitochondria from their attackers to satisfy their appetite for energy, allowing them to divide and spread. In tests in petri dishes and mice, the team found that blocking this mitochondria swap slowed cancer growth and made a common immunotherapy more effective.
Stanford’s Holden Maecker, who was not involved in the study, told Nature the finding “sounds crazy, like science fiction,” and that it’s “potentially a totally new biology that we were not looking at.”
This swapping “is a newly discovered mechanism that thwarts anticancer defenses,” wrote Jonathan Brestoff at Washington University School of Medicine, who was also not part of the study.
Energy Mill
Mitochondria are a crucial ingredient of life. These oblong-shaped organelles are loaded with proteins that convert fuel from food into energy. Unlike other organelles, mitochondria stand out because they carry their own genetic material.
Let’s backtrack for a moment.
Most of our DNA is housed in the cell’s nucleus. But mitochondria also contain their own genes, dubbed mtDNA. These are likely remnants of their past. A common theory is that mitochondria were originally independent cells that were “eaten up” by another early type of cell at the beginning of complex life on Earth. Eventually the two formed an alliance, with mitochondria increasing energy production for the cells, and the cells protecting the mitochondria.
Today, mtDNA mostly operates independently from our cells’ nuclear genetic material. It’s stored in circular loops of DNA, as in bacteria. Unlike the rest of our DNA, mtDNA never evolved sophisticated repair mechanisms, and it’s prone to accumulating mutations.
As they produce energy, mitochondria also damage themselves by pumping toxic waste into their surroundings. Cells routinely dispose of defunct mitochondria to make space for healthy replacements that can keep the cellular factory—and our bodies—humming along.
This process goes awry in cancer.
Cops and Robbers
Cancer cells grow incessantly. They require a steady stream of energy to keep up with their need to repeatedly copy their DNA to divide, grow, and spread. But mitochondria in cancer cells are often mutated and struggle to supply their demanding hosts with enough energy.
In recent decades, scientists have noticed that some cells can shuttle their mitochondria to others. Mostly, the process seems to help out a struggling neighbor. But early hints also suggested mitochondrial transfers could contribute to cancer growth. One study found that tumor cells connect to healthy immune cells to siphon off mitochondria, depleting their attackers of energy while bolstering their own—at least in petri dishes.
Whether this happens in cancer patients is controversial. The new study paints a clearer picture.
The team took samples of tumors and cancer-fighting immune cells—tumor-infiltrating lymphocytes (TILs), in this case—from three cancer patients and analyzed their mtDNA makeup. Normally, each type of cell harbors its own mtDNA mutational “barcode.”
For each patient, both cell types shared the same cancerous barcode—suggesting that mitochondria from the tumors might be hopping to, and taking over, their attackers.
As the team watched the cells—now growing together in the lab—they found cancer mtDNA almost completely replaced the native mtDNA in some of the immune cells. The team also found the cancer cells were stealing healthy mitochondria from their immune attackers by sending out nanotubes that burrowed into them. Meanwhile, the cancer cells spewed their own damaged mitochondria, encapsulated in fatty bubbles, toward the immune cells.
“These findings establish the first clear evidence of bidirectional exchange of mitochondria between two cell types,” wrote Brestoff.
Damaged mitochondria don’t usually linger inside healthy cells; they’re rapidly shuttled to the cellular “trash bin.” Those inherited from cancer cells, however, were dotted with a protein that hid them from the cell’s disposal machinery. Like leaking chemical plants, the mutated mitochondria silently festered inside, undetected.
Over time, the hijacked immune cells slowly degraded. No longer able to divide, they entered “senescence”—a zombie-like state where they excreted a toxic protein soup that further lowered their cancer-battling abilities. In short, by robbing these cells of healthy mitochondria, cancer cells turned the body’s first-line defense into an ally that helped them grow.
Cutting the Line
The team next tagged mitochondria in tumor cells with a glow-in-the-dark protein to track them and implanted the cells into mice.
They found that immune cells in the mice containing cancer-derived mitochondria were far less effective at fighting off cancer cells. They were “exhausted,” explained the team. The contaminated cells struggled to maintain enough energy to ward off cancers and could no longer replicate. However, drugs that blocked mitochondrial transfer revitalized the exhausted cells and made treatment with a common cancer immunotherapy more effective.
Though the results are in mice, mitochondrial transfer could also play a previously unrecognized role in human cancers. Analyzing clinical data from roughly 200 people with two types of cancer, the team found that increased mtDNA mutation was associated with worse outcomes, even when the patients were receiving immunotherapy treatments.
Mitochondrial health has often been studied in the context of aging. These results could spur new interest in how it impacts cancer and other diseases. We still don’t know exactly how mitochondria from cancers damage immune cells, but further study could inspire new treatments that block mitochondrial swaps. Myriad tools already exist to track mitochondria; expanding that research into cancer biology would be a relatively easy next step.
Although “it remains to be determined how prevalent such mitochondrial exchange is” between other cell types, Brestoff wrote, the research raises new questions about its role in other diseases.
The post Scientists Say They’ve Discovered How Cancer Hijacks and Corrupts Immune Cells appeared first on SingularityHub.
Logging off Life but Living on: How AI Is Redefining Death, Memory, and Immortality
Our digital legacies don’t just preserve memories; they can continue to influence the world, long after we’re gone.
Imagine attending a funeral where the person who has died speaks directly to you, answering your questions and sharing memories. This happened at the funeral of Marina Smith, a Holocaust educator who died in 2022.
Thanks to an AI technology company called StoryFile, Smith seemed to interact naturally with her family and friends.
The system used prerecorded answers combined with artificial intelligence to create a realistic, interactive experience. This wasn’t just a video; it was something closer to a real conversation, giving people a new way to feel connected to a loved one after they’re gone.
Virtual Life After Death
Technology has already begun to change how people think about life after death. Several technology companies are helping people manage their digital lives after they’re gone. For example, Apple, Google, and Meta offer tools to allow someone you trust to access your online accounts when you die.
Microsoft has patented a system that can take someone’s digital data—such as texts, emails and social media posts—and use it to create a chatbot. This chatbot can respond in ways that sound like the original person.
In South Korea, a group of media companies took this idea even further. A documentary called “Meeting You” showed a mother reunited with her daughter through virtual reality. Using advanced digital imaging and voice technology, the mother was able to see and talk to her dead daughter as if she were really there.
These examples may seem like science fiction, but they’re real tools available today. As AI continues to improve, the possibility of creating digital versions of people after they die feels closer than ever.
Who Owns Your Digital Afterlife?
While the idea of a digital afterlife is fascinating, it raises some big questions. For example, who owns your online accounts after you die?
This issue is already being discussed in courts and by governments around the world. In the United States, nearly all states have passed laws allowing people to include digital accounts in their wills.
In Germany, courts ruled that Facebook had to give a deceased person’s family access to their account, saying that digital accounts should be treated as inheritable property, like a bank account or house.
But there are still plenty of challenges. For example, what if a digital clone of you says or does something online that you would never have said or done in real life? Who is responsible for what your AI version does?
When a deepfake of actor Bruce Willis appeared in an ad without his permission, it sparked a debate about how people’s digital likenesses can be controlled, or even exploited, for profit.
Cost is another issue. While some basic tools for managing digital accounts after death are free, more advanced services can be expensive. For example, creating an AI version of yourself might cost thousands of dollars, meaning that only wealthy people could afford to “live on” digitally. This cost barrier raises important questions about whether digital immortality could create new forms of inequality.
Grieving in a Digital World
Losing someone is often painful, and in today’s world, many people turn to social media to feel connected to those they’ve lost. Research shows that a significant proportion of people maintain their social media connections with deceased loved ones.
But this new way of grieving comes with challenges. Unlike physical mementos such as photos or keepsakes that fade over time, digital memories remain fresh and easily accessible. They can even appear unexpectedly in your social media feeds, bringing back emotions when you least expect them.
Some psychologists worry that staying connected to someone’s digital presence could make it harder for people to move on. This is especially true as AI technology becomes more advanced. Imagine being able to chat with a digital version of a loved one that feels almost real. While this might seem comforting, it could make it even harder for someone to accept their loss and let go.
Cultural and Religious Views on Digital Afterlife
Different cultures and religions have their own unique perspectives on digital immortality. For example:
• The Vatican, the center of the Catholic Church, has said that digital legacies should always respect human dignity.
• In Islamic traditions, scholars are discussing how digital remains fit into religious laws.
• In Japan, some Buddhist temples are offering digital graveyards where families can preserve and interact with digital traces of their loved ones.
These examples show how technology is being shaped by different beliefs about life, death, and remembrance. They also highlight the challenges of blending new innovations with long-standing cultural and religious traditions.
Planning Your Digital Legacy
When you think about the future, you probably imagine what you want to achieve in life, not what will happen to your online accounts when you’re gone. But experts say it’s important to plan for your digital assets: everything from social media profiles and email accounts to digital photos, online bank accounts, and even cryptocurrencies.
Adding digital assets to your will can help you decide how your accounts should be managed after you’re gone. You might want to leave instructions about who can access your accounts, what should be deleted, and whether you’d like to create a digital version of yourself.
You can even decide if your digital self should “die” after a certain amount of time. These are questions that more and more people will need to think about in the future.
Here are steps you can take to control your digital afterlife:
• Decide on a digital legacy. Reflect on whether creating a digital self aligns with your personal, cultural or spiritual beliefs. Discuss your preferences with loved ones.
• Inventory and plan for digital assets. Make a list of all digital accounts, content, and tools representing your digital self. Decide how these should be managed, preserved, or deleted.
• Choose a digital executor. Appoint a trustworthy, tech-savvy person to oversee your digital assets and carry out your wishes. Clearly communicate your intentions with them.
• Ensure that your will covers your digital identity and assets. Specify how they should be handled, including storage, usage and ethical considerations. Include legal and financial aspects in your plan.
• Prepare for ethical and emotional impacts. Consider how your digital legacy might affect loved ones. Plan to avoid misuse, ensure funding for long-term needs, and align your decisions with your values.
Digital Pyramids
Thousands of years ago, the Egyptian pharaohs had pyramids built to preserve their legacy. Today, our “digital pyramids” are much more advanced and broadly available. They don’t just preserve memories; they can continue to influence the world, long after we’re gone.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Logging off Life but Living on: How AI Is Redefining Death, Memory, and Immortality appeared first on SingularityHub.
A Paralyzed Man Just Piloted a Virtual Drone With His Mind Alone
Linking brains to machines has gone from science fiction to reality in the past two decades.
When patient T5 suffered a spinal cord injury that left him paralyzed, his dream of flying a drone seemed forever out of reach.
Now, thanks to a brain implant, he’s experienced the thrill in a simulation. By picturing finger movements in his mind, the 69-year-old flew a virtual drone in a video game, with the quadcopter dodging obstacles and whizzing through randomly appearing rings in real time.
T5 is part of the BrainGate2 Neural Interface System clinical trial, which launched in 2009 to help paralyzed people control computer cursors, robotic arms, and other devices by decoding electrical activity in their brains. It’s not just for gaming. Having the ability to move and click a cursor gets them back online. Googling, emailing, streaming shows, scrolling through social media posts—what able-bodied people spend hours on every day—are now again part of their lives.
But cursors can only do so much. Popular gaming consoles—PlayStation, Xbox, Nintendo Switch—require you to precisely move your fingers, especially thumbs, fast and in multiple directions.
Current brain implants often take a bird’s-eye view of the entire hand. The new study, published in Nature Medicine, separated the fingers into three groups—thumb, pointer and middle finger, and ring finger and pinky. After training, T5 could move each finger group independently with unprecedented finesse. His brain implant also picked up intentions to stretch, curl, or move his thumb side to side, letting him pilot the drone as if using a video game controller.
Calling his gaming sessions “stick time,” T5 enthusiastically said that piloting the drone allowed him to mentally “rise up” from his bed or chair for the first time since his injury. Like other gamers, he asked the research team to record his best runs and share the videos with friends.
Brain-computer mind-melds are “expanding from functional to recreational applications,” wrote Nick Ramsey and Mariska Vansteensel at the University Medical Center Utrecht, who were not involved in the study.
Mind Control
Linking brains to machines has gone from science fiction to reality in the past two decades, and it’s been life-changing for people paralyzed from spinal cord injuries.
These injuries, whether from accident or degeneration, sever the nerve highways between the brain and muscles. Scientists have long sought to restore these connections. Some have worked to regenerate broken nerve endings inside the body, with mixed results. Others are building artificial “bridges” over the gap. These implants, often placed in the spinal cord above the injury site, record signals from the brain, decode the intention to move, and stimulate muscles to contract or relax. Thanks to such systems, paralyzed people have been able to walk again—often with assistance—over long distances and with minimal training.
Other efforts have done without muscles altogether, instead tapping directly into the brain’s electrical signals to hook the mind to a digital universe. Previous studies have found that watching or imagining movements—like, say, asking a patient to picture moving a cursor around a browser—generates similar brain patterns to physically performing the movements. Recording these “brain signatures” from individual people can then decode their intention to move.
Noland Arbaugh, the first person to receive a brain implant from Elon Musk’s Neuralink, is perhaps the most well-known success. Late last year, the young man livestreamed his life for three days, sharing his view while moving a cursor and playing a video game in bed.
Decoding individual finger movements, however, is a bigger challenge. Our hands are especially dexterous and flexible, making it easy to type, play musical instruments, grab a cup of coffee, or twiddle our thumbs. Each finger is controlled by intricate networks of brain activity working together under the hood to generate complex movements.
Fingers curl, wiggle, and stretch apart. Deciphering the brain patterns that allow them to work both individually and together has stymied researchers. “In humans, finger decoding has only been demonstrated in prediction in offline analyses or classification from recorded neural activity,” wrote the authors. Brain signals had not yet been used to control individual fingers in real time. Even in monkeys, brain implants have only been able to separate the fingers into two independently moving groups, limiting the paws’ overall flexibility.
A Virtual Flex
In 2016, T5 had two tiny implants inserted into the hand “knob” of his brain—one for each side that controls hand and finger movements. Each implant, the size of a baby aspirin, had 96 microelectrode channels that quietly captured his brain activity as he went through a series of training tasks. At the time of surgery, T5 could only twitch his hands and feet randomly.
The team first designed a hand avatar. It didn’t fully capture the dexterity of a human hand: The index and middle finger moved together as a group, as did the ring finger and pinkie. Meanwhile, the thumb could stretch, curl, and move side to side.
For training, T5 watched the hand avatar move and imagined moving his fingers in sync. Using an artificial neural network that specializes in decoding signals across time, the team next built an AI to decipher T5’s brain activity and correlate each pattern with different types of finger movements. The “decoder” was then used to translate his intentions into actual movements of the hand avatar on the computer screen.
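As a rough illustration of what such a decoder looks like in code—this is a minimal sketch, not the study’s exact recipe; the channel count matches the two 96-channel implants described above, but the GRU stands in for whatever recurrent architecture the team actually used:

```python
import torch
import torch.nn as nn

class FingerDecoder(nn.Module):
    """Sketch of a recurrent decoder mapping binned neural activity to
    finger-group movements (3 groups plus thumb side-to-side = 4 outputs)."""
    def __init__(self, n_channels=192, hidden=256, n_outputs=4):
        super().__init__()
        # A recurrent layer captures how neural activity evolves over time
        self.gru = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        # Map each hidden state to intended movement for the finger groups
        self.head = nn.Linear(hidden, n_outputs)

    def forward(self, spikes):  # spikes: (batch, time_bins, n_channels)
        states, _ = self.gru(spikes)
        return self.head(states)  # predicted movement per time bin

decoder = FingerDecoder()
binned_activity = torch.randn(1, 50, 192)  # one second of simulated data
velocities = decoder(binned_activity)
print(velocities.shape)  # torch.Size([1, 50, 4])
```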
In an initial test that only allowed the thumb to extend and curl—what the researchers call “2D” control—the participant was able to move his finger groups onto virtual targets with over 98 percent accuracy. Each attempt took only a bit more than a second.
Adding side-to-side movement of the thumb had a similar success rate, but doubled the amount of time (though he got faster as he became familiar with the task). Overall, T5 could mind-control his virtual hand to reach around 76 targets a minute, far faster than previous attempts. The training “wasn’t tedious,” he said.
Each finger group movement was then mapped onto a virtual drone. Like moving joysticks and pressing buttons on a video game controller, the finger movements moved the quadcopter at will. The system kept the virtual hand in a relaxed, neutral pose unless T5 decided to move any of the finger groups.
In a day of testing, he flew the drone a dozen times across multiple obstacle courses. Each course required him to use one of the finger group movements to successfully navigate randomly appearing rings and other hurdles. One challenge, for example, had him fly figure eights across multiple rings without hitting them. The system was roughly six times better than prior systems.
Although his virtual fingers and their movements were shown on the computer screen while playing, the visuals weren’t necessary.
“When the drone is moving and the fingers are moving, it’s easier and faster to just look at the drone,” he said. Piloting it was intuitive, “like riding your bicycle on your way to work, [thinking] ‘what am I going to do at work today’, and you’re still shifting gears on your bike and moving right along.”
Adapting from simple training exercises to more complicated movements was also easy. “It’s like if you’re a clarinet player, and you pick up someone else’s clarinet. You know the difference instantly, and there is a little learning curve involved, but that’s based on you [having] an implied competency with your clarinet,” he said. To control the drone, you just have to “tickle it a direction,” he added.
The system is still far from commercial use, and it will have to be tested on more people. New brain implant hardware with more channels could further boost performance. But it’s a first step that opens up multiplayer online gaming—and potentially, better control of other computer programs and sophisticated robotic hands—to people with paralysis, enriching their social lives and overall wellbeing.
The post A Paralyzed Man Just Piloted a Virtual Drone With His Mind Alone appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through January 18)
These were our favorite articles in science and tech this week.
OpenAI Has Created an AI Model for Longevity Science Antonio Regalado | MIT Technology Review
“When you think of AI’s contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creator a Nobel Prize last year. Now OpenAI says it’s getting into the science game too—with a model for engineering proteins. The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cells—and that it has handily beat humans at the task.”
This MIT Spinout Wants to Spool Hair-Thin Fibers Into Patients’ Brains Connie Loizos | TechCrunch
“[NeuroBionics] thinks it could one day improve the lives of millions of people who live with neurological conditions like depression, epilepsy, and Parkinson’s disease. Famed investor Steve Jurvetson of Future Ventures says that if everything goes right for the 18-month-old outfit, its approach could further address ‘the peripheral nervous system for pain, incontinence, and a bunch of other applications.'”
An Entire Book Was Written in DNA—and You Can Buy It for $60 Emily Mullin | Wired
“DNA data storage isn’t exactly mainstream yet, but it might be getting closer. Now you can buy what may be the first commercially available book written in DNA. Today, Asimov Press debuted an anthology of biotechnology essays and science fiction stories encoded in strands of DNA. For $60, you can get a physical copy of the book plus the nucleic acid version—a metal capsule filled with dried DNA.”
Roar of New Glenn’s Engines Silences Skeptics of Bezos’ Blue Origin Kenneth Chang | The New York Times
“The launch was a major success for Blue Origin, Mr. Bezos’ rocket company. It should quiet critics who say that the company has been too slow compared with Elon Musk’s SpaceX, which has dominated global spaceflight industry in recent years. New Glenn could prove a credible competitor with Mr. Musk’s company and win launch contracts from NASA and the Department of Defense, as well as commercial contracts.”
New Superconductive Materials Have Just Been Discovered Charlie Wood | Wired
“In 2024, superconductivity—the flow of electric current with zero resistance—was discovered in three distinct materials. Two instances stretch the textbook understanding of the phenomenon. The third shreds it completely. ‘It’s an extremely unusual form of superconductivity that a lot of people would have said is not possible,’ said Ashvin Vishwanath, a physicist at Harvard University who was not involved in the discoveries.”
Fire Destroys Starship on Its Seventh Test Flight, Raining Debris From Space Stephen Clark | Ars Technica
“SpaceX launched an upgraded version of its massive Starship rocket from South Texas on Thursday, but the flight ended less than nine minutes later after engineers lost contact with the spacecraft. …Within minutes, residents and tourists in the Turks and Caicos Islands, Haiti, the Dominican Republic, and Puerto Rico shared videos showing a shower of debris falling through the atmosphere along Starship’s expected flight corridor.”
A Promising (and Surprisingly Simple) Way to Detect Alien Life Dirk Schulze-Makuch | Big Think
“Studying motility—the ability of organisms (in this case, microbial life) to move independently in their environment—could be an effective way to find and identify extraterrestrial life. Recent research shows that microbes respond to stress, like high salt levels, by moving, making this a potential method for finding life on Mars. The research could also help detect deadly pathogens like cholera in water, improving public health on Earth.”
‘The New York Times’ Takes OpenAI to Court. ChatGPT’s Future Could Be on the Line Bobby Allyn | NPR
“The lawsuit…calls for the destruction of ChatGPT’s dataset. That would be a drastic outcome. If the publishers win the case, and a federal judge orders the dataset destroyed, it could completely upend the company, since it would force OpenAI to recreate its dataset relying only on works it has been authorized to use.”
Not Just Heat Death: Here Are Five Ways the Universe Could End Paul Sutter | Ars Technica
“If you’re having trouble sleeping at night, have you tried to induce total existential dread by contemplating the end of the entire universe? If not, here’s a rundown of five ideas exploring how ‘all there is’ might become ‘nothing at all.’ Enjoy.”
The post This Week’s Awesome Tech Stories From Around the Web (Through January 18) appeared first on SingularityHub.
MIT’s Latest Bug Robot Is a Super Flyer. It Could One Day Help Bees Pollinate Crops.
The bot does acrobatic double flips faster than a fruit fly and stays aloft 100 times longer than other robots.
Rapid declines in insect populations are leading to concerns that the pollination of important crops could soon come under threat. Tiny flying robots designed by MIT researchers could one day provide a mechanical solution.
Numbers of critical pollinators like bees and butterflies are declining rapidly in the face of environmental degradation and climate change, which research suggests could put as much as one third of the food we eat at risk.
While the most obvious solution to this crisis is to find ways to reverse these declines, engineers have also been investigating whether technology could help plug the gaps. Several groups have been building insect-scale flying robots that they hope could one day be used to pollinate crops.
Now, a team at MIT has unveiled a new design that they say is much more agile than predecessors and capable of flying 100 times longer. The bug-sized bot is powered by flapping wings and can even carry out complex acrobatic maneuvers like double aerial flips.
MIT’s flying insect robot. Image Credit: MIT
“The amount of flight we demonstrated in this paper is probably longer than the entire amount of flight our field has been able to accumulate with these robotic insects,” associate professor Kevin Chen, who led the project, said in a press release. “With the improved lifespan and precision of this robot, we are getting closer to some very exciting applications, like assisted pollination.”
The new design, reported in Science Robotics, weighs just 750 milligrams (0.03 ounces) and features four modules, each consisting of a carbon-fiber airframe, an artificial muscle that can be electrically activated, a wing, and a transmission to transfer power from the muscle to the wing.
Previous versions of these modules featured roughly the same configuration, but with two flapping wings apiece. However, Chen says this resulted in the downdraft of the wings interfering with each other, reducing the amount of lift generated. In the new set-up, each module’s wing faces away from the robot, which boosts the amount of thrust it can generate.
One of the main reasons previous designs wore out so quickly was the significant mechanical stress generated by the flapping of the wings. An upgraded transmission and a longer wing hinge helped reduce that strain—allowing the robot to generate more power and last longer than before.
Put together, these changes allowed the robot to achieve average speeds of 35 centimeters per second (13.8 inches per second)—the fastest flight researchers have reported—and sustain hovering for nearly 17 minutes. “We’ve shown flight that is 100 times longer than anyone else in the field has been able to do, so this is an extremely exciting result,” says Chen.
The robot was also able to carry out precise maneuvers, including tracing out the letters MIT in midair, as well as acrobatic double flips with a greater rotational speed than a fruit fly and four times as fast as the previous quickest robot.
A closeup image of one of the robot’s upgraded wings. Image Credit: MIT
Currently, the bug-bot is powered by a cable, which means it can’t move about freely. But the researchers say cutting down the number of wings freed up space on the airframe that could be used to install batteries, sensors, and other electronics that would enable it to navigate outside the lab.
That’s likely still some way off though. For now, Chen says the goal is to boost the flight time by another order of magnitude and increase the flight precision so the robot can take off and land from the center of a flower.
If all that comes together, however, our beleaguered natural pollinators may soon have some much-needed help in their efforts to keep our food systems ticking.
The post MIT’s Latest Bug Robot Is a Super Flyer. It Could One Day Help Bees Pollinate Crops. appeared first on SingularityHub.
Meta’s New AI Translates Speech in Real Time Across More Than 100 Languages
It’s accurate and nearly as fast as expert human interpreters.
The dream of a universal AI interpreter just got a bit closer. This week, tech giant Meta released a new AI that can almost instantaneously translate speech in 101 languages as soon as the words tumble out of your mouth.
AI translators are nothing new. But they generally work best with text and struggle to transform spoken words from one language to another. The process is usually multistep. The AI first turns speech into text, translates the text, and then converts it back to speech. Though already useful in everyday life, these systems are inefficient and laggy. Errors can also sneak in at each step.
Meta’s new AI, dubbed SEAMLESSM4T, can directly convert speech into speech. Using a voice synthesizer, the system translates words spoken in 101 languages into 36 others—not just into English, which tends to dominate current AI interpreters. In a head-to-head evaluation, the algorithm is 23 percent more accurate than today’s top models—and nearly as fast as expert human interpreters. It can also translate text into text, text into speech, and vice versa.
Meta is releasing all the data and code used to develop the AI to the public for non-commercial use, so others can optimize and build on it. In a sense, the algorithm is “foundational,” in that “it can be fine-tuned on carefully curated datasets for specific purposes—such as improving translation quality for certain language pairs or for technical jargon,” wrote Tanel Alumäe at Tallinn University of Technology, who was not involved in the project. “This level of openness is a huge advantage for researchers who lack the massive computational resources needed to build these models from scratch.”
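For readers who want to experiment, Meta’s released checkpoints can be driven with only a few lines. This is a minimal sketch assuming the SeamlessM4T v2 checkpoint and classes published on Hugging Face, which may differ in detail from the exact release described here:

```python
import torch
from transformers import AutoProcessor, SeamlessM4Tv2Model

# Assumed checkpoint name; swap in whichever release Meta has published
processor = AutoProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large")

# Translate English text directly into spoken French (ISO 639-3 language codes)
inputs = processor(text="The weather is beautiful today.",
                   src_lang="eng", return_tensors="pt")
with torch.no_grad():
    waveform = model.generate(**inputs, tgt_lang="fra")[0]  # audio tensor at 16 kHz
```

Passing audio to the processor instead of text yields speech-to-speech translation, the capability highlighted above.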
It’s “a hugely interesting and important effort,” Sabine Braun at the University of Surrey, who was also not part of the study, told Nature.
Self-Learning AI
Machine translation has made strides in the past few years thanks to large language models. These models, which power popular chatbots like ChatGPT and Claude, learn language by training on massive datasets scraped from the internet—blogs, forum comments, Wikipedia.
In translation, humans carefully vet and label these datasets, or “corpuses,” to ensure accuracy. Labels or categories provide a sort of “ground truth” as the AI learns and makes predictions.
But not all languages are equally represented. Training corpuses are easy to come by for high-resource languages, such as English and French. Meanwhile, low-resource languages, largely used in mid- or low-income countries, are harder to find—making it difficult to train a data-hungry AI translator with trusted datasets.
“Some human-labeled resources for translation are freely available, but often limited to a small set of languages or in very specific domains,” wrote the authors.
To get around the problem, the team used a technique called parallel data mining, which crawls the internet and other resources for audio snippets in one language with matching subtitles in another. These pairs, which match in meaning, add a wealth of training data in multiple languages—no human annotation needed. Overall, the team collected roughly 443,000 hours of audio with matching text, resulting in about 30,000 aligned speech-text pairs.
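Conceptually, the mining step boils down to embedding audio clips and candidate text snippets into a shared semantic space and keeping pairs whose meanings align. A toy sketch, with placeholder embeddings and an assumed similarity threshold:

```python
import numpy as np

def mine_pairs(audio_embs, text_embs, threshold=0.85):
    """Keep (audio, text) index pairs whose embeddings are near-duplicates
    in meaning. Encoders and threshold are illustrative placeholders."""
    # Normalize so the dot product equals cosine similarity
    a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = a @ t.T                          # (n_audio, n_text) similarity matrix
    best = sims.argmax(axis=1)              # best text match for each clip
    keep = sims[np.arange(len(a)), best] >= threshold
    return list(zip(np.where(keep)[0], best[keep]))

# Stand-in embeddings; a real pipeline would use a multilingual speech/text encoder
pairs = mine_pairs(np.random.randn(100, 512), np.random.randn(500, 512))
```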
SEAMLESSM4T consists of three different blocks, some handling text and speech input and others output. The translation part of the AI was pre-trained on a massive dataset containing 4.5 million hours of spoken audio in multiple languages. This initial step helped the AI “learn patterns in the data, making it easier to fine-tune the model for specific tasks” later on, wrote Alumäe. In other words, the AI learned to recognize general structures in speech regardless of language, establishing a baseline that made it easier to translate low-resource languages later.
The AI was then trained on the speech pairs and evaluated against other translation models.
Spoken Word
A key advantage of the AI is its ability to directly translate speech, without having to convert it into text first. To test this ability, the team hooked up an audio synthesizer to the AI to broadcast its output. Starting with any of the 101 languages it knew, the AI translated speech into 36 different tongues—including low-resource languages—with only a few seconds of delay.
The algorithm outperformed existing state-of-the-art systems, achieving 23 percent greater accuracy using a standardized test. It also better handled background noise and voices from different speakers, although—like humans—it struggled with heavily accented speech.
Lost in Translation
Language isn’t just words strung into sentences. It reflects cultural contexts and nuances. For example, translating a gender-neutral language into a gendered one could introduce biases. Does “I am a teacher” in English translate to the masculine “Soy profesor” or to the feminine “Soy profesora” in Spanish? What about translations for doctor, scientist, nanny, or president?
Mistranslations may also add “toxicity,” when the AI spews out offensive or harmful language that doesn’t reflect the original meaning—especially for words that don’t have a direct counterpart in the other language. While easy to laugh off as a comedy of errors in some cases, these mistakes are deadly serious when it comes to medical, immigration, or legal scenarios.
“These sorts of machine-induced error could potentially induce real harm, such as erroneously prescribing a drug, or accusing the wrong person in a trial,” wrote Allison Koenecke at Cornell University, who wasn’t involved in the study. The problem is likely to disproportionately affect people speaking low-resource languages or unusual dialects, due to a relative lack of training data.
To their credit, the Meta team analyzed their model for toxicity and fine-tuned it during multiple stages to lower the chances of gender bias and harmful language.
“This is a step in the right direction, and offers a baseline against which future models can be tested,” wrote Koenecke.
Meta is increasingly supporting open-source technology. The tech giant previously released PyTorch, a software library for AI training that has been used by companies including OpenAI and Tesla and by researchers around the globe. SEAMLESSM4T will also be made public for others to build on its abilities.
The AI is just the latest machine translator that can handle speech-to-speech translation. Previously, Google showcased AudioPaLM, an algorithm that can turn 113 languages into English—but only English. SEAMLESSM4T broadens the scope. Although it only scratches the surface of the roughly 7,000 languages spoken, the AI inches closer to a universal translator—like the Babel fish in The Hitchhiker’s Guide to the Galaxy, which translates languages from species across the universe when popped into the ear.
“The authors’ methods for harnessing real-world data will forge a promising path towards speech technology that rivals the stuff of science fiction,” wrote Alumäe.
The post Meta’s New AI Translates Speech in Real Time Across More Than 100 Languages appeared first on SingularityHub.
China Is About to Build the World’s Biggest Hydropower Dam—With Triple the Output of Three Gorges
Medog Hydropower Station, as it will be called, will blow other hydropower dams out of the water.
China’s electricity use over the last 30 years is a hockey-stick curve, climbing steeply as the country industrialized, built dozens of mega-cities, and became the world’s manufacturing center. Though China’s economy has slowed in recent years, electricity demand keeps climbing. Given the country has pledged to reach carbon neutrality by 2060, it’s going to need much more renewable power than it currently has.
To help them achieve that goal, the government recently announced plans to build the biggest hydropower dam in the world.
Medog Hydropower Station, as it will be called, will blow other hydropower dams out of the water (pun intended), with an estimated annual generation capacity triple that of the world’s largest existing dam (which, perhaps unsurprisingly, is also in China). The 60-gigawatt project will be able to generate up to 300,000 gigawatt-hours (or 300 terawatt-hours) of electricity per year. That’s equivalent to Greece’s annual energy consumption.
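Those two headline numbers imply an unusually high utilization rate for hydropower. A quick back-of-the-envelope check, using round figures rather than official projections:

```python
# What capacity factor do the announced figures imply?
capacity_gw = 60                 # installed capacity
annual_twh = 300                 # projected annual generation
hours_per_year = 8_760

max_possible_twh = capacity_gw * hours_per_year / 1_000  # 525.6 TWh if run flat out
capacity_factor = annual_twh / max_possible_twh
print(f"Implied capacity factor: {capacity_factor:.0%}")  # ~57%
```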
The dam will be built on a river in Tibet called the Yarlung Tsangpo, with construction carried out by the government-owned Power Construction Corporation of China. It will not only be one of China’s biggest infrastructure projects ever, it will be one of the most expensive infrastructure projects ever, with an estimated cost of a trillion yuan or $136 billion (yes, billion with a “b”).
Perhaps unsurprisingly, China is already home to the world’s largest existing hydropower dam: Three Gorges Dam on the Yangtze River stands 594 feet tall (Arizona’s Hoover Dam is taller, but Three Gorges is wider) and has a generating capacity of 22.5 gigawatts. By comparison, the biggest hydropower dam in the US is the Grand Coulee in Washington state, and it has a generating capacity of 6.8 gigawatts. China is the world leader in hydropower deployment, accounting for almost a third of global hydropower capacity. Many of those dams are on the Yangtze (some of them built by robots!) and some are on the same river where the Medog project will be built.
The Yarlung Tsangpo river starts in western Tibet, flowing east and then south into India, where it becomes the Brahmaputra, before passing through Bangladesh and emptying into the Bay of Bengal. It is the highest major river in the world, and a 31-mile (50-kilometer) section in the South Tibet Valley drops by a sharp 6,561 feet (2,000 meters); there’s loads of untapped potential for all that moving water to turn some turbines on its way down.
But the project is not without its challenges, both engineering and political.
Environmental groups say the dam will disrupt ecosystems on the biodiverse Tibetan Plateau. Tibetan rights groups see the project as a prime example of China exploiting Tibet’s natural resources while harming local communities. The dam’s construction will require people to be relocated, though likely not as many as Three Gorges, which uprooted and moved 1.4 million people. The Medog dam will be bigger, but it’s in a more sparsely populated area.
India and Bangladesh have both expressed concerns about the dam, as it could alter the flow of the river downstream where it runs through these countries. There are also concerns about the area’s geological stability, as it sits at the convergence of the Indian and Eurasian continental plates and is considered tectonically active. An earthquake could destroy the dam and cause catastrophic flooding. In fact, a magnitude 6.8 earthquake killed 126 people and damaged 4 reservoirs just last week.
However, Medog won’t be a conventional dam in the form of one giant wall built to hold water behind it, like Three Gorges or the Hoover Dam. Instead, four 12.4-mile (20-kilometer) tunnels will be blasted and excavated through a mountain called Namcha Barwa to divert the river. The water flowing through these tunnels will turn turbines attached to generators before running back into the Yarlung Tsangpo.
The Chinese government says the Medog project will help it achieve the country’s carbon neutrality goals. In 2023, coal was still China’s main source of electricity generation by a long shot, supplying 61 percent of the country’s electricity. Hydropower was a distant second at 13 percent, followed by wind, solar, nuclear, and gas, in that order.
Construction is slated to start in 2029, and if all goes as planned—which would be impressive for a project of this scale—it will take four years to complete, with the dam beginning commercial operation in 2033.
The post China Is About to Build the World’s Biggest Hydropower Dam—With Triple the Output of Three Gorges appeared first on SingularityHub.
Here’s What It Will Take to Ignite Scalable Fusion Power
There’s a growing sense that developing practical fusion energy is no longer an if but a when.
The way scientists think about fusion changed forever in 2022, when what some called the experiment of the century demonstrated for the first time that fusion can be a viable source of clean energy.
The experiment, at Lawrence Livermore National Laboratory, showed ignition: a fusion reaction generating more energy out than was put in.
In addition, the past few years have been marked by a multibillion-dollar windfall of private investment in the field, principally in the United States.
But a whole host of engineering challenges must be addressed before fusion can be scaled up to become a safe, affordable source of virtually unlimited clean power. In other words, it’s engineering time.
As engineers who have been working on fundamental science and applied engineering in nuclear fusion for decades, we’ve seen much of the science and physics of fusion reach maturity in the past 10 years.
But to make fusion a feasible source of commercial power, engineers now have to tackle a host of practical challenges. Whether the United States steps up to this opportunity and emerges as the global leader in fusion energy will depend, in part, on how much the nation is willing to invest in solving these practical problems—particularly through public-private partnerships.
Building a Fusion Reactor
Fusion occurs when two heavier forms of hydrogen, deuterium and tritium, collide under extreme conditions. The two nuclei literally fuse into one when heated to 180 million degrees Fahrenheit (100 million degrees Celsius)—10 times hotter than the core of the Sun. To make these reactions happen, fusion energy infrastructure will need to endure these extreme conditions.
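For reference, the reaction at the heart of these machines is standard deuterium-tritium fusion, which releases most of its energy as a fast neutron:

$$^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + n\,(14.1\ \mathrm{MeV})$$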
There are two approaches to achieving fusion in the lab: inertial confinement fusion, which uses powerful lasers, and magnetic confinement fusion, which uses powerful magnets.
While the “experiment of the century” used inertial confinement fusion, magnetic confinement fusion has yet to demonstrate that it can break even in energy generation.
Several privately funded experiments aim to achieve this feat later this decade, and a large, internationally supported experiment in France, ITER, also hopes to break even by the late 2030s. Both are using magnetic confinement fusion.
Challenges Lying Ahead
Both approaches to fusion share a range of challenges that won’t be cheap to overcome. For example, researchers need to develop new materials that can withstand extreme temperatures and irradiation conditions.
Fusion reactor materials also become radioactive as they are bombarded with highly energetic particles. Researchers need to design new materials that can decay within a few years to levels of radioactivity that can be disposed of safely and more easily.
Producing enough fuel, and doing it sustainably, is also an important challenge. Deuterium is abundant and can be extracted from ordinary water. But ramping up the production of tritium, which is usually produced from lithium, will prove far more difficult. A single fusion reactor will need hundreds of grams to one kilogram (2.2 pounds) of tritium a day to operate.
Right now, conventional nuclear reactors produce tritium as a byproduct of fission, but these cannot provide enough to sustain a fleet of fusion reactors.
So, engineers will need to develop the ability to produce tritium within the fusion device itself. This might entail surrounding the fusion reactor with lithium-containing material, which the reaction will convert into tritium.
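The standard breeding reaction engineers have in mind captures the fusion neutron with lithium-6 to regenerate the tritium fuel:

$$n + {}^{6}\mathrm{Li} \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}$$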
To scale up inertial fusion, engineers will need to develop lasers capable of repeatedly hitting a fusion fuel target, made of frozen deuterium and tritium, several times per second or so. But no laser is powerful enough to do this at that rate—yet. Engineers will also need to develop control systems and algorithms that direct these lasers with extreme precision on the target.
A laser setup that Farhat Beg’s research group plans to use to repeatedly hit a fusion fuel target. The goal of the experiments is to better control the target’s placement and tracking. The lighting is red from colored gels used to take the picture. David Baillot/University of California San Diego
Additionally, engineers will need to scale up production of targets by orders of magnitude: from a few hundred handmade every year at a price of hundreds of thousands of dollars each to millions costing only a few dollars each.
For magnetic containment, engineers and materials scientists will need to develop more effective methods to heat and control the plasma and more heat- and radiation-resistant materials for reactor walls. The technology used to heat and confine the plasma until the atoms fuse needs to operate reliably for years.
These are some of the big challenges. They are tough but not insurmountable.
Current Funding Landscape
Investments from private companies globally have increased and will likely continue to be an important factor driving fusion research forward. Fusion companies have attracted over $7 billion in private investment in the past five years.
Several startups are developing different technologies and reactor designs with the aim of adding fusion to the power grid in coming decades. Most are based in the United States, with some in Europe and Asia.
While private sector investments have grown, the US government has played a key role in the development of fusion technology to this point. We expect it to continue to do so in the future.
It was the US Department of Energy that invested about $3 billion to build the National Ignition Facility at the Lawrence Livermore National Laboratory in the mid-2000s, where the “experiment of the century” took place 12 years later.
In 2023, the Department of Energy announced a 4-year, $42 million program to develop fusion hubs for the technology. While this funding is important, it likely will not be enough to solve the most important challenges that remain for the United States to emerge as a global leader in practical fusion energy.
One way to build partnerships between the government and private companies in this space could be to create relationships similar to that between NASA and SpaceX. As one of NASA’s commercial partners, SpaceX receives both government and private funding to develop technology that NASA can use. It was the first private company to send astronauts to space and the International Space Station.
Along with many other researchers, we are cautiously optimistic. New experimental and theoretical results, new tools and private sector investment are all adding to our growing sense that developing practical fusion energy is no longer an if but a when.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Here’s What It Will Take to Ignite Scalable Fusion Power appeared first on SingularityHub.
A ChatGPT Moment Is Coming for Robotics. AI World Models Could Help Make It Happen.
Robots need an internal representation of the world and its rules, like the one in our heads.
If you’re not familiar with the concept of “world models” just yet, a storm of activity at the start of 2025 gives every indication it may soon become a well-known term.
Jensen Huang, CEO of Nvidia, used his keynote presentation at CES to announce a new platform, Cosmos, for what the company calls “world foundation models.” Cosmos is a generative AI tool that produces virtual-world-like videos. The next day, Google DeepMind revealed similar ambitions with a project led by a former OpenAI engineer. This all comes several months after an intriguing startup, World Labs, achieved unicorn status—a valuation of $1 billion or more—within just four months of its founding to do the same thing.
To understand what world models are, it’s worth pointing out that we’re at an inflection point in the way we build and deploy intelligent machines like drones, robots, and autonomous vehicles. Rather than explicitly programming behavior, engineers are turning to 3D computer simulation and AI to let the machines teach themselves. This means physically accurate virtual worlds are becoming an essential source of training data to teach machines to perceive, understand, and navigate three-dimensional space.
What large language models are to systems like ChatGPT, world models are to the virtual world simulators needed to train robots. In short, world models are a type of generative AI capable of producing 3D environments and simulating virtual worlds. Just as ChatGPT is built around an intuitive chat interface, world-model interfaces might allow more people, even those without technical game-development skills, to build 3D virtual worlds. They could also help robots better understand, plan, and navigate their surroundings.
To be clear, most early world models, including those announced by Nvidia, generate spatial training data in video format. There are, however, already models capable of producing fully immersive scenes as well. One tool, made by a startup called Odyssey, uses Gaussian splatting to create scenes that can be loaded into 3D software tools like Unreal Engine and Blender. Another startup, Decart, demoed its world model as a playable version of a game similar to Minecraft. DeepMind has similarly gone the video game route.
All this reflects the potential for changes in the way computer graphics work at a foundational level. In 2023, Huang predicted that in the future, “every single pixel will be generated, not rendered but generated.” He’s recently taken a more nuanced view by saying that traditional rendering systems aren’t likely to fully disappear. It’s clear, however, that generative AI predicting which pixels to show may soon encroach on the work that game engines do today.
The implications for robotics are potentially huge.
Nvidia is now working hard to establish the branding label “physical AI” as a term for the intelligent systems that will power warehouse AMRs, inventory drones, humanoid robots, autonomous vehicles, farmer-less tractors, delivery robots, and more. To give these systems the ability to perform their work effectively in the real world, especially in environments with humans, they must train in physically accurate simulations. World models could potentially produce synthetic training scenarios of any variety imaginable.
This idea is behind the shift in the way companies articulate the path forward for AI, and World Labs is perhaps the best expression of this. Founded by Fei-Fei Li, known as the godmother of AI for her foundational work in computer vision, World Labs defines itself as a spatial intelligence company. In their view, to achieve true general intelligence, AIs will need an embodied ability to “reason about objects, places, and interactions in 3D space and time.” Like their competitors, they are seeking to build foundation models capable of moving AI into three-dimensional space.
In the future, these could evolve into an internal, humanlike representation of the world and its rules. This might allow AIs to predict how their actions will affect the environment around them and plan reasonable approaches to accomplish a task. For example, an AI may learn that if you squeeze an egg too hard it will crack. Yet context matters. If your goal is placing it in a carton, go easy, but if you’re preparing an omelet, squeeze away.
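At its core, that kind of foresight can be sketched as an action-conditioned dynamics model: given the current state of the world and a candidate action, predict the next state, then repeat to imagine whole futures. The sketch below is illustrative—its dimensions and architecture are assumptions, not any company’s design:

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Toy action-conditioned dynamics model over latent world states."""
    def __init__(self, state_dim=128, action_dim=8, hidden=256):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, state_dim),  # predicted next latent state
        )

    def forward(self, state, action):
        return self.dynamics(torch.cat([state, action], dim=-1))

    def rollout(self, state, actions):
        # Imagine a trajectory: feed each prediction back in as the new state,
        # letting an agent "plan" without touching the real world
        states = []
        for action in actions.unbind(dim=1):
            state = self.forward(state, action)
            states.append(state)
        return torch.stack(states, dim=1)

model = WorldModel()
state = torch.randn(1, 128)            # current latent state of the scene
plan = torch.randn(1, 10, 8)           # ten candidate actions
imagined = model.rollout(state, plan)  # (1, 10, 128): predicted futures
```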
While world models may be experiencing a bit of a moment, it’s early, and there are still significant limitations in the short term. Training and running world models requires massive amounts of computing power even compared to today’s AI. Additionally, models aren’t reliably consistent with the real world’s rules just yet, and like all generative AI, they will be shaped by the biases within their own training data.
As TechCrunch’s Kyle Wiggers writes, “A world model trained largely on videos of sunny weather in European cities might struggle to comprehend or depict Korean cities in snowy conditions.” For these reasons, traditional simulation tools like game and physics engines will still be used for quite some time to render training scenarios for robots. And Meta’s head of AI, Yann LeCun, who wrote deeply about the concept in 2022, still thinks advanced world models—like the ones in our heads—will take a while longer to develop.
Still, it’s an exciting moment for roboticists. Just as ChatGPT marked an inflection point when AI entered mainstream awareness, robots, drones, and embodied AI systems may be nearing a similar breakout moment. To get there, physically accurate 3D environments will become the training ground where these systems learn and mature.
Early world models may make it easier than ever for developers to generate the countless number of training scenarios needed to bring on an era of spatially intelligent machines.
The post A ChatGPT Moment Is Coming for Robotics. AI World Models Could Help Make It Happen. appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through January 11)
These were our favorite articles in science and tech this week.
Google Is Forming a New Team to Build AI That Can Simulate the Physical World Kyle Wiggers | TechCrunch
“‘We believe scaling [AI training] on video and multimodal data is on the critical path to artificial general intelligence,’ reads one of the job descriptions. Artificial general intelligence, or AGI, generally refers to AI that can accomplish any task a human can. ‘World models will power numerous domains, such as visual reasoning and simulation, planning for embodied agents, and real-time interactive entertainment.'”
Nvidia Announces $3,000 Personal AI Supercomputer Called Digits Kylie Robison | The Verge
“The desktop-sized system can handle AI models with up to 200 billion parameters. …For even more demanding applications, two Project Digits systems can be linked together to handle models with up to 405 billion parameters (Meta’s best model, Llama 3.1, has 405 billion parameters). The GB10 chip delivers up to 1 petaflop of AI performance—meaning it can perform 1 quadrillion AI calculations per second—at FP4 precision (which helps make the calculations faster by making approximations).”
AI Could Create 78 Million More Jobs Than It Eliminates by 2030—Report Benj Edwards | Ars Technica
“On Wednesday, the World Economic Forum (WEF) released its Future of Jobs Report 2025, with CNN immediately highlighting the finding that 40 percent of companies plan workforce reductions due to AI automation. But the report’s broader analysis paints a far more nuanced picture than CNN’s headline suggests: It finds that AI could create 170 million new jobs globally while eliminating 92 million positions, resulting in a net increase of 78 million jobs by 2030.”
This Robovac Has an Arm—and Legs, Too Jennifer Pattison Tuohy | The Verge
“Dreame says its arm can pick up sneakers as large as men’s size 42 (a size 9 in the US) and take them to a designated spot in your home. The concept could apply to small toys and other items, and you’ll be able to designate specific areas for the robot to take certain items, such as toys to the playroom and shoes to the front door.”
A Virtual Cell Is a ‘Holy Grail’ of Science. It’s Getting Closer. Matteo Wong | The Atlantic
“Scientists are now designing computer programs that may unlock the ability to simulate human cells, giving researchers the ability to predict the effect of a drug, mutation, virus, or any other change in the body, and in turn making physical experiments more targeted and likelier to succeed.”
Predicting the ‘Digital Superpowers’ We Could Have by 2030 Louis Rosenberg | Big Think
“Computer scientist Louis B. Rosenberg predicts that context-aware AI agents will bring ‘digital superpowers’ into our daily experiences by 2030. The convergence of AI and body-worn devices, like AI-powered glasses, will likely enable these new abilities. Rosenberg outlines his predictions for the future of technologies like AI, augmented reality, and conversational computing across three phases.”
The Ocean Teems With Networks of Interconnected Bacteria Veronique Greenwood | Quanta
“The Prochlorococcus [bacteria] population may be more connected than anyone could have imagined. They may be holding conversations across wide distances, not only filling the ocean with envelopes of information and nutrients, but also linking what we thought were their private, inner spaces with the interiors of other cells.”
These Newly Identified Cells Could Change the Face of Plastic Surgery Max G. Levy | Wired
“The cells appear to simultaneously provide structure (like cartilage) and natural squishiness (like fat). They appear in many mammals, including humans, and the unique structure they provide gives reconstructive surgeons a clearer understanding of what materials make up our faces. Plikus believes this new tissue discovery sets the stage for better cartilage transplants—and so better plastic surgery.”
Transforming the Moon Into Humanity’s First Space Hub Saurav Shroff | Wired
“This year will mark a turning point in humanity’s relationship with the moon, as we begin to lay the foundations for a permanent presence on its surface, paving the way for our natural satellite to become an industrial hub—one that will lead us to Mars and beyond.”
Blue Origin Is Ready to Challenge SpaceX With Its New Glenn Rocket
The company hopes to break SpaceX’s industry stranglehold with New Glenn.
Jeff Bezos’s rocket company Blue Origin hopes to become a major rival to SpaceX in the private space industry. But those ambitions are on hold after the company postponed the test launch of its new rocket earlier today.
Despite growing investment across the private space industry, Elon Musk’s SpaceX has converted its first-mover advantage into near-total dominance of the market, accounting for 45 percent of global space launches in 2023. But Blue Origin is hoping to break that stranglehold with its heavy-lift New Glenn rocket, successor to the New Shepard suborbital launch vehicle that carried Bezos to space in 2021.
The vehicle’s first test launch was due to lift off from Cape Canaveral Space Force Station in Florida at 1 a.m. Eastern Time (ET) this morning, but Blue Origin postponed the launch at the last minute due to rough weather at the landing zone in the Atlantic Ocean. The company shouldn’t have to wait long for another attempt, though: It announced on X that it may try again as early as this Sunday.
The rocket—named after the first American to orbit Earth, NASA astronaut John Glenn—is 320 feet tall and designed to carry 45 tons to low Earth orbit. That places its payload capacity between SpaceX’s Falcon 9 and Falcon Heavy rockets, which can carry roughly 22 and 64 tons, respectively.
New Glenn features two stages. A booster provides most of the thrust to get the vehicle into the upper atmosphere and then detaches, allowing a smaller second stage to deliver the payload to orbit.
Like the first stages of SpaceX’s rockets, New Glenn’s booster is designed to fly up to 25 times. After separating from the second stage, it will return to Earth and attempt to land on a barge in the ocean. The company is planning a landing attempt on this initial test launch, which is why poor weather at sea prompted today’s postponement.
Reusability has dramatically reduced SpaceX’s costs compared to competitors. Proving Blue Origin can reuse its rockets too will be crucial if it hopes to muscle in on a share of the launch market.
New Glenn won’t have a commercial payload for the test launch. Instead, it will carry a demonstrator designed to test key technologies for its future Blue Ring spacecraft, including a communications array, power systems, and flight computer.
Blue Ring is designed to carry multiple satellites into orbit and then maneuver to different orbits and locations to deploy them. Blue Origin hopes this will allow the company to provide much more flexible launch services than competitors.
Customers are already lining up.
The test launch was originally slated to carry a NASA mission to Mars, though that mission will now fly on a later New Glenn launch. The US Space Force has also selected the company, alongside SpaceX and United Launch Alliance, to compete for various missions over the next four years.
It is also likely to get a significant amount of business from Bezos’s other venture, Amazon, which is planning to deploy a constellation of internet satellites dubbed Project Kuiper to compete with SpaceX’s Starlink.
While much of this will depend on the success of the test launch, a positive result could herald a much more competitive era for the private launch industry. That’s likely to reduce barriers to space even further and help spur the burgeoning space economy.