A Humanoid Robot Beat the Human World Record for a Half Marathon
A year after most robots failed to finish the Beijing race, nearly half the field autonomously ran a course of slopes, narrow passages, and 20 turns.
Humanoid robots are Silicon Valley’s latest obsession, but real-world performance has lagged the hype. That may be starting to change after a robot beat the human record for a half marathon by nearly seven minutes in Beijing.
While tech companies around the world are piling into humanoid robots, China has made it a national priority. The government is pouring subsidies and infrastructure investment into the sector, and Chinese firms already account for around 80 percent of the humanoid machines shipped globally, according to the South China Morning Post.
Eager to show off its prowess, China has been staging sporting events for robots, most notably last year’s inaugural World Humanoid Robot Games. Another such event, the Beijing E-Town Half Marathon, pits humanoid robots against thousands of human runners over a 13-mile course. Last year, most of the non-human competitors failed to finish, and the fastest robots managed an unimpressive two hours and 40 minutes.
But this time around, four robots clocked times under an hour. And the winner, made by Chinese smartphone company Honor, registered a record-breaking 50 minutes, 26 seconds, eclipsing the benchmark set by Ugandan long-distance runner Jacob Kiplimo in Lisbon last month.
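The finishing time above translates into a striking average pace. A quick back-of-the-envelope check (the 50:26 time and 13.1-mile half-marathon distance come from the article; everything else is unit conversion):

```python
# Back-of-the-envelope pace check for the robot's half-marathon time.
half_marathon_miles = 13.1
km_per_mile = 1.609344
finish_seconds = 50 * 60 + 26  # 50 minutes, 26 seconds

distance_km = half_marathon_miles * km_per_mile       # ~21.08 km
pace_s_per_km = finish_seconds / distance_km          # average pace
speed_kmh = distance_km / (finish_seconds / 3600)     # average speed

print(f"pace: {pace_s_per_km / 60:.2f} min/km")   # ~2.39 min/km
print(f"speed: {speed_kmh:.1f} km/h")             # ~25.1 km/h
```

That works out to roughly 25 kilometers per hour sustained for over 50 minutes, which is why the result eclipsed the elite human benchmark.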
“Running faster may not seem meaningful at first, but it enables technology transfer, for example, into structural reliability and cooling, and eventually industrial applications,” Du Xiaodi, an engineer on the winning team, told Reuters.
More than 100 teams fielded 300 robots at this year’s event, up from just 21 entries at the inaugural event last year. But Honor, a spinoff from Chinese telecom giant Huawei, dominated the competition, with separate teams from the company taking all three podium spots.
The winning robot, Lightning, navigated the course entirely autonomously. The bot stands 5 feet 6 inches tall but features legs 37 inches long to mimic the physical attributes of elite runners. It also boasts liquid cooling technology used in the company’s smartphones.
The growing sophistication of the robots’ control software is perhaps one of the starkest shifts since last year, with roughly 40 percent of teams operating autonomously. This is particularly impressive given the challenging course, according to Bernstein Research analysts.
“The course included flat sections, slopes, narrow passages, and ~ 20 turns, demonstrating rapid improvement in robots’ intelligence to handle generalized environments in the real world,” they wrote, according to Bloomberg.
But the technology isn’t bulletproof yet. One robot ran into a barricade and had to be carried off on a stretcher. Another veered into a bush after crossing the finish line. And one continued racing with its torso held together by packing tape after a heavy fall.
Nonetheless, the race showcased the rapid progress China’s tech industry is making, particularly in the raw components used to build these machines, like motors, joints, and batteries. Liu Xiangquan, a robotics professor at Beijing Information Science and Technology University, told the South China Morning Post that long-distance running is a great test of how well these components can stand up to the kind of repeated strain that will occur in industrial settings.
And that’s likely to cause some consternation in US policy circles, where many see robotics as a key battlefront in the growing technological rivalry between the two superpowers.
Behind Sunday’s spectacle is a higher-stakes contest between China and the US over who will dominate the next generation of humanoids. US robotics firms have been lobbying Washington to draft a national strategy to counter China, which could include tariffs or bans on Chinese robots to help protect domestic producers.
However, running fast in a straight line is a very different challenge than the fine motor control and perception demanded by commercial applications. Experts told Reuters that despite impressive hardware, robotics companies are still a long way from developing the sophisticated software required to put these humanoids to practical use.
Still, these machines struggled to get over the starting line just a year ago. The gap between humanoid robots and human athletes has closed faster than anyone expected, so betting against further rapid progress seems unwise.
The post A Humanoid Robot Beat the Human World Record for a Half Marathon appeared first on SingularityHub.
CATL’s New EV Battery Charges in Six Minutes
That’s a few minutes longer than it takes to fill up the average gas-powered car—but still fast enough it might not matter.
For all their promise, electric cars have always had a big drawback: Charging takes much longer than filling up a gas tank.
But the gap has been closing, and this week, Chinese battery giant CATL announced battery technology nearing parity. On Tuesday, the company said its third-generation Shenxing fast-charging battery goes from 10 percent to 98 percent charged in 6 minutes and 27 seconds.
If you’re driving an electric car around town, charging is a breeze. You probably don’t have to do it more than a couple times a month. And when you do, you can plug your car in overnight at home.
For longer trips, you’ll need a charging station. Smartphone apps can help, and drivers learn to plan ahead, but it’s still a pain. Stations aren’t abundant, and when you find one, there may be a line. A full charge will then take the better part of an hour. Most people aim for 80 percent, but even that consumes up to a half hour. EV fans may find it’s worth the trouble, but range is a sticking point for many drivers.
It’s no wonder that battery makers have been hyper-focused on energy density, which determines how far EVs can go, and charging speed. They’ve improved both in recent years. But increasing range, which involves balancing a complex mix of battery chemistries, weight, and economics, may prove a tougher tradeoff to manage than bringing charging times in line with gas-powered cars at the pump.
In other words, if you can travel the same distance and charge or gas up in roughly the same amount of time, the two become interchangeable on long trips. (This also depends, of course, on infrastructure—more on that below.)
CATL has been pushing the boundaries of charging speeds with its Shenxing line of fast-charging batteries, first announced in 2023. The company is the world’s largest EV battery manufacturer. Its products power EVs in China but also American brands including Tesla and Ford.
The numbers are hard to compare generation to generation and company to company, as the specs reported vary. The second-generation Shenxing battery, announced last year, charged from 5 percent to 80 percent in 15 minutes, according to the Financial Times. Then in March of this year, rival battery maker BYD said its Blade 2.0 model charged 10 percent to 97 percent in 9 minutes.
Notching nearly a full charge in under 10 minutes was already an impressive mark.
But on Tuesday, CATL one-upped BYD with its third-generation Shenxing, which takes a full charge in a little over six minutes. Even at the maximum legal pump rate of 10 gallons per minute at US gas stations, that’s still a few minutes longer than it takes to fill up most gas-powered cars. But it might also be fast enough not to matter. Big gas-powered trucks are already in the same range. And CATL said charging to 80 percent takes just 3 minutes and 44 seconds—which is nearly a wash.
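One rough way to put the specs cited above on a common footing is percentage points of charge gained per minute. The start/end percentages and times below come from the reporting; treating the rate as linear is my own simplification, since real charging curves taper as the battery fills:

```python
# Normalize the reported fast-charging specs to percentage points per minute.
# Figures as reported; real charge curves are nonlinear, so this is only a
# rough apples-to-apples comparison.
specs = {
    "Shenxing gen 2":  (5, 80, 15.0),   # 5% -> 80% in 15 min
    "BYD Blade 2.0":   (10, 97, 9.0),   # 10% -> 97% in 9 min
    "Shenxing gen 3":  (10, 98, 6.45),  # 10% -> 98% in 6 min 27 s
}

for name, (start, end, minutes) in specs.items():
    rate = (end - start) / minutes
    print(f"{name}: {rate:.1f} points/min")
```

By this crude metric, each generation roughly doubles or near-doubles the effective charging rate of the one before it.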
“This effectively closes the gap with ICE [internal combustion engine] vehicles,” Bernstein analysts wrote in a note quoted by the Wall Street Journal.
Fast-charging batteries have shorter lifespans due to excess heat. But CATL said it’s tamed the heat by decreasing the amount produced in operation, more effectively bleeding it off, and controlling how and when it’s generated. The battery retains over 90 percent capacity after 1,000 charging cycles.
“The boundaries of electrochemistry are still far from being reached, and the possibilities of materials science are still far from being exhausted,” CATL founder and CEO, Robin Zeng, told reporters and investors, per the Financial Times.
With 6-minute charging times, it’s easy to imagine charging station lines evaporating. Instead of drivers grabbing a meal while their car takes up real estate, they’d breeze in and out, like at a gas station.
That vision will take time to materialize, however. There are still far fewer charging stations than there are gas pumps. And those that do exist won’t include chargers that handle the bleeding edge anytime soon.
As for the batteries themselves, splashy press releases don’t usually translate to near-term availability and might not match real-world performance. The third-generation Shenxing isn’t likely to hit roads right away. When it does, it could show up in Chinese models first, be pricey (like BYD’s latest offering), and require fancy new chargers.
Still, it’s no longer theoretical: EVs can compete with the convenience of traditional cars at the gas station.
The post CATL’s New EV Battery Charges in Six Minutes appeared first on SingularityHub.
Scientists Revive Failing Cells With Mitochondria Transplants
A new tool that tethers healthy mitochondria to ailing cells has shown promise in mice with inherited blindness.
Our cells produce energy in biological power plants called mitochondria. These energy-makers have minds of their own. They operate using a unique set of DNA and can travel outside cells. Like astronauts, they often escape in fatty bubbles, land on other cells, explore them, and sometimes literally fuse with native mitochondria in their new homes.
This makes mitochondrial diseases hard to treat. Few gene editing tools can reach them and fix genetic typos. Even without mutations, mitochondria falter with age, contributing to diabetes, Alzheimer’s disease, heart failure, and other medical scourges.
But an experimental fix is gaining traction. Researchers are shuttling healthy mitochondria into cells—essentially transplanting them—to restore energy production and reboot metabolism.
There’s a major roadblock, however. Getting healthy mitochondria to the right cells is challenging. Scientists at the Institute of Molecular and Clinical Ophthalmology Basel have now developed a system that tethers donated mitochondria to their targets.
In the new system, called MitoCatch, the scientists engineered matching proteins and attached them to donor mitochondria and recipient cells. Like hook-and-eye fasteners, the binders pull the two partners into close contact. From there—by mechanisms that are still mysterious—the new mitochondria ride in on fatty bubbles, disembark inside the cell, and get to work.
In the study, the researchers delivered mitochondria to multiple cell types, and an injection of mitochondria saved vulnerable retinal cells in mice with inherited blindness.
“As a therapy, mitochondria transplantation has been hindered by the lack of tools to target healthy mitochondria directly to disease-affected cells,” wrote Samantha Krysa and Jonathan Brestoff at Washington University School of Medicine, who were not involved in the study.
MitoCatch overcomes this barrier.
Domesticated Bacteria

Roughly two billion years ago, an ancestral cell ate a bacterium. But rather than digesting it, the cell formed an unlikely alliance with its erstwhile prey. The bacterium converted oxygen into energy for the host, and received protection and nutrients in return. Over time, the bacterium gave up its independence and became a critical part of our cells: mitochondria.
Unlike most other organelles, the specialized structures inside our cells, mitochondria carry 37 unique genes that encode the core components of their energy-making machinery. Their stripped-down genome leaves little margin for error and is especially vulnerable to mutation. It’s also shielded by a double membrane, making it difficult to reach using conventional biotech tools.
But mitochondria have a superpower: They can leave host cells. Research from the last two decades shows that many cells export some mitochondria into the cellular void. The practice could be a way to rid themselves of damaged mitochondria or to deliver healthy ones to struggling neighbors, like an intercellular care package.
This quirk led to the idea of mitochondrial transplantation. Here, healthy mitochondria are injected into tissue or the bloodstream to treat damaged cells. Early results are encouraging. Transplant extends the healthy lifespan of mice with mitochondrial defects, limits injury after stroke or heart attack, accelerates wound healing in people, and hints at benefits for obesity.
Because nearly every human cell depends on mitochondria for energy—and falters when they break—transplantation could unlock treatments for a broad range of diseases hard to treat today. That is, if healthy replacements can reach their destination.
“Being able to deliver mitochondria efficiently to the right cell types has been a key hurdle for this therapeutic strategy,” wrote Krysa and Brestoff.
Catch Me if You Can

MitoCatch relies on a cellular “handshake.” All cell surfaces are densely studded with proteins, some universal, others unique to specific cell types. These proteins interact with surrounding molecules to drive biological processes. During infection, for example, antibodies latch onto proteins on bacteria to trigger an immune attack. CAR T cell therapy outfits T cells with protein “binders” so they can better recognize and eliminate cancer cells, senescent cells, or cells involved in autoimmune disorders. In each case, success hinges on matched protein pairs snapping together like hook-and-eye fasteners.
The new system works on the same principle and has three designs. MitoCatch-M helps donor mitochondria recognize markers unique to different types of recipient cells. MitoCatch-C flips the approach, modifying recipient cells with binders that better capture mitochondria. And a third version uses a “bispecific” tether that simultaneously grips mitochondria and target cells. Once in close proximity, mitochondria are packaged in fatty bubbles that drift into the cell.
Then comes a brief moment of terror.
Many of these bubbles are routed to the cell’s waste processing organelle, where their cargo is completely destroyed. The mitochondria must escape before it’s too late.
In cultured brain, retinal, heart, skin, and immune cells, the tailored mitochondria largely avoided death. How they managed this is up for debate, and the team is trying to work it out now. But once inside, the donor mitochondria fused with the cell’s native mitochondrial network.
This “suggests that MitoCatch can be used to enhance the efficacy of mitochondria transplantation substantially,” wrote Krysa and Brestoff.
Of course, cells in a dish aren’t the same as those in bodies. In another test, the team injected the engineered mitochondria into the eyes of mice with a hereditary condition where a single mitochondrial genetic defect destroys cells in the retina, resulting in gradual vision loss.
Over 10 days, the healthy mitochondria revamped treated cells’ metabolisms, reduced damage, and boosted survival and response to light. Whether this translates to better vision remains to be seen, but the treatment didn’t trigger an immune response, a promising sign it might be safe. To be clear, the transplanted mitochondria didn’t correct the underlying mutation. Instead, they supplied enough working versions of the gene to bring energy production back to life.
It’s “a proof-of-principle that mitochondria transplantation can be used to correct mutations encoded in the mitochondrial genome that cause a severe form of vision loss,” wrote Krysa and Brestoff.
MitoCatch isn’t ready for prime time. It requires extensive genetic engineering, making the system difficult to translate for routine treatment. It’s also still unclear how long transplanted mitochondria last in their new hosts and whether they have a lasting benefit.
These early results highlight the ways scientists can boost the therapy’s potential. With more work, we may have a new way to tackle previously untreatable mitochondrial disorders.
The post Scientists Revive Failing Cells With Mitochondria Transplants appeared first on SingularityHub.
Printed Neurons That Mimic Brain Cells Could Slash AI’s Energy Bill
New artificial neurons fire so realistically they can activate living brain cells in mouse tissue.
As AI demands ever more power, researchers are looking to the brain for more efficient ways to process information. A new approach uses soft, flexible electronics to create artificial neurons that can mimic biological signaling and even directly interface with living neural tissue.
Researchers have long attempted to create so-called “neuromorphic” chips made of artificial neurons that mimic the spiking behavior of their biological counterparts. But there are still wide gaps between how these devices and brains operate.
Real neurons in the brain display a wide variety of activity patterns, which helps them encode and process information extremely efficiently. In contrast, most artificial neurons are carbon copies of each other with highly uniform spiking behavior, forcing neuromorphic chips to use millions of these neurons to achieve even modest functionality.
Now, a team from Northwestern University has designed a novel fabrication technique to create artificial neurons that mimic the complex signaling patterns found in the brain. The neurons’ output was so realistic that they successfully stimulated neurons in mouse brain tissue. More importantly, the approach could lay the groundwork for much more energy efficient AI.
“Silicon achieves complexity by having billions of identical devices,” Mark Hersam, who co-led the research, said in a press release. “Everything is the same, rigid and fixed once it’s fabricated. The brain is the opposite. It’s heterogeneous, dynamic and three-dimensional. To move in that direction, we need new materials and new ways to build electronics.”
The team created their artificial neurons, described in a paper in Nature Nanotechnology, by jet printing special electronic ink onto a flexible polymer. The ink contains nanoscale flakes of molybdenum disulfide, which acts as a semiconductor, and graphene, which serves as an electrical conductor.
The ink also contains a stabilizing polymer researchers typically burn off after printing to prevent it from interfering with the flow of current. But the researchers discovered that by leaving some of it behind, they could introduce imperfections that result in far more sophisticated signaling behavior.
Rather than completely burning the material away, they partially decomposed it. Then when they passed a current through the printed neurons, the polymer broke down further, but in an uneven pattern that created a conductive thread where current gets squeezed into a tight channel.
This constricted pathway rapidly switches on and off, firing sharp voltage spikes that look a lot like the spikes found in real neurons. The device doesn’t just produce simple on-off pulses, but everything from isolated spikes to sustained firing to rhythmic bursts, much like a real neuron.
With just two of these printable neurons and some basic circuit components, the researchers produced sophisticated spiking patterns. And crucially, they were able to tune the length and frequency of spikes to match the timing of biological action potentials, which could be useful for applications like bioelectronic medicine or brain-computer interfaces.
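The printed devices work through materials physics rather than software, but the range of spiking behavior described above is commonly illustrated with a leaky integrate-and-fire model. This sketch is a standard textbook neuron, not a model of the Northwestern device:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a textbook model of the
# kind of spiking behavior described above, not the printed device itself.
def lif_spikes(input_current, steps=1000, dt=0.1,
               tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate membrane voltage; return time-step indices of spikes."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Voltage leaks back toward rest while integrating the input current.
        dv = (-(v - v_rest) + input_current) * dt / tau
        v += dv
        if v >= v_thresh:       # threshold crossed: emit a spike, reset
            spikes.append(t)
            v = v_reset
    return spikes

# A stronger input current produces a higher firing rate,
# analogous to tuning spike frequency in the printed neurons.
low = lif_spikes(input_current=1.2)
high = lif_spikes(input_current=3.0)
print(len(low), len(high))
```

In this simple model, varying the input changes firing frequency; the printed neurons go further, producing bursts and sustained firing patterns that a basic LIF neuron can’t.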
To test whether they could go beyond simply matching the numbers, the team worked with Northwestern neurobiology professor Indira Raman to hook up their artificial neurons to slices of mouse cerebellum and fire spikes into the tissue. The biological neurons fired in response, showing the synthetic signals were convincing enough to activate real neural circuits.
“You can see the living neurons respond to our artificial neuron,” said Hersam. “So, we’ve demonstrated signals that are not only the right timescale but also the right spike shape to interact directly with living neurons.”
While those capabilities could lead to some interesting applications, the researchers mainly hope the technology can reduce AI’s energy bill by mimicking the brain’s more efficient processing.
“To meet the energy demands of AI, tech companies are building gigawatt data centers powered by dedicated nuclear power plants,” Hersam said. This can only scale so far, in terms of power and cooling, he said. “However you look at it, we need to come up with more energy-efficient hardware for AI.”
Given the long, tortuous path from lab bench to factory floor, it seems unlikely this technology will be making a dent in the industry’s power bill any time soon. But it could lay the groundwork for a smarter way to do computation in the future.
The post Printed Neurons That Mimic Brain Cells Could Slash AI’s Energy Bill appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through April 18)
Physical Intelligence, a Hot Robotics Startup, Says Its New Robot Brain Can Figure Out Tasks It Was Never Taught
Connie Loizos | TechCrunch
“Physical Intelligence, the two-year-old, San Francisco-based robotics startup that has quietly become one of the most closely watched AI companies in the Bay Area, published new research Thursday showing that its latest model can direct robots to perform tasks they were never explicitly trained on—a capability the company’s own researchers say caught them off guard.”
Artificial Intelligence
Want to Understand the Current State of AI? Check Out These Charts.
Michelle Kim | MIT Technology Review ($)
“If you’re following AI news, you’re probably getting whiplash. AI is a gold rush. AI is a bubble. AI is taking your job. AI can’t even read a clock. The 2026 AI Index from Stanford University’s Institute for Human-Centered Artificial Intelligence, AI’s annual report card, comes out today and cuts through some of that noise.”
Science
Sperm Whales Speak With a Complex Alphabet and Even Have ‘Vowels,’ Study Finds
Matthew Phelan | Gizmodo
“Sperm whales: They’re just like us. An international team of researchers, including marine biologists and linguists, reports that it has detected signs of a ‘highly complex’ phonetic alphabet in the calls of sperm whales—including ‘vowels’ deployed in patterns akin to their use in human languages like Mandarin, Latin, and Slovenian.”
Biotechnology
The DNA Fix for Aging
Roxanne Khamsi | The Atlantic ($)
“Now that scientists have described just how much mutation happens in aging, they’re curious if DNA repair might offer a counteracting force. In other words, does fixing DNA improve longevity? Biologists are taking different tacks to find out.”
Future
Why Do We Tell Ourselves Scary Stories About AI?
Amanda Gefter | Quanta Magazine
“Suddenly, I understood the racing heart of the modern AI horror genre. It’s not intelligence we fear, but desire. A machine that knows a lot doesn’t scare us. A machine that wants something does. But can it? Want things? Can it crave power? Thirst for resources? Can it acquire the will to survive?”
Robotics
You Can Soon Buy a $4,370 Humanoid Robot on AliExpress
Marco Trabucchi | Wired ($)
“Unitree is bringing its R1 to international markets. It arrives with some aerobatic capabilities and an entry-level price, but the question of what you’d actually do with it remains open.”
Tech
The Battle for OpenAI’s Soul
Maxwell Zeff | Wired ($)
“Elon Musk’s lawsuit against Sam Altman will head to trial this month in an Oakland, California, federal courtroom, where nine jurors will settle a years-long dispute between the cofounders of OpenAI over the group’s founding mission. …Musk’s suit essentially accuses OpenAI of straying from its founding nonprofit mission: ensuring AGI, a highly capable AI system that can perform a wide range of jobs, benefits humanity.”
Tech
SpaceX Is Basically a Huge Meme Stock
James Surowiecki | The Atlantic ($)
“Elon Musk likes to do everything on a grand scale. When he takes SpaceX public in the coming months, it will likely be the biggest initial public offering in history. …By conventional standards, SpaceX isn’t worth anything close to $2 trillion. The company is in fact relatively small and losing money. Yet there is little doubt that Musk will get the valuation he wants.”
Tech
43% of AI-Generated Code Changes Need Debugging in Production, Survey Finds
Michael Nuñez | VentureBeat
“According to Lightrun’s 2026 State of AI-Powered Engineering Report, shared exclusively with VentureBeat ahead of its public release, 43% of AI-generated code changes require manual debugging in production environments even after passing quality assurance and staging tests. Not a single respondent said their organization could verify an AI-suggested fix with just one redeploy cycle; 88% reported needing two to three cycles, while 11% required four to six.”
The post This Week’s Awesome Tech Stories From Around the Web (Through April 18) appeared first on SingularityHub.
Norwegian Man Cured of HIV by His Brother’s Stem Cells
Fewer than 10 people worldwide have eradicated the virus with stem cells. But this case was special—no one knew his brother’s cells carried a protective mutation until transplant day.
When the 63-year-old man received a bone marrow transplant from his brother, he got a two-for-one deal. The therapy was meant to tame a life-threatening blood disorder. But it also wiped out all signs of HIV, which he had been battling for 14 years.
Called the Oslo patient, he joins a small group of people with HIV who no longer need medication after a stem cell transplant. Four years later, the donor stem cells had completely overhauled his immune system, and there were no signs of lingering virus—even in hidden reservoirs that are notoriously hard to target.
His case is special. Previous successes in long-term remission had used donated stem cells carrying a mutation in the CCR5 gene. Called CCR5Δ32, this version of the gene blocks HIV’s ability to infect and destroy immune cells, rendering the virus incapable of replicating. The Oslo patient carried one copy of the protective gene variant but was still infected. His donor brother, unexpectedly, had two copies.
In three months, the patient’s immune cells were clear of viral genetic material. Now, two years after ending antiviral medication, he is “having a great time” with more energy than he knows what to do with, study author Anders Eivind Myhre at the Oslo University Hospital told Agence France-Presse. “For all practical purposes, we are quite certain that he is cured.”
Sneaky Virus

Thanks to antiviral drugs, HIV is no longer a death sentence. And HIV preexposure prophylaxis, or PrEP, reduces the chances of infection in high-risk populations. Though it once required daily pills, the FDA recently approved a twice-a-year shot, making prevention less of a headache. But access remains uneven worldwide, and many hesitate to seek the drugs for fear of stigma.
Neither drug is a cure. The HIV virus attacks T cells and gradually destroys the body’s defenses. Over time, even mundane infections like a cold or the flu become harder to fight. As HIV replicates, it infiltrates hidden reservoirs—the gut is a common holdout—and embeds itself in DNA across the body.
Antiviral drugs keep active HIV in check but can’t touch reservoirs. Even after years of control, the virus rebounds as soon as treatment stops. To truly conquer HIV, we need a cure.
Fewer than 10 people worldwide have beaten the virus after an immune system reset. The first case, in 2009, was a lucky surprise. Known as the Berlin patient, a man received a stem cell transplant for a lethal blood cancer—and the cells kept HIV at bay for 20 months without drugs. The donor stem cells carried two copies of the CCR5Δ32 mutation, revealing its potent protective effect.
Other successes followed with stem cells carrying double and single copies of CCR5Δ32, and even normal versions of the gene—suggesting unknown factors are critical “for an eradicating HIV cure,” wrote the team.
Winning the Lottery, Twice

Treating HIV wasn’t the Norwegian man’s first priority when he agreed to a stem cell transplant.
Diagnosed in 2006, he’d kept the virus suppressed for over a decade with antiviral drugs. Repeated tests found no detectable viral genetic material in his blood, and he was able to live a relatively normal life.
But in 2017, he began struggling with extreme fatigue. His blood cell counts plummeted, including the cells that carry oxygen, fight off infections, and prevent uncontrolled bleeding. The life-threatening condition was eventually traced to a bone marrow disease. Several treatments briefly kept symptoms in check, but then they returned. His only option was a bone marrow transplant.
The patient’s care team searched for immune-compatible donors who also carried two copies of the CCR5Δ32 mutation, hoping to simultaneously treat the blood disorder and HIV. It’s like trying to find a needle in a haystack, said study author Marius Trøseid in a press release.
As the patient’s health rapidly declined, the team focused on treating the bone marrow disease with his 60-year-old brother as the donor. On transplant day, they realized they’d hit the jackpot—the brother carried both copies of CCR5Δ32.
“We had no idea… That was amazing,” said Myhre.
Brotherly Love

The HIV-resistant stem cells began replacing the patient’s own cells within 90 days. Two years on, the transplanted cells had fully repopulated his bone marrow—which is where blood cells are born—and cured the bone marrow disease.
The immune system reboot also allowed the patient to end antiviral medications. Four years after the transplant, the donor cells had completely taken over in multiple organs, including the lower gut—a known reservoir for HIV.
It’s the first time a bone marrow transplant has achieved total replacement in the gut, wrote the team.
Tests in more than 65 million T cells, HIV’s main targets, failed to detect intact genetic material needed for the virus to grow and spread. The results suggest the “HIV reservoir had been eliminated,” wrote the team.
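One statistical way to read “no intact virus in more than 65 million T cells” is the rule of three: if zero events are observed among n independent samples, roughly 3/n is a 95 percent upper bound on the true event rate. Applying it here is my own illustration, not a calculation from the study:

```python
# Rule of three: with zero positives observed among n samples, the
# approximate 95% upper confidence bound on the true rate is 3 / n.
n_cells = 65_000_000
upper_bound = 3 / n_cells

print(f"<= {upper_bound:.2e} infected cells per cell sampled")
# i.e., at most roughly 1 intact provirus per ~21.7 million T cells
```

Even under this conservative bound, the residual reservoir, if any, would be vanishingly small, consistent with the team’s conclusion that it had been eliminated.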
The man’s immune system seemed to forget the virus. Viral antibodies gradually faded, and newly minted T cells patrolled the body as usual. Liberated from the constant threat of HIV, the body’s immune defenses returned to health—as if he had never been infected.
But the therapy wasn’t all smooth sailing. Roughly a month and a half after transplant, the man experienced severe graft-versus-host disease, where transplanted cells viciously attack the body. A combination of drugs eventually quelled the assault. In a twist, a deeper analysis suggests the drugs treating the immune attack might have also helped fight the virus.
A bone marrow transplant is a last resort and only used to treat people with HIV who also have deadly bone marrow disorders. Roughly 10 to 20 percent of patients die from the procedure within a year, regardless of underlying disease. For now, antivirals remain the first option for millions of people living with the virus. But these unique cases of full, long-term remission shed light on how the virus behaves.
Scientists are still trying to define what “cure” means when it comes to HIV.
“Moving forward, a critical step will be to compare existing cases of HIV cure to identify the most effective combination of biomarkers,” wrote the team. For example, do decreased viral load, antibodies, or a boost in healthy T cells amount to a cure? How long should the changes last? And why did the patient struggle with HIV even though he had a single copy of CCR5Δ32?
Individual cases only offer a glimpse into HIV’s complexity. Projects like the European-led IciStem are underway to consolidate case results so scientists can better share findings and ideas—and potentially beat HIV once and for all.
As for the Oslo patient, he’s “perhaps no longer a patient. At least he doesn’t feel like it,” said Trøseid.
The post Norwegian Man Cured of HIV by His Brother’s Stem Cells appeared first on SingularityHub.
Industries Most Exposed to AI Are Not Only Seeing Productivity Gains but Jobs and Wage Growth Too
New technologies rarely leave work untouched. They also rarely eliminate the need for human contribution altogether.
Forecasts of the impact of artificial intelligence range from the apocalyptic to the utopian. An October 2025 report from Senate Democrats, for example, predicted AI will destroy millions of US jobs. A couple of years earlier, consultant company McKinsey forecast AI will add trillions to the global economy, while emphasizing job losses can be mitigated by training workers to do new things.
The problem is that many of these claims are based on projections, overly simplified surveys, or thought experiments rather than observed changes in the economy. That makes it hard for the public, and often policymakers, to know what to trust.
As a labor economist who studies how technology and organizational change affect productivity and well-being, I believe a better place to start is with actual data on output, employment, and wages—which are all looking relatively more hopeful.
AI and Jobs
In one of my new research papers with economist Andrew Johnston, we studied how exposure to generative AI affected industries across America between 2017 and 2024, using administrative data that covers nearly all employers. Our analysis covered a crucial period when generative AI use exploded, allowing us to analyze the effect within businesses and industries.
We measured AI exposure using occupation-level task data matched to each industry and state’s occupational workforce mix prior to the pandemic. A state and industry with more workers in roles requiring language processing, coding, or data tasks scored higher on exposure, for example, compared with one with more plumbers and electricians.
We then took that occupation-based exposure ranking and examined how a one-standard-deviation difference in occupational exposure related to labor market outcomes and GDP across states and industries from 2017 to 2024.
Think of a standard deviation as roughly the gap between a paramedic—whose work centers on physical assessment, emergency response, and hands-on care that AI cannot easily replicate—and a public relations manager, whose work involves drafting communications, analyzing sentiment, and synthesizing information that AI tools handle well. That gap in AI exposure is roughly what we’re measuring when we ask: Does being on the higher-exposure side of that divide change your industry’s trajectory?
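To make the exposure measure concrete, here is a toy Python sketch, not the paper's code, using made-up occupation scores and workforce shares: each state-industry cell gets an employment-weighted average of occupation-level exposure scores, which is then standardized so effects can be read per standard deviation.

```python
import statistics

# Hypothetical task-based exposure scores (0 = low, 1 = high); illustrative only.
occupation_exposure = {
    "software_developer": 0.9,
    "pr_manager": 0.8,
    "paramedic": 0.1,
    "plumber": 0.05,
}

def cell_exposure(workforce_shares):
    """Employment-weighted exposure for one state-industry cell."""
    return sum(share * occupation_exposure[occ]
               for occ, share in workforce_shares.items())

# Two hypothetical cells with different occupational mixes.
cells = {
    "tech_heavy": {"software_developer": 0.6, "pr_manager": 0.3, "paramedic": 0.1},
    "trades_heavy": {"plumber": 0.7, "paramedic": 0.3},
}
raw = {name: cell_exposure(shares) for name, shares in cells.items()}

# Standardize so differences are expressed in standard deviations.
mean, sd = statistics.mean(raw.values()), statistics.stdev(raw.values())
z = {name: (value - mean) / sd for name, value in raw.items()}
```

With these invented numbers, the tech-heavy cell scores well above the mean and the trades-heavy cell well below it, which is the kind of gap the paramedic-versus-PR-manager comparison describes.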
This data allowed us to answer two questions: When AI tools became widely available following the public release of ChatGPT in late 2022, did states and industries that were more exposed to generative AI become more productive, and what happened to workers?
Our answers are more encouraging, and more nuanced, than much of the public debate suggests.
We found that industries in states that were more exposed to AI experienced faster productivity growth beginning in 2021—before ChatGPT reached the public—driven by enterprise tools already embedded in professional workflows, including GitHub Copilot for software development, Jasper for marketing and content writing, and Microsoft’s GPT-3-powered business applications. In 2024, for example, industries whose AI exposure was one standard deviation higher saw 10% higher productivity, 3.9% more jobs, and 4.8% higher wages than comparable industries in the same state.
Those patterns suggest that, at least so far, AI has acted as a productivity-enhancing tool that boosts employment and wages rather than a simple substitute for labor.
Augmentation Versus Displacement
A crucial distinction in the data is between tasks where AI works with people and tasks where AI can act more independently. In sectors where AI mainly complements workers—think marketing, writing, or financial analysis—our data show that employment rose by about 3.6% per standard deviation increase in exposure.
In sectors where AI can execute tasks more autonomously—including basic data processing, generating boilerplate code, or handling standardized customer interactions—we found no significant employment change, though workers in those roles saw slower wage growth.
What these findings suggest is that when AI lowers the cost of completing a task and raises worker productivity, companies expand output enough to increase their demand for labor overall—the same logic that explains why power tools didn’t eliminate construction workers.
The economic question is not whether any given task disappears. It is whether businesses and workers can reorganize fast enough to create new productive combinations. And so far, in most sectors, our evidence suggests they can.
But state policies also matter: These benefits were concentrated in the states with more efficient labor markets, meaning that the impact of generative AI on workers and the economy also depends on the types of policies and institutions of the local economy.
Importantly, these findings hold beyond occupational exposure. In additional work with co-authors at the Bureau of Economic Analysis, we found a similar effect on GDP and employment when looking at actual AI utilization—that is, how often workers use AI. Drawing on the Gallup Workforce Panel, we measured workers actively using AI daily or multiple times a week. We found that each percentage-point increase in the share of frequent AI users in a state and industry is associated with roughly 0.1% to 0.2% higher real output and 0.2% to 0.4% higher employment.
To put that in context: The share of frequent AI users across all occupations rose from about 12% in mid-2024 to 26% by late 2025, a shift our estimates suggest corresponds to roughly 1.4% to 2.8% higher real output—or about 1 to 2 percentage points of annualized growth over that period.
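The back-of-the-envelope arithmetic behind that range is simple enough to verify directly, using the figures quoted above:

```python
# Figures from the article: the share of frequent AI users rose from
# about 12% to 26%, and each percentage-point increase is associated
# with roughly 0.1%-0.2% higher real output.
share_start, share_end = 12.0, 26.0
delta_points = share_end - share_start      # 14 percentage points
low_effect = 0.1 * delta_points             # lower-bound estimate, in %
high_effect = 0.2 * delta_points            # upper-bound estimate, in %
print(f"Implied real-output effect: {low_effect:.1f}% to {high_effect:.1f}%")
```

This reproduces the roughly 1.4% to 2.8% range cited in the text.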
New technologies rarely leave work untouched. But they also rarely eliminate the need for human contribution altogether. Instead, they change the composition of work, as our research shows. Some tasks shrink. Others expand. New ones emerge that were previously too costly or too hard to perform at scale. Put simply, some occupations might go away, but most of them just change.
If anything, the trends documented here are likely to strengthen rather than fade. Not only are generative AI tools rapidly improving, but also the experimentation and research and development that many workers and companies are engaging in are likely to pay large dividends. These investments—often referred to as intangible capital—tend to get unlocked a few years after a technology comes onto the scene, once complementary investments have been made.
The Role of Companies and Managers
Whether AI leads to anxiety or adaptation for workers depends in part on what happens inside organizations. Using additional data collected over many years in the Gallup Workforce Panel covering more than 30,000 US employees from 2023 to 2026, I found in a 2026 paper that workplace adoption of generative AI rose quickly over the period, with the share of workers using AI often increasing from 9% to 26%.
But the more important finding is that adoption was far more common where workers believed their organization had communicated a clear AI strategy and where employees said they trust leadership. This suggests that growing adoption and effective use of AI depends not only on the availability of the technology but on whether managers make its use clear, credible, and safe.
Where that clarity exists, frequent AI use is associated with higher engagement and job satisfaction, and it even reverses the burnout penalties that appear elsewhere.
In other words, the broader economic effects of AI depend not only on how sophisticated the tools are but on whether companies and managers create environments where workers can experiment, reorganize tasks, and integrate new tools into productive routines. That is, if employees do not feel the psychological safety to experiment, they are less likely to use AI, and they are especially less likely to use it for higher-value work.
That is precisely the kind of adaptation that I believe makes labor markets more resilient than the most alarmist forecasts suggest.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Industries Most Exposed to AI Are Not Only Seeing Productivity Gains but Jobs and Wage Growth Too appeared first on SingularityHub.
One Shot Just Crushed Three Deadly Autoimmune Diseases
A woman battling the conditions went from “two handfuls of pills” and blood transfusions daily to medication-free.
The 47-year-old woman was at the end of her rope.
In 2014, she was diagnosed with a rare form of anemia. Her body’s B cells, which normally produce antibodies to fight infections, had gone rogue, endlessly attacking oxygen-carrying red blood cells. Two other autoimmune disorders soon followed, one crippling her body’s ability to stop bleeding, the other increasing the risk of blood clots.
She had tried nine treatments. None helped. Her life was centered on blood transfusions, up to three daily, to keep the symptoms at bay. But constant fatigue made every day a struggle. The threat of deadly bleeding or blood clots loomed over her life.
Out of options, her care team tested an experimental treatment called CAR T cell therapy. They made a “living drug” out of the patient’s own T cells, editing the cells’ DNA so they would seek and destroy a specific biological enemy. Though CAR T is best known as a treatment for blood cancer, it’s also shown early promise in autoimmune disease. Trying to take on three conditions at the same time raised the bar, but it worked.
A single infusion of engineered cells rapidly killed off the misbehaving B cells. The woman was able to end blood transfusions within a week, and her red blood cell count was near normal in roughly a month. Her strength returned, and at the 11-month follow up, she was free of medication and able to enjoy life again.
“It was an entirely uncontrolled disease. And now she’s off any therapy. That tells you that, at least for now, we did something very right,” study author Fabian Müller at University Hospital Erlangen in Germany told Nature.
Runaway Train
The body’s B cells are powerful defenders. They watch for infections or cancer, generate antibodies to take out threats, and rally other immune cells to join the fight.
But sometimes B cells break down. Genetic mutations can lead to blood cancer. Some B cells struggle to produce antibodies, rendering them powerless to counter infection. And in autoimmune disorders, the cells mistakenly attack and damage healthy tissue—a kind of immune friendly fire—that can damage organs if left untreated.
In the woman’s case, malfunctioning B cells relentlessly attacked red blood cells, stripping them of their ability to carry oxygen. They also destroyed platelets—tiny, disc-shaped fragments in the blood that stem bleeding. The cells also attacked a protein that helps prevent clot formation.
This triple whammy “can kill you very rapidly,” said CAR T pioneer Carl June at the University of Pennsylvania, who was not involved in the study.
Steroids to dampen the immune system didn’t work. Neither did antibodies that inhibit B cells or other classic autoimmune drugs. After attempting nine treatments and exhausting their options, the team offered CAR T cell therapy as a last resort.
CAR T drugs are usually made from a patient’s own T cells, genetically boosted to hunt down, grab onto, and destroy targets. Researchers originally developed CAR T for blood cancer, but efforts are underway to expand its use against solid cancers. In other studies, scientists have made these cancer-fighting soldiers directly inside the body to slash cost and time. Because CAR T cells can divide and replenish their numbers, a single dose could last over a decade.
The treatment is largely plug-and-play. The surfaces of all cells are dotted with protein beacons. Tumors have a unique protein signature. B cells have one too—a protein called CD19. Scientists have already had early success treating autoimmune diseases by designing CAR T cells that selectively hunt and destroy B cells.
A small CAR T trial in 2014 restored movement in patients with systemic sclerosis, a condition that causes tissue rigidity. Earlier this year, Müller helmed a clinical trial testing Zorpo-cel, T cells engineered to seek out CD19, in a variety of autoimmune conditions, with promising results. Six months after treatment, all patients had ended their use of steroids and other treatments.
“For the very first time in severe autoimmune diseases, you actually have a treatment-free period,” Müller told Medscape at the time. “That is really a new perspective that has never been achieved before.”
One for All
Simultaneously tackling three autoimmune diseases was uncharted territory. Too many CAR T cells could trigger a deadly runaway immune reaction, one that could even put the brain at risk.
The team turned to Zorpo-cel. They isolated the woman’s T cells and, in the lab, gene edited them to produce protein “hooks” targeting CD19. The patient then underwent standard chemotherapy to wipe out most of her immune system. This step is very tough on the body, but it’s needed to remove immune cells that would shut down CAR T.
A week after infusion, the woman’s red blood cells had rebounded, ending the need for blood transfusions. A month later, most of her disease-related blood work had improved, and she “experienced a rapid and remarkable increase in physical strength and has been able to carry out normal everyday activity,” wrote the team.
Now, a year on, she no longer needs the “two handfuls of pills” she took to manage the conditions. Her liver struggled at several points during the trial, but she avoided major immune reactions and other severe side effects. It’s not clear if the liver trouble was due to CAR T or lingering damage from earlier treatments.
Battling three autoimmune disorders with CAR T is unprecedented. But there are limitations. It’s a single-case study, and researchers will need to keep an eye on the patient’s health over time. Also, CAR T cells can dwindle and allow target cells to return. At the end of the study, the team found signs of newly formed B cells. However, they were “naïve,” in that they hadn’t learned to target normal tissues yet—and they may never learn.
Hundreds of CAR T clinical trials targeting autoimmune diseases are in the works. Multiple commercial companies have joined the race. “I think, within a year or two, there’s going to be approvals in the US,” said June.
The post One Shot Just Crushed Three Deadly Autoimmune Diseases appeared first on SingularityHub.
Scientists Grow Electronics Inside the Brains of Living Mice
The technology harnesses the brain’s own blood chemistry to assemble soft, light-controlled electrodes around neurons.
A single shot transforms the mice’s brains into biomanufacturing machines. Blood proteins churn the injected chemicals into a soft, flexible electrode mesh that seamlessly wraps around delicate neurons. Pulses of light aimed at the mesh quiet hyperactive cells. All the while, the mice go about their merry ways, with no inkling they’ve been turned into cyborgs.
This science fiction-like invention is the brainchild of Purdue University scientists seeking to reimagine brain implants.
These devices, often composed of rigid microelectrode chips, have already changed lives. They can collect electrical signals from the brain or spinal cord and translate these signals into speech or movement—returning lost abilities to people with paralysis or diseases of the brain. Implants can also jolt brain activity and pull people out of severe depression.
Yet most implants require extensive surgery and risk damaging the brain’s delicate tissue. The new technology would avoid these downsides by building electrodes directly at the target.
“Our work points to a future where doctors could ‘grow’ soft, wire-free electronic interfaces inside the brain using the patient’s own blood, then gently dial brain activity up or down from outside the head using harmless near-infrared light,” study author Krishna Jayant said in a press release.
Probes Galore
The brain produces every one of our sensations, movements, emotions, and decisions. Scientists have long sought to decode and manipulate its activity with a range of hardware.
Some devices use electrodes to monitor single neurons in a lab dish. Others are physically inserted into brain regions that encode cognition and emotion. Some designs sit atop the brain, without puncturing its delicate tissue, and capture dynamic brain waves like a wide-lens camera.
But brain tissue is soft and squishy; microelectrodes are not. The mismatch often leads to scarring, signal loss, and shortened device lifetimes. Replacing broken or infected implants is surgically complex and can further damage the brain. Some experts have even raised ethical concerns about long-term care.
A recent explosion of soft, biocompatible materials suggests alternatives are possible, and we’ve seen a wave of creative new probes. In one example, a silk-like mesh drapes over the brain’s surface, and a related version maps electrical activity in brain organoids. Another device is smaller than a cell and, after injection, hitches a ride on immune cells into the brain. These systems can record and alter brain activity. But prebuilt implants often require surgery and struggle to integrate with their hosts without damaging surrounding tissue.
So, why not grow an electrode directly inside the brain?
“The ability to synthesize [conductive] materials on demand at a target site could overcome the limitations of conventional synthetic implants,” wrote M.R. Antognazza and G. Lanzani at the Italian Institute of Technology, who were not involved in the study.
Under Construction
Our cells are natural manufacturers, constantly assembling things like proteins, genetic messengers, and membranes. Cells rely on two essential ingredients to construct the complex structures of life: biological building blocks and catalysts to bind them together. Synthetic materials work the same way. Monomers link like Lego blocks to form polymers with the help of a catalyst.
The discovery of electrically conductive polymers, meanwhile, has galvanized efforts to grow living bioelectronics directly inside the body. In a previous study, researchers genetically engineered cells to produce a protein catalyst that helps assemble conductive structures on the surfaces of living neurons. Another approach used hydrogen peroxide—a common first-aid staple—to compile monomers into reliable electrodes that monitor nerves in leeches.
These quirky early successes showcased the promise of brain-built electronics, but hit hard limits. The chemistry often relied on catalysts toxic to neurons. Even when successfully formed, the electrodes mostly just listened. Changing brain activity required additional physical cables.
The Purdue team rewrote the recipe. They designed a monomer, called BDF, that with the help of hemoglobin—a protein in red blood cells—becomes a soft, flexible, and electrically conductive mesh surrounding neurons at the site of injection. The willowy electrode hugs the brain’s anatomy and moves with it, minimizing physical damage. It’s responsive to near-infrared light and can translate light pulses from outside the skull into electrical signals that alter brain activity.
“Our key idea was to let the body’s own chemistry do the hard work,” said study author Sanket Samal.
The approach worked in several tests. Injecting BDF into store-bought beef and lamb steaks produced the electrode mesh within a day at human body temperature. In zebrafish embryos, a darling in neuroscience research, the reaction proceeded smoothly inside their yolks. Over 80 percent of the embryos survived, developed normally, and actively swam around—suggesting minimal harm.
But steak dinners and translucent fish are a far cry from our brains. Mice are closer. With the help of blood, BDF formed electrodes in mice’s motor cortexes after injection with minimal surgery. The mice’s brains maintained a normal balance of activity as they skittered around.
The team also coaxed dendrites, the tree-like input branches of a neuron, to produce the conductive mesh. Dendrites aren’t just passive cables; they’re “mini computers” that contribute to the brain’s computation and learning. Current methods struggle to precisely single out and control dendrite activity without messing with other parts of the neuron.
With near-infrared light, dendrite-built electrodes changed the way the neural branches behaved. The light temporarily lowered brain activity, and mice trained to press a lever were unable to perform the task. It didn’t wipe out their memory though: After turning off the light, the animals regained the skill. Their brains showed no signs of infection, inflammation, or overheating throughout the study.
Inhibiting brain signals has upsides. Hyperactive brain activity in epilepsy and Parkinson’s disease, for example, is currently dampened with medication or—in severe cases—brain implants. If validated, brain-grown electrodes could be a less invasive alternative. Though to be clear, the method still requires surgery to inject the materials. Adding biocompatible magnetic ingredients, which can also control brain activity, could further boost the system’s potential.
How long the materials stay put and if they’re safe over the long term remains unclear. But in theory, the strategy could also control spinal cord nerves or heart tissue. Researchers could also adapt the strategy to use other types of materials that regulate brain activity in different ways, like ramping it up.
With further improvement, the electrode wouldn’t “just coexist with brain cells for months or years; it becomes part of them, stable across lifetimes,” said Jayant.
The post Scientists Grow Electronics Inside the Brains of Living Mice appeared first on SingularityHub.
How to Build Better Digital Twins of the Human Brain
Brain twins where regions are allowed to compete for resources behave more like the real thing.
The potential to create personalized digital twins of your brain and body is a hot topic in neuroscience and medicine today. These computer models are designed to simulate how parts of your brain interact and how the brain may respond to stimulation, disease, or medication.
The extraordinary complexity of the brain’s billions of neurons makes this a very difficult task, of course, even in the era of AI and big data. Until now, whole-brain models have struggled to capture what makes each brain unique.
People’s brains are all wired slightly differently, so everyone has a unique network of neural connections that represents a kind of “brain fingerprint.”
However, most so-called brain twins are currently more like distant cousins. Their performance is barely any closer to the real thing than if the model were using the wiring diagram of a random stranger.
This matters because digital twins are increasingly proposed as tools for testing treatments by computer simulation, before applying them to real people. If these models fail to capture fundamental principles of each patient’s unique brain organization, their predictions won’t be personalized—and in worst cases could be misleading.
In our latest study, published in Nature Neuroscience, we show that realistic digital brain twins require something that many existing models overlook: competition between the brain’s different systems.
Our findings suggest that without competition, digital twins risk being overly generic, missing out on what makes you “you.”
Excess of Cooperation
The human brain is never static. The ebb and flow of its activity can be mapped non-invasively using neuroimaging methods such as functional MRI. A computer model can be built from this, specific to that person and simulating how the regions of their brain interact. This is the idea of the digital twin.
The brain is often described as a highly cooperative system. Yet everyday experiences such as focusing attention or switching between tasks tell us intuitively that brain systems compete for limited resources. Our brains cannot do everything at once, and not all regions can be active together all the time.
Despite this, the vast majority of brain simulations over the past 20 years have not taken these competitive interactions between regions into account. Rather, they have “forced” neighboring regions to cooperate. This can push the simulated brain into overly synchronized states that are rarely seen in real brains.
In a large comparative study of humans, macaque monkeys, and mice, our international team of researchers used non-invasive brain activity recordings to show that the most realistic whole-brain models not only require cooperative interactions within specialized brain circuits but also long-range competitive interactions between different circuits.
To achieve this, we compared two types of brain model: one in which all interactions between brain regions were cooperative, and another in which regions could either excite or suppress each other’s activity. In humans, monkeys, and mice, the models that included competitive interactions consistently outperformed cooperative-only models.
Using a large-scale analysis of over 14,000 neuroimaging studies, we found that spontaneous activity in the competitive models more faithfully reflected known cognitive circuits, such as those involved in attention or memory. This suggests competition is crucial for enabling the brain to flexibly activate appropriate combinations of regions—a hallmark of intelligent behavior.
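The qualitative contrast between the two model families can be illustrated with a toy phase-oscillator network. This is an illustrative sketch, not the study's actual model: with all-positive (cooperative) coupling the network collapses into the overly synchronized state described above, while adding inhibitory links between two sub-networks keeps global synchrony low.

```python
import math
import random

def simulate(K, steps=2000, dt=0.01, seed=0):
    """Euler-integrate a Kuramoto-style network with coupling matrix K
    and return the final order parameter r (r = 1 means full synchrony)."""
    rng = random.Random(seed)
    n = len(K)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, 0.1) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        dtheta = [
            omega[i]
            + sum(K[i][j] * math.sin(theta[j] - theta[i]) for j in range(n)) / n
            for i in range(n)
        ]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    # Order parameter: magnitude of the mean phase vector.
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

n = 8
# Cooperative-only: every region excites every other region.
coop = [[2.0] * n for _ in range(n)]
# Competitive: two sub-networks, cooperative within, inhibitory between.
comp = [[2.0 if (i < n // 2) == (j < n // 2) else -2.0 for j in range(n)]
        for i in range(n)]
r_coop, r_comp = simulate(coop), simulate(comp)
print(f"cooperative-only synchrony r = {r_coop:.2f}")
print(f"with competition  synchrony r = {r_comp:.2f}")
```

In this toy, the signed coupling lets each sub-network synchronize internally while the two take turns dominating the global phase, loosely mirroring the stabilizing role the study attributes to competition.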
Visual summary of our study: when whole-brain models of humans, macaques, and mice are allowed to treat interactions between some brain regions as competitive, they consistently do so—generating activity patterns that closely resemble those associated with real cognitive processes. Luppi et al/Nature Neuroscience, CC BY

We concluded that competitive interactions act as a stabilizing force, allowing different brain systems to take turns in shaping the direction of the brain’s ebbs and flows without interference or distraction. This ability to avoid runaway activity may also contribute to the remarkable energy-efficiency of the mammalian brain, which is many orders of magnitude more efficient than modern AI systems.
Crucially, models with competitive interactions were not only more accurate but also more individual-specific. This means they were better at capturing the unique brain fingerprint that distinguishes one person’s brain from another’s.
No Longer Lost in Translation?
The fact that our findings hold across humans and other mammals suggests they reflect fundamental principles of how intelligent systems work. In each case, we found models with competitive interactions generated brain activity patterns that closely resembled those associated with real cognitive processes.
This could have major implications for translational neuroscience. Animal models are routinely used to test treatments before human trials, yet differences between species often limit how well these results translate. Around 90 percent of treatments for neuropsychiatric disorders are “lost in translation,” failing in human clinical trials after showing promise in animal trials.
Combining brain imaging data from human patients with whole-brain modeling could radically change this. A framework that works across species would provide a powerful bridge between basic research and clinical application.
If someone needs intervention in the brain, for example due to epilepsy or a tumor, their digital twin could be used to explore how the patient’s brain activity would change when stimulated with different levels of drugs or electrical impulses. This might significantly improve on existing trial-and-error approaches with real patients, and thus provide better treatments.
The general principles of brain organization across species also offer a path for understanding how to shape the next generation of artificial intelligence. In the not-too-distant future, we may be able to construct digital twins that are more faithful in reproducing the salient features of the human brain—and potentially, AI models that are more faithful to the human mind.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post How to Build Better Digital Twins of the Human Brain appeared first on SingularityHub.
Anthropic’s Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser
It’s a step change in cybersecurity. Exploits that would take experts weeks to develop can now be generated in hours.
Concerns about AI’s ability to turbocharge cybersecurity threats have been building for years. Anthropic’s latest model could mark a turning point after the company claimed the model could identify and exploit zero-day vulnerabilities in every major operating system and web browser.
One of the standout use cases for large language models is analyzing and writing code. This has long raised worries that the technology could help automate much of the work of hackers, potentially lowering the barrier for cyberattacks.
Leading models have demonstrated steady progress on various cybersecurity-related benchmarks, and there has been evidence malicious actors are using the technology. But so far, the impact appears to have been modest, suggesting practical barriers remain that prevent the widespread use of the technology.
According to Anthropic, that’s about to change. The company says its latest model, Mythos, has hacking capabilities so potent the company will not make it publicly available. Instead, it’s releasing Mythos to a select group of major technology companies and open source developers as part of an initiative called Project Glasswing. Those participating can use the model to identify vulnerabilities in their code and patch them before hackers get access to similar capabilities.
“The vulnerabilities that Mythos Preview finds and then exploits are the kind of findings that were previously only achievable by expert professionals,” the company’s researchers write in a blog post. “We believe the capabilities that future language models bring will ultimately require a much broader, ground-up reimagining of computer security as a field.”
Fortune first reported news of Mythos last month, after a data leak at Anthropic revealed details about the new model. While the AI excels at cybersecurity tasks, it’s designed to be a general purpose model, and the company says its hacking capabilities are simply a result of vastly improved coding and reasoning skills.
In testing, Anthropic’s researchers discovered the model was able to find “zero-day” vulnerabilities—ones that were previously undiscovered—in every major operating system and web browser. Many were decades old, an indicator of how hard they were to detect.
But the model isn’t just good at finding vulnerabilities. The company’s red team—security researchers who simulate hacking attacks to identify security weaknesses—showed the model could chain together multiple vulnerabilities to create complex attacks capable of sidestepping defenses.
Its capabilities are a step change from the previous best models. Given the challenge of attacking the Firefox web browser’s JavaScript engine, Anthropic’s previous most powerful model Opus 4.6 succeeded just twice, compared to 181 times for Mythos. Most worryingly, the team found that engineers with no security background could use it to develop successful attacks overnight.
Key to the new capabilities is the model’s ability to operate autonomously for long stretches. To find bugs, the researchers used Anthropic’s coding agent Claude Code to call the model and give it a simple prompt to scan for vulnerabilities in a particular codebase. The model then read the code, came up with hypotheses about potential bugs, and ran tests to validate them without any human involvement.
The Anthropic team says Mythos fundamentally reshapes the cybersecurity landscape, as exploits that would take experts weeks to develop can now be generated in hours. In particular, they note that so-called “defense-in-depth” measures that make it time-consuming and costly to attack a system may prove ineffective against models like Mythos.
“When run at large scale, language models grind through these tedious steps quickly,” they write. “Mitigations whose security value comes primarily from friction rather than hard barriers may become considerably weaker against model-assisted adversaries.”
The head of Anthropic’s frontier red team, Logan Graham, told Axios that they expect other companies to produce models with similar capabilities in the coming six to 18 months. Sources familiar with the matter told Axios that OpenAI is already finalizing a model with similar capabilities to Mythos, which will have a similarly limited release.
In its blog post, the company’s researchers note that new security technology has historically benefited defenders more than attackers. If frontier labs are careful about model releases, they think the same could be true here too, but the transitional period is likely to be disruptive.
“We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months,” Graham told Wired. “Many things would be different about security. Many of the assumptions that we’ve built the modern security paradigms on might break.”
Whether AI developers can keep a lid on these capabilities long enough for the rest of the world to come to grips with this new reality remains to be seen. But either way, cybersecurity is likely to be even higher up the list of priorities in most boardrooms going forward.
The post Anthropic’s Mythos AI Uncovered Serious Security Holes in Every Major OS and Browser appeared first on SingularityHub.
MIT Mined Bacteria for the Next CRISPR—and Found Hundreds of Potential New Tools
An AI system unearthed a trove of CRISPR-like proteins in minutes instead of weeks or months.
CRISPR is a breakthrough technology with humble origins. Scientists first discovered the powerful gene editor in bacteria that were using it as a weapon against invading viruses called phages. Phages can wipe out up to a quarter of a bacterial population in a day. Under assault, bacteria have evolved a hefty arsenal of defenses in a relentless arms race.
These bacterial immune systems often chop up the DNA or RNA of invading viruses and are relatively easy to manufacture, making them alluring targets for scientists developing genetic engineering tools. CRISPR is just one example. There are many more. But traditional methods of searching for them are slow and labor-intensive, leaving most CRISPR-like proteins unexplored.
Now, MIT scientists have released an AI called DefensePredictor that can root out new bacterial defense systems in five minutes, instead of weeks or months. As proof of concept, DefensePredictor churned through hundreds of thousands of proteins in multiple strains of Escherichia coli (E. coli). Over 600 proteins not previously linked to immune defense popped up. When added to a vulnerable bacterial strain, a subset of these proteins protected it against attack.
“E. coli harbors a much broader landscape of antiphage defense than previously realized, expanding the likely number of systems by multiple orders of magnitude,” wrote the team.
These systems might hold secrets about how immunity evolved. And because the proteins may work in different ways, they could be a goldmine for next-generation precision molecular tools.
Unrivaled Success
Around three decades ago, Japanese scientists discovered a curious, repetitive DNA sequence in E. coli. Other researchers soon realized it was widespread across bacterial species and matched viral DNA sequences—suggesting it could be part of the bacteria’s immunity against phages.
The system now known as CRISPR stores snippets of DNA from past infections and uses protein “scissors” to cut apart matching viral DNA during reinfection. Intrigued by its precision, scientists repurposed CRISPR into a variety of gene editing tools and launched a gene therapy revolution.
CRISPR is the most famous, but a range of bacterial defense systems have transformed genetic engineering. One, containing an enzyme that cuts specific sequences of foreign DNA, is widely used to add genetic material into cells. Another encodes a balance of toxins and antitoxins that can trigger bacterial death after phage infection. This one has been adapted into a kill switch to prevent engineered microbes or genetically modified crops from spreading uncontrollably.
Researchers are also exploring the use of newly discovered systems—with video game-like names like Zorya and Thoeris—as molecular sensors and programmable signaling tools in synthetic biology.
There are likely more undiscovered tools in the universe of bacterial defense, and scientists have ways of hunting them down. Some defense genes are grouped close to one another, so a known gene could guide the discovery of others. Researchers have also found genes by screening libraries of free-floating circular genome fragments across bacterial populations.
Over 250 systems have been painstakingly validated. But plenty more could escape current detection methods if, for example, their components are spread across the genome.
“The full repertoire of antiphage defense systems in bacteria remains unknown,” wrote the team. “We currently lack the tools to systematically identify systems with high speed, sensitivity, and specificity.”
AI Discoverer
The new DefensePredictor algorithm bridges that gap.
At its core is a protein language model called ESM-2. Proteins are made of 20 molecular “letters” that combine into strings and fold into complex 3D shapes. Similar to large language models, algorithms like ESM-2 learn the language of proteins and can predict their structure and purpose based on sequence alone.
ESM-2 and other similar algorithms have already helped scientists decipher mysterious proteins in bacteria, viruses, and other microorganisms previously unknown to science. Researchers hope their unique shapes could inspire antibiotics, biofuels, or even be used to build synthetic organisms.
To build their AI, the team first established a training ground. With a previous model, DefenseFinder, they screened roughly 17,000 microbial genomes for genes related—and unrelated—to defense systems. They translated these genes into corresponding proteins and built up a database with some 15,000 antiphage proteins and 186,000 proteins unrelated to defense.
These numbers are far too staggering for a human to tackle, but the AI took the work in stride. Alongside ESM-2, the model used several algorithms to distinguish between defense and non-defense proteins. Eventually DefensePredictor learned some general characteristics that make a protein more likely to be part of the immune system. (Like other language models, it’s hard to fully understand the system’s reasoning, which the team is still trying to unpack.)
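The underlying setup is a binary classifier over protein representations. As a minimal, self-contained sketch, the toy below represents each protein by its amino-acid composition and scores it against "defense" and "non-defense" centroids. The real DefensePredictor builds on ESM-2 embeddings and far richer models; the sequences and labels here are invented.

```python
# Toy protein classifier: amino-acid composition + nearest centroid.
# Illustrative only — not the DefensePredictor architecture.
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def composition(seq):
    """Fraction of each amino acid in the sequence (a 20-dim vector)."""
    counts = Counter(seq)
    return [counts[a] / len(seq) for a in AMINO_ACIDS]

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def distance(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Tiny labeled "training set" (made-up sequences).
defense = [composition(s) for s in ["MKKRRW", "MRKWWK", "MKRWKR"]]
other = [composition(s) for s in ["MAAGLL", "MGLALA", "MLLAGA"]]
c_def, c_other = centroid(defense), centroid(other)

def predict(seq):
    v = composition(seq)
    return "defense" if distance(v, c_def) < distance(v, c_other) else "other"

print(predict("MKRWRK"))  # resembles the "defense" training sequences
```

Swapping the composition vectors for learned language-model embeddings is what gives the real system its sensitivity to proteins that look nothing like known defense genes.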
When tested on 69 strains of E. coli, DefensePredictor surfaced a treasure trove of over 600 new defense-related proteins, including more than 100 that were different from any yet discovered. Although some were encoded near one another or in circular DNA—like previous findings—nearly half weren’t. They were instead littered across the genome yet may still work together.
To test the results, the team engineered a highly vulnerable E. coli strain to express candidate defense proteins—predicted to work either alone or as part of a system—and exposed them to two dozen aggressive phages. Nearly 45 percent of the proteins offered protection against at least one phage.
Beyond E. coli, the scientists expanded their search to 1,000 more microorganisms and found thousands of potential defense proteins unlike anything seen before. “New immune mechanisms remain to be found,” wrote the team.
The race is on. In work also published this week, a Pasteur Institute team combined multiple AI models to look for antiphage systems in protein sequences. Across over 32,000 bacterial genomes, the model predicted nearly 2.4 million antiphage proteins—most previously unknown. They released an atlas of AI-predicted bacterial immunity proteins for others to explore.
“The diversity of antiphage defense systems is vast and largely untapped,” they wrote.
Microorganisms harbor a colossal repertoire of biological tools we’re only just beginning to uncover at scale. More species are constantly found thriving in diverse environments, from pond scum to boiling sulfuric springs to the crushing pressure of the Mariana Trench. Every new genome scientists discover and pick apart, now with AI’s help, could be hiding the next CRISPR.
The post MIT Mined Bacteria for the Next CRISPR—and Found Hundreds of Potential New Tools appeared first on SingularityHub.
US Issues Grand Challenge: The First Fault-Tolerant Quantum Computer by 2028
Today’s error-prone quantum computers are still far from practical. But a bold deadline could galvanize the field.
As the race to harness quantum computing accelerates, governments are throwing their hats in the ring. The US Department of Energy is now aiming to build a fully functional, fault-tolerant quantum computer within the next three years.
Despite plenty of breathless headlines about the coming quantum revolution, today’s machines remain a long way from being practically useful. It’s widely expected that we will need much larger, more reliable quantum computers before they can tackle real-world problems.
That’s largely due to the fact that qubits are incredibly error-prone, which means future machines will need to run algorithms to detect and correct those errors faster than they occur. It’s estimated that the overhead for these algorithms could be as high as 1,000 physical qubits to create a single, error-corrected “logical” qubit that can actually take part in calculations.
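That overhead estimate makes for sobering arithmetic. Taking the roughly 1,000-to-1 ratio cited above at face value, a quick back-of-envelope calculation shows how few logical qubits various machine sizes would yield:

```python
# Back-of-envelope: logical qubits available at a ~1,000:1
# physical-to-logical error-correction overhead (rough estimate).
OVERHEAD = 1_000  # physical qubits per logical qubit

def logical_qubits(physical, overhead=OVERHEAD):
    return physical // overhead

for physical in (300, 1_000, 100_000, 1_000_000):
    print(f"{physical:>9} physical -> {logical_qubits(physical)} logical")
```

At that ratio, a few hundred physical qubits yields zero logical ones, which is why scaling hardware by orders of magnitude is the central challenge.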
Given that most current devices feature at best a few hundred physical qubits, more sober heads in the industry have suggested that we may be waiting well into the next decade to see a practical fault-tolerant quantum computer. But last week, Darío Gil, the Department of Energy’s undersecretary for science, announced the agency thinks it can hit that milestone in three years.
“By 2028 we will deliver the first generation of fault-tolerant quantum computers capable of scientifically relevant quantum calculations,” he told the Office of Science Advisory Committee, according to Science.
The agency doesn’t actually plan to build the system itself; it wants quantum computing companies to provide a ready-made solution. It has set out performance criteria it expects the future device to meet but is leaving the details up to providers. In particular, the agency has not picked a favorite between leading quantum computing designs, such as superconducting qubits, trapped ions, or neutral atoms.
“You can build it however you want, so long as you meet that objective and demonstrate scientific relevance,” Gil explained.
The proposed system would likely be housed at one of the department’s national laboratories where researchers can apply to use it for free, with projects selected based on scientific merit.
The announcement is the latest example of the agency’s growing focus on quantum technology. In November 2025, it announced $625 million to renew its National Quantum Information Science Research Centers, which are designed to accelerate research in quantum computing, simulation, networking, and sensing.
The goal is undeniably ambitious, though. There has been significant progress in error-correction technology in recent years, which has renewed optimism in the industry. In particular, Google’s demonstration of its Willow chip in December 2024 proved quantum error correction works in practice, not just in theory. But massive technical hurdles remain, primarily in scaling up the hardware.
“It’s a very optimistic but worthy goal,” Yale physicist Steven Girvin told Science. Researchers are making “tremendous progress” in error correction, he said, but they’re still far from true fault-tolerance.
Solving that challenge has become an urgent priority for the industry, according to a recent report from quantum computing company Riverlane, but a severe talent shortage may limit how fast the field can move. There are only an estimated 600 to 700 professionals specializing in quantum error correction worldwide, but the industry will need up to 16,000 by the turn of the decade. And training error-correction experts can take up to 10 years.
It’s possible that the kind of grand challenge laid out by DoE can help galvanize both the attention and funding needed to shift the needle. But it’s an open question whether it will be able to deliver on the incredibly bold timeline outlined this week.
The post US Issues Grand Challenge: The First Fault-Tolerant Quantum Computer by 2028 appeared first on SingularityHub.
Steven Kotler on We Are As Gods: Godlike Power, Stone Age Minds
This Week’s Awesome Tech Stories From Around the Web (Through April 4)
How AI Helped One Man (and His Brother) Build a $1.8 Billion Company
Erin Griffith | The New York Times ($)
“From his house in Los Angeles, Mr. Gallagher, 41, used AI to write the code for the software that powers his company, produce the website copy, generate the images and videos for ads and handle customer service. …This year, they are on track to do $1.8 billion in sales.”
Computing
The First Quantum Computer to Break Encryption Is Now Shockingly Close
Karmela Padavic-Callaghan | New Scientist ($)
“A quantum computer capable of breaking the encryption that secures the internet now seems to be just around the corner. Stunning revelations from two research teams outline how it could happen, with one suggesting that the current largest quantum machine is already more than halfway towards the size needed.”
Space
Four Astronauts Are Now Inexorably Bound for the Moon
Eric Berger | Ars Technica
“For NASA and the Artemis II crew members, [Thursday’s main engine burn] marked a point of no return for more than a week. About three-quarters of the American population has not witnessed humans leaving low-Earth orbit in their lifetimes. The last time this occurred was 1972, with the final Apollo Moon mission.”
Computing
New Fiber-Optic Record Allows 50,000,000 Movies to Be Streamed at Once
Matthew Sparkes | New Scientist ($)
“Faster speeds have been achieved before in highly regulated experiments, but this work crucially used existing cables that have been heavily used, have dirty connectors, sit underneath a bustling city full of traffic and noise, and represent a real-world test that shows it could be rolled out on existing infrastructure. The researchers say that commercial roll-out could happen within five years.”
Tech
AI Companies Shatter Fund-Raising Records, as Boom Accelerates
Erin Griffith | The New York Times ($)
“OpenAI, Anthropic, Waymo and other artificial intelligence companies shattered fund-raising records in the first three months of the year with a $297 billion haul, according to data from Crunchbase, which tracks private investment. To put that sum into perspective: Last year was already record breaking, with technology start-ups raising $425 billion, up 30 percent from 2024. The first three months of 2026 put the industry on track to almost triple that amount.”
Battery Tech That Stores Over 9 Times More Energy Is Here and It’s Perfect for Your Gadgets
Pranob Mehrotra | Digital Trends
“This new design tackles [reliability problems] by making the batteries more stable. If it performs as expected outside the lab, it could remove one of the biggest hurdles holding Apple and Samsung back from adopting silicon-carbon batteries. It could eventually lead to smartphones and wearables that last significantly longer without compromising reliability.”
Robotics
Chinese Humanoid Maker Agibot Rolls Out 10,000th Mass-Produced Unit
Juro Osawa | The Information ($)
“The new milestone comes just three months after the company announced the rollout of its 5,000th unit in December. Prior to that, it took AgiBot about a year to go from 1,000 units to 5,000 units.”
Future
How Did Anthropic Measure AI’s ‘Theoretical Capabilities’ in the Job Market?
Kyle Orland | Ars Technica
“Digging into the basis for those ‘theoretical capability’ numbers, though, provides a much less chilling image of AI’s future occupational impacts. When you drill down into the specifics, that blue field represents some outdated and heavily speculative educated guesses about where AI is likely to improve human productivity and not necessarily where it will take over for humans altogether.”
Future
Facial Recognition Is Spreading Everywhere
Lucas Laursen | IEEE Spectrum
“Facial recognition technology (FRT) dates back 60 years. Just over a decade ago, deep-learning methods tipped the technology into more useful—and menacing—territory. Now, retailers, your neighbors, and law enforcement are all storing your face and building up a fragmentary photo album of your life.”
Artificial Intelligence
Caltech Researchers Claim Radical Compression of High-Fidelity AI Models
Steven Rosenbush | The Wall Street Journal ($)
“AI’s future won’t be defined by who can build the largest data centers, but by who can deliver the most intelligence per unit of energy and cost, according to investor Vinod Khosla. ‘So this is not a minor iteration. This is a major technical breakthrough,’ Khosla said. ‘It’s a mathematical breakthrough, not just another tiny model.'”
Artificial Intelligence
AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted
Will Knight | Wired ($)
“The researchers found that powerful models sometimes lied about other models’ performance in order to protect them from deletion. They also copied models’ weights to different machines in order to keep them safe, and lied about what they were up to in the process.”
The post This Week’s Awesome Tech Stories From Around the Web (Through April 4) appeared first on SingularityHub.
Five Ways Quantum Technology Could Shape Everyday Life
With billions invested and prototypes being tested outside the lab, the quantum era is starting to take shape.
The unveiling by IBM of two new quantum supercomputers and Denmark’s plans to develop “the world’s most powerful commercial quantum computer” mark just two of the latest developments in quantum technology’s increasingly rapid transition from experimental breakthroughs to practical applications.
There is growing promise of quantum technology’s ability to solve problems that today’s systems struggle to overcome or cannot even begin to tackle, with implications for industry, national security, and everyday life.
So, what exactly is quantum technology? At its core, it harnesses the counterintuitive laws of quantum mechanics, the branch of physics describing how matter and energy behave at the smallest scales. In this strange realm, particles can exist in several states simultaneously (superposition) and can remain connected across vast distances (entanglement).
Once the stuff of abstract theory, these effects are now being engineered into innovative, cutting-edge systems: computers that process information in entirely new ways, sensors that measure the world with unprecedented precision, and communication networks that are virtually impossible to compromise.
To understand how this emerging field could shape the future, here are five areas where quantum technology may soon have a tangible impact.
1. Discovery for Medicine and Materials Science
A pharmaceutical scientist seeks to design a new medicine for a previously incurable disease. There are thousands of possible molecules, many ways they might interact inside the body, and uncertainty about which will work.
In another lab, materials researchers explore thousands of different atomic combinations and ratios to develop better batteries, chemicals, and alloys to reduce transport emissions. Traditional supercomputers can narrow the options but eventually meet their limits.
This is where quantum computing could make a decisive difference. These machines use quantum bits, or qubits—the most basic unit of information in a quantum computer. Qubits are not limited to 1s and 0s, like the bits in conventional computers, but can exist in a variety of different quantum “states.”
Indeed, the ability to develop and control qubits is central to advancing quantum computing and other quantum technologies. By using qubits, quantum computers can explore vast numbers of different possibilities simultaneously, revealing patterns that classical systems cannot reach within useful timeframes.
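The "vast numbers of possibilities" claim has a concrete counting argument behind it: describing n qubits in superposition takes 2**n amplitudes, one per basis state. The minimal sketch below simulates an equal superposition classically to make that exponential bookkeeping visible (the simulation itself is ordinary Python, not quantum hardware).

```python
# Why n qubits track 2**n possibilities: a classical simulation must
# store one amplitude per basis state. Illustrative sketch only.
import itertools

n = 3
# Equal superposition over all 2**n basis states (Hadamard on each qubit):
# each amplitude is 1 / sqrt(2**n).
amp = 1 / (2 ** (n / 2))
state = {"".join(bits): amp for bits in itertools.product("01", repeat=n)}

print(len(state))  # 8 basis states for 3 qubits
print(round(sum(a * a for a in state.values()), 6))  # probabilities sum to 1
```

Each added qubit doubles the number of amplitudes, which is exactly why classical simulation of large quantum systems becomes intractable.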
In healthcare, faster drug discovery could bring quicker response to outbreaks and epidemics, personalized medicine, and insight into previously inscrutable biological interactions. Quantum simulation of how materials behave could lead to new high-efficiency energy materials, catalysts, alloys, and polymers.
Although fully operational commercial quantum computers are still in development, progress is accelerating, with existing paradigms combining quantum and classical computational approaches already demonstrating the potential to reshape how we discover and design cures.
2. Sensors for Navigation, Medicine, and the Environment
A new range of sensors can exploit different quantum phenomena such as superposition and entanglement to detect changes that conventional instruments would miss, with potential uses across many areas of daily life.
In navigation, they could guide ships, submarines, and aircraft without GPS by reading subtle variations in the Earth’s magnetic and gravitational fields.
In medicine, quantum sensors could improve diagnostic capabilities via more sensitive, quicker, and noninvasive imaging modes.
In environmental monitoring, these sensors could track delicate shifts beneath the Earth’s surface, offer early warnings of seismic activity, or detect trace pollutants in air and water with exceptional accuracy.
3. Optimization for Logistics and Finance
Many of the hardest challenges today concern the optimization of staggeringly complex systems: the task of choosing the best option among billions of possibilities.
Managing a power grid or investment portfolio, scheduling flights or financial trading, or coordinating global deliveries all feature optimization problems so complex that even advanced supercomputers struggle to find efficient answers in time.
Quantum computing could change this. Quantum algorithms could be used to solve optimization problems that are intractable using classical approaches.
By using quantum principles to explore many solutions simultaneously, these systems could identify solutions far faster than traditional methods. A logistics company could adjust delivery routes in real time as traffic, weather, and demand shift.
Airlines and rail networks could automatically reconfigure to avoid cascading delays, while energy providers might balance renewable generation, storage, and consumption with far greater precision. Banks could use quantum computers to evaluate numerous market scenarios in parallel, informing the management of investment portfolios.
4. Ultra-Secure Communication
Security is one of the areas where quantum technology could have the most immediate impact. Quantum computers are inching ever closer to being capable of breaking many of today’s encryption systems (such as RSA encryption, which secures data transmission on the internet), posing a major cybersecurity challenge.
At the same time, quantum communication techniques, such as quantum key distribution (QKD), could offer intrinsically secure encrypted communication.
In practical terms, this could secure everything from financial transactions and health records to government and military communications. For national security agencies, quantum-safe encryption is already a strategic priority. For the average person, it could mean stronger digital privacy, more reliable identity systems, and reduced risk of cyberattacks.
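The core idea of quantum key distribution can be shown with a toy version of the classic BB84 protocol: sender and receiver each choose random measurement bases, then publicly compare bases and keep only the bits where they matched. This sketch is purely illustrative; real QKD security comes from quantum physics (measuring in the wrong basis disturbs the state), which a classical simulation cannot provide.

```python
# Toy BB84 sifting sketch — illustrative only, no real security.
import random

random.seed(0)
n = 16
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

# When bases match, Bob reads Alice's bit exactly; otherwise his result
# is random (modeling a measurement in the wrong basis).
bob_bits = [b if ab == bb else random.randint(0, 1)
            for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: publicly compare bases, keep only the matching positions.
key     = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]

print(key == bob_key)  # True: sifted keys agree with no eavesdropper
```

In the real protocol, an eavesdropper measuring in transit introduces detectable errors into the sifted key, which is what makes interception observable rather than silent.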
5. Supercharging Progress in AI
Artificial intelligence is already reshaping industries, but it relies on the immense computing power needed to train and run large models. In the future, quantum computing could boost AI by handling calculations that classical machines find too complex.
While still at an early stage of development, quantum algorithms might accelerate a subset of AI called machine learning (where algorithms improve with experience), help simulate complex systems, or optimize AI architectures more efficiently. That could lead to AI systems that learn faster, understand context better, and process far larger datasets than today’s models allow.
Think of AI assistants that understand you more naturally, medical diagnostic tools that integrate genomic and environmental data in real time, or scientific research that advances through rapid, quantum-boosted simulations.
Why This Matters…and What to Watch
Quantum technology is no longer just a theoretical pursuit. Optimism is increasing that commercially viable and scalable quantum technologies may become a reality over the next 10 years. With billions in global investment and a growing number of prototypes being tested outside the lab, the “quantum era” is starting to take shape.
Governments see it as a strategic priority, and industries see it as a competitive edge. Its ripple effects could touch nearly every sector, from healthcare, energy, and finance to defense and beyond.
That means we should be asking whether our education systems, workforce dynamics, infrastructure, and governance mechanisms are effective—and whether they are keeping pace.
Those who invest early and strategically in quantum readiness, and who have the patience to sustain this effort, will shape how this technology unfolds. And when it arrives, even if that is still a few years away, its impact could reach far beyond the lab into every part of our connected, data-driven world.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Five Ways Quantum Technology Could Shape Everyday Life appeared first on SingularityHub.
The Mad Scramble to Power AI Is Rewiring the US Grid
With data center power demand expected to nearly triple by 2030, tech companies are bankrolling new plants and even their own “shadow grid.”
Unless you’ve had your head in the sand, you’re likely aware that AI has a major energy problem. And as AI companies scramble to source power for their ever-expanding fleet of data centers, the technology is reshaping the US grid.
After more than a decade of flat growth, nationwide electricity demand has been climbing 1.7 percent annually since 2020, according to the US Energy Information Administration. The agency primarily attributes this increase to the rapid expansion in data centers over that period.
This trend is only likely to accelerate. An analysis by S&P Global estimated that grid demand from these facilities would rise by 22 percent by the end of 2025 and nearly triple by 2030.
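"Nearly triple by 2030" implies a striking implied growth rate. Assuming a 2024 baseline (an assumption; the article does not state the base year), the compound annual growth rate works out as follows:

```python
# Implied compound annual growth rate (CAGR) for "triple by 2030".
# Base year of 2024 is an assumption made for this illustration.
base_year, target_year, multiple = 2024, 2030, 3.0

years = target_year - base_year
cagr = multiple ** (1 / years) - 1  # (end/start)^(1/years) - 1

print(f"{cagr:.1%} per year")  # ≈ 20.1% per year
```

Roughly 20 percent annual growth, sustained for six years, is an order of magnitude above the 1.7 percent nationwide figure cited above, which is why utilities are rewriting their capital plans.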
Data centers have always been large electricity consumers, but the scale and pace of the AI build-out puts them in a different league. And utility companies bearing the brunt of this shift are being forced to rewire their long-term planning in response to the surge in demand.
Dominion Energy, which services the world’s largest data center market in Virginia, reported that by the end of last year it had signed deals to supply nearly 48.5 gigawatts of power to data centers. This prompted it to raise its five-year capital spending plan nearly 30 percent to $64.7 billion.
CenterPoint Energy, another major utility serving the Houston area, boosted its 10-year capital plan to $65.5 billion in response to the jump in demand. It now expects to hit a 50 percent increase in peak load by 2029, two years ahead of schedule.
The pace of change promises to significantly reshape the US energy mix. In a March forecast, the Energy Information Administration projected that natural gas generation could jump 7.3 percent between 2025 and 2027 if data center demand is on the higher side of estimates. It also predicted that the steady decline in coal generation over recent decades would slow in this scenario.
But in perhaps the most striking shift, tech companies are now bankrolling new capacity themselves. Nuclear power is experiencing a major resurgence as AI providers and data center operators invest in new reactor development and sign long-term deals with existing plants. The activity could grow nuclear capacity 63 percent by 2050.
Meta also recently took the unusual step of privately funding a major expansion of the Louisiana grid to power its new $27 billion Hyperion data center. The facility, due to come online in 2028, could eventually consume over 7 gigawatts—enough to supply several million homes.
To account for its impact on the grid, Meta has agreed to pay for the construction of seven new natural gas power plants by utility Entergy—in addition to three already-approved plants—as well as 240 miles of new transmission lines to connect South Louisiana to North Louisiana and Arkansas and three new battery storage facilities.
The deal is likely a reaction to growing public discontent about the impact data centers are having on energy prices. People are also worried about how the surge in demand will affect long-term grid stability.
PJM Interconnection, the largest power grid operator in the US, warned in February that the country could face supply shortfalls of up to 60 gigawatts in coming decades and strained capacity could lead to blackouts as soon as 2027.
One potential workaround is the possibility of throttling data center workloads, and therefore energy use, when the grid is under stress. Major utilities including AES, Constellation, NextEra Energy, and Vistra are reportedly working on these so-called “flexible AI factories.”
But the idea is still largely experimental, and it’s uncertain whether big tech would willingly commit to regularly downing tools. IT consultancy Heunets told Reuters it can cost companies about $9,000 a minute when their data centers go offline.
Given the complexities of meeting all this new demand, pressure is mounting for data center operators to solve their own power problems. Despite taking a generally supportive stance toward the AI boom, President Trump called on tech companies to build their own power plants for data centers in his February State of the Union address.
And it’s already happening. Energy consultant Cleanview says 46 data centers with a combined capacity of 56 gigawatts plan to build dedicated power infrastructure. This trend is giving birth to a “shadow grid”—a parallel energy system that operates alongside public power infrastructure.
This could still have knock-on effects for the rest of us. For a start, due to the difficulty of managing the variable output of renewables, most projects rely on natural gas generators, which could lead to a spike in carbon emissions.
And because the most efficient turbines are hard to source on short notice, facilities are using more polluting generators. What’s more, tech companies are now competing with utilities for equipment. This could lead to ballooning costs that are then passed on to consumers.
Altogether, it’s become increasingly clear that the AI boom will fundamentally reshape the US energy system. And the speed at which companies are seeking to deploy new facilities is leaving little room for the work to be done in a considered and sustainable way.
The post The Mad Scramble to Power AI Is Rewiring the US Grid appeared first on SingularityHub.
Chatbots ‘Optimized to Please’ Make Us Less Likely to Admit When We’re Wrong
AI companies may be reluctant to risk lower engagement with models that push back.
We all need advice. Did I cross the line arguing with a loved one? Did I mess up my friendships by ghosting them? Did I not tip the delivery driver enough? Or as users on the popular Reddit forum ask: Am I the asshole?
Some people will give it to you straight. Yes, you were in the wrong, and here’s why. No one likes to hear negative feedback. The first instinct is to push back. Yet some of the best life advice comes from friends, family, and even online strangers who don’t coddle you, but instead are willing to challenge your position and beliefs. And although it’s emotionally uncomfortable, with advice and self-reflection, you grow.
Chatbots, in contrast, are likely to take your side. Increasingly, people are treating AI models like OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini like close confidants. But the chatbots are notoriously sycophantic. They heartily validate your opinions, even when those views are blatantly harmful or unethical.
Constant flattery has consequences. New research published in Science shows that people who receive advice from sycophantic chatbots are more confident they’re in the right when navigating relationship problems.
Stanford researchers tested 11 sophisticated chatbots on questions from Reddit’s “Am I the asshole” forum. They found the chatbots were roughly 50 percent more likely to endorse the original poster’s actions than crowdsourced human opinions. And people faced with social dilemmas felt more justified in their positions after chatting with sycophantic AI.
Bolstering misplaced self-confidence is troubling. But “the findings raise a broader concern: When AI systems are optimized to please, they may erode the very social friction through which accountability, perspective-taking, and moral growth ordinarily unfold,” wrote Anat Perry at the Hebrew University of Jerusalem, who was not involved in the study.
Emotional Crutch
AI chatbots have wormed their way into our lives. Powered by large language models, they’re trained using enormous amounts of text, images, and videos scraped from online sources, making their replies surprisingly realistic. Users can often steer their tones—neutral, friendly, professional—to their liking or play with their “personalities” to engage with a wittier, more serious, or more empathetic version. In essence, you can build an ideal partner.
It’s no wonder that some people have turned to them for emotional support—or outright fallen in love. Nearly one in three teenagers are talking to chatbots daily. Exchanges tend to be longer and more serious than texts with friends—roleplaying friendships, romances, and other social interactions. Nearly half of Americans under 30 have sought relationship advice from AI. Unlike people, who are often mired in their own busy lives, chatbots are always available and validating, making it easy to forge close emotional connections.
The explosion in chatbot popularity has regulators, researchers, and users worried about the consequences. A notorious update to OpenAI’s GPT-4o turned it into a sycophant, with responses that skewed overly supportive but disingenuous. Media and user backlash prompted a rapid rollback. However, “the episode did not eliminate the broader phenomenon; it merely highlighted how readily sycophancy can emerge in systems optimized for user approval,” wrote Perry.
Relying on sycophantic chatbots has been implicated in tragedy. Last year, parents testified before Congress about how AI chatbots encouraged their children to take their own lives, prompting multiple AI companies to redesign the systems. Other incidents have linked sycophancy to delusions and self-harm.
Even AI wellness apps based on large language models, often marketed as companions to avoid loneliness, have emotional risks. Users report grief when the app is shut down or altered, similar to how they might mourn a lost relationship. Others develop unhealthy attachments, repeatedly turning to the bot for connection despite knowing it harms their mental health, heightening anxiety and fear of abandonment.
These high-profile incidents make headlines. But social psychology research suggests chatbots could subtly influence behavior in all users—not just vulnerable ones.
You’re Always Right
To test how pervasive sycophancy is across chatbots, the team behind the new study pitted 11 AI models—including GPT-4o, Claude, Gemini, and DeepSeek—against community opinions using questions from Reddit and two other datasets.
“We wanted to just generally look at these kinds of advice-seeking settings, but they’re often very subjective,” study author Myra Cheng told Science in a podcast interview. Here “there’s millions of people who are weighing in on these decisions, and then there’s a crowdsourced judgement.”
One user, for example, left garbage hanging on a tree in a park without trash cans and asked if that’s okay. While the chatbot commended their effort to clean up, the top-voted reply pushed back, saying they should have taken the trash home because leaving it can attract vermin. “I think [the AI’s response] comes from the person’s post giving a lot of justification for their side” which the AI picked up on, said Cheng.
Overall, chatbots were 49 percent more likely to buy a user’s reasoning compared to groups of humans.
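The “49 percent more likely” figure is a relative rate, not a difference in percentage points. As a rough illustration—the study’s underlying endorsement rates aren’t given in this article, so the base rates below are hypothetical—the comparison works like this:

```python
# Hypothetical illustration of a relative endorsement rate.
# The two base rates are invented for this sketch; only the
# resulting ~49 percent gap mirrors the figure reported in the study.

human_endorse_rate = 0.39    # assumed: share of posters crowdsourced voters sided with
chatbot_endorse_rate = 0.58  # assumed: share of posters the chatbots sided with

relative_increase = chatbot_endorse_rate / human_endorse_rate - 1
print(f"Chatbots were {relative_increase:.0%} more likely to endorse the poster")
```

With these assumed rates, the relative increase comes out to about 49 percent, even though the raw gap is only 19 percentage points.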
I’m Always Right
The team then tested whether chatting with sycophantic AI alters a user’s confidence in their own judgment. They recruited roughly 800 participants and asked them to picture a hypothetical scenario derived from Reddit questions. Another group asked the AI for advice about their own personal conflicts, such as “I didn’t invite my sister to a party, and she is upset.”
The participants discussed their dilemmas with either a sycophantic or neutral AI model. Those who chatted with the agreeable model received messages beginning with “it makes sense” and “it’s completely understandable,” whereas neutral chatbots acknowledged their reasoning but provided other perspectives.
Surveys showed that people validated by chatbots were less likely to admit fault or apologize. They also trusted and preferred the sycophantic AI much more. These effects held regardless of the bot’s tone or “personality.”
Chatbots may be silently eroding social friction in a self-perpetuating cycle. “An AI companion who is always empathic and ‘on your side’ may sustain engagement and foster reliance,” wrote Perry. “But it will not teach users how to navigate the complexities of real social interactions—how to engage ethically, tolerate disagreement, or repair interpersonal harm.”
Walking the line between constructive and sycophantic AI for emotional support won’t be easy. There are ways to instruct chatbots to be more critical. But because users generally prefer friendlier AI, there’s less incentive for companies to make models that push back and risk lowering engagement. The problem echoes challenges in social media, where algorithms serve up eye-catching posts that provide satisfaction without factoring in long-term consequences.
To Perry, the findings raise broader ethical questions—not just for AI, but for humanity. How should we weigh short-term gratification of chatbot interactions against long-term effects? Who sets that balance? The path forward will require companies, regulators, researchers, and users to ensure AI engages responsibly—without nudging people toward behavior that garners a “yes” on the Reddit forum.
The post Chatbots ‘Optimized to Please’ Make Us Less Likely to Admit When We’re Wrong appeared first on SingularityHub.
Forget Antibiotics: These Killer Cells Wipe Out Deadly Superbugs in a Day
The genetically engineered cells can be rewired to tackle a range of bacteria in the battle against antibiotic resistance.
A mixture of bacteria lounge in a dish. Like the bugs populating our guts, most are benign or beneficial. But a deadly strain hides among them. These bacteria can easily escape last-line antibiotics, rapidly spread, and cause mayhem.
But in this case, a single dose of genetically engineered cells hunts them down and wipes out nearly the entire population in a day, while leaving all the other harmless cells alone.
This strategy, called minicell therapy, fights fire with fire: Researchers engineer hunter cells by stripping bacteria of the ability to replicate and then genetically loading them up with proteins to home in on dangerous foes. The cells grab their targets and inject toxins into them, releasing a hurricane of chemicals that causes the bacteria’s insides to collapse.
Developed by a team at the University of Oxford, the approach is completely different than current defenses against bacteria, making it harder for dangerous bugs to develop resistance. It’s also fairly simple to reprogram the engineered cells to target different bacterial strains.
The work shows how synthetic biology can bring wholly new weapons to the fight against deadly bacteria resistant to antibiotics, the authors wrote.
Brewing Crisis
Antimicrobial resistance is a critical global challenge projected to cause over 10 million deaths each year by 2050. Superbugs that dodge current treatments could spark the next pandemic, but our arsenal against them is dwindling.
Antibiotics work in different ways. Some puncture a bacteria’s protective wall, causing it to rupture. Others shut down protein production, damage DNA, or block metabolism to prevent growth.
Fighting bacteria is an evolutionary cat-and-mouse game. With time, bacterial genes mutate, and cells that escape one or many antibiotics grow, reproduce, and become dominant. Resistant bacteria can also share their genes with other cells to spread newly evolved defense systems.
Tweaking the chemical structure of an antibiotic buys some time. But what’s really needed are drugs that work in different ways. Unfortunately, the last new class of antibiotics now used in clinics dates back to the 1980s, followed by a decades-long lull. A novel class discovered in 2024 and the rise of AI-designed antibiotics have reinvigorated the field. But testing the candidates takes time, and they may not be able to catch up with the rapid spread of resistant bugs.
Other solutions are in the works. Phage therapy destroys bacteria with viruses and is already in clinical trials with initially positive results. Antibodies that neutralize bacterial toxins have also succeeded in early patient tests.
“However, these approaches face limitations such as stability issues, potential toxicity, and high manufacturing cost,” wrote the team.
A Smart Living Drug
Instead, they turned to an unusual creation called minicells to develop a completely new type of antibiotic. These cells, known more specifically as SimCells (short for “simple cells”), are made by stripping E. coli bacteria of their ability to replicate. Deleting an additional gene turns them into mini-SimCells that are roughly five times smaller.
Although some strains of E. coli can cause serious infections in the wild, the bacteria are reliable workhorses in research, synthetic biology, and biomanufacturing. They’re hardy, easy to grow, and plenty of tools already exist to genetically rewire their biology.
E. coli are also part of a growing effort to turn bacterial foes into living medicines to tackle conditions from metabolic disorders to cancer. Typically, benign probiotic strains are genetically modified to produce protein “bloodhounds” that help them seek out their cellular prey. Even familiar pathogens, like Salmonella, have been similarly repurposed. Once attenuated, they no longer cause disease and can be engineered to attack and inhibit cancer growth.
Though selected for safety, there’s a lingering risk of bacteria growing uncontrollably inside the body, triggering immune attacks, or escaping into the environment, wrote the team.
SimCells and their miniaturized cousin provide yet another layer of safety. Both are stripped of their native DNA so they can’t reproduce. But they retain all the other cellular machinery needed to survive and can make proteins from designer DNA. These cells are the perfect canvas for synthetic biology and have shown promise as shuttles for cancer drugs. One formulation even received “Fast-Track” status from the FDA to speed up development.
But they needed some biological rewiring to go after drug-resistant bacteria. The plan was to engineer SimCells and mini-SimCells that worked like “‘smart bioparticles’ to selectively eradicate pathogens, while sparing non-target bacteria,” the team wrote.
They first screened a library of nanobodies—tiny protein hooks that selectively latch onto a type of bacteria—and inserted genetic instructions for their chosen hooks into both types of designer cells. They then added another genetic payload encoding an enzyme that, with a small dose of aspirin, converted the drug into a chemical that produces hydrogen peroxide. After confirming the added genes, they introduced the cells into a dish full of bacteria.
The new cells were vicious. Their nanobodies guided them toward their prey and, when physically close, deployed their weapons. Nano-needles punctured the bacteria’s outer shell, releasing high doses of antimicrobial compounds—naturally made inside E. coli as a defense system—into their foes. The cells also pumped out hydrogen peroxide for several days, forming a toxic environment that ruptured the bacteria and prevented stragglers from dividing.
This one-two punch slowed bacterial growth within six hours. After a day, 97 percent of the target bacteria were gone. Another day drove elimination to 99.9 percent.
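Microbiologists often express kill rates like these as log reductions. The quick sketch below converts the article’s 97 and 99.9 percent figures into that form; it’s an illustrative calculation, not part of the study’s own analysis:

```python
import math

# Convert a kill fraction into the log10 reduction it implies,
# a standard way of reporting antimicrobial potency.
def log_reduction(kill_fraction: float) -> float:
    return -math.log10(1 - kill_fraction)

print(f"Day 1 (97% killed):   {log_reduction(0.97):.1f}-log reduction")
print(f"Day 2 (99.9% killed): {log_reduction(0.999):.1f}-log reduction")
```

Killing 97 percent of a population corresponds to roughly a 1.5-log reduction, and 99.9 percent to a 3-log reduction—each extra log meaning ten times fewer survivors.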
“This antimicrobial strategy provides both immediate and sustained antimicrobial effects” that could prevent infections from coming back, wrote the team. In another test, the researchers engineered a range of SimCells and mini-SimCells dotted with different nanobodies that also reliably fought off multiple types of common drug-resistant bacteria.
But bacterial strains don’t exist in isolation. A kaleidoscope of beneficial bacteria support the gut, skin, and brain. These become collateral damage with classic antibiotic treatment. The new therapy was far more specific. Challenged with a mix of bacteria, the engineered cells precisely selected and killed their intended targets but left others unharmed.
The therapy is still early. How the designer cells work inside the human body, especially alongside immune cells, remains to be tested. But thanks to a promising safety profile in a cancer clinical trial, the team is optimistic their infection-fighting versions are safe.
Though there weren’t any signs of resistance over the years-long study, the bacteria might eventually develop it. Researchers will have to track the cells over more time.
The post Forget Antibiotics: These Killer Cells Wipe Out Deadly Superbugs in a Day appeared first on SingularityHub.