Transhumanism
Logging off Life but Living on: How AI Is Redefining Death, Memory, and Immortality
Our digital legacies don’t just preserve memories; they can continue to influence the world, long after we’re gone.
Imagine attending a funeral where the person who has died speaks directly to you, answering your questions and sharing memories. This happened at the funeral of Marina Smith, a Holocaust educator who died in 2022.
Thanks to an AI technology company called StoryFile, Smith seemed to interact naturally with her family and friends.
The system used prerecorded answers combined with artificial intelligence to create a realistic, interactive experience. This wasn’t just a video; it was something closer to a real conversation, giving people a new way to feel connected to a loved one after they’re gone.
Virtual Life After Death
Technology has already begun to change how people think about life after death. Several technology companies are helping people manage their digital lives after they’re gone. For example, Apple, Google, and Meta offer tools to allow someone you trust to access your online accounts when you die.
Microsoft has patented a system that can take someone’s digital data—such as texts, emails and social media posts—and use it to create a chatbot. This chatbot can respond in ways that sound like the original person.
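To make the idea concrete, here is a minimal sketch of how such a persona chatbot could work at the prompt level. This is purely illustrative, not Microsoft's patented system; `llm_complete` is a hypothetical placeholder for any text-generation API.

```python
# Illustrative sketch only, not Microsoft's patented system. It shows the
# general prompt-based approach: condition a language model on a person's
# archived messages so that replies mimic their voice. `llm_complete` is a
# hypothetical placeholder for any text-generation API.

def build_persona_prompt(name: str, samples: list[str], question: str) -> str:
    """Assemble a prompt that conditions the model on archived messages."""
    examples = "\n".join(f"- {s}" for s in samples)
    return (
        f"The following are messages written by {name}:\n{examples}\n\n"
        f"Reply to the question below in {name}'s voice and style.\n"
        f"Question: {question}\nAnswer:"
    )

def persona_reply(name, samples, question, llm_complete):
    """Generate a reply in the person's style via the supplied model."""
    return llm_complete(build_persona_prompt(name, samples, question))
```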
In South Korea, a group of media companies took this idea even further. A documentary called “Meeting You” showed a mother reunited with her daughter through virtual reality. Using advanced digital imaging and voice technology, the mother was able to see and talk to her dead daughter as if she were really there.
These examples may seem like science fiction, but they’re real tools available today. As AI continues to improve, the possibility of creating digital versions of people after they die feels closer than ever.
Who Owns Your Digital Afterlife?
While the idea of a digital afterlife is fascinating, it raises some big questions. For example, who owns your online accounts after you die?
This issue is already being discussed in courts and by governments around the world. In the United States, nearly all states have passed laws allowing people to include digital accounts in their wills.
In Germany, courts ruled that Facebook had to give a deceased person’s family access to their account, saying that digital accounts should be treated as inheritable property, like a bank account or house.
But there are still plenty of challenges. For example, what if a digital clone of you says or does something online that you would never have said or done in real life? Who is responsible for what your AI version does?
When a deepfake of actor Bruce Willis appeared in an ad without his permission, it sparked a debate about how people’s digital likenesses can be controlled, or even exploited, for profit.
Cost is another issue. While some basic tools for managing digital accounts after death are free, more advanced services can be expensive. For example, creating an AI version of yourself might cost thousands of dollars, meaning that only wealthy people could afford to “live on” digitally. This cost barrier raises important questions about whether digital immortality could create new forms of inequality.
Grieving in a Digital World
Losing someone is often painful, and in today’s world, many people turn to social media to feel connected to those they’ve lost. Research shows that a significant proportion of people maintain their social media connections with deceased loved ones.
But this new way of grieving comes with challenges. Unlike physical memories such as photos or keepsakes that fade over time, digital memories remain fresh and easily accessible. They can even appear unexpectedly in your social media feeds, bringing back emotions when you least expect them.
Some psychologists worry that staying connected to someone’s digital presence could make it harder for people to move on. This is especially true as AI technology becomes more advanced. Imagine being able to chat with a digital version of a loved one that feels almost real. While this might seem comforting, it could make it even harder for someone to accept their loss and let go.
Cultural and Religious Views on Digital Afterlife
Different cultures and religions have their own unique perspectives on digital immortality. For example:
• The Vatican, the center of the Catholic Church, has said that digital legacies should always respect human dignity.
• In Islamic traditions, scholars are discussing how digital remains fit into religious laws.
• In Japan, some Buddhist temples are offering digital graveyards where families can preserve and interact with digital traces of their loved ones.
These examples show how technology is being shaped by different beliefs about life, death, and remembrance. They also highlight the challenges of blending new innovations with long-standing cultural and religious traditions.
Planning Your Digital Legacy
When you think about the future, you probably imagine what you want to achieve in life, not what will happen to your online accounts when you’re gone. But experts say it’s important to plan for your digital assets: everything from social media profiles and email accounts to digital photos, online bank accounts and even cryptocurrencies.
Adding digital assets to your will can help you decide how your accounts should be managed after you’re gone. You might want to leave instructions about who can access your accounts, what should be deleted, and whether you’d like to create a digital version of yourself.
You can even decide if your digital self should “die” after a certain amount of time. These are questions that more and more people will need to think about in the future.
Here are steps you can take to control your digital afterlife:
• Decide on a digital legacy. Reflect on whether creating a digital self aligns with your personal, cultural or spiritual beliefs. Discuss your preferences with loved ones.
• Inventory and plan for digital assets. Make a list of all digital accounts, content, and tools representing your digital self. Decide how these should be managed, preserved, or deleted (a machine-readable version of such an inventory is sketched after this list).
• Choose a digital executor. Appoint a trustworthy, tech-savvy person to oversee your digital assets and carry out your wishes. Clearly communicate your intentions with them.
• Ensure that your will covers your digital identity and assets. Specify how they should be handled, including storage, usage and ethical considerations. Include legal and financial aspects in your plan.
• Prepare for ethical and emotional impacts. Consider how your digital legacy might affect loved ones. Plan to avoid misuse, ensure funding for long-term needs, and align your decisions with your values.
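For the inventory step, a digital executor is easier to brief with something structured. Below is a hypothetical, machine-readable version of such a plan; the field names and format are invented for illustration, not any standard.

```python
# Hypothetical, machine-readable digital-estate plan. Field names and
# structure are invented for illustration; this is not a standard format.
digital_estate = {
    "executor": "a trusted, tech-savvy person named in your will",
    "assets": [
        {"account": "primary email",        "action": "archive, then delete after 1 year"},
        {"account": "social media profile", "action": "memorialize"},
        {"account": "photo library",        "action": "transfer to family"},
        {"account": "crypto wallet",        "action": "transfer per will; key location documented"},
    ],
    "ai_replica": {
        "permitted": False,           # whether a digital self may be created
        "expires_after_years": None,  # an optional "digital death" date
    },
}
```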
Digital Pyramids
Thousands of years ago, the Egyptian pharaohs had pyramids built to preserve their legacy. Today, our “digital pyramids” are much more advanced and broadly available. They don’t just preserve memories; they can continue to influence the world, long after we’re gone.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
A Paralyzed Man Just Piloted a Virtual Drone With His Mind Alone
Linking brains to machines has gone from science fiction to reality in the past two decades.
When patient T5 suffered a spinal cord injury that left him paralyzed, his dream of flying a drone seemed forever out of reach.
Now, thanks to a brain implant, he’s experienced the thrill in a simulation. By picturing finger movements in his mind, the 69-year-old flew a virtual drone in a video game, with the quadcopter dodging obstacles and whizzing through randomly appearing rings in real time.
T5 is part of the BrainGate2 Neural Interface System clinical trial, which launched in 2009 to help paralyzed people control computer cursors, robotic arms, and other devices by decoding electrical activity in their brains. It’s not just for gaming. Having the ability to move and click a cursor gets them back online. Googling, emailing, streaming shows, scrolling through social media posts—what able-bodied people spend hours on every day—are now again part of their lives.
But cursors can only do so much. Popular gaming consoles—PlayStation, Xbox, Nintendo Switch—require you to precisely move your fingers, especially thumbs, fast and in multiple directions.
Current brain implants often take a bird’s-eye view of the entire hand. The new study, published in Nature Medicine, separated the fingers into three groups: the thumb; the pointer and middle fingers; and the ring finger and pinky. After training, T5 could move each finger group independently with unprecedented finesse. His brain implant also picked up intentions to stretch, curl, or move his thumb side to side, letting him pilot the drone as if using a video game controller.
Calling his gaming sessions “stick time,” T5 enthusiastically said that piloting the drone allowed him to mentally “rise up” from his bed or chair for the first time since his injury. Like other gamers, he asked the research team to record his best runs and share the videos with friends.
Brain-computer mind-melds are “expanding from functional to recreational applications,” wrote Nick Ramsey and Mariska Vansteensel at the University Medical Center Utrecht, who were not involved in the study.
Mind Control
Linking brains to machines has gone from science fiction to reality in the past two decades, and it’s been life-changing for people paralyzed from spinal cord injuries.
These injuries, either due to accident or degeneration, sever nerve highways between the brain and muscles. Scientists have long sought to restore these connections. Some have worked to regenerate broken nerve endings inside the body, with mixed results. Others are building artificial “bridges” over the gap. These implants, often placed in the spinal cord above the injury site, record signals from the brain, decode intention for movement, and stimulate muscles to contract or relax. Thanks to such systems, paralyzed people have been able to walk again—often with assistance—for long distances and minimal training.
Other efforts have done without muscles altogether, instead tapping directly into the brain’s electrical signals to hook the mind to a digital universe. Previous studies have found that watching or imagining movements—like, say, asking a patient to picture moving a cursor around a browser—generates similar brain patterns to physically performing the movements. Recording these “brain signatures” from individual people can then decode their intention to move.
Noland Arbaugh, the first person to receive a brain implant from Elon Musk’s Neuralink, is perhaps the most well-known success. Late last year, the young man livestreamed his life for three days, sharing his view while moving a cursor and playing a video game in bed.
Decoding individual finger movements, however, is a bigger challenge. Our hands are especially dexterous and flexible, making it easy to type, play musical instruments, grab a cup of coffee, or twiddle our thumbs. Each finger is controlled by intricate networks of brain activity working together under the hood to generate complex movements.
Fingers curl, wiggle, and stretch apart. Deciphering the brain patterns that allow them to individually and collectively work together has stymied researchers. “In humans, finger decoding has only been demonstrated in prediction in offline analyses or classification from recorded neural activity,” wrote the authors. Brain signals haven’t yet been used to control fingers in real time. Even in monkeys, brain implants have only been able to separate fingers into two groups that move independently, limiting their paws’ overall flexibility.
A Virtual Flex
In 2016, T5 had two tiny implants inserted into the hand “knob” of his brain—one for each side that controls hand and finger movements. Each implant, the size of a baby aspirin, had 96 microelectrode channels that quietly captured his brain activity as he went through a series of training tasks. At the time of surgery, T5 could only twitch his hands and feet randomly.
The team first designed a hand avatar. It didn’t fully capture the dexterity of a human hand. The index and middle finger moved together as a group, as did the ring finger and pinkie. Meanwhile, the thumb could stretch, curl, and move side to side.
For training, T5 watched the hand avatar move and imagined moving his fingers in sync. Using an artificial neural network that specializes in decoding signals across time, the team next built an AI to decipher T5’s brain activity and correlate each pattern with different types of finger movements. The “decoder” was then used to translate his intentions into actual movements of the hand avatar on the computer screen.
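The paper's exact architecture isn't reproduced here, but a minimal sketch of the general approach, a recurrent network mapping binned neural activity to finger-group velocities, might look like this in PyTorch (channel counts and output dimensions are assumptions for illustration):

```python
import torch
import torch.nn as nn

class FingerDecoder(nn.Module):
    """Toy recurrent decoder: binned neural activity in, finger-group
    velocities out. Dimensions are illustrative, not the study's."""
    def __init__(self, n_channels=192, hidden=256, n_outputs=4):
        # n_channels: e.g. two 96-electrode arrays; n_outputs: a 2D thumb
        # (flex/extend plus side-to-side) and two 1D finger groups
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_outputs)

    def forward(self, x):          # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.readout(h)     # velocity estimates at each time step

decoder = FingerDecoder()
spikes = torch.randn(1, 50, 192)   # 50 time bins of simulated activity
velocities = decoder(spikes)       # shape: (1, 50, 4)
```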
In an initial test that only allowed the thumb to extend and curl—what the researchers call “2D”—the participant was able to extend his finger groups onto a virtual target with over 98 percent accuracy. Each attempt took only a bit more than a second.
Adding side-to-side movement of the thumb had a similar success rate, but doubled the amount of time (though he got faster as he became familiar with the task). Overall, T5 could mind-control his virtual hand to reach around 76 targets a minute, far faster than previous attempts. The training “wasn’t tedious,” he said.
Each finger group movement was then mapped onto a virtual drone. Like moving joysticks and pressing buttons on a video game controller, the finger movements moved the quadcopter at will. The system kept the virtual hand in a relaxed, neutral pose unless T5 decided to move any of the finger groups.
In a day of testing, he flew the drone a dozen times across multiple obstacle courses. Each course required him to use one of the finger group movements to successfully navigate randomly appearing rings and other hurdles. One challenge, for example, had him fly figure eights across multiple rings without hitting them. The system was roughly six times better than prior systems.
Although his virtual fingers and their movements were shown on the computer screen while playing, the visuals weren’t necessary.
“When the drone is moving and the fingers are moving, it’s easier and faster to just look at the drone,” he said. Piloting it was intuitive, “like riding your bicycle on your way to work, [thinking] ‘what am I going to do at work today’, and you’re still shifting gears on your bike and moving right along.”
Adapting from simple training exercises to more complicated movements was also easy. “It’s like if you’re a clarinet player, and you pick up someone else’s clarinet. You know the difference instantly, and there is a little learning curve involved, but that’s based on you [having] an implied competency with your clarinet,” he said. To control the drone, you just have to “tickle it a direction,” he added.
The system is still far from commercial use, and it will have to be tested on more people. New brain implant hardware with more channels could further boost performance. But it’s a first step that opens up multiplayer online gaming—and potentially, better control of other computer programs and sophisticated robotic hands—to people with paralysis, enriching their social lives and overall wellbeing.
This Week’s Awesome Tech Stories From Around the Web (Through January 18)
These were our favorite articles in science and tech this week.
OpenAI Has Created an AI Model for Longevity Science Antonio Regalado | MIT Technology Review
“When you think of AI’s contributions to science, you probably think of AlphaFold, the Google DeepMind protein-folding program that earned its creator a Nobel Prize last year. Now OpenAI says it’s getting into the science game too—with a model for engineering proteins. The company says it has developed a language model that dreams up proteins capable of turning regular cells into stem cells—and that it has handily beat humans at the task.”
This MIT Spinout Wants to Spool Hair-Thin Fibers Into Patients’ Brains Connie Loizos | TechCrunch
“[NeuroBionics] thinks it could one day improve the lives of millions of people who live with neurological conditions like depression, epilepsy, and Parkinson’s disease. Famed investor Steve Jurvetson of Future Ventures says that if everything goes right for the 18-month-old outfit, its approach could further address ‘the peripheral nervous system for pain, incontinence, and a bunch of other applications.'”
An Entire Book Was Written in DNA—and You Can Buy It for $60 Emily Mullin | Wired
“DNA data storage isn’t exactly mainstream yet, but it might be getting closer. Now you can buy what may be the first commercially available book written in DNA. Today, Asimov Press debuted an anthology of biotechnology essays and science fiction stories encoded in strands of DNA. For $60, you can get a physical copy of the book plus the nucleic acid version—a metal capsule filled with dried DNA.”
Roar of New Glenn’s Engines Silences Skeptics of Bezos’ Blue Origin Kenneth Chang | The New York Times
“The launch was a major success for Blue Origin, Mr. Bezos’ rocket company. It should quiet critics who say that the company has been too slow compared with Elon Musk’s SpaceX, which has dominated the global spaceflight industry in recent years. New Glenn could prove a credible competitor with Mr. Musk’s company and win launch contracts from NASA and the Department of Defense, as well as commercial contracts.”
New Superconductive Materials Have Just Been Discovered Charlie Wood | Wired
“In 2024, superconductivity—the flow of electric current with zero resistance—was discovered in three distinct materials. Two instances stretch the textbook understanding of the phenomenon. The third shreds it completely. ‘It’s an extremely unusual form of superconductivity that a lot of people would have said is not possible,’ said Ashvin Vishwanath, a physicist at Harvard University who was not involved in the discoveries.”
Fire Destroys Starship on Its Seventh Test Flight, Raining Debris From Space Stephen Clark | Ars Technica
“SpaceX launched an upgraded version of its massive Starship rocket from South Texas on Thursday, but the flight ended less than nine minutes later after engineers lost contact with the spacecraft. …Within minutes, residents and tourists in the Turks and Caicos Islands, Haiti, the Dominican Republic, and Puerto Rico shared videos showing a shower of debris falling through the atmosphere along Starship’s expected flight corridor.”
A Promising (and Surprisingly Simple) Way to Detect Alien Life Dirk Schulze-Makuch | Big Think
“Studying motility—the ability of organisms (in this case, microbial life) to move independently in their environment—could be an effective way to find and identify extraterrestrial life. Recent research shows that microbes respond to stress, like high salt levels, by moving, making this a potential method for finding life on Mars. The research could also help detect deadly pathogens like cholera in water, improving public health on Earth.”
‘The New York Times’ Takes OpenAI to Court. ChatGPT’s Future Could Be on the Line Bobby Allyn | NPR
“The lawsuit…calls for the destruction of ChatGPT’s dataset. That would be a drastic outcome. If the publishers win the case, and a federal judge orders the dataset destroyed, it could completely upend the company, since it would force OpenAI to recreate its dataset relying only on works it has been authorized to use.”
Not Just Heat Death: Here Are Five Ways the Universe Could End Paul Sutter | Ars Technica
“If you’re having trouble sleeping at night, have you tried to induce total existential dread by contemplating the end of the entire universe? If not, here’s a rundown of five ideas exploring how ‘all there is’ might become ‘nothing at all.’ Enjoy.”
MIT’s Latest Bug Robot Is a Super Flyer. It Could One Day Help Bees Pollinate Crops.
The bot does acrobatic double flips faster than a fruit fly and stays aloft 100 times longer than other robots.
Rapid declines in insect populations are leading to concerns that the pollination of important crops could soon come under threat. Tiny flying robots designed by MIT researchers could one day provide a mechanical solution.
Numbers of critical pollinators like bees and butterflies are declining rapidly in the face of environmental degradation and climate change, which research suggests could put as much as one third of the food we eat at risk.
While the most obvious solution to this crisis is to find ways to reverse these declines, engineers have also been investigating whether technology could help plug the gaps. Several groups have been building insect-scale flying robots that they hope could one day be used to pollinate crops.
Now, a team at MIT has unveiled a new design that they say is much more agile than predecessors and capable of flying 100 times longer. The bug-sized bot is powered by flapping wings and can even carry out complex acrobatic maneuvers like double aerial flips.
MIT’s flying insect robot. Image Credit: MIT
“The amount of flight we demonstrated in this paper is probably longer than the entire amount of flight our field has been able to accumulate with these robotic insects,” associate professor Kevin Chen, who led the project, said in a press release. “With the improved lifespan and precision of this robot, we are getting closer to some very exciting applications, like assisted pollination.”
The new design, reported in Science Robotics, weighs just 750 milligrams (0.03 ounces) and features four modules, each consisting of a carbon-fiber airframe, an artificial muscle that can be electrically activated, a wing, and a transmission to transfer power from the muscle to the wing.
Previous versions of these modules featured roughly the same configuration, but with two flapping wings apiece. However, Chen says this resulted in the downdraft of the wings interfering with each other, reducing the amount of lift generated. In the new set-up, each module’s wing faces away from the robot, which boosts the amount of thrust it can generate.
One of the main reasons for the short working life of previous designs was the significant mechanical stress generated by the flapping movement of the wings. An upgraded transmission and a longer wing hinge helped reduce the strain—allowing the robot to generate more power than before and last longer.
Put together, these changes allowed the robot to achieve average speeds of 35 centimeters per second (13.8 inches per second)—the fastest flight researchers have reported—and sustain hovering for nearly 17 minutes. “We’ve shown flight that is 100 times longer than anyone else in the field has been able to do, so this is an extremely exciting result,” says Chen.
The robot was also able to carry out precise maneuvers, including tracing out the letters MIT in midair, as well as acrobatic double flips with a greater rotational speed than a fruit fly and four times as fast as the previous quickest robot.
A closeup image of one of the robot’s upgraded wings. Image Credit: MIT
Currently, the bug-bot is powered by a cable, which means it can’t move about freely. But the researchers say cutting down the number of wings freed up space on the airframe that could be used to install batteries, sensors, and other electronics that would enable it to navigate outside the lab.
That’s likely still some way off though. For now, Chen says the goal is to boost the flight time by another order of magnitude and increase the flight precision so the robot can take off and land from the center of a flower.
If all that comes together, however, our beleaguered natural pollinators may soon have some much-needed help in their efforts to keep our food systems ticking.
Meta’s New AI Translates Speech in Real Time Across More Than 100 Languages
It’s accurate and nearly as fast as expert human interpreters.
The dream of a universal AI interpreter just got a bit closer. This week, tech giant Meta released a new AI that can almost instantaneously translate speech in 101 languages as soon as the words tumble out of your mouth.
AI translators are nothing new. But they generally work best with text and struggle to transform spoken words from one language to another. The process is usually multistep. The AI first turns speech into text, translates the text, and then converts it back to speech. Though already useful in everyday life, these systems are inefficient and laggy. Errors can also sneak in at each step.
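In pseudocode-style Python, the contrast looks something like this (the stage functions are placeholders, not any real library's API):

```python
# Illustrative contrast between the two designs. The stage functions are
# placeholders, not a real library's API.

def cascaded_translate(audio, asr, mt, tts):
    """Classic three-step pipeline: each stage adds latency, and an error
    made early (a misheard word, say) propagates through the rest."""
    text = asr(audio)          # speech -> source-language text
    translated = mt(text)      # text -> target-language text
    return tts(translated)     # text -> target-language speech

def direct_translate(audio, speech_to_speech_model):
    """End-to-end approach: one model maps source speech straight to
    target speech, with no intermediate text to compound errors."""
    return speech_to_speech_model(audio)
```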
Meta’s new AI, dubbed SEAMLESSM4T, can directly convert speech into speech. Using a voice synthesizer, the system translates words spoken in 101 languages into 36 others—not just into English, which tends to dominate current AI interpreters. In a head-to-head evaluation, the algorithm is 23 percent more accurate than today’s top models—and nearly as fast as expert human interpreters. It can also translate text into text, text into speech, and vice versa.
Meta is releasing all the data and code used to develop the AI to the public for non-commercial use, so others can optimize and build on it. In a sense, the algorithm is “foundational,” in that “it can be fine-tuned on carefully curated datasets for specific purposes—such as improving translation quality for certain language pairs or for technical jargon,” wrote Tanel Alumäe at Tallinn University of Technology, who was not involved in the project. “This level of openness is a huge advantage for researchers who lack the massive computational resources needed to build these models from scratch.”
It’s “a hugely interesting and important effort,” Sabine Braun at the University of Surrey, who was also not part of the study, told Nature.
Self-Learning AI
Machine translation has made strides in the past few years thanks to large language models. These models, which power popular chatbots like ChatGPT and Claude, learn language by training on massive datasets scraped from the internet—blogs, forum comments, Wikipedia.
In translation, humans carefully vet and label these datasets, or “corpuses,” to ensure accuracy. Labels or categories provide a sort of “ground truth” as the AI learns and makes predictions.
But not all languages are equally represented. Training corpuses are easy to come by for high-resource languages, such as English and French. Meanwhile, corpuses for low-resource languages, largely spoken in mid- or low-income countries, are harder to find—making it difficult to train a data-hungry AI translator on trusted datasets.
“Some human-labeled resources for translation are freely available, but often limited to a small set of languages or in very specific domains,” wrote the authors.
To get around the problem, the team used a technique called parallel data mining, which crawls the internet and other resources for audio snippets in one language with matching subtitles in another. These pairs, which match in meaning, add a wealth of training data in multiple languages—no human annotation needed. Overall, the team collected roughly 443,000 hours of audio with matching text, resulting in about 30,000 aligned speech-text pairs.
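Conceptually, the mining step works by embedding audio and text in a shared multilingual vector space and keeping pairs whose embeddings are close. A toy sketch, with random vectors standing in for real embedding models:

```python
import numpy as np

# Toy version of parallel data mining: embed audio clips and candidate
# subtitles in a shared multilingual vector space, then keep pairs whose
# vectors are close. Real systems use trained embedding models; random
# vectors stand in for them here.

def mine_pairs(audio_vecs, text_vecs, threshold=0.8):
    """Return (audio_idx, text_idx, similarity) for each clip whose best
    subtitle match clears the cosine-similarity threshold."""
    a = audio_vecs / np.linalg.norm(audio_vecs, axis=1, keepdims=True)
    t = text_vecs / np.linalg.norm(text_vecs, axis=1, keepdims=True)
    sims = a @ t.T                  # all pairwise cosine similarities
    pairs = []
    for i in range(sims.shape[0]):
        j = int(np.argmax(sims[i])) # best-matching subtitle for clip i
        if sims[i, j] >= threshold:
            pairs.append((i, j, float(sims[i, j])))
    return pairs

pairs = mine_pairs(np.random.randn(100, 512), np.random.randn(500, 512))
```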
SEAMLESSM4T consists of three different blocks, some handling text and speech input and others output. The translation part of the AI was pre-trained on a massive dataset containing 4.5 million hours of spoken audio in multiple languages. This initial step helped the AI “learn patterns in the data, making it easier to fine-tune the model for specific tasks” later on, wrote Alumäe. In other words, the AI learned to recognize general structures in speech regardless of language, establishing a baseline that made it easier to translate low-resource languages later.
The AI was then trained on the speech pairs and evaluated against other translation models.
Spoken Word
A key advantage of the AI is its ability to directly translate speech, without having to convert it into text first. To test this ability, the team hooked up an audio synthesizer to the AI to broadcast its output. Starting with any of the 101 languages it knew, the AI translated speech into 36 different tongues—including low-resource languages—with only a few seconds of delay.
The algorithm outperformed existing state-of-the-art systems, achieving 23 percent greater accuracy using a standardized test. It also better handled background noise and voices from different speakers, although—like humans—it struggled with heavily accented speech.
Lost in Translation
Language isn’t just words strung into sentences. It reflects cultural contexts and nuances. For example, translating a gender-neutral language into a gendered one could introduce biases. Does “I am a teacher” in English translate to the masculine “Soy profesor” or to the feminine “Soy profesora” in Spanish? What about translations for doctor, scientist, nanny, or president?
Mistranslations may also add “toxicity,” when the AI spews out offensive or harmful language that doesn’t reflect the original meaning—especially for words that don’t have a direct counterpart in the other language. While easy to laugh off as a comedy of errors in some cases, these mistakes are deadly serious when it comes to medical, immigration, or legal scenarios.
“These sorts of machine-induced error could potentially induce real harm, such as erroneously prescribing a drug, or accusing the wrong person in a trial,” wrote Allison Koenecke at Cornell University, who wasn’t involved in the study. The problem is likely to disproportionately affect people speaking low-resource languages or unusual dialects, due to a relative lack of training data.
To their credit, the Meta team analyzed their model for toxicity and fine-tuned it during multiple stages to lower the chances of gender bias and harmful language.
“This is a step in the right direction, and offers a baseline against which future models can be tested,” wrote Koenecke.
Meta is increasingly supporting open-source technology. Previously, the tech giant released PyTorch, a software library for AI training that has been used by companies including OpenAI and Tesla, and by researchers around the globe. SEAMLESSM4T will also be made public for others to build on its abilities.
The AI is just the latest machine translator that can handle speech-to-speech translation. Previously, Google showcased AudioPaLM, an algorithm that can turn 113 languages into English—but only English. SEAMLESSM4T broadens the scope. Although it only scratches the surface of the roughly 7,000 languages spoken, the AI inches closer to a universal translator—like the Babel fish in The Hitchhiker’s Guide to the Galaxy, which translates languages from species across the universe when popped into the ear.
“The authors’ methods for harnessing real-world data will forge a promising path towards speech technology that rivals the stuff of science fiction,” wrote Alumäe.
China Is About to Build the World’s Biggest Hydropower Dam—With Triple the Output of Three Gorges
Medog Hydropower Station, as it will be called, will blow other hydropower dams out of the water.
China’s electricity use over the last 30 years is a hockey-stick curve, climbing steeply as the country industrialized, built dozens of mega-cities, and became the world’s manufacturing center. Though China’s economy has slowed in recent years, electricity demand is still climbing. Given the country has pledged to reach carbon neutrality by 2060, it’s going to need much more renewable power than it currently has.
To help them achieve that goal, the government recently announced plans to build the biggest hydropower dam in the world.
Medog Hydropower Station, as it will be called, will blow other hydropower dams out of the water (pun intended), with estimated annual generation triple that of the world’s largest existing dam (which, perhaps unsurprisingly, is also in China). The 60-gigawatt project will be able to generate up to 300,000 gigawatt-hours (or 300 terawatt-hours) of electricity per year. That’s equivalent to Greece’s annual energy consumption.
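Those two figures are consistent with each other: a quick back-of-envelope check, using only the numbers quoted above, implies a capacity factor of roughly 57 percent, plausible for a large hydropower plant.

```python
# Back-of-envelope check on the reported figures (not official project data).
capacity_gw = 60
annual_twh = 300

max_twh = capacity_gw * 8760 / 1000    # 8,760 hours per year -> 525.6 TWh
capacity_factor = annual_twh / max_twh # ~0.57
print(f"Theoretical max: {max_twh:.0f} TWh/yr; implied capacity factor: {capacity_factor:.0%}")
```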
The dam will be built on a river in Tibet called the Yarlung Tsangpo, with construction carried out by the government-owned Power Construction Corporation of China. It will not only be one of China’s biggest infrastructure projects ever, it will be one of the most expensive infrastructure projects ever, with an estimated cost of a trillion yuan or $136 billion (yes, billion with a “b”).
China is already home to the world’s largest existing hydropower dam: Three Gorges Dam on the Yangtze River stands 594 feet tall (Arizona’s Hoover Dam is taller, but Three Gorges is wider) and has a generating capacity of 22.5 gigawatts. By comparison, the biggest hydropower dam in the US, the Grand Coulee in Washington state, has a generating capacity of 6.8 gigawatts. China is the world leader in hydropower deployment, accounting for almost a third of global hydropower capacity. Many of those dams are on the Yangtze (some of them built by robots!) and some are on the same river where the Medog project will be built.
The Yarlung Tsangpo river starts in western Tibet, flowing east and then south, where it merges with India’s Brahmaputra then flows south through Bangladesh and into the Bay of Bengal. It is the highest river in the world, and a 31-mile (50-kilometer) section in the South Tibet Valley drops by a sharp 6,561 feet (2,000 meters); there’s loads of untapped potential for all that moving water to turn some turbines on its way down.
But the project is not without its challenges, both engineering and political.
Environmental groups say the dam will disrupt ecosystems on the biodiverse Tibetan Plateau. Tibetan rights groups see the project as a prime example of China exploiting Tibet’s natural resources while harming local communities. The dam’s construction will require people to be relocated, though likely not as many as Three Gorges, which uprooted and moved 1.4 million people. The Medog dam will be bigger, but it’s in a more sparsely populated area.
India and Bangladesh have both expressed concerns about the dam, as it could alter the flow of the river downstream where it runs through these countries. There are also concerns about the area’s geological stability, as it sits at the convergence of the Indian and Eurasian continental plates and is considered tectonically active. An earthquake could destroy the dam and cause catastrophic flooding. In fact, a magnitude 6.8 earthquake killed 126 people and damaged 4 reservoirs just last week.
However, Medog won’t be a conventional dam in the form of one giant wall built to hold water behind it, like Three Gorges or the Hoover Dam. Instead, four 12.4-mile (20-kilometer) tunnels will be blasted and excavated through a mountain called Namcha Barwa to divert the river. The water flowing through these tunnels will turn turbines attached to generators before running back into the Yarlung Tsangpo.
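Hydropower output follows the textbook relation P = ηρgQh (efficiency times water density times gravity times flow times head). As a rough sketch, assuming a typical turbine efficiency, the quoted 60 gigawatts and the river's roughly 2,000-meter drop imply a flow on the order of a few thousand cubic meters per second:

```python
# Rough hydropower estimate: P = eta * rho * g * Q * h. The efficiency is
# an assumed typical value; the flow is what the quoted 60 GW and ~2,000 m
# drop would imply, not a project specification.
rho = 1000        # water density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
head_m = 2000     # usable drop, m
eta = 0.9         # assumed turbine/generator efficiency
target_w = 60e9   # 60 GW

flow_m3s = target_w / (eta * rho * g * head_m)
print(f"Flow needed for 60 GW: ~{flow_m3s:,.0f} m^3/s")  # ~3,400 m^3/s
```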
The Chinese government says the Medog project will help it achieve the country’s carbon neutrality goals. In 2023, coal was still China’s main source of electricity generation by a long shot, supplying 61 percent of the country’s electricity. Hydropower was a distant second at 13 percent, followed by wind, solar, nuclear, and gas, in that order.
Construction is slated to start in 2029, and if all goes as planned—which would be impressive for a project of this scale—it will take four years to complete, with the dam beginning commercial operation in 2033.
Here’s What It Will Take to Ignite Scalable Fusion Power
There’s a growing sense that developing practical fusion energy is no longer an if but a when.
The way scientists think about fusion changed forever in 2022, when what some called the experiment of the century demonstrated for the first time that fusion can be a viable source of clean energy.
The experiment, at Lawrence Livermore National Laboratory, demonstrated ignition: a fusion reaction generating more energy than was put in.
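In numbers, ignition means the gain factor Q, energy out divided by energy in, exceeds one. Using the widely reported figures from that December 2022 shot:

```python
# Ignition in one line: the gain factor Q is fusion energy out divided by
# driver energy in. Figures are the widely reported numbers from the
# December 2022 National Ignition Facility shot.
laser_energy_mj = 2.05   # laser energy delivered to the target
fusion_yield_mj = 3.15   # fusion energy released

q = fusion_yield_mj / laser_energy_mj
print(f"Q = {q:.2f} (Q > 1: more fusion energy out than laser energy in)")
```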
In addition, the past few years have been marked by a multibillion-dollar windfall of private investment in the field, principally in the United States.
But a whole host of engineering challenges must be addressed before fusion can be scaled up to become a safe, affordable source of virtually unlimited clean power. In other words, it’s engineering time.
As engineers who have been working on fundamental science and applied engineering in nuclear fusion for decades, we’ve seen much of the science and physics of fusion reach maturity in the past 10 years.
But to make fusion a feasible source of commercial power, engineers now have to tackle a host of practical challenges. Whether the United States steps up to this opportunity and emerges as the global leader in fusion energy will depend, in part, on how much the nation is willing to invest in solving these practical problems—particularly through public-private partnerships.
Building a Fusion Reactor
Fusion occurs when two types of hydrogen atoms, deuterium and tritium, collide in extreme conditions. Heated to 180 million degrees Fahrenheit (100 million degrees Celsius), 10 times hotter than the core of the Sun, the two atoms literally fuse into one. To make these reactions happen, fusion energy infrastructure will need to endure these extreme conditions.
There are two approaches to achieving fusion in the lab: inertial confinement fusion, which uses powerful lasers, and magnetic confinement fusion, which uses powerful magnets.
While the “experiment of the century” used inertial confinement fusion, magnetic confinement fusion has yet to demonstrate that it can break even in energy generation.
Several privately funded experiments aim to achieve this feat later this decade, and a large, internationally supported experiment in France, ITER, also hopes to break even by the late 2030s. Both are using magnetic confinement fusion.
Challenges Lying Ahead
Both approaches to fusion share a range of challenges that won’t be cheap to overcome. For example, researchers need to develop new materials that can withstand extreme temperatures and irradiation conditions.
Fusion reactor materials also become radioactive as they are bombarded with highly energetic particles. Researchers need to design new materials that can decay within a few years to levels of radioactivity that can be disposed of safely and more easily.
Producing enough fuel, and doing it sustainably, is also an important challenge. Deuterium is abundant and can be extracted from ordinary water. But ramping up the production of tritium, which is usually produced from lithium, will prove far more difficult. A single fusion reactor will need hundreds of grams to one kilogram (2.2 pounds) of tritium a day to operate.
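A back-of-envelope calculation shows why daily consumption lands in that range: each deuterium-tritium fusion consumes one tritium nucleus and releases about 17.6 MeV. The plant size below is an assumption for illustration:

```python
# Why tritium consumption lands in the hundreds-of-grams-per-day range:
# each D-T fusion consumes one tritium nucleus and releases ~17.6 MeV.
# The plant size is an assumption for illustration.
AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
TRITIUM_MOLAR_MASS_G = 3.016       # grams per mole

thermal_power_w = 3e9              # assumed ~3 GW-thermal plant
energy_per_reaction_j = 17.6 * MEV_TO_J

reactions_per_day = thermal_power_w * 86400 / energy_per_reaction_j
tritium_g_per_day = reactions_per_day / AVOGADRO * TRITIUM_MOLAR_MASS_G
print(f"~{tritium_g_per_day:.0f} g of tritium per day")  # roughly 460 g
```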
Right now, conventional nuclear reactors produce tritium as a byproduct of fission, but these cannot provide enough to sustain a fleet of fusion reactors.
So, engineers will need to develop the ability to produce tritium within the fusion device itself. This might entail surrounding the fusion reactor with lithium-containing material, which the reaction will convert into tritium.
To scale up inertial fusion, engineers will need to develop lasers capable of repeatedly hitting a fusion fuel target, made of frozen deuterium and tritium, several times per second or so. But no laser is powerful enough to do this at that rate—yet. Engineers will also need to develop control systems and algorithms that direct these lasers with extreme precision on the target.
A laser setup that Farhat Beg’s research group plans to use to repeatedly hit a fusion fuel target. The goal of the experiments is to better control the target’s placement and tracking. The lighting is red from colored gels used to take the picture. David Baillot/University of California San Diego
Additionally, engineers will need to scale up production of targets by orders of magnitude: from a few hundred handmade every year with a price tag of hundreds of thousands of dollars each to millions costing only a few dollars each.
For magnetic confinement, engineers and materials scientists will need to develop more effective methods to heat and control the plasma and more heat- and radiation-resistant materials for reactor walls. The technology used to heat and confine the plasma until the atoms fuse needs to operate reliably for years.
These are some of the big challenges. They are tough but not insurmountable.
Current Funding Landscape
Investments from private companies globally have increased—these will likely continue to be an important factor driving fusion research forward. Fusion companies have attracted over $7 billion in private investment in the past five years.
Several startups are developing different technologies and reactor designs with the aim of adding fusion to the power grid in coming decades. Most are based in the United States, with some in Europe and Asia.
While private sector investments have grown, the US government has played a key role in the development of fusion technology up to this point. We expect it to continue to do so in the future.
It was the US Department of Energy that invested about $3 billion to build the National Ignition Facility at the Lawrence Livermore National Laboratory in the mid-2000s, where the “experiment of the century” took place 12 years later.
In 2023, the Department of Energy announced a 4-year, $42 million program to develop fusion hubs for the technology. While this funding is important, it likely will not be enough to solve the most important challenges that remain for the United States to emerge as a global leader in practical fusion energy.
One way to build partnerships between the government and private companies in this space could be to create relationships similar to that between NASA and SpaceX. As one of NASA’s commercial partners, SpaceX receives both government and private funding to develop technology that NASA can use. It was the first private company to send astronauts to space and the International Space Station.
Along with many other researchers, we are cautiously optimistic. New experimental and theoretical results, new tools and private sector investment are all adding to our growing sense that developing practical fusion energy is no longer an if but a when.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
A ChatGPT Moment Is Coming for Robotics. AI World Models Could Help Make It Happen.
Robots need an internal representation of the world and its rules, much like our own.
If you’re not familiar with the concept of “world models” just yet, a storm of activity at the start of 2025 gives every indication it may soon become a well-known term.
Jensen Huang, CEO of Nvidia, used his keynote presentation at CES to announce a new platform, Cosmos, for what the company calls “world foundation models.” Cosmos is a generative AI tool that produces virtual-world-like videos. The next day, Google DeepMind revealed similar ambitions with a project led by a former OpenAI engineer. This all comes several months after an intriguing startup, World Labs, achieved unicorn status—a valuation of $1 billion or more—within only four months to do the same thing.
To understand what world models are, it’s worth pointing out that we’re at an inflection point in the way we build and deploy intelligent machines like drones, robots, and autonomous vehicles. Rather than explicitly programming behavior, engineers are turning to 3D computer simulation and AI to let the machines teach themselves. This means physically accurate virtual worlds are becoming an essential source of training data to teach machines to perceive, understand, and navigate three-dimensional space.
What large language models are to systems like ChatGPT, world models are to the virtual world simulators needed to train robots. In other words, world models are a type of generative AI tool capable of producing 3D environments and simulating virtual worlds. Just as ChatGPT is built with an intuitive chat interface, world-model interfaces might allow more people, even those without technical game-developer skillsets, to build 3D virtual worlds. They could also help robots better understand, plan, and navigate their surroundings.
To be clear, most early world models, including those announced by Nvidia, generate spatial training data in a video format. There are, however, already models capable of producing fully immersive scenes as well. One tool, made by a startup called Odyssey, uses Gaussian splatting to create scenes that can be loaded into 3D software tools like Unreal Engine and Blender. Another startup, Decart, demoed its world model as a playable version of a game similar to Minecraft. DeepMind has similarly gone the video game route.
All this reflects the potential for changes in the way computer graphics work at a foundational level. In 2023, Huang predicted that in the future, “every single pixel will be generated, not rendered but generated.” He’s recently taken a more nuanced view by saying that traditional rendering systems aren’t likely to fully disappear. It’s clear, however, that generative AI predicting which pixels to show may soon encroach on the work that game engines do today.
The implications for robotics are potentially huge.
Nvidia is now working hard to establish the branding label “physical AI” as a term for the intelligent systems that will power warehouse AMRs, inventory drones, humanoid robots, autonomous vehicles, farmer-less tractors, delivery robots, and more. To give these systems the ability to perform their work effectively in the real world, especially in environments with humans, they must train in physically accurate simulations. World models could potentially produce synthetic training scenarios of any variety imaginable.
This idea is behind the shift in the way companies articulate the path forward for AI, and World Labs is perhaps the best expression of this. Founded by Fei-Fei Li, known as the godmother of AI for her foundational work in computer vision, World Labs defines itself as a spatial intelligence company. In their view, to achieve true general intelligence, AIs will need an embodied ability to “reason about objects, places, and interactions in 3D space and time.” Like their competitors, they are seeking to build foundation models capable of moving AI into three-dimensional space.
In the future, these could evolve into an internal, humanlike representation of the world and its rules. This might allow AIs to predict how their actions will affect the environment around them and plan reasonable approaches to accomplish a task. For example, an AI may learn that if you squeeze an egg too hard it will crack. Yet context matters. If your goal is placing it in a carton, go easy, but if you’re preparing an omelet, squeeze away.
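In code terms, the core abstraction is simple even though building it is not: a model that predicts the next state of the world given the current state and a candidate action, which a planner can query before committing to anything. A conceptual sketch, not any vendor's API:

```python
# Conceptual sketch of the world-model idea, not any vendor's API: a learned
# model predicts the next state of the world from the current state and a
# candidate action, letting an agent rehearse outcomes before acting.

class WorldModel:
    def predict(self, state, action):
        """Return the predicted next state (a trained network in practice)."""
        raise NotImplementedError

def plan(model, state, candidate_actions, goal_score):
    """Pick the action whose imagined outcome best serves the goal, e.g.
    gripping an egg firmly enough to lift it but gently enough not to
    crack it when the goal is packing a carton."""
    return max(candidate_actions,
               key=lambda a: goal_score(model.predict(state, a)))
```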
While world models may be experiencing a bit of a moment, it’s early, and there are still significant limitations in the short term. Training and running world models requires massive amounts of computing power even compared to today’s AI. Additionally, models aren’t reliably consistent with the real world’s rules just yet, and like all generative AI, they will be shaped by the biases within their own training data.
As TechCrunch’s Kyle Wiggers writes, “A world model trained largely on videos of sunny weather in European cities might struggle to comprehend or depict Korean cities in snowy conditions.” For these reasons, traditional simulation tools like game and physics engines will still be used for quite some time to render training scenarios for robots. And Meta’s head of AI, Yann LeCun, who wrote deeply about the concept in 2022, still thinks advanced world models—like the ones in our heads—will take a while longer to develop.
Still, it’s an exciting moment for roboticists. Just as ChatGPT signaled an inflection point for AI entering mainstream awareness, robots, drones, and embodied AI systems may be nearing a similar breakout moment. To get there, physically accurate 3D environments will become the training ground for these systems to learn and mature.
Early world models may make it easier than ever for developers to generate the countless training scenarios needed to usher in an era of spatially intelligent machines.
This Week’s Awesome Tech Stories From Around the Web (Through January 11)
These were our favorite articles in science and tech this week.
Google Is Forming a New Team to Build AI That Can Simulate the Physical World Kyle Wiggers | TechCrunch
“‘We believe scaling [AI training] on video and multimodal data is on the critical path to artificial general intelligence,’ reads one of the job descriptions. Artificial general intelligence, or AGI, generally refers to AI that can accomplish any task a human can. ‘World models will power numerous domains, such as visual reasoning and simulation, planning for embodied agents, and real-time interactive entertainment.'”
Nvidia Announces $3,000 Personal AI Supercomputer Called Digits Kylie Robison | The Verge
“The desktop-sized system can handle AI models with up to 200 billion parameters. …For even more demanding applications, two Project Digits systems can be linked together to handle models with up to 405 billion parameters (Meta’s best model, Llama 3.1, has 405 billion parameters). The GB10 chip delivers up to 1 petaflop of AI performance—meaning it can perform 1 quadrillion AI calculations per second—at FP4 precision (which helps make the calculations faster by making approximations).”
AI Could Create 78 Million More Jobs Than It Eliminates by 2030—Report Benj Edwards | Ars Technica
“On Wednesday, the World Economic Forum (WEF) released its Future of Jobs Report 2025, with CNN immediately highlighting the finding that 40 percent of companies plan workforce reductions due to AI automation. But the report’s broader analysis paints a far more nuanced picture than CNN’s headline suggests: It finds that AI could create 170 million new jobs globally while eliminating 92 million positions, resulting in a net increase of 78 million jobs by 2030.”
This Robovac Has an Arm—and Legs, Too Jennifer Pattison Tuohy | The Verge
“Dreame says its arm can pick up sneakers as large as men’s size 42 (a size 9 in the US) and take them to a designated spot in your home. The concept could apply to small toys and other items, and you’ll be able to designate specific areas for the robot to take certain items, such as toys to the playroom and shoes to the front door.”
A Virtual Cell Is a ‘Holy Grail’ of Science. It’s Getting Closer. Matteo Wong | The Atlantic
“Scientists are now designing computer programs that may unlock the ability to simulate human cells, giving researchers the ability to predict the effect of a drug, mutation, virus, or any other change in the body, and in turn making physical experiments more targeted and likelier to succeed.”
Predicting the ‘Digital Superpowers’ We Could Have by 2030 Louis Rosenberg | Big Think
“Computer scientist Louis B. Rosenberg predicts that context-aware AI agents will bring ‘digital superpowers’ into our daily experiences by 2030. The convergence of AI and body-worn devices, like AI-powered glasses, will likely enable these new abilities. Rosenberg outlines his predictions for the future of technologies like AI, augmented reality, and conversational computing across three phases.”
The Ocean Teems With Networks of Interconnected Bacteria Veronique Greenwood | Quanta
“The Prochlorococcus [bacteria] population may be more connected than anyone could have imagined. They may be holding conversations across wide distances, not only filling the ocean with envelopes of information and nutrients, but also linking what we thought were their private, inner spaces with the interiors of other cells.”
These Newly Identified Cells Could Change the Face of Plastic Surgery Max G. Levy | Wired
“The cells appear to simultaneously provide structure (like cartilage) and natural squishiness (like fat). They appear in many mammals, including humans, and the unique structure they provide gives reconstructive surgeons a clearer understanding of what materials make up our faces. Plikus believes this new tissue discovery sets the stage for better cartilage transplants—and so better plastic surgery.”
Transforming the Moon Into Humanity’s First Space Hub Saurav Shroff | Wired
“This year will mark a turning point in humanity’s relationship with the moon, as we begin to lay the foundations for a permanent presence on its surface, paving the way for our natural satellite to become an industrial hub—one that will lead us to Mars and beyond.”
Blue Origin Is Ready to Challenge SpaceX With Its New Glenn Rocket
The company hopes to break SpaceX’s industry stranglehold with New Glenn.
Jeff Bezos’s rocket company Blue Origin hopes to become a major rival to SpaceX in the private space industry. But those ambitions are on hold after the company postponed the test launch of its new rocket earlier today.
Despite increasing investment in the private space industry, Elon Musk’s SpaceX has successfully converted its first-mover advantage into near total dominance of the market—accounting for 45 percent of global space launches in 2023. But Blue Origin is hoping to break that stranglehold with its heavy-lift New Glenn rocket, successor to the New Shepard suborbital launch vehicle that took Bezos to space in 2021.
The vehicle’s first test launch was due to lift off from Cape Canaveral Space Force Station in Florida at 1 a.m. Eastern Time (ET) this morning, but Blue Origin had to postpone the launch at the last minute due to rough weather at the landing zone in the Atlantic Ocean. It won’t be long until the company gets another crack at it though—Blue Origin announced on X that it may try again as early as this Sunday.
The rocket—named after the first American to orbit Earth, NASA astronaut John Glenn—is 320 feet tall and designed to carry 45 tons to low Earth orbit. That places it between SpaceX’s Falcon 9 and Falcon Heavy rockets in terms of payload capacity, at 22 and 64 tons respectively.
New Glenn features two stages. A booster provides most of the thrust to get the vehicle into the upper atmosphere and then detaches, allowing a smaller second stage to deliver the payload to orbit.
Just like SpaceX’s rockets, the first stage is designed to fly up to 25 times. After detaching from the second stage, it will return to Earth and attempt to land on a barge in the ocean. The company is planning a landing attempt on this initial test launch, which is why poor weather at sea prompted today’s cancellation.
Reusability has dramatically reduced SpaceX’s costs compared to competitors. Proving Blue Origin can reuse its rockets too will be crucial if it hopes to muscle in on a share of the launch market.
New Glenn won’t have a commercial payload for the test launch. Instead, it will carry a demonstrator designed to test key technologies for its future Blue Ring spacecraft, including a communications array, power systems, and flight computer.
Blue Ring is designed to carry multiple satellites into orbit and then maneuver to different orbits and locations to deploy them. Blue Origin hopes this will allow the company to provide much more flexible launch services than competitors.
Customers are already lining up.
Originally, the test launch was slated to carry a NASA mission to Mars, though this will now fly on a later New Glenn launch. The US Space Force has also selected the company, alongside SpaceX and United Launch Alliance, to compete for various missions over the next four years.
It is also likely to get a significant amount of business from Bezos’s other venture, Amazon, which is planning to deploy a constellation of internet satellites dubbed Project Kuiper to compete with SpaceX’s Starlink.
While much of this will depend on the success of the test launch, a positive result could herald a much more competitive era for the private launch industry. That’s likely to reduce barriers to space even further and help spur the burgeoning space economy.
The post Blue Origin Is Ready to Challenge SpaceX With Its New Glenn Rocket appeared first on SingularityHub.
CRISPR Baby 2.0? Controversial Simulation Touts Benefits of Gene Editing Embryos
Scientists are grappling with the implications of a CRISPR-baby world.
Bring up germline editing, and most scientists cringe. Editing reproductive cells or embryos, the idea behind the notorious CRISPR-baby scandal, tinkers with DNA far beyond just the patient: any changes, beneficial or harmful, pass down through generations.
Germline editing is banned in most countries. A Chinese court sentenced He Jiankui, the disgraced scientist behind the first gene-edited babies, to three years in jail. Now free again, He said in an interview with NPR last year that the CRISPRed twins, Lulu and Nana, are healthy and growing normally as toddlers, although he declined to answer more detailed questions about their wellbeing.
He’s rogue experiment sparked universal condemnation, but it also triggered heavy debate among scientists about the future of germline editing. In theory, if based on solid scientific and clinical foundations, such edits could reduce the chances of inherited diseases down an entire family line. But it’s a slippery slope. When does reducing the risk of inherited breast cancer, diabetes, or Alzheimer’s disease edge into “designer baby” territory?
As scientists grapple with the implications of a CRISPR-baby world, a new analysis took an unusual approach to analyzing the risks and benefits of germline editing. For one, it was completely inside a machine—no potential babies harmed. For another, the authors of the study focused especially on diseases with multiple potential genetic contributors—heart attack, stroke, cancer, depression, and diabetes—all of which haunt many families.
On average, adding only 10 protective gene variants slashed disease risk up to 60-fold. The mathematical model also predicted health benefits such as lowered levels of “bad” cholesterol in people prone to heart disease—an idea which is currently being studied in a gene editing clinical trial led by Verve Therapeutics.
Not everyone is on board.
An accompanying article put it plainly: “Embryo editing for disease is unsafe and unproven.” Penned by Shai Carmi at the Hebrew University of Jerusalem, Henry Greely at Stanford Law School, and Kevin Mitchell at Trinity College Dublin, the piece raised a multitude of questions on the ethical and societal impacts of tweaking our DNA with inheritable gene edits—even assuming all goes well technologically.
“Given the broad interest in this topic, the work will probably be discussed widely and might ultimately affect policy,” they wrote.
Tweaking Our Genetic Blueprint
Ever since mapping the human genome at the turn of the century, scientists have dreamed of correcting mutated genes to prevent disease. Two decades later, thanks to massive improvements in gene sequencing and synthesis technologies alongside the rise of the gene editing multitool CRISPR, gene therapy is no longer science fiction.
In late 2023, the UK approved the world’s first CRISPR-based gene therapy for two previously untreatable inherited blood disorders—sickle cell disease and beta thalassemia. The US soon followed. Meanwhile, a promising clinical trial that disables a gene in people susceptible to high cholesterol showed the approach slashes the dangerous buildup of artery-clogging clumps.
Here’s the crux: These gene therapies only alter somatic cells—that is, cells that make up the body. The changes only affect the treated person. Germline editing, on the other hand, opens an entirely new Pandora’s box. Editing reproductive cells or embryos doesn’t just alter the resulting baby’s genetics—the edits could also pass on to their offspring.
Most gene editing trials to date, including He’s, have focused on altering one gene. But most common diseases that plague us today—heart disease, stroke, cancer, diabetes—are polygenic, meaning they are influenced by hundreds to thousands of gene variants. Variants are versions of the same gene that differ slightly in their DNA sequence.
On its own, each variant has very little influence on health. But if negative variants build up across the genome, together they increase a person’s risk of these complex diseases. Doctors already use technologies that screen people’s genes to monitor for breast cancer, in which multiple gene mutations increase risk.
Reproductive scientists have also taken note. Research is underway to screen embryos conceived through in vitro fertilization, or IVF. Those with low polygenic risk are then selected for further development. The method has been available since 2019, explained Carmi, Greely, and Mitchell, “but the expected reductions in disease risk are modest, at best.”
A more radical idea is to change genes directly inside embryos, often by giving them a dose of protective gene variants. This was He’s original idea: using CRISPR to introduce a variant thought to protect against HIV, though with very little supporting evidence. But if successful—and that is a very large if—the treatment could protect generations of people from inherited diseases.
Broader Scope
Rather than just a single gene, the new study focused, in simulation, on diseases with multiple genetic contributions. Using previous data that associated genetic variants with diseases, the team analyzed a myriad of health troubles, including Alzheimer’s disease, schizophrenia, diabetes, heart disease, and depression. They asked: What if we edited “protective” genes into the embryos?
To be clear, the team only gauged outcomes based on mathematical simulations. However, multiple simulations for different diseases suggested that adding just a small number of protective genes—roughly 10—would boost the protective effects up to 60-fold.
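To see where numbers like these can come from, here is a minimal liability-threshold sketch of the kind of simulation involved; it is not the study’s actual model, and the prevalence, per-variant effect size, and edit count are invented for illustration.

```python
# Toy liability-threshold model: disease occurs when a person's genetic
# "liability" (modeled as a standard normal score) exceeds a threshold set
# by the baseline prevalence. All parameters here are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

PREVALENCE = 0.10    # assumed baseline lifetime risk of the disease
EDIT_EFFECT = 0.15   # assumed liability drop per protective variant, in SD units
N_EDITS = 10         # number of variants switched to their protective form
N_PEOPLE = 1_000_000

threshold = norm.ppf(1 - PREVALENCE)        # liability cutoff implied by prevalence
liability = rng.standard_normal(N_PEOPLE)   # one liability score per simulated person

baseline_risk = (liability > threshold).mean()
edited_risk = (liability - N_EDITS * EDIT_EFFECT > threshold).mean()

print(f"baseline risk:  {baseline_risk:.4f}")   # ~0.10
print(f"edited risk:    {edited_risk:.5f}")     # ~0.003
print(f"fold reduction: {baseline_risk / edited_risk:.0f}x")
```

Under these toy assumptions, ten modest shifts stack into a roughly 1.5-standard-deviation drop in liability, pushing most simulated individuals below the disease threshold. That is how small per-variant effects can compound into large fold reductions.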
They built the model based on a few assumptions.
First, they assumed perfect accuracy, in that the gene editor will modify only targeted DNA without harming other non-targeted genetic letters. That’s not yet possible: Although CRISPR-based therapeutics are more precise than their predecessors, they still sometimes snip and alter unexpected genetic sequences.
Another assumption is that we know which genes cause which diseases. Protective genetic variants are rare, and scientists mostly find them through large genome-wide screenings followed by rigorous testing in cells and animals. These results unveil helpful or harmful variants—for example, APOE4 as a risk factor for Alzheimer’s—as candidates for gene editing. But for complex diseases, thousands of gene variants are at play.
“Mapping causal [gene] variants has been a slow process so far,” wrote Carmi, Greely, and Mitchell.
The protective effects of gene variants may also not add up. If two “savior” genes added at the same time trigger the same pathway in cells, their combined benefits may hit a ceiling. It’s like working out and drinking too many protein shakes—there’s only so much the body can handle.
Also, such simulations note but sidestep societal consequences. A slight misstep in germline engineering could alter the DNA makeup of multiple generations. “In embryo editing, the stakes are extremely high,” wrote Carmi, Greely, and Mitchell. “Any errors will affect every cell and organ in the future child.”
Still, editing embryos at scale remains roughly 30 years away, according to the authors. Meanwhile, scientists are already tinkering with other reproductive genetic technologies, including sequencing the whole genomes of embryos, fetuses, and newborns to tackle potential health troubles.
For now, the authors wrote, “there is good reason to start exploring the challenges and opportunities” of editing multiple disease-related genes that can pass to future generations, “well before it becomes a practical possibility.”
The post CRISPR Baby 2.0? Controversial Simulation Touts Benefits of Gene Editing Embryos appeared first on SingularityHub.
Time Expansion Experiences: Why Time Slows Down in Altered States of Consciousness
When the boundary between us and the world softens, our sense of time expands.
We all know that time seems to pass at different speeds in different situations. For example, time appears to go slowly when we travel to unfamiliar places. A week in a foreign country seems much longer than a week at home.
Time also seems to pass slowly when we are bored or in pain. It seems to speed up when we’re in a state of absorption, such as when we play music or chess, or paint or dance. More generally, most people report time seems to speed up as they get older.
However, these variations in time perception are quite mild. Our experience of time can change in a much more radical way. In my new book, I describe what I call “time expansion experiences”—in which seconds can stretch out into minutes.
The reasons why time can speed up and slow down are a bit of a mystery. Some researchers, including me, think that mild variations in time perception are linked to information processing. As a general rule, the more information—such as perceptions, sensations, thoughts—that our minds process, the slower time seems to pass. Time passes slowly for children because they live in a world of newness.
New environments stretch time because of their unfamiliarity. Absorption contracts time because our attention becomes narrow, and our minds become quiet, with few thoughts passing through. In contrast, boredom stretches time because our unfocused minds fill with a massive amount of thought-chatter.
Time Expansion Experiences
Time expansion experiences (or Tees) can occur in an accident or emergency situation, such as a car crash, a fall, or an attack. In these experiences, time appears to expand by an order of magnitude or more. In my research, I have found that around 85 percent of people have had at least one Tee.
Around half of Tees occur in accident and emergency situations. In such situations, people are often surprised by the amount of time they have to think and act. In fact, many people are convinced that time expansion saved them from serious injury, or even saved their lives, because it allowed them to take preventative action that would normally be impossible.
For example, a woman who reported a Tee in which she avoided a metal barrier falling onto her car told me how a “slowing down of the moment” allowed her to “decide how to escape the falling metal on us.”
Tees are also common in sport. For example, a participant described a Tee that occurred while playing ice hockey, when “the play which seemed to last for about ten minutes all occurred in the space of about eight seconds.” Tees also occur in moments of stillness and presence, during meditation, or in natural surroundings.
However, some of the most extreme Tees are linked to psychedelic substances, such as LSD or ayahuasca. In my collection of Tees, around 10 percent are linked to psychedelics. A man told me that, during an LSD experience, he looked at the stopwatch on his phone and “the hundredths of a second were moving as slow as seconds normally move. It was really intense time dilation,” he said.
But why? One theory is that these experiences are linked to a release of noradrenaline (both a hormone and a neurotransmitter) in emergency situations, related to the “fight or flight” mechanism. However, this doesn’t fit with the calm wellbeing people usually report in Tees.
Even though their lives might be in danger, people usually feel strangely calm and relaxed. For example, a woman who had a Tee when she fell off a horse told me: “The whole experience seemed to last for minutes. I was ultra-calm, unconcerned that the horse still hadn’t recovered its balance and quite possibly could fall on top of me.” The noradrenaline theory also doesn’t fit with the fact that many Tees occur in peaceful situations, such as deep meditation or oneness with nature.
Another theory I have considered is that Tees are an evolutionary adaptation. Maybe our ancestors developed the ability to slow down time in emergency situations—such as encounters with deadly wild animals or natural disasters—to improve their chances of survival. However, the above argument applies here too: This doesn’t fit with the non-emergency situations when Tees occur.
A third theory is that Tees aren’t real experiences, but illusions of recollection. In emergency situations, so this theory goes, our awareness becomes acute, so that we take in more perceptions than normal. These perceptions become encoded in our memories, so that when we recall the emergency situation, the extra memories create the impression that time passed slowly.
However, in many Tees, people are certain that they had extra time to think and act. Time expansion allowed complex series of thoughts and actions that would have been impossible if time had been passing at a normal speed. In a recent (not yet published) poll of 280 Tees, I found that less than 3 percent of the participants believed that the experience was an illusion. Some 87 percent believed it was a real experience that happened in the present, while 10 percent were undecided.
Altered States of Consciousness
In my view, the key to understanding Tees lies in altered states of consciousness. The sudden shock of an accident may disrupt our normal psychological processes, causing an abrupt shift in consciousness. In sport, intense altered states occur due to what I call “super-absorption.”
Absorption normally makes time pass faster—as in flow, when we are absorbed in a task. But when absorption becomes especially intense, over a long period of sustained concentration, the opposite occurs, and time slows down radically.
Altered states of consciousness can also affect our sense of identity, and our normal sense of separation between us and the world. As the psychologist Marc Wittmann has pointed out, our sense of time is closely bound up with our sense of self.
We usually have a sense of living inside our mental space, with the world “out there” on the other side. One of the main features of intense altered states is that sense of separation fades. We no longer feel enclosed inside our minds, but feel connected to our surroundings.
This means the boundary between us and the world softens. And in the process, our sense of time expands. We slip outside our normal consciousness, and into a different time-world.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Time Expansion Experiences: Why Time Slows Down in Altered States of Consciousness appeared first on SingularityHub.
This Molecule Mimics the Antiaging Effects of Dieting—Without the Hunger
What if we could package up the longevity effects of dieting in a pill?
After several glorious days of large meals and Christmas cookies, I’m ready to eat healthier and slash some calories. It’s not to drop holiday pounds: Cutting calories is one of the most promising ways to stave off aging in multiple species. In some animals, it could prolong life.
But there’s a caveat: Clinical studies assessing health benefits of caloric restriction in humans are mixed. One reason? Dieting is hard. Dramatically reducing calories for years is nearly impossible for most people.
What if we could mimic the effects of dieting in a pill—reaping benefits without hunger pangs?
In two studies published in Nature, a team from China did just that. After screening over 1,000 molecules from the blood of mice, either on a diet or eating normally, they found a molecule that mimics the effects of caloric restriction.
Called lithocholic acid, or LCA, it’s made naturally by bacteria in our guts and is a component of bile, a yellow-green liquid that digests fats. Feeding LCA to worms and flies—both commonly used to study aging in the lab—extended their lifespans. Elderly mice given water spiked with the molecule also regained muscle strength and athleticism, running far longer when given the choice, and showed improved blood sugar management and overall metabolism.
To be clear, there isn’t any evidence that LCA has similar effects in humans. And in large doses, it could be toxic. But the authors “make a compelling case” that LCA “triggers many of the age-defying and potentially lifespan-extending health benefits of low-calorie diets,” wrote David Sinclair, a prominent longevity researcher at Harvard University, who was not involved in the study.
Long Diet, Long Life
Researchers have known for nearly a century that cutting calories by up to 50 percent, without sacrificing nutrients, can prolong lifespan in worms, flies, and some kinds of mice. Researchers saw similar results in one monkey study (but not another), with health benefits lasting into old age.
Saying no to the caloric equivalent of a muffin a day also seems to slow the pace of aging in humans. In the two-year clinical study CALERIE, one of the largest caloric restriction trials to date, young to middle-aged participants who shaved just a smidge off their usual diet were rewarded with myriad health improvements, such as lower levels of blood-vessel-clogging cholesterol and higher sensitivity to insulin.
But not everyone stuck to their diet. The original goal was to cut calories by 25 percent; most managed just half of that. This is partly why caloric restriction is so hard to study in people. Forget longevity: when hungry, it’s oh so easy to reach for that enticing chocolate bar.
Cravings aren’t the only downside. Eating less also causes muscles to waste away, increases the risk of infections, and makes it hard to regulate body temperature—all of which may sound sadly familiar to people living with an elderly grandparent and are antithetical to anti-aging.
There’s a straightforward solution that potentially bypasses the negatives of cutting calories. If we can find out how caloric restriction battles aging, it’s then possible to mimic the process with a pill, potentially without the side effects of dieting, hunger and all.
An Unexpected Source
Our bodies react to food in extremely complicated ways. A slew of proteins springs into action to aid digestion, while others rally to absorb nutrients and trigger downstream effects—for example, building muscle or amping up the immune system.
Digging through the maze of metabolism is a headache, but the authors had one lead: A protein dubbed AMPK. Like a conductor, AMPK sparks to life and organizes multiple processes in cells after caloric restriction.
They decided to hunt for molecules in serum—the liquid part of blood left after clotting—comparing mice on a diet with mice that ate to their heart’s content. After sorting through over 1,200 metabolic molecules, they found roughly 200 that increased in dieting mice. Each molecule was then tested on cells to see if it activated AMPK.
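The computational first pass of such a screen boils down to a fold-change filter. The sketch below is purely illustrative: the data are randomly generated stand-ins, and the 1.5x cutoff is an assumption, not the team’s actual threshold.

```python
# Toy version of the screen's filtering step: flag serum molecules elevated
# in dieting mice, then hand those candidates to cell-based AMPK assays.
import numpy as np

rng = np.random.default_rng(1)
N_MOLECULES = 1_200  # matches the scale of the screen described above

# Hypothetical mean serum level of each molecule in the two groups of mice.
fed = rng.lognormal(mean=0.0, sigma=1.0, size=N_MOLECULES)
dieting = fed * rng.lognormal(mean=0.05, sigma=0.3, size=N_MOLECULES)

fold_change = dieting / fed
candidates = np.flatnonzero(fold_change > 1.5)  # elevated under caloric restriction

print(f"{candidates.size} of {N_MOLECULES} molecules flagged for AMPK testing")
```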
“We took a brute-force approach,” study author Sheng-Cai Lin told Nature.
One molecule stood out: LCA, a component of bile. A liquid that helps digest fat and absorb vitamins, bile is synthesized in the liver and stored in the gallbladder. When released into the digestive tract, harmless gut bacteria transform it into LCA and other similar chemicals.
Called “bile acids,” these chemicals were previously shown to extend the lifespans of worms, yeast, and fruit flies. LCA is naturally produced by the body. So, what happens if we give a little extra?
Have Your Cake and Eat It Too
The team laced drinking water with a small dose of LCA and gave it to old mice for a month. These mice could eat anytime they wanted—no dieting required.
Compared to mice sipping normal water, those sipping on LCA had improved metabolism, insulin sensitivity, and control of blood sugar levels. They also ran further and for longer and could grab onto a bar with more strength, their muscles healing better from the wear-and-tear of physical workouts. Even their cells’ energy factories, or mitochondria, hummed along more efficiently and grew in numbers.
Surprisingly, LCA also boosted levels of GLP-1—the hormone that Ozempic and other blockbuster drugs are based on—without triggering any muscle loss. All these effects depended on AMPK: Mice without the protein didn’t reap any health benefits from LCA.
What about longevity? Feeding LCA to worms and fruit flies significantly extended their lifespan by up to 20 percent. Mice, in contrast, only had a very slight boost that wasn’t statistically significant—meaning the trend could be due to chance. However, just the health benefits are a good starting point to potentially stave off common age-related problems, wrote the team.
Another article from the same team dug deep into the weeds of how LCA works inside cells. Its main target came as another surprise—a protein previously known for its anti-aging effects in yeast, worms, and flies when given resveratrol, a chemical found in red wine. Evidence for resveratrol’s effect on extending longevity is mixed. But together, the new findings suggest the protein target could be a “hub” coordinating how diets impact healthy longevity.
“These data essentially prove that LCA works through the same activation mechanism as resveratrol…—a remarkable finding,” wrote Sinclair.
Lots of questions remain. Many people have had their gallbladders—organs that store bile—removed. So far, there isn’t any evidence the procedure increases the chance of age-related diseases. LCA at high doses is also toxic to the liver and, when combined with DNA-damaging chemicals, could boost the risk of cancer.
Diet is only part of the picture when it comes to longevity. A recent study in genetically diverse mice undergoing caloric restriction suggests that genes may play a larger role. LCA may need to be tested in mice of greater genetic variety, at different ages, and potentially over longer durations.
The team is beginning to give monkeys LCA while monitoring their health. If the results hold up, “these findings could be remembered as a milestone linking caloric intake to age-related diseases,” wrote Sinclair.
The post This Molecule Mimics the Antiaging Effects of Dieting—Without the Hunger appeared first on SingularityHub.
What Is an AI Agent? A Computer Scientist Explains the Next Wave of AI Tools
The AI agents big tech companies are now developing possess the ability to take actions on your behalf.
Interacting with AI chatbots like ChatGPT can be fun and sometimes useful, but the next level of everyday AI goes beyond answering questions: AI agents carry out tasks for you.
Major technology companies, including OpenAI, Microsoft, Google, and Salesforce, have recently released or announced plans to develop and release AI agents. They claim these innovations will bring newfound efficiency to technical and administrative processes underlying systems used in health care, robotics, gaming, and other businesses.
Simple AI agents can be taught to reply to standard questions sent over email. More advanced ones can book airline and hotel tickets for transcontinental business trips. Google recently demonstrated to reporters Project Mariner, a browser extension for Chrome that can reason about the text and images on your screen.
In the demonstration, the agent helped plan a meal by adding items to a shopping cart on a grocery chain’s website, even finding substitutes when certain ingredients were not available. A person still needs to be involved to finalize the purchase, but the agent can be instructed to take all of the necessary steps up to that point.
In a sense, you are an agent. You take actions in your world every day in response to things that you see, hear, and feel. But what exactly is an AI agent? As a computer scientist, I offer this definition: AI agents are technological tools that can learn a lot about a given environment, and then—with a few simple prompts from a human—work to solve problems or perform specific tasks in that environment.
Rules and Goals
A smart thermostat is an example of a very simple agent. Its ability to perceive its environment is limited to a thermometer that tells it the temperature. When the temperature in a room dips below a certain level, the smart thermostat responds by turning up the heat.
A familiar predecessor to today’s AI agents is the Roomba. The robot vacuum cleaner learns the shape of a carpeted living room, for instance, and how much dirt is on the carpet. Then it takes action based on that information. After a few minutes, the carpet is clean.
The smart thermostat is an example of what AI researchers call a simple reflex agent. It makes decisions, but those decisions are simple and based only on what the agent perceives in that moment. The robot vacuum is a goal-based agent with a singular goal: clean all of the floor that it can access. The decisions it makes—when to turn, when to raise or lower brushes, when to return to its charging base—are all in service of that goal.
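As a rough sketch (mine, not the article’s), here is what those two agent types look like in code. The thermostat setpoint, grid world, and action names are all invented for illustration.

```python
# Simple reflex agent: maps the current percept straight to an action,
# with no memory and no explicit goal.
class ReflexThermostat:
    def __init__(self, setpoint_c: float = 20.0):
        self.setpoint_c = setpoint_c  # hypothetical comfort threshold

    def act(self, temperature_c: float) -> str:
        return "heat_on" if temperature_c < self.setpoint_c else "heat_off"


# Goal-based agent: every action is chosen in service of a single goal
# state ("all known tiles clean"), not just the current percept.
class GoalBasedVacuum:
    def __init__(self, dirty_tiles):
        self.dirty = set(dirty_tiles)

    def act(self, position) -> str:
        if not self.dirty:
            return "return_to_dock"  # goal achieved
        if position in self.dirty:
            self.dirty.remove(position)
            return "clean"
        # Otherwise head toward the nearest remaining dirty tile.
        target = min(self.dirty,
                     key=lambda t: abs(t[0] - position[0]) + abs(t[1] - position[1]))
        return f"move_toward {target}"


print(ReflexThermostat().act(18.5))            # heat_on
print(GoalBasedVacuum({(2, 3)}).act((0, 0)))   # move_toward (2, 3)
```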
A goal-based agent is successful merely by achieving its goal through whatever means are required. Goals can be achieved in a variety of ways, however, some of which could be more or less desirable than others.
Many of today’s AI agents are utility based, meaning they give more consideration to how to achieve their goals. They weigh the risks and benefits of each possible approach before deciding how to proceed. They are also capable of considering goals that conflict with each other and deciding which one is more important to achieve. They go beyond goal-based agents by selecting actions that consider their users’ unique preferences.
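Continuing the sketch, a utility-based agent scores each candidate action against the user’s preferences instead of accepting the first one that reaches the goal. The weights and options below are made up to illustrate the idea, not drawn from any real system.

```python
# Utility-based agent: rank candidate actions by a preference-weighted score.
def utility(outcome: dict, weights: dict) -> float:
    """Weighted sum of an action's expected outcomes; higher is better."""
    return sum(weights[k] * outcome[k] for k in weights)

# Three hypothetical ways to book the same trip, each with expected outcomes.
options = {
    "cheapest_flight": {"cost": -200, "hours_saved": 0, "comfort": 1},
    "fastest_flight":  {"cost": -550, "hours_saved": 5, "comfort": 2},
    "balanced_option": {"cost": -320, "hours_saved": 3, "comfort": 2},
}

# One user's (invented) preferences: an hour saved is worth $60, and so on.
prefs = {"cost": 1.0, "hours_saved": 60.0, "comfort": 25.0}

best = max(options, key=lambda name: utility(options[name], prefs))
print(best)  # balanced_option, given these weights
```

A different user could weight cost more heavily and get a different answer from the same agent, which is what “considering their users’ unique preferences” amounts to in practice.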
Making Decisions, Taking Action
When technology companies refer to AI agents, they aren’t talking about chatbots or large language models like ChatGPT. Though chatbots that provide basic customer service on a website technically are AI agents, their perceptions and actions are limited. Chatbot agents can perceive the words that a user types, but the only action they can take is to reply with text that hopefully offers the user a correct or informative response.
The AI agents that AI companies refer to are significant advances over large language models like ChatGPT because they possess the ability to take actions on behalf of the people and companies who use them.
OpenAI says agents will soon become tools that people or businesses will leave running independently for days or weeks at a time, with no need to check on their progress or results. Researchers at OpenAI and Google DeepMind say agents are another step on the path to artificial general intelligence or “strong” AI—that is, AI that exceeds human capabilities in a wide variety of domains and tasks.
The AI systems that people use today are considered narrow AI or “weak” AI. A system might be skilled in one domain—chess, perhaps—but if thrown into a game of checkers, the same AI would have no idea how to function because its skills wouldn’t translate. An artificial general intelligence system would be better able to transfer its skills from one domain to another, even if it had never seen the new domain before.
Worth the Risks?
Are AI agents poised to revolutionize the way humans work? This will depend on whether technology companies can prove that agents are equipped not only to perform the tasks assigned to them, but also to work through new challenges and unexpected obstacles when they arise.
Uptake of AI agents will also depend on people’s willingness to give them access to potentially sensitive data: Depending on what your agent is meant to do, it might need access to your internet browser, your email, your calendar, and other apps or systems that are relevant for a given assignment. As these tools become more common, people will need to consider how much of their data they want to share with them.
A breach of an AI agent’s system could cause private information about your life and finances to fall into the wrong hands. Are you OK taking these risks if it means that agents can save you some work?
What happens when an AI agent makes a poor choice, or one its user would disagree with? Currently, developers of AI agents are keeping humans in the loop, making sure people have an opportunity to check an agent’s work before any final decisions are made. In the Project Mariner example, Google won’t let the agent carry out the final purchase or accept the site’s terms of service agreement. By keeping you in the loop, the systems give you the opportunity to back out of any choices made by the agent that you don’t approve of.
Like any other AI system, an AI agent is subject to biases. These biases can come from the data that the agent is initially trained on, the algorithm itself, or in how the output of the agent is used. Keeping humans in the loop is one method to reduce bias by ensuring that decisions are reviewed by people before being carried out.
The answers to these questions will likely determine how popular AI agents become, and depend on how much AI companies can improve their agents once people begin to use them.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Ant Rozetsky on Unsplash
The post What Is an AI Agent? A Computer Scientist Explains the Next Wave of AI Tools appeared first on SingularityHub.
Here’s How Nvidia’s Vice-Like Grip on AI Chips Could Slip
In the great AI gold rush of the past couple of years, Nvidia has dominated the market for shovels—namely the chips needed to train models. But a shift in tactics by many leading AI developers presents an opening for competitors.
Nvidia boss Jensen Huang’s call to lean into hardware for AI will go down as one of the best business decisions ever made. In just a decade, he’s converted a $10 billion business that primarily sold graphics cards to gamers into a $3 trillion behemoth that has the world’s most powerful tech CEOs literally begging for his product.
Since the discovery in 2012 that the company’s graphics processing units (GPUs) can accelerate AI training, Nvidia’s consistently dominated the market for AI-specific hardware. But competitors are nipping at its heels, both old foes, like AMD and Intel, as well as a clutch of well-financed chip startups. And a recent change in priorities at the biggest AI developers could shake up the industry.
In recent years, developers have focused on training ever-larger models, something at which Nvidia’s chips excel. But as gains from this approach dry up, companies are instead boosting the number of times they query a model to squeeze out more performance. This is an area where rivals could more easily compete.
“As AI shifts from training models to inference, more and more chip companies will gain an edge on Nvidia,” Thomas Hayes, chairman and managing member at Great Hill Capital, told Reuters following news that custom semiconductor provider Broadcom had hit a trillion-dollar valuation thanks to AI chips demand.
The shift is being driven by the cost and sheer difficulty of getting ahold of Nvidia’s most powerful chips, as well as a desire among AI industry leaders not to be entirely beholden to a single supplier for such a crucial ingredient.
The competition is coming from several quarters.
While Nvidia’s traditional rivals have been slow to get into the AI race, that’s changing. At the end of last year, AMD unveiled its MI300 chips, which the company’s CEO claimed could go toe-to-toe with Nvidia’s chips on training but provide a 1.4x boost on inference. Industry leaders including Meta, OpenAI, and Microsoft announced shortly afterwards they would use the chips for inference.
Intel has also committed significant resources to developing specialist AI hardware with its Gaudi line of chips, though orders haven’t lived up to expectations. But it’s not only other chipmakers trying to chip away at Nvidia’s dominance. Many of the company’s biggest customers in the AI industry are also actively developing their own custom AI hardware.
Google is the clear leader in this area, having developed the first generation of its tensor processing unit (TPU) as far back as 2015. The company initially developed the chips for internal use, but earlier this month it announced its cloud customers could now access the latest Trillium processors to train and serve their own models.
While OpenAI, Meta, and Microsoft all have AI chip projects underway, Amazon recently undertook a major effort to catch up in a race it’s often seen as lagging in. Last month, the company unveiled the second generation of its Trainium chips, which are four times faster than their predecessors and already being tested by Anthropic—the AI startup in which Amazon has invested $4 billion.
The company plans to offer data center customers access to the chip. Eiso Kant, chief technology officer of AI start-up Poolside, told the New York Times that Trainium 2 could boost performance per dollar by 40 percent compared to Nvidia chips.
Apple too is, allegedly, getting in on the game. According to a recent report by tech publication The Information, the company is developing an AI chip with long-time partner Broadcom.
In addition to big tech companies, there are a host of startups hoping to break Nvidia’s stranglehold on the market. And investors clearly think there’s an opening—they pumped $6 billion into AI semiconductor companies in 2023, according to data from PitchBook.
Companies like SambaNova and Groq are promising big speedups on AI inference jobs, while Cerebras Systems, with its dinner-plate-sized chips, is specifically targeting the biggest AI computing tasks.
However, software is a major barrier for those thinking of moving away from Nvidia’s chips. In 2006, the company created proprietary software called CUDA to help developers design programs that operate efficiently over many parallel processing cores—a key capability in AI.
“They made sure every computer science major coming out of university is trained up and knows how to program CUDA,” Matt Kimball, principal data-center analyst at Moor Insights & Strategy, told IEEE Spectrum. “They provide the tooling and the training, and they spend a lot of money on research.”
As a result, most AI researchers are comfortable in CUDA and reluctant to learn other companies’ software. To counter this, AMD, Intel, and Google joined the UXL Foundation, an industry group creating open-source alternatives to CUDA. Their efforts are still nascent, however.
Either way, Nvidia’s vice-like grip on the AI hardware industry does seem to be slipping. While it’s likely to remain the market leader for the foreseeable future, AI companies could have a lot more options in 2025 as they continue building out infrastructure.
Image Credit: visuals on Unsplash
The post Here’s How Nvidia’s Vice-Like Grip on AI Chips Could Slip appeared first on SingularityHub.
Four Clinical Trials We’re Watching That Could Change Medicine in 2025
Breakthroughs in medicine are exciting. They promise to alleviate human suffering, sometimes on global scales. But it takes years, even decades, for new drugs and therapies to go from research to your medicine cabinet. Along the way, most will stumble at some point. Clinical trials, which test therapies for safety and efficacy, are the final hurdle before approval.
Last year was packed with clinical trials news.
Blockbuster medications Ozempic and Wegovy still dominated headlines. Although known for their impact on weight loss, that’s not all they can do. In an analysis of over 1.6 million patients, the drugs seemed to block 10 obesity-associated cancers—including liver, kidney, pancreatic, and skin cancers. Another year-long trial found that a similar type of drug slowed cognitive decline in people with mild Alzheimer’s disease.
Meanwhile, scientists dug into how psychedelics and MDMA fight off depression and post-traumatic stress disorder. The year was a relative setback for the psychedelic renaissance, with the FDA rejecting MDMA therapy. But the field is still gaining recognition for its therapeutic potential.
Then there’s lenacapavir, a shot that protects people from HIV. Named “breakthrough of the year” by Science, the shot completely protected African teenage girls and women against HIV infection. Another trial supported the results, showing the drug protected people who have sex with men at nearly 100 percent efficacy. The success stems from a new understanding of the protein “capsule” guarding the virus’ genetic material. Many other viruses have a similar makeup—meaning the strategy could help researchers design new drugs to fight them off too.
So, what’s poised to take the leap from breakthrough to clinical approval in 2025? Here’s what to expect in the year ahead.
Base Editing Takes a Shot at Sickle Cell Disease
Base editing is a type of gene editor, like the genetic Swiss Army knife CRISPR-Cas9. Developed in 2016, base editing nicks a single DNA strand—rather than cutting both strands—making it far less likely to damage untargeted parts of the genome.
In previous years, base editing teamed up with CAR T therapy to destroy cancer cells. Led by Beam Therapeutics, a trial uses base editing to edit four genes in immune cells to amp up their cancer-hunting capabilities. Another study, BEACON, launched a few years back, is testing whether base-edited blood stem cells can tackle severe sickle cell disease, with initial results expected in February 2025.
In sickle cell disease, a genetic mutation transforms oxygen-carrying red blood cells from smooth, donut-like shapes into cells with sharp edges. The disease eventually destroys blood vessels and causes pain.
The BEACON trial base edits blood stem cells—dubbed HSCs (hematopoietic stem cells)—to correct the faulty genes. These cells eventually develop into all of our blood cells, including immune blood cells, and are critical for treating blood disease.
BEACON is open-label and single-arm, meaning all patients are getting treatment, and they know it. During the trial, HSCs are taken from each person and given a gene variant that boosts fetal hemoglobin—a protein that carries oxygen in red blood cells. Increasing levels of the protein should improve symptoms.
The trial has faced headwinds: A patient death was reported in early results. But the death was attributed to side effects of busulfan, a drug used to create space in the bone marrow—a standard procedure before transplant—rather than the base editing itself. If successful, the trial opens the door to treating other inherited diseases and pushes the technology closer to clinical use.
A Cancer Throwdown With Radioactive Drugs
Prostate cancer creeps up. However, with screening, it can be detected early. Cancer cells are dotted with a protein dubbed PSMA, which has been a target for therapies tackling the disease.
After over a decade of research, one candidate stood out: a PSMA-targeting drug built around the radioactive isotope lutetium-177, sold as Pluvicto. Once injected into the body, the drug grabs onto PSMA and emits damaging levels of radiation directly onto cancerous cells. First approved by the FDA in 2022 for prostate cancer that has already spread, it significantly improved survival and quality of life.
Pluvicto was initially okayed for treatment after chemotherapy. Now, an ongoing trial, PSMAddition, is asking if early treatment may yield better results.
The trial is testing earlier treatment in over 1,100 patients with minimally treated prostate cancer that has spread. Prostate cancer patients usually undergo hormone therapy to combat the disease, but in some people the treatment could also lower their responsiveness to Pluvicto.
Positive results would be a “potential game-changer for hundreds of thousands of patients with prostate cancer globally,” Oliver Sartor, who’s leading the trial, wrote in Nature Medicine.
A Component of Weed Tackles Psychosis
Despite being federally illegal, psychedelics are having a moment. So is CBD, a component of both weed—which isn’t traditionally considered psychedelic, but can have similar effects—and hemp. The FDA has already approved CBD for treating seizures in kids two years or older.
A new clinical study called Stratification and Treatment in Early Psychosis (STEP) hopes the molecule could also help people with psychosis from schizophrenia or other disorders. Mostly based in the UK, the study consists of three placebo-controlled, double-blind randomized trials—the gold standard in clinical trials.
As a Phase 3 study, the final step before requesting approval, each trial will gauge the effect of CBD with or without anti-psychotics in people with different stages of psychosis.
One trial is working with people who’ve had just one episode; another includes those who’ve experienced psychosis resistant to drug treatments. The last trial is preventative, studying patients who are at high risk of developing psychosis. With blood tests, questionnaires, and brain imaging, the team aims to gauge how well the participants respond to CBD.
It’ll be one of the largest studies of CBD to date, coordinating 30 sites in 11 countries and recruiting roughly 1,000 participants. The study will also look for biomarkers that could potentially predict treatment success. Researchers expect first results in 2025 and hope the trials will shed light on the potential therapeutic effects of CBD in severe psychiatric disorders.
Is Personalized Breast Cancer Screening Coming Your Way?
Breast cancer is far too common. For now, screening guidelines are one-size-fits-all. Generally, they’re based on age—beginning at roughly 50 years of age in most countries. But the tests have limited efficacy, reducing death risk by just 20 percent.
Part of the problem is that risk is individual. Each person’s risk depends on genetics, lifestyle, and other environmental factors. For people at potentially lower risk, mammograms may not be needed even if they meet the screening criteria. Meanwhile, for women at high risk, more intensive screening could better capture cancerous cells.
One trial, called My Personal Breast Cancer Screening, is looking to make breast cancer screening more personalized based on risk. The largest global study of its kind to date, the trial has launched in six countries with over 53,000 women. It’ll compare the health outcomes of women who either follow current breast cancer screening recommendations or receive a personalized screen.
To tailor screening, the team will use participants’ genetic data to assess risk, in combination with other factors such as family history and breast density. They’ll follow the women and note whether, or when, they develop breast cancer in the four years after the screen. If successful, the strategy could help those at high risk while lowering unnecessary harm and screening burden for people at low risk.
These are just glimpses of medical therapies in the works. There’ll be plenty more to cover in 2025. As usual, it was a great year geeking out with you. Thanks for reading—and looking forward to sharing what this year has to offer!
Image Credit: Elsa Olofsson on Unsplash
These Were Our Favorite Tech Stories From Around the Web in 2024
Every Saturday we post a selection of our favorite science and technology articles from the week. With 2024 nearing its end, we dug through all those posts again to surface 25 stories worth revisiting. Here you’ll find meditations on AI’s evolution, a ChatGPT moment in robotics, first contact with whale civilization, the inaugural jet suit grand prix, and five sci-fi visions from the year 2149—among many more worth your time.
Happy reading. See you in 2025!
The GPT Era Is Already Ending
Matteo Wong | The Atlantic
“[OpenAI] has been unusually direct that the o1 series is the future: Chen, who has since been promoted to senior vice president of research, told me that OpenAI is now focused on this ‘new paradigm,’ and Altman later wrote that the company is prioritizing ‘o1 and its successors.’ The company believes, or wants its users and investors to believe, that it has found some fresh magic. The GPT era is giving way to the reasoning era.”
Falcon 9 Reaches a Flight Rate 30 Times Higher Than Shuttle at 1/100th the Cost
Eric Berger | Ars Technica
“Space enthusiast Ryan Caton also crunched the numbers on the number of SpaceX launches this year compared to some of its competitors. So far this year, SpaceX has launched as many rockets as Roscosmos has since 2013, United Launch Alliance since 2010, and Arianespace since 2009. This year alone, the Falcon 9 has launched more times than the Ariane 4, Ariane 5, or Atlas V rockets each did during their entire careers.”
Google’s New Project Astra Could Be Generative AI’s Killer App
Will Douglas Heaven | MIT Technology Review
“Last week I was taken through an unmarked door on an upper floor of a building in London’s King’s Cross district into a room with strong secret-project vibes. The word ‘ASTRA’ was emblazoned in giant letters across one wall. …’The pitch to my mum is that we’re building an AI that has eyes, ears, and a voice. It can be anywhere with you, and it can help you with anything you’re doing,’ says Greg Wayne, co-lead of the Astra team. ‘It’s not there yet, but that’s the kind of vision.’”
Is Robotics About to Have Its Own ChatGPT Moment?
Melissa Heikkilä | MIT Technology Review
“For decades, roboticists have more or less focused on controlling robots’ ‘bodies’—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes.”
Cheap Solar Panels Are Changing the World
Zoë Schlanger | The Atlantic
“‘In a single year, in a single technology, we’re providing as much new electricity as the entirety of global growth the year before,’ Kingsmill Bond, a senior energy strategist at RMI, a clean-energy nonprofit, told me. A decade or two ago, analysts ‘did not imagine in their wildest dreams that solar by the middle of the 2020s would already be supplying all of the growth of global electricity demand,’ he said. Yet here we are.”
The Race for the Next Ozempic
Emily Mullin | Wired
“These drugs are now wildly popular, in shortage as a result, and hugely profitable for the companies making them. Their success has sparked a frenzy among pharmaceutical companies looking for the next blockbuster weight-loss drug. Researchers are now racing to develop new anti-obesity medications that are more effective, more convenient, or produce fewer side effects than the ones currently on the market.”
SpaceX Catches Returning Rocket in Mid-Air, Turning a Fanciful Idea Into Reality
Stephen Clark | Ars Technica
“This achievement is the first of its kind, and it’s crucial for SpaceX’s vision of rapidly reusing the Starship rocket, enabling human expeditions to the moon and Mars, routine access to space for mind-bogglingly massive payloads, and novel capabilities that no other company—or country—seems close to attaining.”
Mechazilla has caught the Super Heavy booster! pic.twitter.com/6R5YatSVJX
— SpaceX (@SpaceX) October 13, 2024
Silicon Valley’s Trillion-Dollar Leap of Faith
Matteo Wong | The Atlantic
“These companies have decided that the best way to make generative AI better is to build bigger AI models. And that is really, really expensive, requiring resources on the scale of moon missions and the interstate-highway system to fund the data centers and related infrastructure that generative AI depends on. …Now a number of voices in the finance world are beginning to ask whether all of this investment can pay off.”
Apple Vision Pro Review: Magic, Until It’s Not
Nilay Patel | The Verge
“The Vision Pro is an astounding product. It’s the sort of first-generation device only Apple can really make, from the incredible display and passthrough engineering, to the use of the whole ecosystem to make it so seamlessly useful, to even getting everyone to pretty much ignore the whole external battery situation. …But the shocking thing is that Apple may have inadvertently revealed that some of these core ideas are actually dead ends—that they can’t ever be executed well enough to become mainstream.”
Hands On With Orion, Meta’s First Pair of AR Glasses
Alex Heath | The Verge
“They look almost like a normal pair of glasses. That’s the first thing I notice as I walk into a conference room at Meta’s headquarters in Menlo Park, California. The black Clark Kent-esque frames sitting on the table in front of me look unassuming, but they represent CEO Mark Zuckerberg’s multibillion-dollar bet on the computers that come after smartphones. They’re called Orion, and they’re Meta’s first pair of augmented reality glasses.”
People Are Worried That AI Will Take Everyone’s Jobs. We’ve Been Here Before.
David Rotman | MIT Technology Review
“[Karl T. Compton’s 1938] essay concisely framed the debate over jobs and technical progress in a way that remains relevant, especially given today’s fears over the impact of artificial intelligence. …While today’s technologies certainly look very different from those of the 1930s, Compton’s article is a worthwhile reminder that worries over the future of jobs are not new and are best addressed by applying an understanding of economics, rather than conjuring up genies and monsters.”
How First Contact With Whale Civilization Could Unfold
Ross Andersen | The Atlantic
“One night last winter, over drinks in downtown Los Angeles, the biologist David Gruber told me that human beings might someday talk to sperm whales. …Gruber said that they hope to record billions of the animals’ clicking sounds with floating hydrophones, and then to decipher the sounds’ meaning using neural networks. I was immediately intrigued. For years, I had been toiling away on a book about the search for cosmic civilizations with whom we might communicate. This one was right here on Earth.”
8 Google Employees Invented Modern AI. Here’s the Inside Story
Steven Levy | Wired
“They met by chance, got hooked on an idea, and wrote the ‘Transformers’ paper—the most consequential tech breakthrough in recent history. …Approaching its seventh anniversary, the ‘Attention’ paper has attained legendary status. The authors started with a thriving and improving technology—a variety of AI called neural networks—and made it into something else: a digital system so powerful that its output can feel like the product of an alien intelligence.”
The Best Qubits for Quantum Computing Might Just Be Atoms
Philip Ball | Quanta
“In the search for the most scalable hardware to use for quantum computers, qubits made of individual atoms are having a breakout moment. …’We believe we can pack tens or even hundreds of thousands in a centimeter-scale device,’ [Mark Saffman, a physicist at the University of Wisconsin] said.”
Why AI Could Eat Quantum Computing’s Lunch
Edd Gent | MIT Technology Review
“The scale and complexity of quantum systems that can be simulated using AI is advancing rapidly, says Giuseppe Carleo, a professor of computational physics at the Swiss Federal Institute of Technology (EPFL). …Given the pace of recent advances, a growing number of researchers are now asking whether AI could solve a substantial chunk of the most interesting problems in chemistry and materials science before large-scale quantum computers become a reality.”
The Very First Jet Suit Grand Prix Takes Off in Dubai
Mike Hanlon | New Atlas
“A new sport kicked away this month when the first ever jet-suit race was held in Dubai. Each racer wore an array of seven 130-hp jet engines (two on each arm and three in the backpack for a total 1,050 hp) that are controlled by hand-throttles. After that, the pilots use the three thrust vectors to gain lift, move forward and try to stay above ground level while negotiating the course…faster than anyone else.”
What If Your AI Girlfriend Hated You?
Kate Knibbs | Wired
“It seems as though we’ve arrived at the moment in the AI hype cycle where no idea is too bonkers to launch. This week’s eyebrow-raising AI project is a new twist on the romantic chatbot—a mobile app called AngryGF, which offers its users the uniquely unpleasant experience of getting yelled at via messages from a fake person.”
Pocket-Sized AI Models Could Unlock a New Era of Computing
Will Knight | Wired
“When ChatGPT was released in November 2022, it could only be accessed through the cloud because the model behind it was downright enormous. Today I am running a similarly capable AI program on a Macbook Air, and it isn’t even warm. The shrinkage shows how rapidly researchers are refining AI models to make them leaner and more efficient. It also shows how going to ever larger scales isn’t the only way to make machines significantly smarter.”
On Self-Driving, Waymo Is Playing Chess While Tesla Plays Checkers
Timothy B. Lee | Ars Technica
“Many Tesla fans see [limitations like remote operators and avoiding freeways] as signs that Waymo is headed for a technological dead end. …But I predict that when Tesla begins its driverless transition, it will realize that safety requires a Waymo-style incremental rollout. So Tesla hasn’t found a different, better way to bring driverless technology to market. Waymo is just so far ahead that it’s dealing with challenges Tesla hasn’t even started thinking about. Waymo is playing chess while Tesla is still playing checkers.”
World’s ‘Largest Solar Precinct’ Approved by Australian Government
Keiran Smith | Associated Press
“Australian company Sun Cable plans to build a 12,400-hectare solar farm and transport electricity to the northern Australian city of Darwin via an 800-kilometer (497-mile) overhead transmission line, then on to large-scale industrial customers in Singapore through a 4,300-kilometer (2,672-mile) submarine cable. The Australia-Asia PowerLink project aims to deliver up to six gigawatts of green electricity each year.”
The Year Is 2149 and…
Sean Michaels | MIT Technology Review
“Novelist Sean Michaels envisions what life will look like 125 years from now: ‘The year is 2149 and people mostly live their lives “on rails.” That’s what they call it, “on rails,” which is to live according to the meticulous instructions of software. Software knows most things about you—what causes you anxiety, what raises your endorphin levels, everything you’ve ever searched for, everywhere you’ve been. Software sends messages on your behalf; it listens in on conversations.’”
Geothermal Energy Could Outperform Nuclear Power
Editorial Staff | The Economist
“How big could EGS [or enhanced geothermal systems] get? Big enough. Though DOE analyses suggest only around 40GW of conventional geothermal resource exist in America, new techniques expand the theoretical potential to a whopping 5,500GW across much of the country, with strong potential in over half of states. The heat is definitely on.”
Hidden ‘BopSpotter’ Microphone Is Constantly Surveilling San Francisco for Good Music
Jason Koebler | 404 Media
“Bop Spotter is a project by technologist Riley Walz in which he has hidden an Android phone in a box on a pole, rigged it to be solar powered, and has set it to record audio and periodically sends it to Shazam’s API to determine which songs people are playing in public. Walz describes it as ShotSpotter, but for music. ‘This is culture surveillance. No one notices, no one consents. But it’s not about catching criminals,’ Walz’s website reads. ‘It’s about catching vibes.’”
Two Students Created Face Recognition Glasses. It Wasn’t Hard.
Kashmir Hill | The New York Times
“Mr. Nguyen and a fellow Harvard student, Caine Ardayfio, had built glasses used for identifying strangers in real time, and had demonstrated them on two ‘real people’ at the subway station, including Mr. Hoda, whose name was incorrectly transcribed in the video captions as ‘Vishit.’ Mr. Nguyen and Mr. Ardayfio, who are both 21 and studying engineering, said in an interview that their system relied on widely available technologies.”
Electric Cars Could Last Much Longer Than You Think
James Morris | Wired
“Rather than having a shorter lifespan than internal combustion engines, EV batteries are lasting way longer than expected, surprising even the automakers themselves. …A 10-year-old EV could be almost as good as new, and a 20-year-old one still very usable. That could be yet another disruption to an automotive industry that relies on cars mostly heading to the junkyard after 15 years.”
Image Credit: SpaceX
‘Mirror Bacteria’ Could Wreak Havoc on Life and the Environment, Scientists Warn
Our hands are mirror images of each other. Unless you flip one hand around, they’ll never look the same.
Scientists call this chirality, and the mirror-like property is fundamental to all life on Earth. DNA and RNA—life’s genetic molecules, from viruses to humans—are made of components that exist in their right-handed form. Amino acids, the building blocks of proteins, are left-handed. Switching the handedness usually causes cells to break down.
That is, it did until synthetic biology came along.
For the past decade, scientists have been engineering “mirror life” by changing the chirality of life’s building blocks. Flipping evolution’s design, they’ve made right-handed amino acids and left-handed sugars to build genetic material.
So far, this flipped biological universe only exists in individual molecules. But they could one day—potentially, in just a decade—be used to build mirror bacteria.
This month, dozens of scientists penned a warning against making mirror bacteria in Science. Among them is J. Craig Venter, a long-time enthusiast for rewriting life’s code. If released, mirror bacteria could evade the immune system, potentially causing lethal infections in people, animals, and plants. With utterly “alien” genomes, they are also likely to dodge antibiotics and other treatments, allowing them to spread rapidly, like an uncontrollable invasive species.
“We are passionate defenders of allowing scientists to conduct their research with as few limits on intellectual curiosity as possible, and calling for a ban is not something that we do often or lightly,” wrote John Glass and Katarzyna Adamala at the J. Craig Venter Institute and the University of Minnesota, respectively, in an essay in The Scientist. Both contributed to the new paper.
“However, every rule has exceptions, and this is one of them,” they wrote.
Pushing Boundaries

Synthetic biology taps into the building blocks of life to expand upon nature’s design.
The field has made leaps over the past decade. Storing data in DNA is old news. Recent studies have created DNA-based computer circuits that can play chess and living bacteria that thrive even with most of their genes removed—running instructions written on a fully synthetic chromosome designed in a computer and synthesized in a lab.
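To make the data-in-DNA idea concrete, here is a minimal sketch of the classic two-bits-per-base encoding. Real storage systems layer on error correction and avoid long single-base runs; the mapping below is a toy illustration, not any particular lab’s scheme.

```python
# Toy illustration of DNA data storage: map every 2 bits to one nucleotide.
# Production schemes add error correction and avoid homopolymer runs;
# this sketch only shows the core idea of bytes <-> bases.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: bits for bits, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"hello"
strand = encode(message)
print(strand)                    # -> CGGACGCCCGTACGTACGTT
assert decode(strand) == message  # round-trips losslessly
```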
These advances could impact our daily lives.
Synthetic circuits that allow bacteria to pump out drugs, for example, could aid the fight against diabetes and malaria. Bacteria modified to chomp plastic or make strong but biodegradable materials, such as artificial silk, could protect the environment. Constructing synthetic components that mesh—or clash—with living organisms helps us better understand our own biology. As Richard Feynman famously said, “What I cannot create, I do not understand.”
While all this might already sound like science fiction, these studies still play out under evolution’s rules of chirality.
Mirror life breaks them.
There’s reason to explore these “flipped” molecules. For one, they could make longer-lasting medication. Proteins grab onto drugs to break them down, but a mirror molecule is like a right hand trying to fit a left handprint: designed to interact with a single protein target, it hypothetically wouldn’t engage with the cell’s other natural components, potentially making it more stable with fewer side effects.
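That lock-and-key intuition is simple enough to capture in a toy model. The sketch below is a deliberate oversimplification with hypothetical names: it treats chirality as a single “L”/“D” label and shows why an enzyme of one handedness leaves a mirror-image substrate untouched.

```python
# A deliberately simplified lock-and-key model of chirality-dependent binding.
# Each molecule carries a handedness ("L" or "D"); an enzyme only recognizes
# substrates of the handedness it evolved with, so mirror-image molecules
# pass through untouched. Conceptual toy code, not real biochemistry.

from dataclasses import dataclass

@dataclass(frozen=True)
class Molecule:
    name: str
    handedness: str  # "L" or "D"

    def mirror(self) -> "Molecule":
        flipped = "D" if self.handedness == "L" else "L"
        return Molecule(f"mirror-{self.name}", flipped)

def enzyme_degrades(enzyme_handedness: str, substrate: Molecule) -> bool:
    # Natural proteases are built from L-amino acids and only fit
    # substrates of matching chirality.
    return substrate.handedness == enzyme_handedness

drug = Molecule("peptide-drug", "L")
print(enzyme_degrades("L", drug))           # True: broken down normally
print(enzyme_degrades("L", drug.mirror()))  # False: the mirror drug persists
```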
Scientists have already made DNA and proteins from flipped building blocks. Some are now considering the next step: using these components to build a mirror life form. The technology doesn’t yet exist. But “with the right components and nutrients,” flipped DNA or proteins could “boot up” a bacterium completely alien to all life on Earth, wrote Glass and Adamala.
“While both of us were initially excited about the prospect of developing mirror life, when we learned that mirror bacteria might have an incredibly deadly impact if they were ever introduced into the wild, we changed our minds,” they wrote.
Why So Dangerous?

Glass and Adamala are among dozens of experts in the field who penned a warning against making mirror life forms.
To be clear, they are not advocating a ban on research into individual therapeutic mirror molecules, which could benefit medicine. Rather, their focus is on mirror bacteria with the potential to reproduce.
Once bacteria or other living creatures can be developed entirely from synthetic DNA, synthetic proteins, and synthetic lipids, a living mirror bacterium could be constructed in the same way, the authors wrote.
Although the technology remains at least a decade away, now is the time to consider risks.
In isolation—say, in a petri dish—mirror bacteria would likely live much like normal cells if given mirror-image nutrients and be as feeble or strong as their natural peers. The problem? Many “normal” bacteria can also survive on achiral nutrients, molecules with no handedness at all, suggesting that mirror bacteria could exploit the same food sources in the wild.
It could be a problem, then, if mirror bacteria break loose. Although lab breaches are rare, they do happen. Mirror bacteria’s flipped molecular makeup would make them immune to phages, the viruses that prey on bacteria in the wild, and effectively invisible to other natural predators.
This resilience could allow mirror bacteria to spread across ecosystems. Through evolution, they could also optimize their mirror genes to survive in what, from their perspective, is a “flipped” world.
“An unstoppable replicating mirror bacteria free in the environment could cause consequences that are disastrous,” wrote Glass and Adamala.
More worrisome is perhaps their danger to human health. Our immune systems rely on proteins that latch onto invading pathogens. But they can only recognize proteins with the same chirality. If we were infected with mirror bacteria—and that’s still a big if—they could evade our immune systems, essentially making us immunocompromised.
Early signs already show that mirror proteins can withstand being broken down in cells. Because they’re “hidden” from the immune system, these bacteria could enter the body—through the skin, gut, or lungs, like normal pathogens—without triggering antibodies or other immune defenses. Current antibiotics, engineered to tackle bacteria with natural chirality, likely wouldn’t work on mirrored ones. So, in theory, they could cause devastating outbreaks.
What to Do?

There are ways to reduce the risk while preserving research into beneficial “flipped” molecules. Scientists could intentionally hobble mirror bacteria with a synthetic kill switch that doesn’t harm other living creatures. But once such a strain is created, it would be relatively easy for others to strip these so-called “bio-contained” bacteria of their safeguards, the authors argued.
“We therefore recommend that research with the goal of creating mirror bacteria not be permitted, and that funders make clear that they will not support such work,” they wrote.
The recommendation doesn’t cover mirror DNA or proteins for therapeutic use. In addition to their Science paper, which summarizes the findings, the team released a full report and invited scientists, policymakers, industry, and the general public to join the debate.
“Once a mirror cell is made, it’s going to be incredibly difficult to try to put that genie back in the bottle,” said Michael Kay at the University of Utah, who contributed to the new article. “That’s a big motivation for why we’re thinking about prevention and regulation well ahead of any potential actual risk.”
Inside VillageOS: A ‘SimCity’-Like Tool for Regenerative Living Spaces
“We exist, and life exists on Earth, because of 12 to 14 inches of topsoil. When that goes away, we go away,” said James Ehrlich, director of compassionate sustainability at Stanford University. It was an offhand and exasperated tangent more than an hour into a lengthy interview for this article, and one of many sobering observations he made during the conversation.
It’s no secret that our relationship to the natural world is under tremendous strain today, and according to Ehrlich, many of the emergencies we face can be traced back to how we design and manage modern communities. Simply put, the way we build and operate our living spaces is destroying the environment and fueling a global mental health crisis of loneliness. Ehrlich’s work focuses on both. As the world continues to urbanize, this is a recipe for chaos, he says.
In our discussion, he pointed out that humanity has experienced a dramatic shift in the past 70 years. Before 1950, about 70 percent of the global population lived outside cities, many in small communities with varying degrees of self-sufficiency. Since then, rapid urbanization has transformed societies around the world, with over half of humanity now living in cities.
“My thesis has been and will continue to be that cities are brittle and that urban infrastructure is capable of experiencing, like a domino effect, a cascading set of failures,” he says.
Ehrlich emphasizes that we can’t just retrofit modern cities with more sustainable infrastructure; we must also develop new communities that more closely resemble the way our ancestors lived.
He doesn’t appear to be alone in that thinking.
VillageOS

In recent decades, there’s been rising interest in self-sufficient, environmentally sustainable, and socially and economically resilient communities, often called ecovillages. Today, there are more than 10,000 such communities in a variety of forms ranging from the secular to the spiritually oriented, each seeking to create thriving spaces aligned with their environment.
While designing and operating an ecovillage is complex, Ehrlich’s startup, ReGen Villages, a Stanford University spinoff, is developing software tools to make the task easier.
Their core planning tool, VillageOS, helps design residential infrastructure incorporating everything from clean water systems and housing to renewable energy, organic food production, and even robotic and autonomous systems.
It’s like an industrial SimCity for regenerative living spaces.
VillageOS courtesy of ReGen Villages Holding, BV

“Very often, architects, engineers, and planners prioritize maximizing building density or minimizing costs, which focuses on profit rather than environmental impact or sustainability. VillageOS takes a different approach by first asking, ‘What is the land telling us?’” says Ehrlich.
In that sense, VillageOS is a high-tech listening device that can assess the land’s natural capacity and resource flows. It works by pulling in geospatial maps and then aggregating everything from historical climate and weather data to soil maps and an array of local regulations, building codes, and permitting information. With that data, VillageOS uses generative design to blueprint community spaces that maximize any number of intended outcomes while minimizing their environmental footprint.
The goal is to design flourishing spaces that embed sustainable practices from the start.
A user who wants to enhance water resilience, for example, can set objectives like “maximizing rainwater storage” or “reducing runoff.” The software can then identify the best location to place a reservoir on a real parcel of land. It can do the same when designing housing and energy systems or choosing climate-resilient crops and where to grow them.
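ReGen Villages hasn’t published VillageOS internals, but the kind of objective-driven site search Ehrlich describes can be sketched in a few lines. The grid layers, weights, and names below are hypothetical stand-ins for real GIS data, meant only to show how weighted objectives might rank candidate reservoir sites.

```python
# Hypothetical sketch of objective-driven site selection, in the spirit of
# the generative design VillageOS describes: score every cell of a parcel
# grid against weighted objectives and pick the best candidate. The data
# layers here are random placeholders for real GIS inputs.

import numpy as np

rng = np.random.default_rng(42)
shape = (20, 20)                       # parcel discretized into 20x20 cells
elevation = rng.random(shape)          # lower = catches more runoff
soil_permeability = rng.random(shape)  # lower = holds water better
distance_to_homes = rng.random(shape)  # higher = less disruption

def score_reservoir_sites(weights=(0.5, 0.3, 0.2)):
    w_elev, w_soil, w_dist = weights
    # Maximize rainwater capture: favor low elevation, low permeability,
    # and distance from housing. All layers are normalized to [0, 1].
    return (w_elev * (1 - elevation)
            + w_soil * (1 - soil_permeability)
            + w_dist * distance_to_homes)

scores = score_reservoir_sites()
best = np.unravel_index(np.argmax(scores), shape)
print(f"Best reservoir cell: {best}, score={scores[best]:.2f}")
```

Changing the weights corresponds to the user moving the objective sliders: a drought-prone site might weight storage capacity more heavily, a flood-prone one runoff reduction.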
The software’s user interface is key to the project. Built in Unreal Engine, it pulls 3D map data from Cesium and makes use of photorealistic 3D renderings. Because it incorporates a game-like design with slider bars and controls, even non-technical users should be able to operate the tool as easily as they’d play a video game.
“I get joy imagining that we can sit down with elderly farmers who own a piece of land, and without instruction watch them type in their address to load their land, start to get the climate data, and then explore the possibilities for what might be possible for that piece of land,” says Ehrlich.
One benefit of Unreal Engine is its ability to generate realistic lighting conditions in real time, a relatively new breakthrough that’s already having a dramatic impact on industries like real estate and film production. That means VillageOS users can plan and visualize exactly how a space would look and feel during different seasons and times of day or how foliage might cast shadows and change lighting conditions. It may seem trivial, but architects spend significant amounts of time exploring how lighting changes our use of public space.
The photorealism allows planners to communicate exactly how a resident can expect to experience a living space. The system could even allow customization, like setting a user’s height to that of a child so designers can take a variety of stakeholders into account.
Another element of VillageOS is its potential to serve as a digital twin and tool for managing a community’s ongoing operations. Digital twins, as I’ve written elsewhere, use a virtual replica of a real system to interactively engage with, ask questions of, or make predictions about that system. This might prove useful when deploying and managing autonomous robotic systems designed with ecovillages in mind, like drones or robotic fruit pickers.
“We’re going to see all kinds of robotic interventions capable of redirecting water, redirecting solar panels, and doing different kinds of autonomous interventions for the benefit of improving and refining the living conditions of these communities,” Ehrlich says.
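To illustrate what “asking questions of” a digital twin means in practice, here is a bare-bones sketch: a virtual reservoir that mirrors telemetry from its physical counterpart and answers a simple what-if query. The class, methods, and numbers are hypothetical, not part of VillageOS.

```python
# Bare-bones digital twin sketch: a virtual reservoir mirrors sensor
# readings from its physical counterpart and can answer "what if" queries.
# All names and figures are illustrative assumptions.

class ReservoirTwin:
    def __init__(self, capacity_liters: float, level_liters: float = 0.0):
        self.capacity = capacity_liters
        self.level = level_liters

    def ingest_reading(self, level_liters: float) -> None:
        # Sync the twin with the latest telemetry from the real reservoir.
        self.level = min(level_liters, self.capacity)

    def days_until_full(self, inflow_liters_per_day: float) -> float:
        # Simple prediction: constant inflow, no evaporation or drawdown.
        if inflow_liters_per_day <= 0:
            return float("inf")
        return (self.capacity - self.level) / inflow_liters_per_day

twin = ReservoirTwin(capacity_liters=50_000)
twin.ingest_reading(level_liters=12_500)  # latest sensor value
print(twin.days_until_full(inflow_liters_per_day=1_500))  # -> 25.0
```

A production twin would replace the constant-inflow prediction with a calibrated simulation, but the pattern is the same: ingest real measurements, then interrogate the model instead of the physical system.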
The VillageOS software is still in development, but Ehrlich plans to release the climate data aggregator as an open-source API as soon as it’s ready. In the meantime, ReGen Villages is working with landowners and developers to train the VillageOS software.
System Reset

The scope of Ehrlich’s mission touches almost every aspect of how a society functions and addresses nearly all the UN Sustainable Development Goals. One of his work’s clearest ambitions is to curb the potential disruption from climate-related displacement and migration. Ehrlich sees a future where flourishing communities with socially affordable and climate-resilient housing developments reduce burdens on governments around the world and foster a mentally and physically healthy society—a big goal of his work at Stanford.
“Compassionate sustainability is about mindfulness—reducing the amygdala’s response related to cortisol release. How we live and where we live can actually improve health outcomes. There is a definite correlation between my work at Stanford and health outcomes based on this kind of design thinking.”
Living in small intentional communities might not only be an environmental solution to global challenges but could also make us happier and healthier. VillageOS might one day help us get to that better future.
Image Credit: ReGen Villages Holding, BV