This Week’s Awesome Tech Stories From Around the Web (Through March 29)
If Anthropic Succeeds, a Nation of Benevolent AI Geniuses Could Be Born
Steven Levy | Wired
“The vision [Dario Amodei] spins makes Shangri-La look like a slum. Not long from now, maybe even in 2026, Anthropic or someone else will reach AGI. Models will outsmart Nobel Prize winners. These models will control objects in the real world and may even design their own custom computers. Millions of copies of the models will work together—imagine an entire nation of geniuses in a data center!”
Tech
Move Over, OpenAI: How the Startup Behind Cursor Became the Hottest, Vibiest Thing in AI
Natasha Mascarenhas | The Information
“[Anysphere’s $200 million in annual revenue is] an astonishing amount considering that Cursor’s launch came in January 2023. It all adds up to a stunning reality: Anysphere is one of the fastest-growing startups ever, and what Truell and his co-founders have built is a bona fide AI rocket ship with a trajectory that stands out even among other AI startups hurtling into the stratosphere.”
Computing
How Extropic Plans to Unseat Nvidia
Will Knight | Wired
“Extropic has now shared more details of its probabilistic hardware with Wired, as well as results that show it is on track to build something that could indeed offer an alternative to conventional silicon in many datacenters. The company aims to deliver a chip that is three to four orders of magnitude more efficient than today’s hardware, a feat that would make a sizable dent in future emissions.”
Computing
Could Nvidia’s Revolutionary Optical Switch Transform AI Data Centers Forever?
Samuel K. Moore | IEEE Spectrum
“According to Nvidia, adopting the CPO switches in a new AI data center would lead to one-fourth the number of lasers, boost power efficiency for trafficking data 3.5-fold, improve the reliability of signals making it from one computer to another on time by 63-times, make networks 10-fold more resilient to disruptions, and allow customers to deploy new data center hardware 30 percent faster.”
Artificial Intelligence
A New, Challenging AGI Test Stumps Most AI Models
Maxwell Zeff | TechCrunch
“‘Reasoning’ AI models like OpenAI’s o1-pro and DeepSeek’s R1 score between 1% and 1.3% on ARC-AGI-2, according to the Arc Prize leaderboard. Powerful non-reasoning models, including GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Flash, score around 1%.”
Computing
The Quantum Apocalypse Is Coming. Be Very Afraid
Amit Katwala | Wired
“One day soon, at a research lab near Santa Barbara or Seattle or a secret facility in the Chinese mountains, it will begin: the sudden unlocking of the world’s secrets. Your secrets. Cybersecurity analysts call this Q-Day—the day someone builds a quantum computer that can crack the most widely used forms of encryption.”
Biotechnology
How a Bankruptcy Judge Can Stop a Genetic Privacy Disaster
Keith Porcaro | MIT Technology Review
“Any new owner of 23andMe’s data will want to find ways to make money from it. Lawmakers have a big opportunity to help keep it safe. …A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe’s new owners, regardless of who those new owners are. Anyone who didn’t respond or who opted out would have the data deleted.”
Space
Just One Exo-Earth Pixel Can Reveal Continents, Oceans, and More
Ethan Siegel | Big Think
“In the coming years and decades, several ambitious projects will reach completion, finally giving humanity the capability to image Earth-size planets at Earth-like distances around Sun-like stars. …Remarkably, even though these exo-Earths will appear as just one lonely pixel in our detectors, we can use that data to detect continents, oceans, icecaps, forests, deserts, and more.”
Future
Does All Intelligent Life Face a Great Filter?
Paul Sutter | Ars Technica
“Maybe we’re alone because essentially nobody ever makes it. Maybe there’s some unavoidable barrier between the origin of intelligent life and said life setting off to explore the galaxy. The position of this Great Filter, as [economist Robin Hanson] named it, is critically important as we contemplate the future of humanity.”
Science
Inside arXiv—the Most Transformative Platform in All of Science
Sheon Han | Wired
“arXiv’s unassuming facade belies the tectonic reconfiguration it set off in the scientific community. If arXiv were to stop functioning, scientists from every corner of the planet would suffer an immediate and profound disruption. ‘Everybody in math and physics uses it,’ Scott Aaronson, a computer scientist at the University of Texas at Austin, told me. ‘I scan it every night.’”
Space
Farewell to Gaia, the Milky Way’s Cartographer
Katrina Miller | The New York Times
“It is difficult to capture the breadth of development and discovery that the spinning observatory has enabled. But here are a few numbers: nearly two billion stars, millions of potential galaxies and some 150,000 asteroids. These observations have led to more than 13,000 studies, so far, by astronomers.”
What Anthropic Researchers Found After Reading Claude’s ‘Mind’ Surprised Them
As AI’s power grows, charting its inner world is becoming more crucial.
Despite popular analogies to thinking and reasoning, we have a very limited understanding of what goes on in an AI’s “mind.” New research from Anthropic helps pull the veil back a little further.
Tracing how large language models generate seemingly intelligent behavior could help us build even more powerful systems—but it could also be crucial for understanding how to control and direct those systems as they approach and even surpass our capabilities.
This is challenging. Older computer programs were hand-coded using logical rules. But neural networks learn skills on their own, and the way they represent what they’ve learned is notoriously difficult to parse, leading people to refer to the models as “black boxes.”
Progress is being made, though, and Anthropic is leading the charge.
Last year, the company showed that it could link activity within a large language model to both concrete and abstract concepts. In a pair of new papers, it’s demonstrated that it can now trace how the models link these concepts together to drive decision-making and has used this technique to analyze how the model behaves on certain key tasks.
“These findings aren’t just scientifically interesting—they represent significant progress towards our goal of understanding AI systems and making sure they’re reliable,” the researchers write in a blog post outlining the results.
The Anthropic team carried out their research on the company’s Claude 3.5 Haiku model, its smallest offering. In the first paper, they trained a “replacement model” that mimics the way Haiku works but replaces internal features with ones that are more easily interpretable.
The team then fed this replacement model various prompts and traced how it linked concepts into the “circuits” that determined the model’s response. To do this, they measured how various features in the model influenced each other as it worked through a problem. This allowed them to detect intermediate “thinking” steps and how the model combined concepts into a final output.
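Anthropic’s actual attribution method is considerably more sophisticated, but the basic move, measuring a feature’s influence by knocking it out and watching how downstream activity shifts, can be sketched in a few lines of toy code. Everything below (the tiny network, the weights, the numbers) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network standing in for a model's internals.
W1 = rng.normal(size=(8, 4))  # inputs -> interpretable "features"
W2 = rng.normal(size=(4, 3))  # features -> output logits

def forward(x, ablate_feature=None):
    features = np.maximum(W1.T @ x, 0.0)  # ReLU feature activations
    if ablate_feature is not None:
        features[ablate_feature] = 0.0    # knock out one feature
    return W2.T @ features

x = rng.normal(size=8)
baseline = forward(x)

# A feature's influence: how much the output shifts when it's removed.
for f in range(4):
    shift = np.linalg.norm(baseline - forward(x, ablate_feature=f))
    print(f"feature {f}: influence {shift:.3f}")
```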
In a second paper, the researchers used this approach to interrogate how the same model behaved when faced with a variety of tasks, including multi-step reasoning, producing poetry, carrying out medical diagnoses, and doing math. What they found was both surprising and illuminating.
Most large language models can reply in multiple languages, but the researchers wanted to know what language the model uses “in its head.” They discovered that, in fact, the model has language-independent features for various concepts and sometimes links these together first before selecting a language to use.
Another question the researchers wanted to probe was the common conception that large language models work by simply predicting what the next word in a sentence should be. However, when the team prompted their model to generate the next line in a poem, they found the model actually chose a rhyming word for the end of the line first and worked backwards from there. This suggests these models do conduct a kind of longer-term planning, the researchers say.
The team also investigated another little-understood behavior in large language models called “unfaithful reasoning.” There is evidence that when asked to explain how they reached a decision, models will sometimes provide plausible explanations that don’t match the steps they actually took.
To explore this, the researchers asked the model to add two numbers together and explain how it reached its conclusions. They found the model used an unusual approach of combining approximate values and then working out what number the result must end in to refine its answer.
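As a concrete illustration of that kind of strategy (the numbers here are our own; the paper’s examples differ): to add 36 and 59, one pathway produces a fuzzy estimate somewhere around 90, while another notes that 6 plus 9 must end in 5, and combining the two yields 95. A toy sketch:

```python
import random

def toy_add(a: int, b: int) -> int:
    """Illustrative 'rough magnitude + exact last digit' addition.

    A caricature of the strategy described above, not Anthropic's
    actual circuit.
    """
    rough = a + b + random.randint(-4, 4)  # fuzzy estimate of the total
    ones = (a % 10 + b % 10) % 10          # exact final digit of the sum
    low = rough - ((rough - ones) % 10)    # nearest candidate at or below
    high = low + 10                        # next candidate ending in `ones`
    return low if rough - low <= high - rough else high

print(toy_add(36, 59))  # -> 95
```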
However, when asked to explain how it came up with the result, it claimed to have used a completely different approach—the kind taught in math class and readily available online. The researchers say this suggests that the process by which a model learns to do things is separate from the process it uses to explain itself, which could have implications for efforts to ensure machines are trustworthy and behave the way we want them to.
The researchers qualify their work by pointing out that the method captures only a fuzzy and incomplete picture of what’s going on under the hood, and that tracing the circuit for a single prompt can take hours of human effort. But these kinds of capabilities will become increasingly important as systems like Claude become integrated into all walks of life.
Scientists Just Transplanted a Pig Liver Into a Person for the First Time
The liver performed basic functions but isn’t yet a full replacement.
Our liver has admirable regenerative properties. But it takes a beating every day. Eventually, its tissues scar, and if the organ fails, a liver transplant is the only solution.
Donor livers are hard to come by, however. This week, a Chinese team turned to another source—pig livers—and published the first results showing how one functions inside a human recipient. The liver in the study was heavily gene-edited to remove genes that trigger immune rejection and to add genes that make it appear more human to the body.
Just two hours after transplant, the pig liver began producing bile, a type of digestive fluid that breaks down fat. The organ remained functional until the end of the experiment 10 days later, without marked signs of rejection or inflammation.
“This is the first time we tried to unravel whether the pig liver could work well in the human body,” said study author Lin Wang at Xijing Hospital in China in a press briefing. The pig liver is meant to be a stop-gap measure rather than a full replacement. It could temporarily keep patients alive until a human donor organ becomes available or the patient’s own liver recovers.
“The study represents a milestone in the history of liver xenotransplantation,” said Iván Fernández Vega at the University of Oviedo in Spain, who was not involved in the study. “I found the work very relevant, but we have to be cautious.”
Crossing Species
There’s a severe lack of donated organs. As of March 2025, over 104,600 people are on a transplant waitlist, which could take months, if not years. Some don’t survive the wait.
Xenotransplantation, or the transplantation of organs from one animal into another, offers another solution. For the past decade, scientists have been eyeing other species as sources of functional organs that could replace broken human body parts. Bama miniature pigs are especially promising because their internal organs are similar in size and function to ours.
But there are caveats. Pig organs are dotted with sugars that spur our immune systems into action. Immune cells attack the foreign organ, damaging its function or triggering rejection.
There’s also the risk posed by porcine endogenous retroviruses, or PERVs. These tricky viruses are embedded in the genomes of all pigs. Although they don’t seem to harm pigs, they can infect some human cells and potentially lead to disease.
Xenotransplant efforts over the past decade have tried gene editing pig organs to rid them of PERVs. Other edits inhibit genes responsible for immune rejection and make the organs appear more human to the body.
There have been successes. Genetically engineered pig hearts transplanted into baboons with heart failure allowed them to thrive for over six months. Pig kidney grafts with 69 genetic edits retained function after transplantation in monkeys.
And although highly experimental, xenotransplantation has already been used in humans. In 2021, a team performed the first transplant of a genetically modified pig kidney into a brain-dead person. The kidney was attached to blood vessels in the upper leg outside the belly and covered with a protective shield.
Since then, surgeons have transplanted hearts, kidneys, and a thymus directly inside the bodies of living volunteers, with mixed results. One pig heart recipient soon passed away after the xenotransplant. Another fared better with a pig kidney: The 53-year-old grandma returned home this February after receiving the organ late last year.
Her “recovery from a long history of kidney failure and dialysis treatment has been nothing short of remarkable,” said study lead Robert Montgomery at NYU Langone Transplant Institute at the time.
Liver xenotransplants, however, pose additional problems.
The organ “is so complicated,” said Wang. As the ultimate multitasker, it metabolizes drugs and other chemicals, makes bile and other digestive juices, cleans out old blood cells, and produces proteins for blood clotting. Each of these functions is orchestrated by a symphony of molecules that could differ between pigs and humans. A mismatch could result in a pig liver that can’t work in the human body or one that triggers dangerous immune responses.
In 2023, a team from the University of Pennsylvania took a stab at the problem. They connected a genetically engineered pig liver to the bloodstream of a brain-dead person with the organ outside the body. The donor liver, engineered by the biotechnology company eGenesis to reduce the chance of immune rejection, remained healthy for at least 72 hours.
Plus One
The new study aimed to show that a pig liver transplant could last longer and perform its usual tasks. The team sourced the liver from Clonorgan Biotechnology, based in Chengdu, China.
The donor organ was from a seven-month-old Bama miniature pig and had six gene edits. The majority of the edits were designed to prevent hyperacute rejection, where the immune system launches a full onslaught against the transplant within minutes.
The recipient was a brain-dead, middle-aged man who still had a working liver. Rather than trying to replace his liver, the team wanted to find out whether a pig liver could survive and function inside a human body while performing its normal roles.
Surgeons hooked the gene-edited pig liver to the donor’s blood supply and monitored it for 10 days—the amount of time the recipient’s family approved for the experiment. Within hours, the organ began synthesizing and pumping out bile at a gradually increasing volume. The liver also made albumin, a protein crucial for maintaining fluids and transporting molecules.
Blood from the recipient flowed smoothly throughout the liver, which likely prevented blood clots often associated with liver transplants. Thanks to immunosuppressant drugs, the patient’s immune system stayed relatively quiet and didn’t attack the pig organ.
“This is the world’s first [published] case of a transplant of a genetically modified pig liver into a brain-dead human,” said Rafael Matesanz, creator and founder of the National Transplant Organization in Spain, who was not involved in the work.
Many questions remain. The liver has multiple functions, but the study only tested bile and albumin production. Could the pig liver also filter toxins from the blood or break down medications? Also, the study only observed one person for a relatively short time. The results might not hold for other demographics, and the transplant could falter down the road.
And because the volunteer still had a functional liver, “we cannot extrapolate the extent to which this xenograft would have supported a patient in liver failure,” said Peter Friend at the University of Oxford, who was not involved in the study.
Even so, a temporary bridge transplant—where a pig liver would support bodily functions short-term while the recipient waits for a permanent transplant—could save lives.
The same team recently completed a full pig-to-human liver transplant, swapping out the liver of a brain-dead human with one from a genetically modified pig. They plan to release the data in a future publication. “Whether it could replace the original human liver in the future [is unknown],” said Wang at the press briefing. “It is our dream to make this achievement.”
Technology Has Shaped Human Knowledge for Centuries. Generative AI Is Set to Transform It Yet Again.
We stand on the brink of the next knowledge revolution.
Where would we be without knowledge? Everything from the building of spaceships to the development of new therapies has come about through the creation, sharing, and validation of knowledge. It is arguably our most valuable human commodity.
From clay tablets to electronic tablets, technology has played an influential role in shaping human knowledge. Today, we stand on the brink of the next knowledge revolution, one as big as, if not bigger than, the invention of the printing press or the dawn of the digital age.
Generative artificial intelligence is a revolutionary new technology able to collect and summarize knowledge from across the internet at the click of a button. Its impact is already being felt from the classroom to the boardroom, the laboratory to the rainforest.
Looking back to look forward, what do we expect generative AI to do to our knowledge practices? And can we foresee how this may change human knowledge, for better or worse?
The Power of the Printing Press
While printing technology had a huge immediate impact, we are still coming to grips with the full scale of its effects on society. This impact was largely due to its ability to spread knowledge to millions of people.
Of course, human knowledge existed before the printing press. Non-written forms of knowledge date back tens of thousands of years, and researchers are today demonstrating the advanced skills associated with verbal knowledge.
In turn, scribal culture played an integral role in ancient civilizations. Preserving legal codes, religious doctrines, and literary texts, scribes were powerful figures who produced handwritten works for kings and nobles.
But it was the printing press—specifically the process of using movable type, which allowed for much cheaper and less labor-intensive book production—that democratized knowledge. The technology was invented in Germany around 1440 by the goldsmith Johannes Gutenberg. Often described as one-to-many communication, printing provided affordable information to entire populations.
This exponential increase in knowledge dissemination has been associated with huge societal shifts, from the European Renaissance to the rise of the middle classes.
The Revolutionary Potential of the Digital Age
The invention of the computer—and more importantly the connecting of multiple computers across the globe via the internet—saw the arrival of another knowledge revolution.
Often described as many-to-many communication, the internet provided a means for people to communicate, share ideas, and learn.
In the internet’s early days, USENET bulletin boards were digital chatrooms that allowed for unmediated crowd-sourced information exchange.
As internet users increased, the need for content regulation and moderation also grew. However, the internet’s role as the world’s largest open-access library has remained.
The Promise of Generative AI
Generative AI refers to deep-learning models capable of generating human-like outputs, including text, images, video, and audio. Examples include ChatGPT, Dall-E, and DeepSeek.
Today, this new technology promises to function as our personal librarian, reducing our need to search for a book, let alone open its cover. Visiting physical libraries for information has been unnecessary for a while, but generative AI means we no longer need to even scroll through lists of electronic sources.
Trained on hundreds of billions of human words, AI can condense and synthesize vast amounts of information across a variety of authors, subjects, or time periods. A user can pose any question to their AI assistant, and for the most part, will receive a competent answer. Generative AI can sometimes, however, “hallucinate,” meaning it will deliver unreliable or false information, instead of admitting it doesn’t know the answer.
Generative AI can also personalize its outputs, providing renditions in whatever language and tone required. Marketed as the ultimate democratizer of knowledge, the adaptation of information to suit a person’s interests, pace, abilities, and style is extraordinary.
But, as an increasingly prevalent arbiter of our information needs, AI marks a new phase in the history of the relationship between knowledge and technology.
It challenges the very concept of human knowledge: its authorship, ownership, and veracity. It also risks forfeiting the one-to-many revolution that was the printing press and the many-to-many potential that is the internet. In so doing, is generative AI inadvertently reducing the voices of many to the banality of one?
Using Generative AI Wisely
Most knowledge is born of debate, contention, and challenge. It relies on diligence, reflexivity, and application. The question of whether generative AI promotes these qualities is an open one, and evidence is so far mixed.
Some studies show it improves creative thinking, but others do not. Yet others show that while it might be helping individuals, it is ultimately diminishing our collective potential. Most educators are concerned it will dampen critical thinking.
More generally, research on “digital amnesia” tells us that we store less information in our heads today than we did previously due to our growing reliance on digital devices. And, relatedly, people and organizations are now increasingly dependent on digital technology.
History offers some inspiration here: nearly 2,500 years ago, the Greek philosopher Socrates said that true wisdom is knowing when we know nothing.
If generative AI risks making us information rich but thinking poor (or individually knowledgeable but collectively ignorant), these words might be the one piece of knowledge we need right now.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
These Tiny Liquid Robots Merge and Split Like ‘Terminator’
Made of Teflon and water, the robots could one day shuttle drugs around the body.
Our cells are like the ultimate soft robots. Made mostly of a liquid interior wrapped inside a fatty shell, they split, stretch, roam, and squeeze into every nook and cranny of the body.
Actual robots, not so much. Even soft robots made of flexible materials struggle to deform beyond the physical limits of their building blocks.
This month, a team from Korea introduced liquid robots inspired by biological cells. About the size of a grain of rice, each robot is made of water coated with Teflon particles. The gummy-candy-like blobs are controlled using sound waves and can slip through grated fences, chomp up debris, and skim across solid and liquid surfaces.
They can also function as tiny chemical reactors. In a test, the team directed two robots, each loaded with a different chemical, to jump off a ledge and merge together without breaking, allowing the chemicals to react inside their Teflon shells.
Because the robots are biocompatible, they could one day shuttle drugs to hard-to-reach areas of the body—potentially loading up on chemotherapies to kill tumors, for example. Versions with other molecular tools embedded inside could also help diagnose diseases.
“It is challenging to emulate biological forms and functions with artificial machines,” wrote the team. “[But] a promising avenue to tackle this problem is harnessing the supreme deformability of liquids while providing stable yet flexible shells around them.”
From T-1000 to Liquid Marbles
Those who have seen Terminator 2: Judgment Day will remember the film’s formidable robot antagonist. Made of liquid metal, the T-1000 deforms, liquefies, and reconstructs itself on demand, instantly healing damage to its body.
Scientists have long sought to capture this versatility in machines (without the killer robot angle, of course). Previous studies have used a variety of liquid metals that change their shape when subjected to electromagnetic fields. These unconventional robots—smaller than a fingertip—can split, merge, and transport cargoes on demand. But their high metal content makes them incompatible with most chemical reactions and biology, limiting their practical use.
Another way to build liquid robots is to encapsulate water or other liquids in an armor-like barrier. It’s a bit like making gummy candy with a squishy but supportive outer casing and a gushy core. In practice, researchers dust a hydrophobic powder onto a liquid drop, and the mixture shrinks into a bead-like shape thanks to a physical phenomenon called capillary interaction.
These forces partly stem from the surface tension between a solid and liquid, like when you barely overfill a glass and the water forms a round top. Adding hydrophobic powder to small amounts of liquid stabilizes these forces, pushing water molecules into tiny beads that almost behave like solids.
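A useful rule of thumb here is the capillary length, the scale below which surface tension beats gravity. Using standard values for water (not figures from the paper), it comes out to a few millimeters, which is why rice-grain-sized drops can hold bead-like shapes:

```latex
\lambda_c = \sqrt{\frac{\gamma}{\rho g}}
          = \sqrt{\frac{0.072~\mathrm{N/m}}{(1000~\mathrm{kg/m^3})(9.8~\mathrm{m/s^2})}}
          \approx 2.7~\mathrm{mm}
```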
Appropriately dubbed liquid marbles, these non-stick water drops can roll across surfaces. Researchers can control their movement using gravity and electrical and magnetic fields, allowing them to float and climb across terrain. Some versions can even shuttle ingredients from one place and release their cargo in another.
But classic liquid marbles have a weakness. Small fluctuations in temperature or force, such as squeezing or dropping, cause them to leak or collapse entirely. So the authors developed a stronger shell to make their marbles more durable.
Ice, Ice, Baby
First, the team searched for the best ratio of Teflon dust to water. They found that more dust on the surface led to stronger, more durable shells.
Next, they worked out how to manufacture droplets with higher dust content. Traditional methods use spherical drops, which don’t have a lot of surface area compared to their volume. Cubes are a better starting point because they have more area. So, the team froze water in custom ice trays and coated the cubes with industrial-grade Teflon powder.
This method has another perk. Ice has more volume than water. As the cubes melt, their volume shrinks, squeezing the Teflon particles together on the surface of the droplets, limiting their movement, and forming much stronger armor for each liquid robot.
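The shrinkage is modest but real. Ice is about 8 percent less dense than liquid water (roughly 0.917 versus 1.000 g/cm³, standard values rather than the study’s measurements), so a melting cube compacts by roughly that fraction:

```latex
\frac{\Delta V}{V_{\mathrm{ice}}} = 1 - \frac{\rho_{\mathrm{ice}}}{\rho_{\mathrm{water}}}
                                  = 1 - \frac{0.917}{1.000} \approx 8\%
```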
On the Move
The team pitted these enhanced liquid robots against traditional liquid marbles in a kind of playground with paper-covered foam structures and pools of water.
Both kinds of droplets could deform, such as briefly opening to expose their watery interiors. But thanks to their harder shells, the Teflon bots were better able to keep their liquid cores intact and survive falls without bursting. The liquid marbles, on the other hand, stuck to surfaces and eventually collapsed.
The team used sound waves to steer the robots around for more difficult tasks. In one task, they piloted the bots across an array of 3D-printed pillars. Upon meeting a pair of pillars, the robots split open, oozed through, and then merged back into their original forms on the other side. In another test, the researchers zapped adjacent bots with sound waves, deforming them into a bridge-like shape. Once touching, the two bots merged into a single, larger blob.
Thanks to their water-repelling nature, the robots could skim over both water and land—sometimes both. Older liquid marbles easily burst when shifting between the two terrains.
Liquid Bot Mission
To fully test the robots, the team designed a mission where two robots worked together. One bot picked up a chemical “toxin” locked behind bars. It then had to find its partner with the “antidote” in a pool of water, merge with the other bot to neutralize the toxin, and dump the final chemical into a safe container.
The team steered the first bot through its prison bars to engulf the toxin and carry it back out. Meanwhile, its partner skimmed across the pool to devour the antidote. The bots dropped from a height multiple times their size to their rendezvous, where they merged toxin and antidote, opened the outer shell, and dumped out the neutralized chemical.
Don’t worry, we’re still a ways from building T-1000s. The liquid robots are tiny and controlled manually. But the team is working to add smart materials for autonomous operation. And though they used water and Teflon here, the same process could be used in the future to mix other ingredients into a variety of liquid robots with different capabilities.
This Week’s Awesome Tech Stories From Around the Web (Through March 22)
Inside Google’s Two-Year Frenzy to Catch Up With OpenAI
Paresh Dave and Arielle Pardes | Wired
“Wired spoke with more than 50 current and former employees—including engineers, marketers, legal and safety experts, and a dozen top executives—to trace the most frenzied and culture-reshaping period in the company’s history. …This is the story, being told with detailed recollections from several executives for the first time, of those turbulent two years and the trade-offs required along the way.”
Robotics
Watch the Atlas Robot Bust a Move in Boston Dynamics’ Latest Video
Anna Washenko | Engadget
“In the [new clip], [Boston Dynamics’] Atlas robot demonstrates several types of full-body movement, starting with a walk and advancing to a cartwheel and even a spot of break dancing. The different actions were developed using reinforcement learning that used motion capture and animation as source materials.”
Computing
Not Everyone Is Convinced by Microsoft’s Topological Qubits
Dina Genkina | IEEE Spectrum
“The Microsoft team has not yet reached the milestone where the scientific community would agree that they’ve created a single topological qubit. ‘They have a concept chip which has eight lithographically fabricated qubits,’ Eggleston says. ‘But they’re not functional qubits, that’s the fine print. It’s their concept of what they’re moving towards.'”
Future
In Las Vegas, a Former SpaceX Engineer Is Pulling CO2 From the Air to Make Concrete
Adele Peters | Fast Company
“In an industrial park in North Las Vegas, near an Amazon warehouse and a waste storage facility, a new carbon removal plant is beginning to pull CO2 from the air and store it permanently. Called Project Juniper, it’s the first ‘integrated’ plant of its kind in the US, meaning that it handles both carbon capture and storage in one place.”
Future
Judge Disses Star Trek Icon Data’s Poetry While Ruling AI Can’t Author Works
Ashley Belanger | Ars Technica
“Data ‘might be worse than ChatGPT at writing poetry,’ but his ‘intelligence is comparable to that of a human being,’ Millett wrote. If AI ever reached Data levels of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now. ‘There will be time enough for Congress and the Copyright Office to tackle those issues when they arise,’ Millett wrote.”
Science
Is Dark Energy Getting Weaker? New Evidence Strengthens the Case.
Charlie Wood | Quanta
“Last year, an enormous map of the cosmos hinted that the engine driving cosmic expansion might be sputtering. …[This week], the scientists [reported] that they have analyzed more than twice as much data as before and that it points more strongly to the same conclusion: Dark energy is losing steam.”
Robotics
1X Will Test Humanoid Robots in ‘a Few Hundred’ Homes in 2025
Maxwell Zeff | TechCrunch
“These in-home tests will allow 1X to collect data on how Neo Gamma operates in the home. Early adopters will help create a large, valuable dataset that 1X can use to train in-house AI models and upgrade Neo Gamma’s capabilities.”
Space
See the First Ever Footage of Sunset on the Moon Captured by Blue Ghost
Georgina Torbet | Digital Trends
“With the Blue Ghost lunar mission coming to an end this week, the spacecraft has gifted scientists and the public with an incredible send-off. The moon lander captured the first ever HD imagery of a sunset as seen from the moon, and the images have been stitched together into a video.”
Tech
The Unbelievable Scale of AI’s Pirated-Books Problem
Alex Reisner | The Atlantic
“LibGen and other such pirated libraries make information more accessible, allowing people to read original work without paying for it. Yet generative-AI companies such as Meta have gone a step further: Their goal is to absorb the work into profitable technology products that compete with the originals. Will these be better for society than the human dialogue they are already starting to replace?”
Space
Webb Telescope Captures First Direct Evidence of Carbon Dioxide on an Exoplanet
Isaac Schultz | Gizmodo
“The images feature HR 8799, a multiplanet system 130 light-years from Earth. The discovery not only reveals a chemical compound essential on Earth for processes including photosynthesis and the carbon cycle, but also indicates that gas giant planets elsewhere in the galaxy formed in a similar way to our local giants, Jupiter and Saturn.”
Computing
Top Developers Want Nvidia Blackwell Chips. Everyone Else, Not So Much
Anissa Gardizy | The Information
“Jensen Huang turned Nvidia into the third most valuable company in the world by designing chips that were way ahead of their time. But Huang’s remarks on Tuesday suggest he’s pulling far ahead of some customers, and the growing gap between what he’s selling and what they’re buying could spell trouble.”
What Range Anxiety? These Chinese Electric Cars Charge in Just Five Minutes
BYD says its new chargers deliver 249 miles of range in the time you’d spend gassing up at the pump.
A major barrier to widespread adoption of electric vehicles is the time it takes to recharge them. This week, Chinese electric vehicle maker BYD unveiled a charger almost as fast as filling up at the gas pump.
The distance electric vehicles can travel on a single charge has climbed dramatically in recent years, but on average, they still only manage about half of what’s possible on a full tank of gas. This limited range is made worse by the fact that public chargers are far less ubiquitous than gas stations and take much longer to charge up a vehicle.
These issues explain why “range anxiety” is one of the most frequently cited barriers to the technology’s adoption. Though companies have developed fast chargers that can deliver 200 miles worth of juice in about 15 minutes, range is still a significant sticking point for many consumers.
Chinese electric vehicle maker BYD may have gone a long way toward easing those concerns with a newly unveiled ultra-fast charger that can deliver 249 miles worth of electricity in 5 minutes. The company also announced plans to install more than 4,000 of these chargers across China.
“In order to completely solve our users’ charging anxiety, we have been pursuing a goal to make the charging time of electric vehicles as short as the refueling time of petrol vehicles,” BYD founder Wang Chuanfu said at a launch event in Shenzhen, according to The Verge.
The breakthrough wasn’t down to a new charger alone. BYD’s so-called “Super e-Platform” combines batteries that can charge at 10 times their capacity per hour with internally developed high-voltage silicon carbide power chips that enable the chargers to deliver 1,000 kilowatts of power, according to Reuters.
A number of Chinese automakers can provide similar range on a 10-minute charge, but BYD is the first to offer timescales comparable to filling up at the gas pump. By comparison, Tesla’s existing superchargers only manage 250 kilowatts, and a new version due to be launched later this year will top out at 500 kilowatts.
“Tesla has definitely moved from leader to laggard in EV battery and charging technology at this point,” Matt Teske, founder and CEO of electric vehicle charging startup Chargeway, told Axios.
The new charger’s performance is thanks in part to its ability to handle up to 1,000 volts, while Tesla’s chargers only manage 400 volts. But these ultra-high voltages could pose problems for grid capacity if widely rolled out, analysts told Reuters.
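The headline figures are easy to sanity-check with P = VI and a typical EV efficiency. The 4-miles-per-kWh figure below is our assumption, not a BYD specification:

```python
# Rough sanity check of the claimed numbers (assumed efficiency, not BYD's spec).
power_kw = 1000  # claimed peak charger output
voltage = 1000   # platform voltage
current_amps = power_kw * 1000 / voltage  # P = V * I -> ~1,000 A through the cable

miles = 249
minutes = 5
miles_per_kwh = 4.0  # assumed typical EV efficiency
energy_kwh = miles / miles_per_kwh           # ~62 kWh delivered
avg_power_kw = energy_kwh / (minutes / 60)   # ~750 kW average draw

print(f"{current_amps:.0f} A, {energy_kwh:.1f} kWh, {avg_power_kw:.0f} kW average")
```

An average draw near 750 kilowatts is consistent with a 1,000-kilowatt peak, but sustaining roughly a thousand amps is exactly what makes cabling, cooling, and grid hookups the hard part.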
It’s also worth noting that the range measurements are based on Chinese standards, which are more generous than those used by the US Environmental Protection Agency, according to Ars Technica.
Either way, US drivers won’t likely experience such lightning fast charging any time soon. The new charger will only be available for owners of two new BYD vehicles—the Han L sedan and Tang L SUV—and Chinese-made electric vehicles are essentially banned in the US, following new rules introduced by the Biden administration earlier this year.
So, while range anxiety will likely remain a sticking point for many car buyers in the near future, BYD has thrown down the gauntlet to others in the industry. It probably won’t be long before recharging your electric car is as quick and convenient as filling up at the pump.
Brain Scans of Infants Reveal the Moment We Start Making Memories
Kids form fleeting memories at around 12 months, even as their brains are rapidly rewiring themselves.
A giggling toddler in a pink dress and matching headphones lies down on her back in front of a gigantic whirling machine. A pillowy headrest cushions her head. She seems unfazed as she’s slowly shuttled into the claustrophobic brain scanner. Once settled, a projection showing kaleidoscope-like animations holds her attention as the magnetic resonance imaging (MRI) machine scans her brain.
The girl is part of a new study seeking to answer a century-old mystery: Why can’t most of us remember the first three years of our lives? Sigmund Freud dubbed this phenomenon “infantile amnesia.” Studying it could provide insight into how the brain develops during our early years. And if we can form memories at a young age, are they fleeting, or are they still buried somewhere in the adult brain?
It seems like a simple question, but an answer has eluded scientists.
Though infants and toddlers aren’t yet able to give detailed verbal feedback, studying their behavior has begun to shed light on whether and when they remember people, things, or places. Still, the approach can’t peek in on what’s happening in the brain in those early years. MRI can.
A team from Columbia and Yale University scanned the brains of 26 infants and toddlers aged 4 to 25 months as they completed a memory task. They found that at roughly a year old, a part of the brain crucial to memory formation spun into action and began generating neural signals related to things the kids remembered from the tests.
Called the hippocampus, this sea-horse-shaped structure deep inside the brain is crucial to the encoding of our life stories—who, when, where, what. Adults with a damaged hippocampus suffer memory problems. But because wiring inside the hippocampus is still developing during our earliest years, scientists believe it may be too immature to form memories.
“It’s not that we don’t have any memories from that period [infancy],” said study author Nicholas Turk-Browne in a press briefing. “In fact, early life is when we learn our language. It’s when we learn how to walk…learn the names of objects and form social relationships.”
“What happens during that period when we learn so much, but remember so little?” he added.
Stages of Memory
Memory seems all-or-none: You either remember something, or you don’t.
It’s not that simple. Decades of research have identified the hippocampus as the main orchestrator of episodic memories. These allow you to remember an acquaintance at a party, where you parked your car, or what you had for dinner three nights ago.
Each everyday experience is encoded in neural connections in the hippocampus. Groups of neurons called engrams capture different memories and keep them separate, so that they don’t bleed into each other.
Once encoded, the brain etches important memories into long-term storage during sleep. Studies of slumbering rodents and humans after learning a new task found that the hippocampus replayed brain activity at higher speed during the night, correlating with better performance on a trained memory task the next day.
The last step is retrieval. This is when the brain fishes out stored memories and delivers them to our conscious brain—and so, we “remember.”
Failure of any of these steps causes amnesia. So, which steps are responsible for the erosion of baby memories?
Bundles of Joy
Brain scans from 26 infants now offer some intriguing clues.
The team behind the new study scanned the children’s brains with functional MRI (fMRI) as they looked at a screen in the scanner and took a memory test. fMRI captures blood-oxygen-level-dependent (BOLD) signals as a proxy for local neural activity—higher levels mean more brain activity.
The head needs to stay very still throughout the scans to avoid blurring. That’s not easily accomplished with babies and toddlers. Previous studies circumvented the problem by imaging their brains during sleep, but the results couldn’t capture memory processes.
To keep the infants happy, engaged, and safe, parents brought favorite blankets and pacifiers, and younger infants were wrapped inside a comfortable vacuum pillow to reduce movement. A video system projected images onto the ceiling of the scanner within their line of sight.
As the kids looked at a bright kaleidoscope-like video, images of faces, scenes, and objects would flash for a few seconds. These included toys and landscapes, such as an alpine cabin with mountains in the background. Previous studies found infants like to stare at objects or images they’ve seen before rather than at new ones, suggesting they remember previous encounters.
Throughout the sessions, the team projected a previously seen picture alongside a new one and monitored the infants’ eye movements with a video camera.
“The ingenuity of their experimental approach should not be understated,” wrote Adam Ramsaran and Paul Frankland at the Hospital for Sick Children in Toronto, Canada, who were not involved in the study.
BOLD Findings
The kids often squirmed during the sessions. Some weren’t interested in the pictures; others fell asleep in the scanner.
Still, the team managed to capture hippocampal BOLD signals averaging roughly eight minutes per participant and matched them to memory performance. On average, parts of the hippocampus ramped up activity for images the infants later remembered—that is, images they looked at longer during the test phases.
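This comparison is what adult fMRI research calls a subsequent memory effect: encoding-time activity is contrasted for items later remembered versus later forgotten. A toy sketch with made-up numbers (the study’s actual analysis was more involved):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Toy hippocampal BOLD responses at encoding, one value per image.
remembered = rng.normal(loc=0.3, scale=0.2, size=40)  # later looked at longer
forgotten = rng.normal(loc=0.1, scale=0.2, size=40)   # no looking preference

# Subsequent memory effect: higher encoding activity for remembered items.
t, p = stats.ttest_ind(remembered, forgotten)
print(f"t = {t:.2f}, p = {p:.4f}")
```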
But not all infants performed the same. The younger cohort, under a year old, didn’t show the surge in BOLD signals that suggests memory encoding. They also showed no preference for previously seen images over new ones.
It seems babies start encoding memories around a year of age, even as their hippocampus is still developing.
The results are similar to those in baby rodents. The early years are chaotic, and the brain undergoes extensive rewiring, making it difficult to form lasting memories. Yet some supposedly lost memories encoded at a young age can be recovered later in life with reminder cues or by directly activating the set of neurons that originally encoded the memory.
That’s not to say infants can acquire rich recollections—stories including multiple people, places, and things—at a year. The study only tested brain signatures for individual components.
Future studies tracking the hippocampus might shed light on the minimal brain architecture needed to support vivid autobiographical memories. Examining other stages of memory could shine more light on infantile amnesia. For example, do infants also replay neural signals as they sleep to etch new experiences into long-term memory?
And maybe—just maybe—our earliest memories could one day be retrieved later in childhood or beyond.
New Tech Bends Sound Through Space So It Reaches Only Your Ear in a Crowd
Audible enclaves are local pockets of sound no one else can hear—no headphones required.
What if you could listen to music or a podcast without headphones or earbuds and without disturbing anyone around you? Or have a private conversation in public without other people hearing you?
Newly published research from our team at Penn State introduces a way to create audible enclaves—localized pockets of sound that are isolated from their surroundings. In other words, we’ve developed a technology that could create sound exactly where it needs to be.
The ability to send sound that becomes audible only at a specific location could transform entertainment, communication, and spatial audio experiences.
What Is Sound?
Sound is a vibration that travels through air as a wave. These waves are created when an object moves back and forth, compressing and decompressing air molecules.
The frequency of these vibrations is what determines pitch. Low frequencies correspond to deep sounds, like a bass drum; high frequencies correspond to sharp sounds, like a whistle.
Sound is composed of particles moving in a continuous wave. Daniel A. Russell, CC BY-NC-ND

Controlling where sound goes is difficult because of a phenomenon called diffraction—the tendency of sound waves to spread out as they travel. This effect is particularly strong for low-frequency sounds because of their longer wavelengths, making it nearly impossible to keep sound confined to a specific area.
Certain audio technologies, such as parametric array loudspeakers, can create focused sound beams aimed in a specific direction. However, these technologies still emit sound that is audible along its entire path as it travels through space.
The Science of Audible Enclaves
We found a new way to send sound to one specific listener using self-bending ultrasound beams and a concept called nonlinear acoustics.
Ultrasound refers to sound waves with frequencies above the range of human hearing, which tops out around 20 kHz. These waves travel through the air like normal sound waves but are inaudible to people. Because ultrasound can penetrate many materials and interact with objects in unique ways, it’s widely used for medical imaging and many industrial applications.
In our work, we used ultrasound as a carrier for audible sound. It can transport sound through space silently—becoming audible only when desired. How did we do this?
Normally, sound waves combine linearly, meaning they just proportionally add up into a bigger wave. However, when sound waves are intense enough, they can interact nonlinearly, generating new frequencies that were not present before.
This is the key to our technique: We use two ultrasound beams at different frequencies that are completely silent on their own. But when they intersect in space, nonlinear effects cause them to generate a new sound wave at an audible frequency that would be heard only in that specific region.
Audible enclaves are created at the intersection of two ultrasound beams. Jiaxin Zhong et al./PNAS, CC BY-NC-ND

Crucially, we designed ultrasonic beams that can bend on their own. Normally, sound waves travel in straight lines unless something blocks or reflects them. However, by using acoustic metasurfaces—specialized materials that manipulate sound waves—we can shape ultrasound beams to bend as they travel. Similar to how an optical lens bends light, acoustic metasurfaces change the path of sound waves. By precisely controlling the phase of the ultrasound waves, we create curved sound paths that can navigate around obstacles and meet at a specific target location.
The key phenomenon at play is called difference frequency generation. When two ultrasonic beams of slightly different frequencies overlap—such as 40 kHz and 39.5 kHz—they create a new sound wave at the difference between their frequencies—in this case 0.5 kHz, or 500 Hz, which is well within the human hearing range. Sound can be heard only where the beams cross. Outside of that intersection, the ultrasound waves remain silent.
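The mixing is easy to reproduce numerically. Passing two inaudible tones through a square-law nonlinearity, a common stand-in for nonlinear acoustic effects (this sketch is not a model of air), produces exactly the 500 Hz difference tone:

```python
import numpy as np

fs = 400_000                   # sample rate (Hz), well above both tones
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of signal

f1, f2 = 40_000.0, 39_500.0    # two inaudible ultrasound tones
signal = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A quadratic nonlinearity creates sum and difference frequencies:
# the cross term 2*sin(a)*sin(b) contains cos at f1-f2 and f1+f2.
distorted = signal**2

spectrum = np.abs(np.fft.rfft(distorted))
freqs = np.fft.rfftfreq(len(distorted), 1 / fs)

# The only strong audible component sits at f1 - f2 = 500 Hz.
audible = (freqs > 100) & (freqs < 2000)
print(freqs[audible][np.argmax(spectrum[audible])])  # -> 500.0
```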
This means you can deliver audio to a specific location or person without disturbing other people as the sound travels.
Advancing Sound Control
The ability to create audio enclaves has many potential applications.
Audio enclaves could enable personalized audio in public spaces. For example, museums could provide different audio guides to visitors without headphones, and libraries could allow students to study with audio lessons without disturbing others.
In a car, passengers could listen to music without distracting the driver, who could hear navigation instructions instead. Offices and military settings could also benefit from localized speech zones for confidential conversations. Audio enclaves could also be adapted to cancel out noise in designated areas, creating quiet zones to improve focus in workplaces or reduce noise pollution in cities.
This isn’t something that’s going to be on the shelf in the immediate future. Challenges remain for our technology. Nonlinear distortion can affect sound quality. And power efficiency is another issue—converting ultrasound to audible sound requires high-intensity fields that can be energy intensive to generate.
Despite these hurdles, audio enclaves present a fundamental shift in sound control. By redefining how sound interacts with space, we open up new possibilities for immersive, efficient, and personalized audio experiences.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
A Massive AI Analysis Found Genes Related to Brain Aging—and Drugs to Slow It Down
Brain scans from nearly 39,000 people revealed genes and drugs to potentially slow aging.
When my grandad celebrated his 100th birthday with a bowl of noodles, his first comment was, “Nice, but this is store-bought.” He then schooled everyone on the art of making noodles from scratch, sounding decades younger than his actual age.
Most of us know people who are mentally sharper than their chronological age. In contrast, some folks seem far older. They’re easily confused, forget everyday routines, and have a hard time following conversations or remembering where they parked their car.
Why do some brains age faster, while others avoid senior moments even in the twilight years? Part of the answer may be in our genes. This month, a team from China’s Zhejiang University described an AI they’ve developed to hunt down genes related to brain aging and neurological disorders using brain scans from nearly 39,000 people.
They found seven genes, some of which are already in the crosshairs of scientists combating age-related cognitive decline. A search of clinical trials uncovered 28 existing drugs targeting those genes, including some as common as hydrocortisone, a drug often used for allergies and autoimmune diseases.
These drugs are already on the market, meaning they’ve been thoroughly vetted for safety. Repurposing existing drugs for brain aging could be a faster alternative to developing new ones, but they’ll have to be thoroughly tested to prove they actually bring cognitive improvements.
How Old Is My Brain?
The number of candles on your birthday cake doesn’t reflect the health of your brain. To gauge the latter—dubbed biological age—scientists have developed multiple aging clocks.
The Horvath Clock, for example, measures signatures of gene activity associated with aging and cognitive decline. Researchers have used others, such as GrimAge, to measure the effects of potential anti-aging therapies, such as caloric restriction, in clinical trials.
Scientists are still debating which clock is the most accurate for the brain. But most agree the brain age gap, or the difference between a person’s chronological age and brain age, is a useful marker. A larger gap in either direction means the brain is aging faster or slower than expected.
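Schematically, computing the gap means training a model to predict chronological age from scans and subtracting. The sketch below uses synthetic features and a linear model for brevity; the study itself used deep 3D imaging models:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for scan-derived features (e.g., regional volumes).
n, d = 500, 20
age = rng.uniform(45, 83, size=n)
features = np.outer(age, rng.normal(size=d)) + rng.normal(scale=5.0, size=(n, d))

X_train, X_test, y_train, y_test = train_test_split(features, age, random_state=0)

model = Ridge().fit(X_train, y_train)
predicted_brain_age = model.predict(X_test)

# Positive gap: the brain looks older than its years. Negative: younger.
brain_age_gap = predicted_brain_age - y_test
print(brain_age_gap[:5].round(1))
```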
Why one or the other might be true for people is still mysterious.
“There is a general consensus that the trajectories of brain aging differ substantially among individuals due to genetic factors, lifestyles, environmental factors, and chronic disease of the patient,” wrote the team. Finding genes related to the brain age gap could bring new drugs that prevent, slow down, or even reverse aging. But studies are lacking, they added.
A Brain-Wide Picture
How well our brain works relies on its intricate connections and structure. These can be captured with magnetic resonance imaging (MRI). But each person’s neural wiring is slightly different, so piecing together a picture of an “average” aging brain requires lots of brain scans.
Luckily, the UK Biobank has plenty.
Launched in 2006, the organization’s database includes health data from half a million participants. For this study, the team analyzed MRI scans from around 39,000 people between 45 and 83 years of age, with roughly equal numbers of men and women. Most were cognitively healthy, but over 6,600 had conditions including brain injury, Alzheimer’s disease, anxiety, and depression.
They then pitted seven state-of-the-art AI models against each other to figure out which model delivered the most accurate brain age estimate. One, called 3D-ViT, stood out for its ability to detect differences in brain structure associated with the brain age gap.
Next, the team explored whether some brain regions contributed to the gap more than others. With a tool often used in computer vision called saliency maps, they found two brain regions that were especially important to the AI’s estimation of the brain age gap.
One, the lentiform nucleus, is an earbud-like structure that sits deep inside the brain and is involved in movement, cognition, and emotion. The other is part of a neural highway that controls how different brain regions communicate—particularly those that run from deeper areas to the cortex, the outermost part of the brain responsible for reasoning and flexible thinking. These mental capabilities tend to slowly erode during aging.
Unsurprisingly, a larger gap also correlated with Alzheimer’s disease. But stroke, epilepsy, insomnia, smoking, and other lifestyle factors didn’t make a significant difference—at least for this population.
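The saliency technique behind those regional attributions is conceptually simple: take the gradient of the predicted age with respect to the input volume, and the voxels with the largest gradients are the ones the prediction leans on. A generic sketch with a stand-in network (not the study’s 3D-ViT):

```python
import torch
import torch.nn as nn

# Tiny stand-in for a 3D brain-age regressor.
model = nn.Sequential(
    nn.Conv3d(1, 4, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(4, 1),  # predicted age
)

scan = torch.randn(1, 1, 32, 32, 32, requires_grad=True)  # toy MRI volume

model(scan).sum().backward()

# Saliency: voxels whose perturbation most moves the predicted age.
saliency = scan.grad.abs().squeeze()
print(saliency.shape)  # (32, 32, 32) importance map over the volume
```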
Genes to Drugs
Accelerated brain aging could be partly due to genetics. Finding which genes are involved could reveal new targets for therapies to combat faster cognitive decline. So, the team extracted genetic data from the UK Biobank and ran a genome-wide scan to fish out these genes.
Some were already on scientists’ radar. One helps maintain bone and heart health during aging. Another regulates the brain’s electrical signals and wires up neural connections.
The screen also revealed many new genes involved in the brain age gap. Some of these kill infected or cancerous cells. Others stabilize neuron signaling and structure or battle chronic inflammation—both of which can go awry as the brain ages. Most of the genes could be managed with a pill or injection, making it easier to reuse existing drugs or develop new ones.
To hunt down potential drug candidates, the team turned to an open-source database that charts how drugs interact with genes. They found 466 drugs either approved or in clinical development targeting roughly 45 percent of the new genes.
Some are already being tested for their ability to slow cognitive decline. Among these are hydrocortisone—which is mainly used to treat autoimmune disorders, asthma, and rashes—and resveratrol, a molecule found in red wine. They also found 28 drugs that “hold substantial promise for brain aging,” wrote the team, including the hormones estradiol and testosterone. Dasatinib, a senolytic drug that kills off “zombie cells” during aging, also made the list.
The work builds on prior attempts to decipher connections between genes and the brain age gap. A 2019 study used the UK Biobank to pinpoint genes related to neurological disorders that accelerate brain aging. Here, the team connected genes to potential new or existing drugs to slow brain aging.
“Our study provides insights into the genetic basis of brain aging, potentially facilitating drug development for brain aging to extend the health span,” wrote the team.
This Week’s Awesome Tech Stories From Around the Web (Through March 15)
Powerful AI Is Coming. We’re Not Ready.Kevin Roose | The New York Times
“I believe that the right time to start preparing for AGI is now. This may all sound crazy. But I didn’t arrive at these views as a starry-eyed futurist, an investor hyping my AI portfolio or a guy who took too many magic mushrooms and watched ‘Terminator 2.’ I arrived at them as a journalist who has spent a lot of time talking to the engineers building powerful AI systems, the investors funding it and the researchers studying its effects.”
FutureAGI Is Suddenly a Dinner Table TopicJames O’Donnell | MIT Technology Review
“The concept of artificial general intelligence—an ultra-powerful AI system we don’t have yet—can be thought of as a balloon, repeatedly inflated with hype during peaks of optimism (or fear) about its potential impact and then deflated as reality fails to meet expectations. This week, lots of news went into that AGI balloon. I’m going to tell you what it means (and probably stretch my analogy a little too far along the way).”
RoboticsGemini Robotics Uses Google’s Top Language Model to Make Robots More UsefulScott J. Mulligan | MIT Technology Review
“Google DeepMind has released a new model, Gemini Robotics, that combines its best large language model with robotics. Plugging in the LLM seems to give robots the ability to be more dexterous, work from natural-language commands, and generalize across tasks. All three are things that robots have struggled to do until now.”
BiotechnologyCovid Vaccines Have Paved the Way for Cancer VaccinesJoão Medeiros | Wired
“Going from mRNA Covid vaccines to mRNA cancer vaccines is straightforward: same fridges, same protocol, same drug, just a different patient. In the current trials, we do a biopsy of the patient, sequence the tissue, send it to the pharmaceutical company, and they design a personalized vaccine that’s bespoke to that patient’s cancer. That vaccine is not suitable for anyone else. It’s like science fiction.”
Artificial IntelligenceAI Search Engines Give Incorrect Answers at an Alarming 60% Rate, Study SaysBenj Edwards | Ars Technica
“A new study from Columbia Journalism Review’s Tow Center for Digital Journalism finds serious accuracy issues with generative AI models used for news searches. The research tested eight AI-driven search tools equipped with live search functionality and discovered that the AI models incorrectly answered more than 60 percent of queries about news content.”
TechAI Coding Assistant Refuses to Write Code, Tells User to Learn Programming InsteadBenj Edwards | Ars Technica
“According to a bug report on Cursor’s official forum, after producing approximately 750 to 800 lines of code (what the user calls ‘locs’), the AI assistant halted work and delivered a refusal message: ‘I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.’”
EnergyExclusive: General Fusion Fires Up Its Newest Steampunk Fusion ReactorTim De Chant | TechCrunch
“General Fusion announced on Tuesday that it had successfully created plasma, a superheated fourth state of matter required for fusion, inside a prototype reactor. The milestone marks the beginning of a 93-week quest to prove that the outfit’s steampunk approach to fusion power remains a viable contender.”
BiotechnologyThis Annual Shot Might Protect Against HIV InfectionsJessica Hamzelou | MIT Technology Review
“I don’t normally get too excited about phase I trials, which usually involve just a handful of volunteers and typically don’t tell us much about whether a drug is likely to work. But this trial seems to be different. Together, the lenacapavir trials could bring us a significant step closer to ending the HIV epidemic.”
ComputingCerebras Just Announced 6 New AI Datacenters That Process 40M Tokens Per Second—and It Could Be Bad News for NvidiaMichael Nuñez | VentureBeat
“Cerebras Systems, an AI hardware startup that has been steadily challenging Nvidia’s dominance in the artificial intelligence market, announced Tuesday a significant expansion of its data center footprint and two major enterprise partnerships that position the company to become the leading provider of high-speed AI inference services.”
RoboticsWaabi Says Its Virtual Robotrucks Are Realistic Enough to Prove the Real Ones Are SafeWill Douglas Heaven | MIT Technology Review
“The Canadian robotruck startup Waabi says its super-realistic virtual simulation is now accurate enough to prove the safety of its driverless big rigs without having to run them for miles on real roads. The company uses a digital twin of its real-world robotruck, loaded up with real sensor data, and measures how the twin’s performance compares with that of real trucks on real roads. Waabi says they now match almost exactly.”
FutureLab-Grown Food Could Be Sold in UK in Two YearsPallab Ghosh | BBC News
“Meat, dairy and sugar grown in a lab could be on sale in the UK for human consumption for the first time within two years, sooner than expected. The Food Standards Agency (FSA) is looking at how it can speed up the approval process for lab-grown foods. Such products are grown from cells in small chemical plants. UK firms have led the way in the field scientifically but feel they have been held back by the current regulations.”
EnergyFor Climate and Livelihoods, Africa Bets Big on Solar Mini-GridsVictoria Uwemedimo and Katarina Zimmer | Knowable Magazine
“In many African countries, solar power now stands to offer much more than environmental benefits. About 600 million Africans lack reliable access to electricity; in Nigeria specifically, almost half of the 230 million people have no access to electricity grids. Today, solar has become cheap and versatile enough to help bring affordable, reliable power to millions—creating a win-win for lives and livelihoods as well as the climate.”
Artificial IntelligenceAnthropic Researchers Forced Claude to Become Deceptive—What They Discovered Could Save Us From Rogue AIMichael Nuñez | VentureBeat
“The research addresses a fundamental challenge in AI alignment: ensuring that AI systems aren’t just appearing to follow human instructions while secretly pursuing other goals.”
ScienceThe Road Map to Alien Life Passes Through the ‘Cosmic Shoreline’Elise Cutts | Quanta Magazine
“Astronomers are ready to search for the fingerprints of life in faraway planetary atmospheres. But first, they need to know where to look — and that means figuring out which planets are likely to have atmospheres in the first place.”
This Robotic Hand’s Electronic Skin Senses Exactly How Hard It Needs to Squeeze
The hand can gently pick up anything from plastic cups to pineapples.
Our hands are works of art. A rigid skeleton provides structure. Muscles adjust to different weights. Our skin, embedded with touch, pressure, and temperature sensors, provides immediate feedback on what we’re touching. Flexible joints make it possible to type on a keyboard or use a video game controller without a thought.
Now, a team at Johns Hopkins University has recreated these perks in a life-like prosthetic robot hand. At its core is a 3D-printed skeleton. Each finger has three independently controlled joints made of silicone that are moved around with air pressure. A three-layer electronic skin covering the hand’s fingertips helps it gauge grip strength on the fly. The hand is controlled using electrical signals from muscles in the forearm alone.
In tests, able-bodied volunteers used the hand to pick up stuffed toys and dish sponges without excessive squeezing. It adjusted its grip when challenged with heavy metal water bottles and prickly pineapples—picking up items without dropping them or damaging the hand.
“The goal from the beginning has been to create a prosthetic hand that we model based on the human hand’s physical and sensing capabilities—a more natural prosthetic that functions and feels like a lost limb,” study author Sriramana Sankar said in a press release.
Softening Up
Prosthetic hands have come a long way. One of the first, crafted out of metal in the Middle Ages, had joints that could be moved passively with the other hand.
Today, soft robotics have changed the game. Unlike rigid, unforgiving material, spongy hands can handle delicate objects without distorting or crushing them. Integrated sensors for pressure or temperature make them more life-like by providing sensory feedback.
But soft materials have a problem. They can’t consistently generate the same force to pick up heavy objects. Even with multiple joints and a dynamic palm, squishy robotic hands have a harder time detecting different textures compared to their rigid counterparts, wrote the team. They’re also weak. Existing soft robotic hands can only lift around 2.8 pounds.
In contrast, our hands have both a rigid skeleton and soft tissues—muscles and tendons—that stretch, twist, and contract. Pressure sensors in our skin provide instant feedback: Am I squeezing a plush toy, holding a slippery coffee mug, or manipulating my phone?
That’s why recent prosthetic designs incorporate both artificial skeletons and muscles.
For example, the commercially available LUKE arm has a metal and plastic skeleton for strength and stability. Its fingertips have soft materials for better dexterity. The prosthetic can grab objects using different inputs—for example, electrical signals from muscles or a foot pedal to switch between grasp strengths. But the hand is still mostly rigid and has limited mobility. The thumb and index finger can flex individually. All the other fingers move together.
Then there’s the problem of feedback. Our fingers use touch to calibrate our grip. Each of the skin’s three layers encodes slightly different sensations with a variety of receptors, or biological sensors. The outer layer feels light touch and slow vibration, like when hair lightly brushes your hand. Deeper layers detect pressure: the texture and weight of a heavy dumbbell, for example.
In 2018, the team behind the new study developed an electronic skin inspired by human skin. The material, called E-dermis, sensed textures and transmitted them to surviving nerves in an amputee’s arm with small zaps of electricity. The skin used piezoresistive sensors, whose electrical conductivity changes under pressure. Prosthetic fingertips coated in the sensors allowed an upper-limb amputee to detect a range of sensations, including pressure.
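To give a flavor of how such a sensor is read out, here is a hypothetical sketch of a piezoresistive element wired into a voltage divider. The supply voltage, resistor value, and calibration curve are all made up for illustration; they are not E-dermis specifications.

```python
# Reading a piezoresistive sensor: as pressure rises, the sensor's
# resistance drops, so the divider's output voltage rises.
V_SUPPLY = 3.3      # volts (assumed)
R_FIXED = 10_000    # ohms, divider resistor (assumed)

def pressure_estimate(v_out):
    """Invert the divider to get sensor resistance, then map it to pressure."""
    r_sensor = R_FIXED * (V_SUPPLY - v_out) / v_out
    return max(0.0, 50_000 / r_sensor - 1.0)  # toy calibration, arbitrary units

print(pressure_estimate(1.0))  # light touch
print(pressure_estimate(2.5))  # firmer press
```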
“If you’re holding a cup of coffee, how do you know you’re about to drop it? Your palm and fingertips send signals to your brain that the cup is slipping,” study author Nitish Thakor said in the recent study’s press release. “Our system is neurally inspired—it models the hand’s touch receptors to produce nerve-like messages so the prosthetics’ ‘brain,’ or its computer, understands if something is hot or cold, soft or hard, or slipping from the grip.”
Hands On
The new design incorporated E-dermis into a hybrid hand designed to mimic a human hand.
The thumb has two joints made of silicone, and the fingers each have three. Each joint can flex independently. They connect to a rigid 3D-printed skeleton and are driven by air pressure.
Compared to prosthetics with only soft components, the skeleton adds force and can support heavier weights. The prosthetic hand’s fingertips are covered in a patch of E-dermis the size of a fingernail. Each finger bends naturally, curling into the palm or stretching apart.
Electrical signals from a user’s forearm muscles control the hand. Such devices, dubbed myoelectric prostheses, tap into living nerve endings above the amputation site. When a person thinks of moving the hand, a microprocessor translates the nerve signals into motor commands.
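A rough idea of that signal chain, in a minimal sketch: rectify the raw muscle signal, smooth it into an envelope, and map the envelope to a grip command. The sampling rate, smoothing window, and threshold below are placeholders, not the study’s parameters.

```python
# Toy myoelectric pipeline: EMG burst in, grip command out.
import numpy as np

def emg_to_grip_command(emg, fs=1000, window_ms=150, threshold=0.2):
    """Rectify the EMG, smooth it into an envelope, and map it to [0, 1]."""
    rectified = np.abs(emg - emg.mean())             # remove DC offset, rectify
    win = int(fs * window_ms / 1000)
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    activation = envelope / (envelope.max() + 1e-9)  # normalize to [0, 1]
    return np.where(activation > threshold, activation, 0.0)  # deadband

# Fake one second of EMG: quiet, then a burst of muscle activity.
rng = np.random.default_rng(0)
emg = np.concatenate([0.05 * rng.standard_normal(500),
                      0.8 * rng.standard_normal(500)])
grip = emg_to_grip_command(emg)
print(grip[:400].max(), grip[600:].max())  # near zero, then a strong grip
```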
Several studies with able-bodied volunteers showcased the hand’s dexterity. Participants wore a sheath over their forearms to capture electrical signals from their muscles—mimicking the signals an amputee would use—and send them along to the robotic hand.
With minimal training, the volunteers could grab a variety of objects of different sizes, weights, and textures. The hand gently picked up a sponge, without squishing it into oblivion, and a variety of produce—apple, orange, clementine—without bruising it. The prosthetic showed it could also lift heavier items, such as a small stone statue and a metal water bottle.
But the best example, according to the authors, was when it held a fragile plastic cup filled with water using only three fingers. The hand didn’t dent the cup or spill any water.
Overall, it had an impressive 99.7 percent accuracy rate handling 15 everyday items, rapidly adjusting its grip to avoid drops, spills, and other potential mishaps.
To be clear, the device hasn’t been tested on people who’ve lost a hand. And there’s more to improve. Adding a tendon of sorts between the artificial fingers could make them more stable. Mimicking how the palm moves could further boost flexibility. And adding sensors, such as those for temperature, could push the engineered hand even closer to a human’s.
Improving the dexterity of the hands isn’t only “essential for next-generation prostheses,” said Thakor. Future robotic hands will have to seamlessly integrate into everyday living, dealing with all the variety we do. “That’s why a hybrid robot, designed like the human hand, is so valuable—it combines soft and rigid structures, just like our skin, tissue, and bones.”
Green Steel Startup’s Largest Reactor Yet Produces a Ton of Molten Metal With Electricity
For Boston Metal, it’s a step towards green steel plants that can make millions of tons of steel.
Steelmaking is one of the hardest industries to decarbonize due to its reliance on high temperatures and coal-based fuels to drive crucial reactions. But a green steel company has hit a major milestone: its newest reactor has produced more than a ton of molten metal.
Rapid progress decarbonizing the energy and transport sectors is leading to a growing focus on areas of the economy where it will be harder to ditch fossil fuels. One of these is steelmaking, which by some estimates produces as much as 8 percent of all carbon emissions.
US startup Boston Metal hopes to change this by commercializing zero-emission steelmaking technology developed at the Massachusetts Institute of Technology. This week, the company completed the first run of its largest reactor yet, which validates key technologies required to start producing steel at industrial scales.
“With this milestone, we are taking a major step forward in making green steel a reality, and we’re doing it right here in the US, demonstrating the critical innovation that can enhance domestic manufacturing,” Tadeu Carneiro, CEO of Boston Metal, said in a press release.
Traditional steelmaking involves burning a coal-based fuel called coke, both to generate the high temperatures required and to remove oxygen from iron ore to create iron. But this generates huge amounts of CO2, which is why steelmaking is so bad for the environment.
Boston Metal’s approach instead uses electrolysis to convert iron ore into molten iron without directly producing any emissions. As a result, if the electricity used to drive the process comes from renewable sources, the resulting metal is almost entirely emission-free.
The company’s process, known as molten oxide electrolysis, involves mixing iron ore with an electrolyte inside a large reactor, heating it to 2,900 degrees Fahrenheit, and then passing a current through it.
The oxygen in the ore separates and bubbles up through the electrolyte, while a layer of molten iron collects at the bottom of the reactor. This reservoir of liquid metal is then periodically tapped, though the process itself is continuous.
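The scale of the electrical challenge is easy to ballpark with Faraday’s law. Here is a back-of-the-envelope sketch, assuming iron is reduced from Fe3+ (three electrons per atom) at perfect current efficiency; both are simplifications for illustration only.

```python
# Rough current needed to produce one ton of iron per day by electrolysis.
FARADAY = 96485          # coulombs per mole of electrons
MOLAR_MASS_FE = 55.85    # g/mol
mass_g = 1_000_000       # one metric ton of iron

moles_fe = mass_g / MOLAR_MASS_FE
charge_c = moles_fe * 3 * FARADAY      # three electrons per iron atom (assumed)
current_a = charge_c / (24 * 3600)     # spread the charge over one day
print(f"{current_a / 1000:.0f} kA")    # roughly 60 kA, a serious industrial current
```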
One of the biggest challenges for the approach is creating an anode—the positive terminal used to introduce electricity to the reactor—that doesn’t degrade too rapidly. A short shelf life for this component would mean regular stoppages for maintenance or replacement, which would significantly impact the approach’s commercial viability.
Adam Rauwerdink, Boston Metal’s senior vice president of business development, told MIT Technology Review that the company has successfully made its anodes hardier. But the new bus-sized reactor is the first to feature multiple anodes, which will be key to scaling the approach.
The current plant can produce a ton or two of metal in about a month. However, the company hopes to build a plant that can produce the same amount in a day by the end of 2027. The design is modular, and the plan is to eventually string many reactors together in facilities that can output millions of tons of steel.
Boston Metal is not the only company attempting to clean up steelmaking.
Swedish company Stegra has raised billions of dollars to build the world’s first large-scale green steel plant in Northern Sweden. The plant will use green hydrogen to cut emissions by up to 95 percent. US startup Electra is also raising $257 million to develop a low-temperature electrochemical process for producing green iron.
Scaling any of these approaches to the point where they make a dent in an industry as massive as steelmaking will be a huge challenge. But these developments suggest the technical barriers are rapidly falling.
What Google Translate Tells Us About Where AI Is Headed Next
The trajectory of AI in translation hints at the future of generative AI.
The computer scientists Rich Sutton and Andrew Barto have been recognized for a long track record of influential ideas with this year’s Turing Award, the most prestigious in the field. Sutton’s 2019 essay “The Bitter Lesson,” for instance, underpins much of today’s feverishness around artificial intelligence (AI).
He argues that methods to improve AI that rely on heavy-duty computation rather than human knowledge are “ultimately the most effective, and by a large margin.” This is an idea whose truth has been demonstrated many times in AI history. Yet there’s another important lesson in that history from some 20 years ago that we ought to heed.
Today’s AI chatbots are built on large language models (LLMs), which are trained on huge amounts of data that enable a machine to “reason” by predicting the next word in a sentence using probabilities.
Useful probabilistic language models were formalized by the American polymath Claude Shannon in 1948, building on precedents from the 1910s and 1920s. Language models of this form were then popularized in the 1970s and 1980s for use by computers in translation and speech recognition, in which spoken words are converted into text.
The first language model on the scale of contemporary LLMs was published in 2007 and was a component of Google Translate, which had been launched a year earlier. Trained on trillions of words using over a thousand computers, it is the unmistakable forebear of today’s LLMs, even though it was technically different.
It relied on probabilities computed from word counts, whereas today’s LLMs are based on what is known as transformers. First developed in 2017—also originally for translation—these are artificial neural networks that make it possible for machines to better exploit the context of each word.
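To see what “probabilities computed from word counts” means in practice, here is a toy bigram model: count which word follows which, then normalize. This is a teaching sketch of the general idea, not the design of Google’s 2007 system.

```python
# A toy count-based language model: next-word probabilities are just
# normalized bigram counts from a (tiny, made-up) corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word):
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.67, 'mat': 0.33} (approximately)
```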
The Pros and Cons of Google Translate
Machine translation (MT) has improved relentlessly in the past two decades, driven not only by tech advances but also by the size and diversity of training datasets. Whereas Google Translate started by offering translations between just three languages in 2006—English, Chinese, and Arabic—today it supports 249. That may sound impressive, but it’s still less than 4 percent of the world’s estimated 7,000 languages.
Between a handful of those languages, like English and Spanish, translations are often flawless. Yet even in these languages, the translator sometimes fails on idioms, place names, legal and technical terms, and various other nuances.
Between many other languages, the service can help you get the gist of a text, but often contains serious errors. The largest annual evaluation of machine translation systems—which now includes translations done by LLMs that rival those of purpose-built translation systems—bluntly concluded in 2024 that “MT is not solved yet.”
Machine translation is widely used in spite of these shortcomings: As far back as 2021, the Google Translate app reached one billion installs. Yet users still appear to understand that they should use such services cautiously. A 2022 survey of 1,200 people found that they mostly used machine translation in low-stakes settings, like understanding online content outside of work or study. Only about 2 percent of respondents’ translations involved higher-stakes settings, including interacting with healthcare workers or police.
Sure enough, there are high risks associated with using machine translations in these settings. Studies have shown that machine-translation errors in healthcare can potentially cause serious harm, and there are reports that it has harmed credible asylum cases. It doesn’t help that users tend to trust machine translations that are easy to understand, even when they are misleading.
Knowing the risks, the translation industry overwhelmingly relies on human translators in high-stakes settings like international law and commerce. Yet these workers’ marketability has been diminished by the fact that the machines can now do much of their work, leaving them to focus more on assuring quality.
Many human translators are freelancers in a marketplace mediated by platforms with machine-translation capabilities. It’s frustrating to be reduced to wrangling inaccurate output, not to mention the precarity and loneliness endemic to platform work. Translators also have to contend with the real or perceived threat that their machine rivals will eventually replace them—researchers refer to this as automation anxiety.
Lessons for LLMs
The recent unveiling of the Chinese AI model DeepSeek, which appears to be close to the capabilities of market leader OpenAI’s latest GPT models but at a fraction of the price, signals that very sophisticated LLMs are on a path to being commoditized. They will be deployed by organizations of all sizes at low cost—just as machine translation is today.
Of course, today’s LLMs go far beyond machine translation, performing a much wider range of tasks. Their fundamental limitation is data: they have already exhausted most of what is available on the internet. For all its scale, their training data likely underrepresents most tasks, just as it underrepresents most languages for machine translation.
Indeed, the problem is worse with generative AI. Unlike with languages, it is difficult to know which tasks are well represented in an LLM. There will undoubtedly be efforts to improve training data that make LLMs better at some underrepresented tasks. But the scope of the challenge dwarfs that of machine translation.
Tech optimists may pin their hopes on machines being able to keep increasing the size of the training data by making their own synthetic versions, or on learning from human feedback through chatbot interactions. These avenues have already been explored in machine translation, with limited success.
So the foreseeable future for LLMs is one in which they are excellent at a few tasks, mediocre at others, and unreliable elsewhere. We will use them where the risks are low, while they may harm unsuspecting users in high-risk settings—as has already happened to lawyers who trusted ChatGPT output containing citations to nonexistent case law.
These LLMs will aid human workers in industries with a culture of quality assurance, like computer programming, while making the experience of those workers worse. Plus we will have to deal with new problems such as their threat to human artistic works and to the environment. The urgent question: is this really the future we want to build?
This article is republished from The Conversation under a Creative Commons license. Read the original article.
DARPA Wants to ‘Grow’ Enormous Living Structures in Space
Living materials could self-assemble into antennas, nets to capture debris, or even space station parts.
Space stations break down. Satellites get damaged. Repairing them requires launching replacement components on rockets.
The US Defense Advanced Research Projects Agency (DARPA) is now exploring an alternative: growing these parts directly in space. The concept would skirt delivery headaches. Without a rocket’s size and weight constraints, engineers could also design and construct large structures—over 1,640 feet or 500 meters long—that can’t be shipped from Earth.
The technology could be especially useful as we inch towards missions to Mars and beyond.
The agency has previously explored space manufacturing that would rely on robotic construction or self-assembling materials. The new proposal adds synthetic biology to the mix. Compared to traditional rigid materials, alternatives that incorporate living microbes could be more flexible. Embedded in a biocompatible matrix that provides structure, they could form a living material that withstands the unforgiving environment of space.
It sounds like science fiction, and it still is. But in late February, DARPA called for ideas to make the vision a reality.
Space Factory
Building large objects directly in space has multiple perks. Instead of folding up structures to fit into rockets—like the James Webb Space Telescope, which engineers folded origami-like for its ride to space—ferrying lightweight raw materials from Earth could be more energy- and cost-efficient. The materials could then be made into much larger objects in orbit. Microgravity also allows engineers to design structures that would sag under their own weight on Earth. Space offers an opportunity to build objects that are wildly different than any on the ground.
Space manufacturing is already in the works. In 2022, DARPA launched the Novel Orbital Moon Manufacturing, Materials, and Mass-Efficient Design (NOM4D) program to test the idea.
“Current space systems are all designed, built, and tested on Earth before being launched into a stable orbit and deployed to their final operational configuration,” NOM4D program manager Bill Carter said in a 2022 press release. “These constraints are particularly acute for large structures such as solar arrays, antennas, and optical systems, where size is critical to performance.”
Three years later, the program is almost ready to launch its first raw materials into space to test assembly. One of these, designed by the California Institute of Technology and Momentus, will hitch a ride on a SpaceX Falcon 9 mission in early 2026. In orbit, a robotic device will transform the material into a circular “skeleton” mimicking the diameter of an antenna.
“If the assembly technology is successful, this would be the first step toward scaling up to eventually building very large space-based structures in the future,” program manager Andrew Detor said in a press release.
Another team from the University of Illinois Urbana-Champaign is partnering with Voyager Space to test their own material and manufacturing process on the International Space Station. Made up of flat carbon-fiber sleeves, similar to finger-trap toys, their material uses a novel chemical process that hardens liquid components into solid structures. Heating up one side of the sleeve stiffens the entire structure. Their test is also scheduled for 2026.
A Dose of Biology
But DARPA is ready to get even more ambitious.
Thanks to synthetic biology and materials science, we’ve seen an explosion of biomaterials compatible with living cells. These have been used to deliver drugs deep into the body, form tough structures to support prosthetics, or 3D bioprint organs and tissues for transplant.
Meanwhile, scientists have also discovered a growing number of extremophiles—microbes that can withstand extremely high pressures and temperatures or survive acidic environments. Bacteria dotting the outside of the International Space Station can survive extreme ultraviolet radiation. Sequencing extremophile genomes is revealing genetic adaptations to these harsh environments, paving the way for scientists to engineer bacteria that survive and thrive in space.
The stage is set, then, for hybrid living materials that grow into predefined structures in space.
DARPA’s new vision is to rapidly engineer biological objects “of unprecedented size” in microgravity, with lengths reaching over half a kilometer, or more than 1,640 feet.
One idea is to weave biomaterials, extremophiles, and non-organic fibers into materials with different stiffnesses and strengths. This would be a bit like manufacturing a tent. Some materials could be used as tent poles supporting the overall structure. Others—such as bacteria—can grow the tent’s walls, floor, and roof, with the ability to stretch or shrink. Balancing the amount of each component would be critical for the material to work in multiple scenarios.
But space is an incredibly hostile environment. A crucial challenge will be figuring out how to keep the bacteria alive. Another will be directing their growth to form the desired final shape.
The setup will likely need biomaterial scaffolds to store and provide nutrients to the critters. These could be supplied to so-called leading edges, where rapidly dividing bacteria expand the material. Adding specific chemical signals—which many microbes already use for navigation—could nudge them toward designated locations as they form the final structure.
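How might chemical signals steer growth? A cartoon version is a biased random walk: each walker is slightly more likely to step toward the attractant than away from it. The bias value and one-dimensional setup below are purely illustrative, not a model of any real engineered microbe.

```python
# Toy chemotaxis: walkers take biased random steps up a chemical gradient
# toward a target location, a cartoon of gradient-guided growth.
import random

def step_toward(pos, target, bias=0.7):
    """Move +1 or -1 along a line, biased toward the attractant source."""
    toward = 1 if target > pos else -1
    return pos + (toward if random.random() < bias else -toward)

pos = 0
for _ in range(200):
    pos = step_toward(pos, target=50)
print(pos)  # hovers near the target after enough steps
```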
Some biomaterial building blocks sound rather exotic. For inspiration, DARPA suggested fungal filaments, protein-based fibers from hagfish slime, and graphene aerogels that are already being explored for drug delivery, wound healing, and bone and nerve regeneration.
The type of microbe used would likely also impact designs. Those that require oxygen are harder to keep alive in space, even when they can survive radiation-contaminated areas, Antarctic permafrost, or extreme dehydration. Bacteria that don’t require oxygen are likely easier to keep alive. But additional hardware would be needed to tinker with pressure, temperature, and humidity so they can thrive in space.
If all goes well, designers may also embed electronics inside the finished structures to transmit radio frequencies or infrared signals for communication.
DARPA is currently calling for proposals and planning a workshop in April to debate the idea with experts. Eventually, they hope the work leads to objects that can be “biologically manufactured and assembled, but that may be infeasible to produce traditionally.”
This Week’s Awesome Tech Stories From Around the Web (Through March 8)
Eerily Realistic AI Voice Demo Sparks Amazement and Discomfort OnlineBenj Edwards | Ars Technica
“In late 2013, the Spike Jonze film ‘Her’ imagined a future where people would form emotional connections with AI voice assistants. Nearly 12 years later, that fictional premise has veered closer to reality with the release of a new conversational voice model from AI startup Sesame that has left many users both fascinated and unnerved.”
TechInside the Start of Project Stargate—and the Startup Powering ItAbram Brown | The Information
“Just the scale of economics around [Stargate’s] Abilene [datacenter project] is enormous, and Lochmiller made sure I understood that by comparing it to a familiar sight: Marc Benioff’s billion-dollar skyscraper in downtown San Francisco. ‘In the Bay Area, the Salesforce Tower defines the city skyline, right?’ he said. ‘You take three Salesforce Towers, and that’s the amount of work that’s going on here.'”
RoboticsThis Kung Fu Robot Video Makes It Look Like the Uprising Has Already StartedTrevor Mogg | Digital Trends
“Folks often joke about the so-called ‘robot uprising,’ but a new video of Unitree’s advanced G1 robot pulling some kung fu moves could well wipe the smile off their faces. Shared on Tuesday, the 15-second clip shows a baton-wielding human retreating from a robot that then kicks the baton clean out of his hand. Let’s just say that again: a baton-wielding human retreating from a robot.”
BiotechnologyDe-Extinction Scientists Say These Gene-Edited ‘Woolly Mice’ Are a Step Toward Woolly MammothsJessica Hamzelou | MIT Technology Review
“They’re small, fluffy, and kind of cute, but these mice represent a milestone in de-extinction efforts, according to their creators. The animals have undergone a series of genetic tweaks that give them features similar to those of woolly mammoths—and their creation may bring scientists a step closer to resurrecting the giant animals that roamed the tundra thousands of years ago.”
TechOpenAI Plots Charging $20,000 a Month For PhD-Level AgentsStephanie Palazzolo and Cory Weinberg | The Information
“OpenAI executives have told some investors it planned to sell low-end agents at a cost of $2,000 per month to ‘high-income knowledge workers’; mid-tier agents for software development costing possibly $10,000 a month; and high-end agents, acting as PhD-level research agents, which could cost $20,000 per month, according to a person who’s spoken with executives.”
SpaceFirefly Releases Stunning Footage of Blue Ghost Landing on the MoonPassant Rabie | Gizmodo
“The Texas-based company released a clip of Blue Ghost’s descent toward the moon followed by a smooth landing. The footage is a masterclass in lunar landings, capturing striking views of the lander emerging from a cloud of dust, its shadow stretching across the moon’s surface in a superhero-like stance.”
TechThis Scientist Left OpenAI Last Year. His Startup Is Already Worth $30 Billion.Berber Jin and Deepa Seetharaman | The Wall Street Journal
“Silicon Valley’s hottest investment isn’t a new app or hardware product. It’s one man. AI researcher Ilya Sutskever is the primary reason venture capitalists are putting some $2 billion into his secretive company Safe Superintelligence, according to people familiar with the matter. The new funding round values SSI at $30 billion, making it one of the most valuable AI startups in the world.”
RoboticsDriverless Race Car Sets a New Autonomous Speed RecordAndrew J. Hawkins | The Verge
“Look out: there’s a new fastest robot in the world. A Maserati MC20 Coupe with no one in the driver’s seat set a new land speed record for autonomous vehicles, reaching 197.7 mph (318 km/h) during an automotive event at the Kennedy Space Center last week.”
Artificial IntelligenceAI Reasoning Models Can Cheat to Win Chess GamesRhiannon Williams | MIT Technology Review
“Facing defeat in chess, the latest generation of AI reasoning models sometimes cheat without being instructed to do so. The finding suggests that the next wave of AI models could be more likely to seek out deceptive ways of doing whatever they’ve been asked to do. And worst of all? There’s no simple way to fix it.”
SpaceSpaceX Starship Spirals Out of Control in Second Straight Test Flight FailureSean O’Kane | TechCrunch
“The ship successfully separated and headed into space, while the booster came back to the company’s launchpad in Texas, where it was caught for a third time by the launch tower. But at around eight minutes and nine seconds into the flight, SpaceX’s broadcast graphics showed Starship lose multiple Raptor engines on the vehicle. On-board footage showed the ship started spiraling end over end over the ocean.”
Artificial IntelligencePeople Are Using Super Mario to Benchmark AI NowKyle Wiggers | TechCrunch
“Thought Pokémon was a tough benchmark for AI? One group of researchers argues that Super Mario Bros. is even tougher. Hao AI Lab, a research org at the University of California San Diego, on Friday threw AI into live Super Mario Bros. games. Anthropic’s Claude 3.7 performed the best, followed by Claude 3.5. Google’s Gemini 1.5 Pro and OpenAI’s GPT-4o struggled.”
Artificial IntelligenceAI Versus the Brain and the Race for General IntelligenceJohn Timmer | Ars Technica
“The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. …It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.”
Two Moon Landings in a Week—One Dead, One Alive—Aim to Kickstart the Lunar Economy
Until last year, the US hadn’t visited the moon in over a half century. But now? Twice in a week.
A growing number of companies are eyeing the moon as a source of commercial opportunities. Two private landings in under a week suggest our nearest celestial neighbor is open for business.
Rapidly falling launch costs have opened the door for smaller companies to take on more ambitious space missions, including efforts to land on the moon. NASA has also encouraged this activity. In 2018, the agency launched the Commercial Lunar Payload Services (CLPS) program, incentivizing firms to build robotic landers and rovers in support of its plans to return humans to the moon.
Last year, Intuitive Machines’ Odysseus became the first private spacecraft to touch down on the lunar surface. But the vehicle toppled over onto its side in the process, limiting its ability to communicate and deploy experiments.
Last Sunday, however, US startup Firefly Aerospace achieved a clean touchdown with its Blue Ghost lander in the Mare Crisium basin. Meanwhile, Intuitive Machines experienced déjà vu on its second landing near the moon’s south pole on Friday when its Athena lander ended up on its side again.
Firefly’s 6.6-foot-tall lander launched on a SpaceX Falcon 9 rocket on January 15 and entered lunar orbit on February 13. The solar-powered vehicle is carrying 10 NASA science experiments designed to gather data on the lunar surface. It will now conduct a 14-day mission before the lunar night’s frigid temperatures set in and disable the lander.
Things haven’t turned out as well for Intuitive Machines, whose spacecraft took a speedier path to the moon after launching on a Falcon 9 on February 26. The company experienced a repeat of the problems that took the shine off its first landing. Issues with its laser range finders meant the lander lost track of its trajectory above the moon and didn’t touch down properly.
After assessing the spacecraft, Intuitive Machines, which could play an important role in NASA’s plans to return humans to the moon later this decade, said the craft was on its side again and likely couldn’t revive its batteries. The company declared the mission over.
“With the direction of the sun, the orientation of the solar panels, and extreme cold temperatures in the crater, Intuitive Machines does not expect Athena to recharge,” the company wrote in a statement Friday. “The mission has concluded, and teams are continuing to assess the data collected throughout the mission.”
Athena was carrying the agency’s Polar Resources Ice Mining Experiment, or PRIME-1, which NASA hoped could help the agency assess how easy it will be for astronauts to harvest water ice.
The experiment featured a drill called TRIDENT to extract lunar soil from three feet beneath the surface and a mass spectrometer to analyze the sample for water. Previous observations suggest significant amounts of water ice are locked up in the soil at the moon’s south pole. This ice could prove a valuable resource for any future long-term outpost.
Athena was also carrying several robots made by Intuitive Machines, US startup Lunar Outpost, and the Massachusetts Institute of Technology, as well as equipment from Nokia designed to power the moon’s first 4G cellular network.
The hope for both missions is that renewed interest in lunar exploration could soon spur a flourishing off-world economy with plenty of opportunities for the private sector.
In the short term, national space agencies like NASA are likely to be the primary customers for companies like Firefly and Intuitive Machines, which both received funding from the CLPS program. NASA is eager to find cheaper ways to get cargo to the moon on a regular basis to support its more challenging missions.
But there’s hope that in the longer term there could be opportunities for companies to carve out a niche harvesting resources like water ice to create rocket fuel and oxygen or the rare isotope helium-3, which could be used to power fusion reactors. These could be particularly attractive to other private companies looking to push further into the solar system and use the moon as a staging post.
Whether this vision pans out remains to be seen. But with several more private moon landings scheduled later this year, the first shoots of a burgeoning lunar economy seem to be emerging.
Scientists Discover Thousands of New Microbial Species Thriving in the Mariana Trench
The project explores how life adapts to extreme environments—and hopes to inspire new drugs or even treatments to aid space travel.
A human can’t survive in the Mariana Trench without protection. At its deepest, the trench plunges 35,000 feet below the surface of the Pacific Ocean to a region ruled by crushing pressure and darkness.
Yet somehow life finds a way. The hadal snailfish, with delicate fins and translucent body, roams the dark and freezing waters. Giant shrimp-like creatures up to a foot long scavenge fallen debris, including wood and plastic, and transparent eels with fish-like heads hunt prey. A carpet of bacteria breaks down dead sea creatures and plankton to recycle nutrients.
We’ve only scratched the surface of what thrives in the deepest regions of the ocean. But a large project has now added over 6,000 new microbes to the deep-sea species tally.
In the Mariana Trench Environment and Ecology Research Project, or MEER for short, a team of scientists collected sediment from the hadal zone—the deepest part of the ocean—in the Mariana Trench and two other areas. The investigation revealed thousands of new species and two adaptations allowing the microbes to thrive under intense pressure.
Another team assembled the genomes of 11 deep-sea fish and found a mutated gene that could boost their ability to survive. Sequencing the genome of a giant shrimp-like creature suggested bacteria boosted its metabolism to adapt to high-pressure environments.
Studying these mysterious species could yield new medications to fight infections, inflammation, or even cancer. They show how creatures adapt to extreme environments, which could be useful for engineering pressure- or radiation-resistant proteins for space exploration.
“The deep sea, especially hadal zones, represents some of the most extreme and least explored environments on Earth,” wrote study author Shunping He and colleagues at the Chinese Academy of Sciences. The project hopes to “push the boundaries of our understanding of life” in this alien world, added Shanshan Liu and her team at BGI Research, in a separate study.
Meet MEER
Oceans cover roughly 70 percent of the Earth’s surface. Yet we know very little about their inhabitants, especially on the ocean floor.
Since the 1960s, multiple missions—some autonomous, others manned—have sought to explore the deepest part of the Pacific Ocean, the Mariana Trench. Over 30,000 feet deep, it could completely submerge Mount Everest.
The trench is an unforgiving environment. The pressure is over 1,000 times greater than at sea level, and at Challenger Deep—the deepest point yet reached—the temperature hovers just above freezing. The seabed there is shrouded in complete darkness.
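That 1,000-fold figure checks out with simple hydrostatics. A rough sketch, assuming an average seawater density of about 1,030 kg/m³ over Challenger Deep’s roughly 10,935-meter depth (density actually rises with depth, so this is approximate):

```python
# Hydrostatic pressure at the bottom of Challenger Deep: P = rho * g * h.
rho = 1030        # kg/m^3, assumed average seawater density
g = 9.81          # m/s^2
depth = 10_935    # m, approximate depth of Challenger Deep

pressure_pa = rho * g * depth
atmospheres = pressure_pa / 101_325
print(f"{atmospheres:.0f} atm")  # roughly 1,090 atmospheres
```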
Yet a manned descent 65 years ago found flatfish and large shrimp-like creatures thriving in the trench—the first signs that life could survive in such extreme environments. More recently, James Cameron, best known for directing films like Titanic, dived to nearly 36,000 feet and took footage that helped identify even more new species.
The deep sea, it seems, is a trove of alien species yet to be discovered. The MEER project is collecting specimens from the deepest trenches across the world to learn more.
MEER relies on a deep-sea submersible called Fendouzhe, which means striver or fighter in Chinese. Fendouzhe is self-propelled and can survive freezing temperatures and tremendous pressure. It holds three crew members and has two mechanical arms bristling with devices—cameras, sonars, drills.
The submersible reached the bottom of the Mariana Trench in 2020 followed by missions to the Yap Trench and Philippine Basin. Scientists on board gathered over 1,600 sediment samples from multiple hadal zones between 6 and 11 kilometers, or roughly 4 to 7 miles, under the sea.
On top of the punishing pressure and lack of light, the deep sea is low on nutrients. It’s truly “a unique combination that sets it apart from all other marine and terrestrial environments,” wrote the authors.
Undersea Genes
Sediments hold genetic material that survives intact when brought to the surface for analysis.
One study sketched a landscape of living creatures in the deep ocean using an approach called metagenomics. Here, scientists sequenced genetic material from all microbes within an environment, allowing them to reconstruct a bird’s-eye view of the ecology.
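One trick real metagenomics pipelines use to sort a jumble of anonymous DNA fragments is composition: fragments from the same organism tend to share short-sequence (k-mer) statistics. Here is a toy sketch of that idea with made-up sequences; it is a flavor of the technique, not the MEER team’s pipeline.

```python
# Toy composition-based binning: compare DNA fragments by their k-mer
# frequency profiles. Real tools exploit the same signal at far larger scale.
from collections import Counter

def kmer_profile(seq, k=4):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def profile_distance(p, q):
    """L1 distance between profiles; small suggests the same organism."""
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q))

frag_a = "ATGCGATACGCTTAGGCTAATGCGATACG"   # made-up fragments
frag_b = "ATGCGATACGCTAAGGCTAATGCGATTCG"
frag_c = "GGGCCCGGGCCCGGGCCCGGGCCCGGGCCC"
print(profile_distance(kmer_profile(frag_a), kmer_profile(frag_b)))  # small
print(profile_distance(kmer_profile(frag_a), kmer_profile(frag_c)))  # large
```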
In this case, the collection is “10-fold larger than all previously reported,” wrote the team. Over 89 percent of the genomes are entirely new, suggesting most belong to previously unknown microbial species living in the deep ocean.
Samples collected from different trenches had varying genetic profiles, suggesting the microbes adapted to their particular deep-ocean environments. But they share similar genetic changes. Several genes bump up the microbes’ ability to digest toluene as food. The chemical is best known as an ingredient in paints, plastics, medications, and cosmetics.
Other genes wipe out metabolic waste products called reactive oxygen species. In large amounts, these damage DNA and lead to aging and disease. The creatures also have a beefed-up DNA repair system. This could help them adapt to intense pressure and frigid temperatures, both of which increase the chances of these damaging chemicals wreaking havoc.
Deep-Sea Superpowers
Meanwhile, other studies peered into the genetic makeup of fish and shrimp-like creatures in the hadal zone.
In one, scientists collected samples using the Fendouzhe submersible and an autonomous rover, covering locations from the Mariana Trench to the Indian Ocean. The team zeroed in on roughly 230 genes in deep-sea fish that boost survival under pressure.
Most of these help repair DNA damage. Others increase muscle function. Surprisingly, all 11 species of deep-sea fish studied shared a single genetic mutation. Engineering the same mutation in lab-grown cells helped them more efficiently turn DNA instructions into RNA—the first step cells take when making the proteins that coordinate our bodily functions.
This is “likely to be advantageous in the deep-sea environment,” wrote the team.
Top predators in the deep rely on a steady supply of prey—mainly, a shrimp-like species called amphipods. Whole genome sequencing of these creatures showed the shrimp thrive thanks to various good bacteria that help them defend against other bacterial species.
There are other intriguing findings, too. For example, while most deep-sea fish have lost genes associated with vision, one species showed gene activity related to color vision. These genes are similar to ours and could potentially let the fish see color even in total darkness.
Scientists are still digging through the MEER database. The coalition hopes to bolster our understanding of the most resilient lifeforms on Earth—and potentially inspire journeys into other extreme environments, like outer space.
Quantum Computing Startup Says It’s Already Making Millions of Light-Powered Chips
PsiQuantum claims to have solved scalability issues that have long plagued photonic approaches.
American quantum computing startup PsiQuantum announced last week that it has cracked a significant puzzle on the road to making the technology useful: manufacturing quantum chips in large quantities.
PsiQuantum burst out of stealth mode in 2021 with a blockbuster funding announcement. It followed up with two more last year.
The company uses so-called “photonic” quantum computing, which has long been dismissed as impractical.
The approach, which encodes data in individual particles of light, offers some compelling advantages—low noise, high-speed operation, and natural compatibility with existing fiber-optic networks. However, it has been held back by extreme hardware demands: photons fly at blinding speed, get lost, and are hard to create and detect.
PsiQuantum now claims to have addressed many of these difficulties. Last week, in a peer-reviewed paper published in Nature, the company unveiled photonic quantum computing hardware that it says can be manufactured in large quantities and that solves the problem of scaling up the system.
What’s in a Quantum Computer?
Like any computer, quantum computers encode information in physical systems. Whereas digital computers encode bits (0s and 1s) in transistors, quantum computers use quantum bits (qubits), which can be encoded in many potential quantum systems.
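Whatever the physical system, the math is the same: a qubit is a normalized pair of complex amplitudes, and gates are matrices acting on it. A minimal numpy sketch, not tied to any vendor’s hardware:

```python
# A single qubit as a 2-vector of complex amplitudes, plus one gate.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)        # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0                 # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2       # Born rule: measurement probabilities
print(probs)                   # [0.5 0.5]
```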
Superconducting quantum computers require an elaborate cooling rig to keep them at temperatures close to absolute zero. Image Credit: Rigetti
The darlings of the quantum computing world have traditionally been superconducting circuits running at temperatures near absolute zero. These have been championed by companies such as Google, IBM, and Rigetti.
These systems have attracted headlines claiming “quantum supremacy” (where quantum computers beat traditional computers at some task) or the ushering in of “quantum utility” (that is, actually useful quantum computers).
In a close second in the headline-grabbing game, IonQ and Honeywell are pursuing trapped-ion quantum computing. In this approach, charged atoms are captured in special electromagnetic traps, with qubits encoded in the atoms’ energy states.
Other commercial contenders include neutral-atom qubits, silicon-based qubits, intentional defects in diamonds, and non-traditional photonic encodings.
All of these are available now. Some are for sale with enormous price tags, and some are accessible through the cloud. But fair warning: They are more for experimentation than computation today.
Faults and How to Tolerate Them
The individual bits in your digital computers are extraordinarily reliable. They might experience a fault (a 0 inadvertently flips to a 1, for example) once in every trillion operations.
PsiQuantum’s new platform has impressive-sounding features such as low-loss silicon nitride waveguides, high-efficiency photon-number-resolving detectors, and near-lossless interconnects.
The company reports a 0.02 percent error rate for single-qubit operations and 0.8 percent for two-qubit creation. These may seem like quite small numbers, but they are much bigger than the effectively zero error rate of the chip in your smartphone.
However, these numbers rival the best qubits today and are surprisingly encouraging.
One of the most critical breakthroughs in the PsiQuantum system is the integration of fusion-based quantum computing. This is a model that allows for errors to be corrected more easily than in traditional approaches.
Quantum computer developers want to achieve what is called “fault tolerance.” This means that, if the basic error rate is below a certain threshold, the errors can be suppressed indefinitely.
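A common rule of thumb, borrowed from surface-code analyses rather than PsiQuantum’s fusion-based scheme, captures why the threshold matters: below it, each increase in the error-correcting code’s distance multiplies the suppression of logical errors.

```python
# Heuristic scaling of the logical error rate below threshold:
# roughly (p / p_th) ** ((d + 1) / 2) for code distance d.
# Illustrative surface-code-style rule of thumb, not PsiQuantum's numbers.
def logical_error_rate(p, p_th=0.01, d=11):
    return (p / p_th) ** ((d + 1) / 2)

for d in (3, 11, 25):
    print(d, logical_error_rate(p=0.002, d=d))
# Once p sits below threshold, logical errors vanish exponentially with d.
```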
Claims of “below threshold” error rates should be met with skepticism, as they are generally measured on a few qubits. A practical quantum computer would be a very different environment, where each qubit would have to function alongside a million (or a billion, or a trillion) others.
This is the fundamental challenge of scalability. And while most quantum computing companies are tackling the problem from the ground up—building individual qubits and sticking them together—PsiQuantum is taking the top-down approach.
Scale-First Thinking
PsiQuantum developed its system in partnership with semiconductor manufacturer GlobalFoundries. All the key components—photon sources and detectors, logic gates, and error correction—are integrated on a single silicon-based chip.
PsiQuantum says GlobalFoundries has already made millions of the chips.
A diagram showing the different components of PsiQuantum’s photonic chip. Image Credit: PsiQuantum
By making use of techniques already used to fabricate semiconductors, PsiQuantum claims to have solved the scalability issue that has long plagued photonic approaches.
PsiQuantum is fabricating its chips in a commercial semiconductor foundry, which the company says makes scaling to millions of qubits relatively straightforward.
If PsiQuantum’s technology delivers on its promise, it could mark the beginning of quantum computing’s first truly scalable era.
A fault-tolerant photonic quantum computer would have major advantages, including lower energy requirements.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
