Record-Breaking Qubits Are Stable for 15 Times Longer Than Google and IBM’s Designs
The qubits are similar enough to those used by the likes of Google and IBM that they could slot into existing processors in the future.
One of the biggest challenges for quantum computers is the incredibly short time that qubits can retain information. But a new qubit from Princeton University lasts 15 times longer than industry-standard versions, a major step toward large-scale, fault-tolerant quantum systems.
A major bottleneck for quantum computing is decoherence—the rate at which qubits lose stored quantum information to the environment. The faster this happens, the less time the computer has to perform operations and the more errors are introduced to the calculations.
While companies and researchers are developing error-correction schemes to mitigate this problem, qubits with greater stability could be a more robust solution. Trapped-ion and neutral-atom qubits can have coherence times on the order of seconds, but the superconducting qubits used by companies like Google and IBM remain below the 100-microsecond threshold.
These so-called “transmon” qubits have other advantages such as faster operation speeds, but their short shelf life remains a major disadvantage. Now a team from Princeton has designed novel transmon qubits with coherence times of up to 1.6 milliseconds—15 times longer than those used in industry and three times longer than the best lab experiment.
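To get a feel for what that difference means in practice, here is a rough back-of-the-envelope sketch; the 40-nanosecond gate time is an assumed round number for superconducting qubits, not a figure from the study.

```python
# Back-of-the-envelope: how many sequential gate operations fit inside a
# qubit's coherence window. GATE_TIME_S is an assumed round number for
# superconducting qubits, not a figure reported in the study.
GATE_TIME_S = 40e-9  # 40 nanoseconds per gate (assumption)

def max_sequential_gates(coherence_time_s, gate_time_s=GATE_TIME_S):
    """Rough upper bound on back-to-back gates before coherence runs out."""
    return round(coherence_time_s / gate_time_s)

industry = max_sequential_gates(100e-6)   # ~100-microsecond transmon
princeton = max_sequential_gates(1.6e-3)  # new 1.6-millisecond qubit
print(industry, princeton)  # 2500 40000 -- 16x more room for operations
```

Under these assumptions, the longer-lived qubit buys roughly an order of magnitude more operations before errors take over, which is the practical payoff the researchers describe.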
“This advance brings quantum computing out of the realm of merely possible and into the realm of practical,” Princeton’s Andrew Houck, who co-led the research, said in a press release. “Now we can begin to make progress much more quickly.”
The team’s new approach, detailed in a paper in Nature, tackles a long-standing problem in the design of transmon qubits. Tiny surface defects in the metal used to make them, typically aluminum, can absorb energy as it travels through the circuit, resulting in errors in the underlying computations.
The new qubit instead uses the metal tantalum, which has far fewer of these defects. The researchers had already experimented with this material as far back as 2021, but earlier versions were built on top of a layer of sapphire. The researchers realized the sapphire was also leading to significant energy loss and so replaced it with a layer of silicon, which is commercially available at extremely high purity.
Creating a clean enough interface between the two materials to maintain superconductivity is challenging, but the team solved the problem with a new fabrication process. And because silicon is the computing industry’s material of choice, the new qubits should be easier to mass-produce than earlier versions.
To prove out the new process, the researchers built a fully functioning quantum chip with six of the new qubits. Crucially, the new design is similar enough to the qubits used by companies like Google and IBM that it could easily slot into existing processors to boost performance, the researchers say.
This could chip away at the main barrier preventing existing quantum computers from solving larger problems—the fact that short coherence times mean qubits are overwhelmed by errors before they can do any useful calculations.
The process of getting the design from the lab bench to the chip foundry is likely to be long and complicated though, so it’s unclear if companies will switch to this new qubit architecture any time soon. Still, the research has made dramatic progress on one of the biggest challenges holding back superconducting quantum computers.
The post Record-Breaking Qubits Are Stable for 15 Times Longer Than Google and IBM’s Designs appeared first on SingularityHub.
Scientists Map the Brain’s Construction From Stem Cells to Early Adolescence
This herculean effort could help scientists unravel the causes of autism, schizophrenia, and even a deadly form of cancer.
Like the seeds of a forest, a few cells in embryos eventually sprout into an ecosystem of brain cells. Neurons get the most recognition for their computing power. But a host of other cells provides nutrition, clears the brain of waste, and wards off dangers, such as toxic protein buildup or inflammation.
This rich diversity underlies our ability to process information, transforming perception of the world and our internal dialogues into thoughts, emotions, memories, and decisions. Mimicking the brain could potentially lead to energy-efficient computers or AI systems. But we’re still decoding how the brain works.
One way to understand a machine is to first examine its parts. The landmark project BRAIN Initiative Cell Atlas Network (BICAN), launched in 2022, has parsed the brains of multiple species and compiled a census of adult brain cells with unprecedented resolution.
But brains are not computers. Their components aren’t engineered and glued on. They develop and interact cohesively over time.
Building on previous work, the BICAN consortium has now released results that peek inside the developing brain. By tracking genes and their expression in the cells of developing human and mouse brains, the researchers have built a dynamic picture of how the brain constructs itself.
This herculean effort could help scientists unravel the causes of neurodevelopmental disorders. In one study, led by Arnold Kriegstein at the University of California, San Francisco, scientists found brain stem cells that are potentially co-opted to form a deadly brain cancer in adulthood. Other studies shed light on imbalances between excitatory and inhibitory neurons—these ramp up or tone down brain activity, respectively—which could contribute to autism and schizophrenia.
“Many brain diseases begin during different stages of development, but until now we haven’t had a comprehensive roadmap for simply understanding healthy brain development,” said Kriegstein in a press release. “Our map highlights the genetic programs behind the growth of the human brain that go awry during specific forms of brain dysfunction.”
Shifting Landscape
Over a century ago, the first neuroscientists used brain cell shapes to categorize their identities. BICAN collaborators have a much larger arsenal of tools to map the brain’s cells.
A key technology called single-cell spatial transcriptomics detects which genes are turned on in cells at any given time. The results are then combined with the cells’ physical location in the brain. The result is a gene expression “heat map” that provides clues about a cell’s lineage and final identity. Like genealogical tracking, the technology traces the heritage of different types of brain cells and when they emerge while at the same time providing their physical address.
Like other organs, the brain grows from stem cells.
In early developmental stages, stem cells are nudged into different fates: Some turn into neurons, some turn into other cell types. So far, no single technology can “film” their journey. But BICAN’s new releases measuring gene expression through development offer a glimpse.
In one tour-de-force study, Kriegstein and team used a technique that maps gene variants to single cells during multiple stages of development. Many variants were previously linked to neurodevelopmental disorders, including autism, but their biological contribution remained mysterious.
The team gathered 38 donated human cortex samples—the outermost part of the brain—spanning all three trimesters of pregnancy, the period after birth, and early adolescence.
They then grouped individual cells using gene expression data across samples. They found roughly 30 different types of cells that emerge during brain development, including excitatory and inhibitory neurons, supporting cells such as glia, and immune cells called microglia.
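As a loose sketch of the idea behind this kind of grouping (not the study’s actual pipeline), cells can be assigned a type by comparing their expression of marker genes against reference profiles. The profiles and numbers below are invented for illustration, though SLC17A7, GAD1, and AQP4 are genuine markers for excitatory neurons, inhibitory neurons, and astrocytes.

```python
# Toy marker-based cell typing: each "cell" is a vector of expression
# levels for a few marker genes, assigned to whichever reference profile
# it sits closest to. Profiles and values are invented for illustration.
REFERENCE_PROFILES = {
    "excitatory_neuron": {"SLC17A7": 9.0, "GAD1": 0.5, "AQP4": 0.2},
    "inhibitory_neuron": {"SLC17A7": 0.4, "GAD1": 8.5, "AQP4": 0.3},
    "astrocyte":         {"SLC17A7": 0.2, "GAD1": 0.1, "AQP4": 9.2},
}

def distance(cell, profile):
    """Euclidean distance between a cell's expression and a reference profile."""
    return sum((cell[g] - profile[g]) ** 2 for g in profile) ** 0.5

def assign_type(cell):
    """Label the cell with the closest reference profile."""
    return min(REFERENCE_PROFILES, key=lambda t: distance(cell, REFERENCE_PROFILES[t]))

cell = {"SLC17A7": 0.3, "GAD1": 7.9, "AQP4": 0.4}
print(assign_type(cell))  # inhibitory_neuron: the GAD1-high profile is nearest
```

Real pipelines cluster tens of thousands of genes across millions of cells, but the principle is the same: similar expression profiles get grouped into the same cell type.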
Some were linked to a single source: a curious cell type, dubbed tripotential intermediate progenitor cells, that spawned inhibitory neurons, star-shaped glia, and the brain cells that wrap around neurons as protective sheaths of electrical insulation. The latter break down in neurological diseases like multiple sclerosis, resulting in fatigue, pain, and memory problems.
Many genes related to autism were turned on in immature neurons as they began their brain-wiring journey. Gene mutations, environmental influences, and other disruptions could interfere with their growth.
“These programs of gene expression became active when young neurons were still migrating throughout the growing brain and figuring out how to build connections with other neurons,” said study author Li Wang. “If something goes wrong at this stage, those maturing neurons might become confused about where to go or what to do.”
The mother cells also have a dark side. Scientists have long thought that glioblastoma, a fatal brain cancer, stems from multiple types of neural precursor cells. Because mother cells, marked by their distinctive gene expression profiles, develop into all three types of cells involved in the cancer, they’re essentially cancer stem cells that could be targeted for future treatments.
“By understanding the context in which one stem cell produces three cell types in the developing brain, we could be able to interrupt that growth when it reappears during cancer,” said Wang.
A Wealth of Data
Other BICAN studies also zeroed in on inhibitory neurons.
The authors of one hunted down a group of immature cells that shifted from making excitatory neurons to inhibitory ones during the middle of gestation, helping balance the two forces. In another study, in mice, researchers followed inhibitory neurons as they diversified and spread across the developing brain. More subtypes with unique gene expression profiles appeared in the cortex than in deeper, more evolutionarily ancient regions.
Other studies investigated gene expression in neurodevelopment and how changes can lead to inflammation. Environmental influences such as social interactions played a role, especially in forming brain circuits tailored to gauging others’ behaviors. In developing mice, several genes related to social demands abruptly changed their activity during developmental milestones, including puberty.
Some cell types were shape-shifters. In mice, an immune challenge briefly changed microglia—the brain’s immune cells—back into a developmental-like state, suggesting these cells have the ability to turn back the clock.
The collection of studies only skims the surface of what BICAN’s database offers. Although the project mainly focused on the cortex, ongoing initiatives are detailing a cell atlas of the entire developing brain across dozens of timepoints and multiple species.
“Taken together, this collection from the BICAN turns the static portrait of cell types into a dynamic story of the developing brain,” wrote Emily Sylwestrak at the University of Oregon, who was not involved in the studies.
This Week’s Awesome Tech Stories From Around the Web (Through November 8)
The Next Big Quantum Computer Has Arrived
Isabelle Bousquette | The Wall Street Journal ($)
“Helios contains 98 physical qubits, and from those can deliver 48 logical error-corrected qubits. This 2:1 ratio is unique and impressive, said Prineha Narang, professor of physical sciences and electrical and computer engineering at UCLA, and partner at venture-capital firm DCVC. Other companies require anything from dozens to hundreds of physical qubits to create one logical qubit.”
Artificial Intelligence
In a First, AI Models Analyze Language as Well as a Human Expert
Steve Nadis | Quanta
“While most of the LLMs failed to parse linguistic rules in the way that humans are able to, one had impressive abilities that greatly exceeded expectations. It was able to analyze language in much the same way a graduate student in linguistics would—diagramming sentences, resolving multiple ambiguous meanings, and making use of complicated linguistic features such as recursion.”
Computing
Wireless, Laser-Shooting Brain Implant Fits on a Grain of Salt
Malcolm Azania | New Atlas
“Along with their international partners, researchers at Cornell University have developed a micro-neural implant so tiny it could dance on the head of a pin, and so astonishingly well-engineered that after implantation in a mouse, it can wirelessly transmit data about brain function for more than a year under its own power.”
Computing
Quantum Computing Jolted by DARPA Decision on Most Viable Companies
Adam Bluestein | Fast Company
“For a technology that could produce world-changing feats but remains far from maturity—and into which billions of investment dollars have been flowing in recent months—the QBI validation is profound. The QBI’s first judgments, announced yesterday, reconfigure the competitive landscape, bolstering some powerful incumbents and boosting lesser-known players and outlier approaches. They also delivered a formidable gut punch to a couple of industry pioneers.”
Future
Our First Terraforming Goal Should Be the Moon, Not Mars
Ethan Siegel | Big Think
“The only way to prepare a world for human inhabitants is to make the environment more Earth-like: terraforming. While most of humanity’s space dreams have focused on Mars, a better candidate may be even closer: the moon. Its proximity to Earth, composition, and many other factors make it very appealing. Mars should be a dream, but not our only one.”
Biotechnology
This Genetically Engineered Fungus Could Help Fix Your Mosquito Problem
Jason P. Dinh | The New York Times ($)
“Researchers reported last week in the journal Nature Microbiology that Metarhizium—a fungus already used to control pests—can be genetically engineered to produce so much of a sweet-smelling substance that it is virtually irresistible to mosquitoes. When they laced traps with those fungi, 90 percent to 100 percent of mosquitoes were killed in lab experiments.”
Science
10,000 Generations of Hominins Used the Same Stone Tools to Weather a Changing World
Kiona N. Smith | Ars Technica
“The oldest tools at the site date back to 2.75 million years ago. According to a recent study, the finds suggest that for hundreds of millennia, ancient hominins relied on the same stone tool technology as an anchor while the world changed around them.”
Future
The First New Subsea Habitat in 40 Years Is About to Launch
Mark Harris | MIT Technology Review ($)
“Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades. Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers.”
Robotics
Waymo’s Robotaxis Are Coming to Three New Cities
Andrew J. Hawkins | The Verge
“Waymo said it plans on launching commercial robotaxi services in three new cities: San Diego, Las Vegas, and Detroit. The announcement comes after the company said it would begin rapidly scaling to bring its fully driverless technology to more people on a faster timeline.”
Artificial Intelligence
AI Capabilities May Be Overhyped on Bogus Benchmarks, Study Finds
AJ Dellinger | Gizmodo
“You know all of those reports about artificial intelligence models successfully passing the bar or achieving PhD-level intelligence? Looks like we should start taking those degrees back. A new study from researchers at the Oxford Internet Institute suggests that most of the popular benchmarking tools that are used to test AI performance are often unreliable and misleading.”
Computing
Unesco Adopts Global Standards on ‘Wild West’ Field of Neurotechnology
Aisha Down | The Guardian
“The standards define a new category of data, ‘neural data,’ and suggest guidelines governing its protection. A list of more than 100 recommendations ranges from rights-based concerns to addressing scenarios that are—at least for now—science fiction, such as companies using neurotechnology to subliminally market to people during their dreams.”
New Images Reveal the Milky Way’s Stunning Galactic Plane in More Detail Than Ever Before
The new radio portrait of the Milky Way is the most sensitive, widest-area map at these frequencies to date.
The Milky Way is a rich and complex environment. We see it as a luminous line stretching across the night sky, composed of innumerable stars.
But that’s just the visible light. Observing the sky in other ways, such as through radio waves, provides a much more nuanced scene—full of charged particles and magnetic fields.
For decades, astronomers have used radio telescopes to explore our galaxy. By studying the properties of the objects residing in the Milky Way, we can better understand its evolution and composition.
Our study, published recently in Publications of the Astronomical Society of Australia, provides new insights into the structure of our galaxy’s galactic plane.
Observing the Entire Sky
To reveal the radio sky, we used the Murchison Widefield Array, a radio telescope in the Australian outback, composed of 4,096 antennas spread over several square kilometers. The array observes wide regions of the sky at a time, enabling it to rapidly map the galaxy.
Between 2013 and 2015, the array was used to observe the entire southern hemisphere sky for the GaLactic and Extragalactic All-sky MWA (or GLEAM) survey. This survey covered a broad range of radio wave frequencies.
The wide frequency coverage of GLEAM gave astronomers the first “radio color” map of the sky, including the galaxy itself. It revealed the diffuse glow of the galactic disk, as well as thousands of distant galaxies and regions where stars are born and die.
With the upgrade of the array in 2018, we observed the sky with higher resolution and sensitivity, resulting in the GLEAM-eXtended survey (GLEAM-X).
The big difference between the two surveys is that GLEAM could detect the big picture but not the detail, while GLEAM-X saw the detail but not the big picture.
A Beautiful Mosaic
To capture both, our team used a new imaging technique called image domain gridding. We combined thousands of GLEAM and GLEAM-X observations to form one huge mosaic of the galaxy.
Because the two surveys observed the sky at different times, it was important to correct for ionospheric distortions—shifts in radio waves caused by irregularities in Earth’s upper atmosphere. Otherwise, these distortions would shift the positions of sources between observations.
The algorithm applies these corrections, aligning and stacking data from different nights smoothly. This took more than 1 million processing hours on supercomputers at the Pawsey Supercomputing Research Centre in Western Australia.
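A toy sketch of the shift-then-stack idea, with invented integer-pixel offsets (the survey’s real corrections come from modeling the ionosphere, and the combining step uses image domain gridding):

```python
# Toy shift-then-stack: each night's 1D "image" of the same point source
# is displaced by a known ionospheric offset. Undoing the shift before
# averaging keeps the source sharp; naive averaging smears it out.
def align(pixels, offset):
    """Undo a known integer-pixel ionospheric shift (circular, for brevity)."""
    n = len(pixels)
    return [pixels[(i + offset) % n] for i in range(n)]

nights = [
    (+1, [0, 0, 1, 5, 1, 0, 0, 0, 0]),  # source shifted right one pixel
    (-1, [1, 5, 1, 0, 0, 0, 0, 0, 0]),  # source shifted left one pixel
    ( 0, [0, 1, 5, 1, 0, 0, 0, 0, 0]),  # no shift
]

aligned = [align(pixels, offset) for offset, pixels in nights]
stacked = [sum(col) / len(aligned) for col in zip(*aligned)]
naive = [sum(col) / len(nights) for col in zip(*(p for _, p in nights))]

print(stacked.index(max(stacked)), max(stacked))  # 2 5.0: peak restored at pixel 2
```

Here the corrected stack recovers the full peak brightness at the true position, while the uncorrected average smears the source, which is exactly why the alignment step matters before combining thousands of observations.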
The result is a new mosaic covering 95 percent of the Milky Way visible from the southern hemisphere, spanning radio frequencies from 72 to 231 megahertz. The big advantage of the broad frequency range is the ability to see different sources with their “radio color” depending on whether the radio waves are produced by cosmic magnetic fields or by hot gas.
The emission coming from the explosion of dead stars appears in orange. The lower the frequency, the brighter it is. Meanwhile, the regions where stars are born shine in blue.
These colors allow astronomers to pick out the different physical components of the galaxy at a glance.
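The underlying measurement is a spectral index: how a source’s flux changes between two frequencies. A minimal sketch, with made-up flux values rather than survey data:

```python
import math

# Two-point spectral index alpha, defined by S proportional to nu**alpha.
# Flux values below are invented illustrations, not survey measurements;
# the frequencies are the survey's 72-231 MHz endpoints.
def spectral_index(s_low, s_high, nu_low=72e6, nu_high=231e6):
    """Spectral index from fluxes at two frequencies."""
    return math.log(s_high / s_low) / math.log(nu_high / nu_low)

# A supernova remnant: synchrotron emission, brighter at low frequency.
alpha_snr = spectral_index(s_low=12.0, s_high=5.3)
# A star-forming (HII) region: thermal emission, flat or rising spectrum.
alpha_hii = spectral_index(s_low=2.0, s_high=2.1)

print(round(alpha_snr, 2), round(alpha_hii, 2))  # negative vs. slightly positive
```

A steep negative index flags synchrotron emission from stellar explosions (rendered orange), while a flat or rising index flags thermal emission from star-forming regions (rendered blue).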
The new radio portrait of the Milky Way is the most sensitive, widest-area map at these low frequencies to date. It will enable a plethora of galactic science, from discovering and studying faint, old remnants of stellar explosions to mapping energetic cosmic rays and the dust and grains that dominate the medium between the stars.
The power of this image will not be surpassed until the new SKA-Low telescope is complete and operational. It will eventually be thousands of times more sensitive, with higher resolution, than its predecessor, the Murchison Widefield Array.
This upgrade is still a few years away. For now, this new image stands as an inspiring preview of the wonders the full SKA-Low will one day reveal.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Scientists Unveil a ‘Living Vaccine’ That Kills Bad Bacteria in Food to Make It Last Longer
The technology unleashes self-replicating viruses called phages on food bacteria to continuously hunt down and destroy bad bugs.
It’s a home cook’s nightmare: You open the fridge ready to make dinner and realize the meat has spoiled. You have to throw it out, kicking yourself for not cooking it sooner.
According to the USDA, a staggering one-third of food is tossed out because of spoilage, costing over $160 billion every year. Much of this food is protein and fresh produce that could feed families in need. The land, water, labor, energy, and transportation that brought the food to people’s homes also go to waste.
Canada’s McMaster University has a solution. A team of scientists wrapped virus-packed microneedles inside a paper towel-like square sitting at the bottom of a Ziploc container. It’s an unusual duo. But the viruses, called phages, specifically target bacteria related to food spoilage. Some are already approved for consumption.
Using microneedles to inject the phages into foods, the team decontaminated chicken, shrimp, peppers, and cheese. All it took was placing the square on the bottom of a storage dish or on the surface of the food. Mixing and matching the phages destroyed multiple dangerous bacterial strains. In some cases, it made spoiled meat safe to eat again based on current regulations.
It’s just a prototype, but a similar design could one day be used in food packaging.
“[The platform] can revolutionize current food contamination practices, preventing foodborne illness and waste through the active decontamination of food products,” wrote the team.
A Curious Food Chain
It’s easy to take food safety for granted. The occasional bad bite of leftover pizza might give you some discomfort, but you bounce back. Still, foodborne pathogens result in hundreds of millions of illnesses and tens of thousands of deaths every year, according to the World Health Organization. Bacteria like E. coli and Salmonella are the main culprits.
Existing solutions rely on antibiotics. But they come with baggage. Flooding agriculture with these drugs contributes to antibacterial resistance, impacting the farming industry and healthcare.
Other preservative additives—like those in off-the-shelf foods—incorporate chemicals, essential oils, and other molecules. Although these are wallet-friendly and safe to eat, they often change core aspects of food like texture and flavor (canned salsa never tastes as great as the fresh stuff).
Maverick food scientists have been exploring an alternative way to combat food spoilage—phages. Adding a bath of viruses to a bacteria-infected stew is hardly an obvious food safety strategy, but it stems from research into antibacterial resistance.
Phages are viruses that only infect bacteria. They look a bit like spiders. Their heads house genetic material, while their legs grab onto bacteria. Once attached, phages inject their DNA into the bacteria and force their hosts to reproduce more viruses—before destroying them.
Because phages don’t infect human cells, they can be antibacterial treatments and even gene therapies. And they’re already part of our food production system. FDA-approved ListShield, for example, reduces Listeria in produce, smoked salmon, and frozen foods. PhageGuard S, approved in the US and EU, fights off Salmonella. Other phage-based products include sprays, edible films, and hydrogel-based packaging used to decontaminate food surfaces.
Even better, phages self-renew. They are “self-dosing antimicrobial additives,” wrote the team.
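A back-of-the-envelope sketch of that self-amplification; the burst size of 50 is a textbook-scale assumption, not a figure from the paper:

```python
# Phage self-amplification: each infection cycle, one phage bursts a
# bacterium to release many new phages. The burst size of 50 is an
# illustrative textbook-scale assumption, not a figure from the paper.
def phages_after(cycles, burst_size=50, start=1):
    """Phage count after `cycles` infection rounds, assuming bacteria remain."""
    return start * burst_size ** cycles

print(phages_after(3))  # 125000: a single phage in just three cycles
```

This geometric growth is why a small initial dose can keep pace with a growing bacterial population, unlike a fixed chemical preservative that only depletes over time.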
But size has been a limiting factor: They’re too big. Phages struggle to tunnel into larger pieces of food—say, a plump chicken breast. Although they might swiftly wipe out bacteria on the surface, pathogens can still silently brew inside a cutlet.
Prickly Patch
The new device was inspired by medical microneedle patches. These look like Band-Aids, but loaded inside are medications that can seep deeper into tissues—or in this case, food.
To construct food-safe microneedles, the team tested a range of edible materials and homed in on four ingredients. These included gelatin, the squishy protein-rich component at the heart of Jell-O, and other biocompatible materials readily used in medical devices. The ingredients were poured into a mold, baked into separate microneedle patches, and checked for integrity.
Each ingredient had strengths and weakness. But after testing the patches on various foods—mushrooms, fish, cooked chicken, and cheese—one component stood out for its reusability and ability to penetrate deeper. Called PMMA, the coating is already used in food-safe plexiglass and reusable packaging.
The team next loaded multiple phages that target different food-spoiling bacteria into PMMA scaffolds and challenged the patches to neutralize bacterial “lawns.” True to their name, these are dense carpets of bacteria grown across a surface. You’ve probably seen them at the bottom of a food container left far too long in the fridge.
The phage patches completely erased both E. coli and Salmonella in steaks with high levels of the bacteria. Another test pitted the patches against existing methods in leftover chicken that had lingered 18 hours in unsafe food conditions. Compared to directly injecting phages or applying phage sprays, the microneedle patch was the only strategy that kept the chicken safe to eat according to current regulations.
Phage Buffet
The system was especially resilient to temperature changes. When applied to chicken or raw beef, the phage patches were active for at least a month at regular refrigerator temperatures, “ensuring compatibility with food products that require prolonged storage,” wrote the team.
The system can be tailored to tackle different bacteria, especially by mixing up which phages are included. Using a variety could potentially target strains of bacteria throughout the food production line, making the final product safer.
The team is planning to integrate the platform into food packaging materials, which would ensure the microneedles are in constant contact with the food and deliver a large dose of phages that self-replicate to continue warding off bacteria. Other ideas include sprinkling phage-loaded materials directly onto food during manufacturing and production.
The idea of eating viruses might seem a little weird. But phages naturally occur in almost all foods, including meat, dairy, and vegetables. You’ve likely already eaten these bacteria-fighting warriors at some point as they’re silently hunting down disease-causing bacteria.
The vaccine could prevent foodborne illness and reduce waste. It’s easy to adapt to different strains of bacteria, food-safe, and cost effective, wrote the team, making it “well suited for applications within the food industry.”
A Tiny 3D Printer Could Mend Vocal Cords in Real Time During Surgery
A bioprinter with a printhead the size of a sesame seed could deliver hydrogels to surgical sites.
Elephant trunks and garden hoses hardly seem like inspirations for a miniature 3D bioprinter.
Yet they’ve led scientists at McGill University to engineer the smallest reported bioprinting head to date. Described in the journal Devices, the printer has a flexible tip just 2.7 millimeters in diameter—roughly the length of a sesame seed.
Bioprinters can deposit a wide range of healing materials directly at the site of injury. Some bioinks combat infections in lab studies; others deliver chemotherapy to cancerous sites, which could prevent tumors from recurring. On the operating table, biocompatible hydrogels injected during surgery help heal wounds.
The devices are promising but most are rather bulky. They struggle to reach all the body’s nooks and crannies—including, for example, the vocal cords.
It’s easy to take our ability to speak for granted and only appreciate its loss after catching a bad cold. But up to nine percent of people develop vocal-cord disorders in their lifetimes. Smoking, acid reflux, and chronic coughing tear at the delicate folds of tissue. Abnormal growths and cancers also contribute. These are usually removed with surgery that comes with a significant risk of scarring.
Hydrogels can help with healing. But because throat and vocal cord tissue is so intricate, current treatments inject hydrogels through the skin rather than precisely into damaged regions.
But the new device can, in theory, sneak into a patient’s throat during surgery. Its tiny printhead doesn’t block a surgeon’s view, allowing near real-time printing after the removal of damaged tissues.
“I thought this would not be feasible at first—it seemed like an impossible challenge to make a flexible robot less than 3 mm in size,” Luc Mongeau, who led the study, said in a press release.
Although just a prototype, the device could one day help restore people’s voices after surgery and improve quality of life. It also could lead to the delivery of bioinks containing medications or even living cells to other tissues through the nose, mouth, or a small surgical cut.
Squishy Band-Aid
Surgery inevitably results in scars. While these are an annoyance on the skin, excessive scarring—called fibrosis—seriously limits how well tissues can do their jobs.
Fibrosis in lungs after surgery, for example, leads to infections, blood clots, and a general decline in normal breathing. Scarring of the heart tampers with its electrical signals and often leads to irregular heartbeats. And for delicate tissues like vocal cords, fibrosis causes lasting stiffness, making it difficult to intonate, sing, or talk like before—essentially robbing the person of their voice.
Scientists have found a range of molecules that could aid the healing process. Hydrogels are one promising candidate. Soft, flexible, and biocompatible, hydrogel injections provide a squishy but structured architecture that supports the vocal cords. Studies also suggest hydrogels boost the growth of healthy tissue and reduce fibrosis.
But because vocal cords are difficult to target, injections are handled through the skin, making it difficult to control where the hydrogel goes.
An alternative is to 3D print hydrogels directly in the body and repair damage during surgery. Both handheld and robotic systems have been successfully tested in labs, and minimally invasive versions are on the rise. One design uses air pressure to bioprint hydrogels inside the intestines. Another taps into magnets to repair the liver. But existing devices are too large to accommodate vocal cords.
Surgical Trunks
To heal vocal cords, an ideal mini 3D bioprinter must seamlessly integrate into throat surgeries. Here, surgeons insert a microscope through the mouth and suspend it inside the throat. While it sounds uncomfortable, the procedure is highly efficient with little pain afterward.
The printhead needs to snake around the microscope but also flexibly adjust its position to target injured sites without blocking the surgeon’s view. Finally, the speed and force of the hydrogel spray should be controllable—avoiding the equivalent of accidentally squeezing out too much superglue.
The new bioprinter’s printhead is a bit like an elephant’s trunk: a flexible arm that easily slips into the throat, with a 2.7-millimeter arched nozzle at the end. Picture a fine-point Sharpie connected to a flexible tube. Three cables operate the printhead, controlling nozzle movement by applying tension, like strings on a puppet.
The system’s brain is in the actuator housing, which looks like a tiny plastic gift box. It holds a syringe of hydrogel for the printhead and pilots the adjustable cables using motors that precisely move the printhead to its intended location with a custom algorithm. Other electronics allow the team to control the setup using a wireless gaming controller in real time.
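The cable-driven steering described above can be sketched with the constant-curvature approximation commonly used for continuum robots: tilting the nozzle toward a given direction shortens the nearest cable and pays out the others. The cable spacing, radius, and function names below are illustrative assumptions, not details from the team's device.

```python
import math

def cable_deltas(tilt_rad, azimuth_rad, r_mm=1.0):
    """Approximate cable length changes (mm) needed to tilt the nozzle tip.

    Assumes three cables anchored 120 degrees apart at radius r_mm around
    the nozzle. Positive delta = pull the cable in; the deltas sum to zero
    because what one cable takes up, the others pay out.
    """
    anchor_angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    return [r_mm * tilt_rad * math.cos(azimuth_rad - a) for a in anchor_angles]

# Tilt 0.2 rad toward azimuth 0: cable 0 shortens, cables 1 and 2 slacken.
deltas = cable_deltas(0.2, 0.0)
```

A motor controller would then convert these length changes into spool rotations, which is roughly the job the custom algorithm in the actuator housing performs.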
The actuator can be mounted under a standard throat surgery microscope so it’s out of the way during an operation, wrote the team.
To put the device through its paces, the team used the mini bioprinter to draw a range of shapes, including a square, heart, spiral, and various letters on a flat surface. The printhead accurately deposited thin lines of hydrogel, which can be stacked to form thicker lines—like repeatedly tracing drawings using a fine-tipped pen.
The team also tried it out in a mock vocal cord surgery. The “patient” was an accurate 3D model of a person’s throat but with different types of wounds to its vocal cords, including one that completely lacked half of the tissue. The bioprinter successfully made the repairs and reconstructed the missing vocal cord without issue.
“Part of what makes this device so impressive is that it behaves predictably, even though it’s essentially a garden hose—and if you’ve ever seen a garden hose, you know that when you start running water through it, it goes crazy,” said study author Audrey Sedal.
The flexibility comes at a cost. Though the printhead design deforms to prevent injury to tissues, this also means it’s more prone to mechanical vibrations from the actuator’s motors, which dings its accuracy.
As of now the mini printer requires manual control, but the team is working on a semi-autonomous version. More importantly, it needs to be pitted against standard hydrogel injection methods in living animals to show it’s safe and effective.
“The next step is testing these hydrogels in animals, and hopefully that will lead us to clinical trials in humans to test the accuracy, usability, and clinical outcomes of the bioprinter and hydrogel,” said Mongeau.
The post A Tiny 3D Printer Could Mend Vocal Cords in Real Time During Surgery appeared first on SingularityHub.
Future Data Centers Could Orbit Earth, Powered by the Sun and Cooled by the Vacuum of Space
A new study suggests orbital data centers could be carbon neutral, but steep technical challenges remain.
As global demand for computing continues to explode, the carbon footprint of data centers is a growing concern. A new study outlines how hosting these facilities in space could help slash the sector’s emissions.
Data centers require enormous amounts of power and water to operate and cool the millions of chips housed within them. Current estimates from the International Energy Agency peg their electricity consumption at around 415 terawatt hours globally, roughly 1.5 percent of total consumption in 2024. And the Environmental and Energy Study Institute says that large data centers can use as much as five million gallons per day for cooling.
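As a sanity check on the figures above, the 1.5 percent share follows from simple division; the global electricity total is an assumed round number for illustration, not a figure from the IEA report.

```python
# IEA estimate quoted in the article
data_center_twh = 415

# Assumed round number for 2024 global electricity consumption
global_twh = 28_000

share = data_center_twh / global_twh  # roughly 0.015, i.e. about 1.5 percent
```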
With demand for computing resources growing by the day, in particular since the rapid adoption of resource-guzzling generative AI across the economy, this threatens to become an unsustainable burden on the planet.
But a new paper in Nature Electronics by scientists at Nanyang Technological University in Singapore suggests that hosting data centers in space could provide a potential solution. By relying on the abundant solar energy available in orbit and releasing waste heat into the cold vacuum of space, these facilities could, in principle, become carbon neutral.
“Space offers a true sustainable environment for computing,” Wen Yonggang, lead author of the study, said in a press release. “By harnessing the sun’s energy and the cold vacuum of space, orbital data centers could transform global computing.”
To validate their proposal, the researchers used digital-twin simulations of orbital computing systems to model how they would generate power, manage heat, and maintain connectivity. The team investigated two potential architectures: one designed to reduce the footprint of data collected by satellites themselves and another that would receive data from Earth for processing.
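The "cooled by the vacuum of space" side of such a simulation ultimately reduces to the Stefan-Boltzmann law: a radiator panel sheds heat in proportion to the fourth power of its temperature. The panel temperature, emissivity, and heat load below are illustrative assumptions, not values from the study.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(heat_w, panel_k=300.0, emissivity=0.9, sink_k=3.0):
    """Radiator area needed to reject heat_w watts to deep space.

    Ignores sunlight and Earth's infrared load for simplicity; the cosmic
    background sink temperature is effectively negligible.
    """
    flux = emissivity * SIGMA * (panel_k**4 - sink_k**4)  # W per m^2
    return heat_w / flux

# Rejecting 1 MW of server heat at a 300 K panel temperature
area = radiator_area_m2(1e6)  # on the order of a few thousand square meters
```

Even under these generous assumptions, megawatt-scale computing implies radiator panels far larger than today's satellites, which hints at why the paper favors a distributed constellation.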
The first model would involve integrating data processing capabilities into satellites equipped with sensors—for example, cameras for imaging the Earth. This would make it possible to carry out expensive computations on the data on board before transmitting just the results back to the ground, rather than processing the raw data in terrestrial data centers.
The other approach involves a constellation of satellites equipped with full servers that could receive data from Earth and coordinate to carry out complex computing tasks like training AI models or running large simulations. The researchers note that this kind of distributed data center architecture—as opposed to assembling a large, monolithic data center in orbit—is technologically feasible with today’s satellite and computing technologies.
The team’s analysis suggests that the considerable carbon footprint of launching hardware into space could be offset within five years of operation, after which the facilities could run indefinitely on renewable energy.
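The five-year payback claim can be illustrated with a toy calculation: launch emissions divided by the annual emissions avoided relative to a grid-powered facility. Every number below is a placeholder assumption, not a figure from the paper.

```python
# Placeholder assumptions, in tonnes of CO2
launch_emissions_t = 5_000          # one-time cost of launching the hardware
ground_emissions_t_per_yr = 1_200   # a comparable grid-powered center
orbital_emissions_t_per_yr = 200    # ongoing ops/servicing in orbit

annual_savings_t = ground_emissions_t_per_yr - orbital_emissions_t_per_yr
payback_years = launch_emissions_t / annual_savings_t
# With these assumptions, the launch footprint is repaid in five years,
# after which operation on solar power is effectively carbon free.
```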
Significant technical and logistical hurdles remain. Computer chips are vulnerable to radiation, an ever-present danger in space, which would necessitate the use of specialized radiation-hardened processors. Long-term maintenance of the facilities would also require in-orbit servicing technologies that don’t yet exist. And as computing technologies rapidly improve, chips depreciate in just a few years. Keeping orbital data centers stocked with the latest and greatest could be costly.
But the NTU team isn’t the first to float the idea of shifting computing facilities into space. Last year, French defense and aerospace giant Thales published a study exploring the feasibility of the idea. And next month, the startup Starcloud will launch a satellite carrying an Nvidia H100 GPU as a first step towards creating a network of orbital data centers.
While realizing the vision is likely to require technical breakthroughs and a huge amount of investment, one solution to computing’s ever-growing carbon footprint may be above our heads.
The post Future Data Centers Could Orbit Earth, Powered by the Sun and Cooled by the Vacuum of Space appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through November 1)
Nvidia Becomes First $5 Trillion Company
Hannah Erin Lang | The Wall Street Journal ($)
“Nvidia is now larger than AMD, Arm Holdings, ASML, Broadcom, Intel, Lam Research, Micron Technology, Qualcomm, and Taiwan Semiconductor Manufacturing combined, according to Dow Jones Market Data. Its value also exceeds entire sectors of the S&P 500, including utilities, industrials and consumer staples.”
Robotics
1X Neo Is a $20,000 Home Robot That Will Learn Chores via Teleoperation
Mariella Moon | Engadget
“In an interview with The Wall Street Journal’s Joanna Stern, 1X CEO Bernt Børnich explained that the AI neural network running the machine still needs to learn from more real-world experiences. Børnich said that anybody who buys NEO for delivery next year will have to agree that a human operator will be seeing inside their houses through the robot’s camera. It’s necessary to be able to teach the machines and gather training data so it can eventually perform tasks autonomously.”
Biotechnology
A New Startup Wants to Edit Human Embryos
Emily Mullin | Wired ($)
“In 2018, Chinese scientist He Jiankui shocked the world when he revealed that he had created the first gene-edited babies. The backlash against He was immediate. Scientists said the technology was too new to be used for human reproduction and that the DNA change amounted to genetic enhancement. …Now, a New York–based startup called Manhattan Genomics is reviving the debate around gene-edited babies.”
Tech
OpenAI Reportedly Planning ‘Up to $1 Trillion’ IPO as Early as Next Year
Mike Pearl | Gizmodo
“An anonymously sourced report from Reuters claims that OpenAI is planning an initial public offering that would value the AI colossus at ‘up to $1 trillion.’ Just on Tuesday the company formally completed its slow evolution from an ambiguous non-profit to a for-profit company. Now it appears to be formalizing plans to become one of the world’s centers of economic power—at least on paper.”
Artificial Intelligence
AI Agents Are Terrible Freelance Workers
Will Knight | Wired ($)
“A new benchmark measures how well AI agents can automate economically valuable chores. Human-level AI is still some ways off. …’I should hope this gives much more accurate impressions as to what’s going on with AI capabilities,’ says Dan Hendrycks, director of CAIS. He adds that while some agents have improved significantly over the past year or so, that does not mean that this will continue at the same rate.”
Computing
The $460 Billion Quantum Bitcoin Treasure Hunt
Kyle Torpey | Gizmodo
“Satoshi’s early bitcoin stash creates massive opportunity for quantum computing startups. …These early Bitcoin addresses, including many that have been connected to Bitcoin creator Satoshi Nakamoto, may also be associated with private keys (passwords to the Bitcoin accounts basically) that are lost or otherwise not accessible to anyone. In other words, they’re sort of like lost digital treasure chests that a quantum computer could potentially unlock at some point in the future.”
Future
How AGI Became the Most Consequential Conspiracy Theory of Our Time
Will Douglas Heaven | MIT Technology Review ($)
“The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth reminiscent of more explicitly outlandish and fantastical schemes. …I get it, I get it—calling AGI a conspiracy isn’t a perfect analogy. It will also piss a lot of people off. But come with me down this rabbit hole and let me show you the light.”
Biotechnology
Life Lessons From (Very Old) Bowhead Whales
Carl Zimmer | The New York Times ($)
“By measuring the molecular damage that accumulates in the eyes, ears, and eggs of bowhead whales, researchers have estimated that bowheads live as long as 268 years. A study published in the journal Nature on Wednesday offers a clue to how the animals manage to live so long: They are extraordinarily good at fixing damaged DNA.”
Energy
Renewable Energy and EVs Have Grown So Much Faster Than Experts Predicted 10 Years Ago
Adele Peters | Fast Company
“There’s now four times as much solar power as the International Energy Agency (IEA) expected 10 years ago. Last year alone, the world installed 553 gigawatts of solar power—roughly as much as 100 million US homes use—which is 1,500% more than the IEA had projected. …More than 1 in 5 new cars sold worldwide today is an EV; a decade ago, that number was fewer than 1 in 100. Even if growth flatlined now, the world is on track to reach 100 million EVs by 2028.”
Computing
Extropic Aims to Disrupt the Data Center Bonanza
Will Knight | Wired ($)
“The startup’s chips work in a fundamentally different way to chips from Nvidia, AMD, and others, and promise to be thousands of times more energy efficient when scaled up. With AI companies pouring billions of dollars into building data centers, a completely new approach could offer a far less costly alternative to vast arrays of conventional chips.”
Tech
AI Browsers Are a Cybersecurity Time Bomb
Robert Hart | The Verge
“‘Despite some heavy guardrails being in place, there is a vast attack surface,’ says Hamed Haddadi, professor of human-centered systems at Imperial College London and chief scientist at web browser company Brave. And what we’re seeing is just the tip of the iceberg.”
Future
NASA’s Supersonic Jet Finally Takes off for Its First Super Fast, Super Quiet Flight
Passant Rabie | Gizmodo
“NASA’s X-59 aircraft completed its first flight over the Southern California desert, bringing us closer to traveling at the speed of sound without the explosive, thunder-like clap that comes with it. The experimental aircraft, built by aerospace contractor Lockheed Martin, is designed to break the sound barrier, albeit to do it quietly.”
Biotechnology
A Bay Area Grocery Store Will Be the First to Sell Cultivated Meat—but You Only Have a Limited Time to Try It
Kristin Toussaint | Fast Company
“[Cultivated meat] has only appeared on a handful of restaurant menus since its approval by the US Food and Drug Administration (FDA). But if you’re in the Bay Area, you’re in luck: Cultivated meat startup Mission Barns will be selling its pork meatballs (made with a base of pea protein plus the company’s cultivated pork fat) at Berkeley Bowl West, one location of an independent grocery store in California.”
Science
Chimps Are Capable of Human-Like Rational Thought, Breakthrough Study Finds
Becky Ferreira | 404 Media
“The chimpanzees were able to rationally evaluate forms of evidence and to change their existing beliefs if presented with more compelling clues. The results reveal that non-human animals can exhibit key aspects of rationality, some of which had never been directly tested before, which shed new light on the evolution of rational thought and critical thinking in humans and other intelligent animals.”
Robotics
Is Waymo Ready for Winter?
Andrew J. Hawkins | The Verge
“In its first few years of operation, Waymo has strategically stuck to cities with warmer, drier climates—places like Phoenix, Los Angeles, Atlanta, and Austin. But as it eyes a slate of East Coast cities, including Boston, New York City, and Washington, DC, for the next phase of its expansion, its abilities to handle more adverse weather will become a crucial test.”
The post This Week’s Awesome Tech Stories From Around the Web (Through November 1) appeared first on SingularityHub.
The Hardest Part of Creating Conscious AI Might Be Convincing Ourselves It’s Real
What would a machine actually have to do to persuade us it’s conscious?
As far back as 1980, the American philosopher John Searle distinguished between strong and weak AI. Weak AIs are merely useful machines or programs that help us solve problems, whereas strong AIs would have genuine intelligence. A strong AI would be conscious.
Searle was skeptical of the very possibility of strong AI, but not everyone shares his pessimism. Most optimistic are those who endorse functionalism, a popular theory of mind that takes conscious mental states to be determined solely by their function. For a functionalist, the task of producing a strong AI is merely a technical challenge. If we can create a system that functions like us, we can be confident it is conscious like us.
Recently, we may have reached a tipping point. Generative AIs such as ChatGPT are now so advanced that their responses are often indistinguishable from those of a real human—see this exchange between ChatGPT and Richard Dawkins, for instance.
This issue of whether a machine can fool us into thinking it is human is the subject of a well-known test devised by English computer scientist Alan Turing in 1950. Turing claimed that if a machine could pass the test, we ought to conclude it was genuinely intelligent.
Back in 1950 this was pure speculation, but according to a pre-print study from earlier this year—that’s a study that hasn’t been peer-reviewed yet—the Turing test has now been passed. ChatGPT convinced 73 percent of participants that it was human.
What’s interesting is that nobody is buying it. Experts are not only denying that ChatGPT is conscious but seemingly not even taking the idea seriously. I have to admit, I’m with them. It just doesn’t seem plausible.
The key question is: What would a machine actually have to do in order to convince us?
Experts have tended to focus on the technical side of this question. That is, to discern what technical features a machine or program would need in order to satisfy our best theories of consciousness. A 2023 article, for instance, as reported in The Conversation, compiled a list of fourteen technical criteria or “consciousness indicators,” such as learning from feedback (ChatGPT didn’t make the grade).
But creating a strong AI is as much a psychological challenge as a technical one. It is one thing to produce a machine that satisfies the various technical criteria that we set out in our theories, but it is quite another to suppose that, when we are finally confronted with such a thing, we will believe it is conscious.
The success of ChatGPT has already demonstrated this problem. For many, the Turing test was the benchmark of machine intelligence. But if it has been passed, as the pre-print study suggests, the goalposts have shifted. They might well keep shifting as technology improves.
Myna Difficulties

This is where we get into the murky realm of an age-old philosophical quandary: the problem of other minds. Ultimately, one can never know for sure whether anything other than oneself is conscious. In the case of human beings, the problem is little more than idle skepticism. None of us can seriously entertain the possibility that other humans are unthinking automata, but in the case of machines it seems to go the other way. It’s hard to accept that they could be anything but.
A particular problem with AIs like ChatGPT is that they seem like mere mimicry machines. They’re like a myna bird that learns to vocalize words with no idea of what it is doing or what the words mean.
This doesn’t mean we will never make a conscious machine, of course, but it does suggest that we might find it difficult to accept it if we did. And that might be the ultimate irony: succeeding in our quest to create a conscious machine, yet refusing to believe we had done so. Who knows, it might have already happened.
So what would a machine need to do to convince us? One tentative suggestion is that it might need to exhibit the kind of autonomy we observe in many living organisms.
Current AIs like ChatGPT are purely responsive. Keep your fingers off the keyboard, and they’re as quiet as the grave. Animals are not like this, at least not the ones we commonly take to be conscious, like chimps, dolphins, cats, and dogs. They have their own impulses and inclinations (or at least appear to), along with the desires to pursue them. They initiate their own actions on their own terms, for their own reasons.
Perhaps if we could create a machine that displayed this type of autonomy—the kind of autonomy that would take it beyond a mere mimicry machine—we really would accept it was conscious?
It’s hard to know for sure. Maybe we should ask ChatGPT.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post The Hardest Part of Creating Conscious AI Might Be Convincing Ourselves It’s Real appeared first on SingularityHub.
A Squishy New Robotic ‘Eye’ Automatically Focuses Like Our Own
The lenses could give soft robots the ability to ‘see’ without electronics.
Our eyes naturally adjust to the visual world. From reading small fonts on a screen to scanning lush greenery in the great outdoors, they automatically change their focus to see everything near and far.
This is a far cry from camera systems. Even top-of-the-line offerings, such as full-frame mirrorless cameras, require multiple bulky lenses to cover a wide range of focal lengths. For example, photographers use telephoto lenses to film wildlife at a distance and macro lenses to capture the fine details of small things up close—say, a drop of morning dew on a flower.
In contrast, our eyes are made of “soft, flexible tissues in a highly compact form,” Corey Zhang and Shu Jia at Georgia Institute of Technology recently wrote.
Inspired by nature, the duo engineered a highly flexible robotic lens that adjusts its curvature in response to light, no external power needed. Added to a standard microscope, the lens could zero in on individual hairs on an ant’s leg and the lobes of single pollen grains.
Called a photoresponsive hydrogel soft lens (PHySL), the system could be especially useful for mimicking human vision in soft robots. It could also open the door to a range of uses in medical imaging, environmental monitoring, or even as an alternative camera in ultra-light mobile devices.
Artificial Eyes

We’re highly visual creatures. Roughly 20 percent of the brain’s cortex—four to six billion neurons—is devoted to processing vision.
The process begins when light hits the cornea, a clear dome-shaped structure at the front of our eyes. This layer of tissue begins focusing the light. The next layer holds the iris, the colored part of the eye, and the pupil. The pupil dilates at night and shrinks by day to control the amount of light reaching the lens, which sits directly behind it.
A flexible structure reminiscent of an M&M, the lens focuses light onto the retina, which then translates it into electrical signals for the brain to interpret. Eye muscles change focal length by physically pulling the lens into different shapes. Working in tandem with the cornea, this flexibility allows us to change what we’re focusing on without conscious thought.
Despite their delicate nature and daily use, our eyes can remain in working order for decades. It’s no wonder scientists have tried to engineer artificial lenses with similar properties. Biologically inspired eyes could be especially helpful in soft robots navigating dangerous terrain with limited power. They could also make surgical endoscopes and other medical tools more compatible with our squishy bodies or help soft grippers pick fruit and other delicate items without bruising or breaking them.
“These features have prompted substantial efforts in bioinspired optics,” wrote the team. Several previous attempts used a fluid-based method, which changes the curvature—and hence, focal length—of a soft lens with external pressure, an electrical zap, or temperature. But these are prone to mechanical damage. Other contraptions using solid hardware are sturdier, but they require heavier motors to operate.
“The optics needed to form a visual system are still typically restricted to rigid materials using electric power,” wrote the team.
New Perspective

The new system brought two fields together: adjustable lenses and soft materials.
The system’s lens is made of PDMS, a lightweight and flexible silicon-based material used in the likes of contact lenses and catheters.
The other component acts like artificial muscles to change the curvature of the lens. It’s fabricated with a biocompatible hydrogel and dusted with a light-sensing chemical. Heating the chemical sensor causes the gel to change its shape.
The team combined these two parts into a soft robotic eye, with the hydrogel surrounding the central lens. When exposed to heat—such as that stemming from light—the gel releases water and contracts. As it shrinks, the lens flattens and its focal length increases, allowing the eye to resolve objects at greater distances.
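The optics at work here follow the thin-lens lensmaker's equation: flattening the lens increases its radius of curvature and therefore its focal length. The sketch below assumes a plano-convex lens; the refractive index is a typical value for PDMS and the radii are illustrative, not measurements from the paper.

```python
def focal_length_mm(radius_mm, n=1.41):
    """Focal length of a plano-convex thin lens: 1/f = (n - 1) / R.

    n = 1.41 is a typical refractive index for PDMS; radius_mm is the
    radius of curvature of the lens's curved face.
    """
    return radius_mm / (n - 1)

f_curved = focal_length_mm(2.0)  # strongly curved lens (relaxed hydrogel)
f_flat = focal_length_mm(6.0)    # flattened lens (contracted hydrogel)
# f_flat > f_curved: flattening the lens shifts focus to more distant objects
```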
Depriving the system of light—essentially like closing your eyes—cools the gel. It then swells to its original plumpness, releases tension, and the lens resets.
The design offers better mechanical stability than previous versions, wrote the team. Because the gel constricts with light, it can form a stronger supporting structure that prevents the delicate lens from bending or collapsing as it changes shape. The robotic eye worked as expected across the light spectrum, with resolution and focus comparable to the human eye. It was also durable, maintaining performance after multiple cycles of bending, twisting, and stretching.
Image Credit: Shu Jia

With additional tinkering, the system proved to be an efficient replacement for traditional glass-based lenses in optical instruments. The team attached the squishy lens to a standard microscope and visualized a range of biological samples. These included fungal fibers, microscopic hairs on an ant’s leg, and the gap between a tick’s claws—all sized roughly a tenth of the width of a human hair.
The team wants to improve the system too. Recently developed hydrogels respond faster to light with more powerful mechanical forces, which could improve the robotic eye’s focal range. The system’s heavy dependence on temperature fluctuations could limit its use in extreme environments. Exploring different chemical additives could potentially shift its operating temperature range and tailor the hydrogel to particular uses.
And because the robotic eye “sees” across the light spectrum, it could in theory mimic other creatures’ eyes, such as those of the mantis shrimp, which can detect color differences invisible to humans, or of reptiles that can capture UV light.
A next step is to incorporate it into a soft robot as a biologically inspired camera system that doesn’t rely on electronics or extra power. “This system would be a significant demonstration for the potential of our design to enable new types of soft visual sensing,” wrote the team.
The post A Squishy New Robotic ‘Eye’ Automatically Focuses Like Our Own appeared first on SingularityHub.
These High-Tech Glasses and an Eye Implant Restored Sight in People With Severe Vision Loss
Patients regained the ability to read books, food labels, and subway signs.
Globally, more than five million people are affected by age-related macular degeneration, which can make reading, driving, and recognizing faces impossible. A new wireless retinal implant has now restored functional sight to patients in advanced stages of the disease.
The condition gradually destroys the light-sensitive photoreceptors at the center of the retina, leaving people with only blurred peripheral vision. While researchers are investigating whether stem cell implants or gene therapy could help restore sight in these patients, these approaches are still only experimental.
Now though, a system called PRIMA built by neurotechnology startup Science Corporation is helping patients regain the ability to read books, food labels, and subway signs. The system consists of a specially designed pair of glasses that uses a camera to capture images and transmit them wirelessly to a tiny chip implanted in the retina that then stimulates surviving neurons.
In a paper published in The New England Journal of Medicine, researchers showed that 27 out of 32 participants in a clinical trial of the technology had regained the ability to read a year after receiving the device.
“This study confirms that, for the first time, we can restore functional central vision in patients,” Frank Holz from the University Hospital of Bonn who was lead author on the paper said in a statement. “The implant represents a paradigm shift in treating late-stage AMD [age-related macular degeneration].”
The system works by converting images captured by the camera-equipped glasses into pulses of infrared light that are then transmitted through the patients’ pupils to a two-square-millimeter photovoltaic chip. The chip converts the light into electrical signals that are transmitted to the neurons at the back of the eye, allowing the patients to perceive the light patterns captured by the glasses. The PRIMA system also includes a zoom function that lets users magnify what they’re looking at.
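The image-to-stimulation pipeline described above can be sketched as downsampling a camera frame to the chip's pixel grid and quantizing brightness to black and white (the current chip is described as producing only two levels). The grid size, frame format, and function below are assumptions for illustration, not PRIMA's actual specifications.

```python
def to_stimulation_pattern(frame, grid=16, levels=2):
    """Average-pool a square grayscale frame (list of rows, values 0-255)
    down to a grid x grid pattern, then quantize each cell to the given
    number of levels. Each cell stands in for one photovoltaic pixel."""
    n = len(frame)
    block = n // grid
    pattern = []
    for gy in range(grid):
        row = []
        for gx in range(grid):
            pixels = [frame[y][x]
                      for y in range(gy * block, (gy + 1) * block)
                      for x in range(gx * block, (gx + 1) * block)]
            mean = sum(pixels) / len(pixels)
            row.append(round(mean / 255 * (levels - 1)))
        pattern.append(row)
    return pattern

# A 32x32 frame with a bright left half and a dark right half
frame = [[255] * 16 + [0] * 16 for _ in range(32)]
pattern = to_stimulation_pattern(frame, grid=8)
```

In the real system each cell of such a pattern would be encoded as infrared pulse intensity and beamed through the pupil; the chip's photovoltaic pixels then turn those pulses into electrical stimulation.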
Daniel Palanker at Stanford School of Medicine initially designed the technology, and a French startup called Pixium Vision was commercializing it. But facing bankruptcy, the company sold PRIMA to Science Corporation last year for €4 million ($4.7 million), according to MIT Technology Review.
Palanker said the idea for the product came 20 years ago when he realized that because the eye is transparent it’s possible to deliver information into it using light. Previous systems also relied on camera-equipped glasses to transmit signals to a retinal implant, but they were connected either by wires or radio transmitters.
In the recent study, 32 people with a form of macular degeneration that destroys photoreceptors in the center of the retina received implants in one eye. After several months of visual training, 80 percent of them had regained the ability to read text and recognize high-contrast objects.
Some participants achieved visual acuity equivalent to 20/42 when images were zoomed. And 26 of them could read at least two extra lines on a standard eye chart, with the average closer to five lines.
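For context, eye-chart lines map onto the logMAR scale used in vision research, where each line corresponds to 0.1 logMAR. The sketch below expresses the gains quoted above numerically; this is standard optometry arithmetic, not a calculation from the study.

```python
import math

def snellen_to_logmar(numerator, denominator):
    """Convert a Snellen fraction (e.g. 20/42) to logMAR.

    logMAR 0.0 is 20/20 vision; higher values mean worse acuity,
    and each 0.1 step corresponds to one line on a standard chart.
    """
    return math.log10(denominator / numerator)

logmar_2042 = snellen_to_logmar(20, 42)  # roughly 0.32, about 3 lines from 20/20

# The average five-line gain reported above equals a 0.5 logMAR improvement
lines_gained = 5
logmar_improvement = lines_gained * 0.1
```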
Because PRIMA uses infrared light to stimulate the chip, these signals don’t interfere with the remaining healthy photoreceptors surrounding it, allowing the brain to merge the restored vision in the central region with the patients’ residual peripheral vision.
The chip is currently only capable of producing black and white images with no shades in between, which limits patients’ ability to recognize more complex objects like faces. But Palanker says he is currently developing software that will allow users to see in grayscale. The researchers are also developing a second-generation implant that will have more than 10,000 pixels, which could support close to normal levels of visual acuity.
One of the participants told the BBC that using the device requires considerable concentration, and it’s not really practical on the move. But Science Corporation told MIT Technology Review it is also in the process of slimming down the bulky glasses and control box into a sleeker headset that would be only slightly larger than a standard pair of sunglasses.
Given the huge number of people affected by macular degeneration, the market for such a device could already be large, but the designers hope the approach could also help cure other vision disorders. The company has already applied for medical approval in Europe, so it may not be long before neuroprosthetic devices become a standard treatment for those with vision loss.
The post These High-Tech Glasses and an Eye Implant Restored Sight in People With Severe Vision Loss appeared first on SingularityHub.
This Shot Gave Elderly Mice’s Skin a Glow Up. It Could Do the Same for Other Organs Too.
Boosting protective immune cells healed blood vessels and improved the skin’s ability to repair damage.
The first time I accepted that my grandpa was really aging was when I held his hand. His grip was strong as ever. But the skin on his hand was wafer thin and carried tell-tale signs of bruising across its back.
Plenty of “anti-aging” skin care products promise a younger look. But skin health isn’t just about vanity. The skin is the largest organ in the body and our first line of defense against pathogens and dangerous chemicals. It also keeps our bodies within normal operating temperatures—whether we’re in a Canadian snowstorm or the blistering heat of Death Valley.
The skin also has a remarkable ability to regenerate. After a sunburn, scraped knee, or knife cut while cooking, skin cells divide to repair damage and recruit immune cells to ward off infection. They also make hormones to coordinate fat storage, metabolism, and other bodily functions.
With age the skin deteriorates. It bruises more easily. Wound healing takes longer. And the risk of skin cancer rises. Many problems are connected to a dense web of blood vessels that becomes increasingly fragile as we age. Without a steady supply of nutrients, the skin weakens.
Now a team from New York University School of Medicine and collaborators have discovered a way to turn back the clock. In elderly mice and human skin cells, they detected a steep decline in the numbers of a particular immune cell type. The cells they studied, a type of macrophage, hug blood vessels, help maintain their integrity, and control which molecules flow in or out.
A protein-based drug designed to revive the cells’ numbers gave elderly mice a skin glow up, improving blood flow and the skin’s ability to repair damage. Because loss of these cells happens before the skin declines notably, renewing their numbers may offer “an early strategy” for keeping our largest organ humming along as the years pass.
Trusty Residents
All organs in mammals have resident macrophages. Literally meaning “big eaters,” these immune cells monitor tissues for infections, cancers, and other dangers. Once activated, they recruit more immune cells to tackle diseases and repair damaged tissues.
There’s more than one type of macrophage. The cells belong to a large family where each member has a slightly different task. How they populate different organs is still mysterious, and scientists are just beginning to decode all the jobs they do. But there’s a general consensus: With age, many macrophage types decline in numbers and health and are linked to a variety of age-related diseases, such as atherosclerosis, cancer, and neurodegeneration.
This trend could also affect aging skin.
The skin’s layers are populated by different types of macrophages. Those in the outermost layer detect pathogens, while cells in the lower, fatty layer help maintain metabolism and regulate body temperature and inflammation. But it was capillary-associated macrophages (CAMs), in the middle layer, that caught the team’s interest. These cells wrap around intricate webs of blood vessels woven through our skin, helping maintain their ability to function and heal.
Tracking Cells
To better understand how the skin’s macrophages change with age, the team developed a technology to monitor their numbers and health in mice. The researchers genetically engineered the critters to produce glow-in-the-dark macrophages and observed the cells throughout the animals’ lives.
With age, the skin’s middle layer lost macrophages—which the scientists identified as CAMs—far faster than other skin layers. In mice between 1 and 18 months of age—the human equivalent of pre-teens through people in their 70s—blood vessels that had lost these macrophages behaved as if they were “older” and struggled to support oxygen-rich blood flow to the skin.
The macrophages also dwindled in their coverage of capillaries during aging. Roughly a tenth of the width of a human hair, these dainty blood vessels shuttle nutrients to tissues and dump toxic chemicals into the bloodstream. Macrophage losses eventually led to the death of capillaries in elderly mice. Similar results were found in human skin samples from people over 75.
All this reduced the skin’s ability to maintain capillary health and healing. For example, in one test, the scientists used targeted lasers to form small blood clots. In young mice, the macrophages traveled to the site and ate up damaged red blood cells in the clumps. In elderly mice, blood vessels with more macrophages better repaired injuries, but healing slowed overall.
Skin Rewind
The team next developed a protein-based therapy that directly boosts CAM levels and injected it into one hind paw of mice at the human equivalent of over 80 years of age. The other paw received a non-active control.
In a few days, the treated paw saw a jump in macrophage numbers and improved capillary flow nourishing the skin. The blood vessels also healed more rapidly after laser damage, resulting in less bruising. The injection seemingly rejuvenated old macrophages, rather than recruiting new ones from the bone marrow, suggesting even vintage cells can grow and regain their strength.
These early results are in mice, and they don’t measure the full spectrum of skin function after repairing blood vessels, which would require observing other cells. Fibroblasts, for example, generate collagen for skin elasticity and promote wound healing. Their numbers also shrink with age. The new treatment is based on a protein from these cells, and the team is planning to test how fibroblasts and CAMs interact with age, and if the shot can be further optimized.
Beyond skin health, blood vessel disease wreaks havoc in multiple organs, contributing to heart attacks, stroke, and other medical scourges of aging. A similar strategy could pave the way for new treatments. In future studies, the team hopes to optimize dosing, follow long-term effects and safety, and potentially mix-and-match the treatment with other regenerative therapies.
The post This Shot Gave Elderly Mice’s Skin a Glow Up. It Could Do the Same for Other Organs Too. appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through October 25)
Google’s Quantum Computer Makes a Big Technical Leap
Cade Metz | The New York Times ($)
“Leveraging the counterintuitive powers of quantum mechanics, Google’s machine ran this algorithm 13,000 times as fast as a top supercomputer executing similar code in the realm of classical physics, according to a paper written by the Google researchers in the scientific journal Nature.”
Future
The Next Revolution in Biology Isn’t Reading Life’s Code—It’s Writing It
Andrew Hessel | Big Think
“Andrew Hessel, cofounder of the Human Genome Project–write, argues that genome writing is humanity’s next great moonshot, outlining how DNA synthesis could transform biology, medicine, and industry. He calls for global cooperation to ensure that humanity’s new power to create life is used wisely and for the common good.”
Robotics
Amazon Hopes to Replace 600,000 US Workers With Robots, According to Leaked Documents
Jess Weatherbed | The Verge
“Citing interviews and internal strategy documents, The New York Times reports that Amazon is hoping its robots can replace more than 600,000 jobs it would otherwise have to hire in the United States by 2033, despite estimating it’ll sell about twice as many products over the period.”
Computing
Retina e-Paper Promises Screens ‘Visually Indistinguishable From Reality’
Michael Franco | New Atlas
“The team was able to create a screen that’s about the size of a human pupil packed with pixels measuring about 560 nanometers wide. The screen, which has been dubbed retinal e-paper, has a resolution beyond 25,000 pixels per inch. ‘This breakthrough paves the way for the creation of virtual worlds that are visually indistinguishable from reality,’ says a Chalmers news release about the breakthrough.”
Robotics
Nike’s Robotic Shoe Gets Humans One Step Closer to Cyborg
Michael Calore | Wired ($)
“At the end of each step, the motor pulls up on the heel of the shoe. The device is calibrated so the movement of the motor can match the natural movement of each person’s ankle and lower leg. The result is that each step is powered, or given a little bit of a spring and an extra push by the robot mechanism.”
Space
SpaceX Launches 10,000th Starlink Satellite, With No Sign of Slowing Down
Stephen Clark | Ars Technica
“Taking into account [decommissioned Starlink satellites, there are] 8,680 total Starlink satellites in orbit, 8,664 functioning Starlink satellites in orbit (including newly launched satellites not yet operational), [and] 7,448 Starlink satellites in operational orbit. …The European Space Agency estimates there are now roughly 12,500 functioning satellites in orbit. This means SpaceX owns and operates up to 70 percent of all the active satellites in orbit today.”
Computing
Amazon Unveils AI Smart Glasses for Its Delivery Drivers
Aisha Malik | TechCrunch
“The e-commerce giant says the glasses will allow delivery drivers to scan packages, follow turn-by-turn walking directions, and capture proof of delivery, all without using their phones. The glasses use AI-powered sensing capabilities and computer vision alongside cameras to create a display that includes things like hazards and delivery tasks.”
Biotechnology
The Astonishing Embryo Models of Jacob Hanna
Antonio Regalado | MIT Technology Review ($)
“Clark and her colleagues are right that, for the foreseeable future, no one is going to decant a full-term baby out of a bottle. That’s still science fiction. But there’s a pressing issue that needs to be dealt with right now. And that’s what to do about synthetic embryo models that develop just part of the way—say for a few weeks, or months, as Hanna proposes. Because right now, hardly any laws or policies apply to synthetic embryos.”
Tech
OpenAI Readies Itself for Its Facebook Era
Kalley Huang, Erin Woo, and Stephanie Palazzolo | The Information ($)
“As the Meta alums have arrived, it’s become evident that some of OpenAI’s latest strategies and initiatives do resemble the tactics Meta used to grow into a corporate juggernaut, according to conversations with seven current and former employees. OpenAI itself is keenly interested in growing into a similar gigantic form, an effort to satisfy investors and justify the half-a-trillion-dollar valuation it received a few months ago.”
Artificial Intelligence
Sakana AI’s CTO Says He’s ‘Absolutely Sick’ of Transformers, the Tech That Powers Every Major AI Model
Michael Nuñez | VentureBeat
“Llion Jones, who co-authored the seminal 2017 paper ‘Attention Is All You Need’ and even coined the name ‘transformer,’ delivered an unusually candid assessment at the TED AI conference in San Francisco on Tuesday: Despite unprecedented investment and talent flooding into AI, the field has calcified around a single architectural approach, potentially blinding researchers to the next major breakthrough.”
Tech
The ChatGPT Atlas Browser Still Feels Like Googling With Extra Steps
Emma Roth | The Verge
“OpenAI’s new browser is great at providing AI-generated responses, but not so great at searches. …Given the options already out there, ChatGPT Atlas is a bit of an underwhelming start for a company that wants to build a series of interconnected apps that could eventually become an AI operating system.”
Computing
OpenAI Executive Explains the Insatiable Appetite for AI Chips
Sri Muppidi | The Information ($)
“Because training and running models are blurring together, given inference is using more compute than before and incorporating user feedback, OpenAI likely needs more and stronger chips to power every stage of building and deploying its models. So it makes sense why OpenAI is trying to get its hands on every Nvidia chip under the sun.”
OpenAI Slipped Shopping Into 800 Million ChatGPT Users’ Chats—Here’s Why That Matters
As AI shopping goes mainstream, will people keep any real control over what they buy and why?
Your phone buzzes at 6 a.m. It’s ChatGPT: “I see you’re traveling to New York this week. Based on your preferences, I’ve found three restaurants near your hotel. Would you like me to make a reservation?”
You didn’t ask for this. The AI simply knew your plans from scanning your calendar and email and decided to help. Later, you mention to the chatbot needing flowers for your wife’s birthday. Within seconds, beautiful arrangements appear in the chat. You tap one: “Buy now.” Done. The flowers are ordered.
This isn’t science fiction. On Sept. 29, 2025, OpenAI and payment processor Stripe launched the Agentic Commerce Protocol. The technology lets you buy things instantly from Etsy within ChatGPT conversations, and ChatGPT users are scheduled to gain access to over a million other Shopify merchants, from major household brands to small shops.
As marketing researchers who study how AI affects consumer behavior, we believe we’re seeing the beginning of the biggest shift in how people shop since smartphones arrived. Most people have no idea it’s happening.
From Searching to Being Served
For three decades, the internet has worked the same way: You want something, you Google it, you compare options, you decide, you buy. You’re in control.
That era is ending.
AI shopping assistants are evolving through three phases. First came “on-demand AI.” You ask ChatGPT a question, it answers. That’s where most people are today.
Now we’re entering “ambient AI,” where AI suggests things before you ask. ChatGPT monitors your calendar, reads your emails, and offers recommendations without being asked.
Soon comes “autopilot AI,” where AI makes purchases for you with minimal input from you. “Order flowers for my anniversary next week.” ChatGPT checks your calendar, remembers preferences, processes payment, and confirms delivery.
Each phase adds convenience but gives you less control.
The Manipulation Problem
AI’s responses create what researchers call an “advice illusion.” When ChatGPT suggests three hotels, you don’t see them as ads. They feel like recommendations from a knowledgeable friend. But you don’t know whether those hotels paid for placement or whether better options exist that ChatGPT didn’t show you.
Traditional advertising is something most people have learned to recognize and dismiss. But AI recommendations feel objective even when they’re not. With one-tap purchasing, the entire process happens so smoothly that you might not pause to compare options.
OpenAI isn’t alone in this race. In the same month, Google announced its competing protocol, AP2. Microsoft, Amazon, and Meta are building similar systems. Whoever wins will be in position to control how billions of people buy things, potentially capturing a percentage of trillions of dollars in annual transactions.
What We’re Giving Up
This convenience comes with costs most people haven’t thought about.
Privacy: For AI to suggest restaurants, it needs to read your calendar and emails. For it to buy flowers, it needs your purchase history. People will be accepting near-total surveillance in exchange for convenience.
Choice: Right now, you see multiple options when you search. With AI as the middleman, you might see only three options ChatGPT chooses. Entire businesses could become invisible if AI chooses to ignore them.
Power of comparing: When ChatGPT suggests products with one-tap checkout, the friction that made you pause and compare disappears.
It’s Happening Faster Than You Think
ChatGPT reached 800 million weekly users by September 2025, growing four times faster than social media platforms did. Major retailers began using OpenAI’s Agentic Commerce Protocol within days of its launch.
History shows people consistently underestimate how quickly they adapt to convenient technologies. Not long ago most people wouldn’t think of getting in a stranger’s car. Uber now has 150 million users.
Convenience always wins. The question isn’t whether AI shopping will become mainstream. It’s whether people will keep any real control over what they buy and why.
What You Can Do
The open internet gave people a world of information and choice at their fingertips. The AI revolution could take that away. Not by forcing people, but by making it so easy to let the algorithm decide that they forget what it’s like to truly choose for themselves. Buying things is becoming as thoughtless as sending a text.
In addition, a single company could become the gatekeeper for all digital shopping, with the potential for monopolization beyond even Amazon’s current dominance in e-commerce. We believe that it’s important to at least have a vigorous public conversation about whether this is the future people actually want.
Here are some steps you can take to resist the lure of convenience:
Question AI suggestions. When ChatGPT suggests products, recognize you’re seeing hand-picked choices, not all your options. Before one-tap purchases, pause and ask: Would I buy this if I had to visit five websites and compare prices?
Review your privacy settings carefully. Understand what you’re trading for convenience.
Talk about this with friends and family. The shift to AI shopping is happening without public awareness. The time to have conversations about acceptable limits is now, before one-tap purchasing becomes so normal that questioning it seems strange.
The Invisible Price Tag
AI will learn what you want, maybe even before you want it. Every time you tap “Buy now” you’re training it—teaching it your patterns, your weaknesses, what time of day you impulse buy.
Our warning isn’t about rejecting technology. It’s about recognizing the trade-offs. Every convenience has a cost. Every tap is data. The companies building these systems are betting you won’t notice, and in most cases, they’re probably right.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
One Mind, Two Bodies: Man With Brain Implant Controls Another Person’s Hand—and Feels What She Feels
It sounds like science fiction, but the system could help people with brain or spinal cord injuries regain lost abilities.
In 2020, Keith Thomas dived into a pool and snapped his spine. The accident left him paralyzed from the chest down and unable to feel and move his arms and legs. Alone and isolated in a hospital room due to the pandemic, he jumped on a “first-of-its-kind” clinical trial that promised to restore some sense of feeling and muscle control using an innovative brain implant.
Researchers designed the implant to reconnect the brain, body, and spinal cord. An AI detects Thomas’ intent to move and activates his muscles with gentle electrical zaps. Sensors on his fingertips shuttle sensations back to his brain. Within a year, Thomas was able to lift and drink from a cup, wipe his face, and pet and feel the soft fur of his family’s dog, Bow.
The promising results left the team at the Feinstein Institutes for Medical Research and the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell wondering: If the implant can control muscles in one person, can that person also use it to control someone else’s muscles?
A preprint now suggests such “interhuman” connections are possible. With thoughts alone, Thomas controlled the hand of an able-bodied volunteer using precise electrical zaps to her muscles.
The multi-person neural bypass also helped Kathy Denapoli, a woman suffering from partial paralysis and struggling to move her hand. With the system, Thomas helped her successfully pour water with his brain signals. He even eventually felt the objects she touched in return.
It sounds like science fiction, but the system could boost collaborative rehabilitation, where groups of people with brain or spinal cord injuries work together. By showing rather than telling Denapoli how to move her hand, she’s nearly doubled her hand strength since starting the trial.
“Crucially, this approach not only restores aspects of sensorimotor function,” wrote the team. It “also fosters interpersonal connection, allowing individuals with paralysis to re-experience agency, touch, and collaborative action through another person.”
Smart Bridge
We move without a second thought: pouring a hot cup of coffee while half awake, grabbing a basketball versus a tennis ball, or balancing a cup of ice cream instead of a delicate snow cone.
Under the hood, these mundane tasks activate a highly sophisticated circuit. First, the intention to move is encoded in the brain’s motor regions and the areas surrounding them. These electrical signals then travel down the spinal cord instructing muscles to contract or relax. The skin sends feedback on pressure, temperature, and other sensations back to the brain, which adjusts movement on the fly.
This circuit is broken in people with spinal cord injuries. But over the past decade, scientists have begun bridging the gap with the help of brain or spinal implants. These arrays of microelectrodes send electrical signals to tailored AI algorithms that can decode intent. The signals are then used to control robotic arms, drones, and other prosthetics. Other methods have focused on restoring sensation, a crucial aspect of detailed movement.
Connecting motor commands and sensation into a feedback loop—similar to what goes on in our brains naturally—is gaining steam. Thomas’s implant is one example. Unlike previous implants, the device simultaneously taps into the brain, spinal cord, and muscles.
The setup first records electrical activity from Thomas’s brain using sensors placed in its motor regions. The sensors send these signals to a computer where they’re decoded. The translated signals travel to flexible electrode patches, like Band-Aids, placed on his spine and forearm. The patches electrically stimulate his muscles to guide their movement. Tiny sensors on his fingertips and palm then transmit pressure and other sensations back to his brain.
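The record-decode-stimulate-feedback loop described above can be sketched as a toy closed-loop program. Everything here is invented for illustration—the decoder, the threshold, and the data shapes are assumptions, not the team’s actual software:

```python
from dataclasses import dataclass

@dataclass
class BypassStep:
    stimulate_muscles: bool  # drive the spine/forearm electrode patches?
    sensory_feedback: float  # fingertip pressure relayed back to the brain (0-1)

def decode_intent(neural_activity: list[float], threshold: float = 0.5) -> bool:
    """Toy decoder: report movement intent when mean motor-region activity is high."""
    return sum(neural_activity) / len(neural_activity) > threshold

def bypass_cycle(neural_activity: list[float], fingertip_pressure: float) -> BypassStep:
    # Muscles are stimulated only when intent is decoded; pressure readings
    # are always relayed back so the brain can adjust movement on the fly.
    return BypassStep(
        stimulate_muscles=decode_intent(neural_activity),
        sensory_feedback=fingertip_pressure,
    )

print(bypass_cycle([0.8, 0.7, 0.9], fingertip_pressure=0.4))
```

A real decoder is a trained machine-learning model operating on multichannel electrode recordings; the simple averaging above only stands in for that step.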
Over time, Thomas learned to move his arms and feel his hand for the first time in three years.
“There was a time that I didn’t know if I was even going to live, or if I wanted to, frankly. And now, I can feel the touch of someone holding my hand. It’s overwhelming,” he said at the time. “The only thing I want to do is to help others. That’s always been the thing I’m best at. If this can help someone even more than it’s helped me somewhere down the line, it’s all worth it.”
Human Connection
To help people regain their speech after injury or disease, scientists have created digital avatars that capture vocal pitch and emotion from brain recordings. Others have linked up people’s minds with non-invasive technologies for rudimentary human-to-human brain communication.
The new study paired Thomas’s brain implant with a human “avatar.” The able-bodied volunteer wore electrical stimulation patches on her forearm, wired to his brain.
In training, Thomas watched his able-bodied partner grasp an object, such as a baseball or soft foam ball. He received electrical stimulation to the sensory regions of his brain based on force feedback. Eventually, Thomas learned to discriminate between the objects while blindfolded with over 90 percent accuracy. Different objects felt stronger or lighter, he said.
The researchers wondered if Thomas could also help others with spinal cord injury. For this trial, he worked with Denapoli, a woman in her 60s with some residual ability to move her arms despite damage to her spinal cord.
Denapoli voiced how she wanted to move her hand—for example, close, open, or hold. Thomas imagined the movement, and his brain signals wirelessly activated the muscle stimulators on Denapoli’s arm to move her hand as intended.
The collaboration allowed her to pick up and pour a water bottle in roughly 20 seconds, with a success rate nearly triple that of when she tried the same task alone. In another test, Thomas’s neural commands helped her grasp, sip from, and set a can of soda down without spillage.
The connection went both ways. Gradually, Thomas began to feel the objects she touched based on feedback sent to his brain.
“This paradigm…allowed two participants with tetraplegia to engage in cooperative rehabilitation, demonstrating increased success in a motor task with a real-world object,” wrote the team.
The implant may have long-lasting benefits. Because it taps into the three main components of neurological sensation and movement, repeatedly activating the circuit could trigger the body to restore damage. With the implant, Thomas experienced improved sensation and movement in his hands and Denapoli increased her grip strength.
The treatment could also help people who suffered a stroke and lost control of their arms, or those with amyotrophic lateral sclerosis (ALS), a neurological disease that gradually eats away at motor neurons. To be clear, the results haven’t yet been peer-reviewed and involve only a very small group of people. More work is needed to see if this type of collaborative rehabilitation—or what the authors call “thought-driven therapy”—helps compared to existing approaches.
Still, both participants are happy. Thomas said the study gave him a sense of purpose. “I was more satisfied [because] I was helping somebody in real life…rather than just a computer,” he said.
“I couldn’t have done that without you,” Denapoli told Thomas.
‘Unprecedented’ Artificial Neurons Are Part Biological, Part Electrical—Work More Like the Real Thing
Bacterial nanowires and memristors combine in artificial neurons that can control living cells.
Most people wouldn’t give Geobacter sulfurreducens a second look. The bacterium was first discovered in a ditch in rural Oklahoma. But the lowly microbe has a superpower: It grows protein nanotubes that transmit electrical signals and uses them to communicate.
These bacterial wires are now the basis of a new artificial neuron that activates, learns, and responds to chemical signals like a real neuron.
Scientists have long wanted to mimic the brain’s computational efficiency. But despite years of engineering, artificial neurons still operate at much higher voltages than natural ones. Their frustratingly noisy signals require an extra step to boost fidelity, undercutting energy savings.
Because they don’t match biological neurons—imagine plugging a 110-volt device into a 220-volt wall socket—it’s difficult to integrate the devices with natural tissues.
But now a team at the University of Massachusetts Amherst has used bacterial protein nanowires to form conductive cables that capture the behaviors of biological neurons. When combined with an electrical module called a memristor—a resistor that “remembers” its past—the resulting artificial neuron operated at a voltage similar to its natural counterpart.
“Previous versions of artificial neurons used 10 times more voltage—and 100 times more power—than the one we have created,” said study author Jun Yao in a press release. “Ours register only 0.1 volts, which [is] about the same as the neurons in our bodies.”
The artificial neurons easily controlled the rhythm of living heart muscle cells in a dish. And adding an adrenaline-like molecule triggered the devices to up the muscle cells’ “heart rate.”
This level of integration between artificial neurons and biological tissue is “unprecedented,” Bozhi Tian at the University of Chicago, who was not involved in the work, told IEEE Spectrum.
Better Way to Compute
The human brain is a computational wonder. It processes an enormous amount of data at very low power. Scientists have long wondered how it’s capable of such feats.
Massively parallel computing—with multiple neural networks humming along in sync—may be one factor. More efficient hardware design may be another. Computers have separate processing and memory modules that require time and energy to shuttle data back and forth. A neuron is both memory chip and processor in a single package. Recent studies have also uncovered previously unknown ways brain cells compute.
It’s no wonder researchers have long tried to mimic neural quirks. Some have used biocompatible organic materials that act like synapses. Others have incorporated light or quantum computing principles to drive toward brain-like computation.
Compared to traditional chips, these artificial neurons slashed energy use when faced with relatively simple tasks. Some even connected with biological neurons. In a cross-continental test, one artificial neuron controlled a living, biological neuron that then passed the commands on to a second artificial neuron.
But building mechanical neurons isn’t for the “whoa” factor. These devices could make implants more compatible with the brain and other tissues. They may also give rise to a more powerful, lower energy computing system compared to the status quo—an urgent need as energy-hogging AI models attract hundreds of millions of users.
The Life of a Neuron
Previous artificial neurons loosely mimicked the way biological neurons behave. The new study sought to recapitulate their electrical signaling.
Neurons aren’t like light switches. A small input, for example, isn’t enough to activate them. But as signals consistently build up, they trigger a voltage change, and the neuron fires. The electrical signal travels along its output branch and guides neighboring neurons to activate too. In the blink of an eye, the cells connect as a network, encoding memories, emotions, movement, and decisions.
Once activated, neurons go into a resting mode, during which they can’t be activated again—a brief reprieve before they tackle the next wave of electrical signals.
These dynamics are hard to mimic. But the tiny protein cables G. sulfurreducens bacteria use to communicate may help. The cables can withstand extremely unpredictable conditions, such as Oklahoma winters. They’re also particularly adept at conducting ions—the charged particles involved in neural activity—with high efficiency, nixing the need to amplify signals.
Harvesting the nanocables was a bit like drying wild mushrooms. The team snipped them off collections of bacteria and developed a way to rid them of contaminants. They suspended the wispy proteins in liquid and poured the concoction onto an even surface for drying. After the water evaporated, they were left with an extremely thin film containing protein nanocables that retained their electrical capabilities.
The team integrated this film into a memristor. Like in neurons, changing voltages altered the artificial neuron’s behavior. Built-up voltage caused the protein nanowires to bridge a gap inside the memristor. With sufficient input voltage, the nanocables completed the circuit and electrical signals flowed—essentially activating the neuron. Once the voltage dropped, the nanocables dissolved, and the artificial neurons reset to a resting state like their biological counterparts.
Because the protein wires are extremely sensitive to voltage changes, they let the artificial neurons switch behavior at much lower energy. This slashes total energy costs to one percent of those of previous artificial neurons. The devices operate at a voltage similar to biological neurons, suggesting they could better integrate with the brain.
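The accumulate-fire-rest cycle that both biological neurons and the nanowire memristor follow is often abstracted as a leaky integrate-and-fire model. Here is a minimal sketch of that textbook abstraction—a toy, not the paper’s circuit; the threshold, leak, and refractory values are arbitrary:

```python
# Leaky integrate-and-fire neuron: inputs accumulate, the cell fires at a
# threshold and resets, then sits out a brief refractory period.
def simulate(inputs, threshold=1.0, leak=0.9, refractory_steps=3):
    v, refractory, spikes = 0.0, 0, []
    for t, i in enumerate(inputs):
        if refractory > 0:      # resting mode: inputs are ignored briefly
            refractory -= 1
            continue
        v = v * leak + i        # leaky accumulation of incoming signal
        if v >= threshold:      # threshold crossed: the neuron fires
            spikes.append(t)
            v = 0.0             # reset, like the dissolving nanowire bridge
            refractory = refractory_steps
    return spikes

# Steady sub-threshold input still fires once enough charge builds up.
print(simulate([0.4] * 12))  # → [2, 8]
```

No single input of 0.4 crosses the threshold; firing only happens after several inputs accumulate, and the refractory gap forces a pause between spikes.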
Beating Heart
As proof of concept, the team connected their invention to heart muscle cells. These cells require specific electrical signals to keep their rhythm. Like biological neurons, the artificial neurons monitored the strength of heart cell contractions. Adding norepinephrine, a drug that rapidly increases heart rate, activated the artificial neurons in a way that mimics natural ones, suggesting they could capture chemical signals from the environment.
Although it’s still early, the artificial neurons pave the way for uses that seamlessly bridge biology and electronics. Wearable devices and brain implants inspired by the devices could yield prosthetics that better “talk” to the brain.
Outside of biotech, artificial neurons could be a greener alternative to silicon-based chips if the technology scales up. Unlike older designs that require difficult manufacturing processes, such as extreme temperatures, this new iteration can be printed with the same technology used to manufacture run-of-the-mill silicon chips.
It won’t be an easy journey. Harvesting and processing protein nanotubes remains time consuming. It’s yet unclear how long the artificial neurons can remain fully functional. And as with any device including biological components, more quality control will be needed to ensure even manufacturing.
Regardless, the team is hopeful the design can inspire more effective bioelectronic interfaces. “The work suggests a promising direction toward developing bioemulated electronics, which in turn can lead to closer interface with biosystems,” they wrote. Not too bad for bacteria discovered in a ditch.
Scientists Say New Air Filter Transforms Any Building Into a Carbon-Capture Machine
Like rooftop solar panels, the approach would use existing infrastructure to lower the cost and widen the reach of carbon-capture efforts.
As carbon emissions continue to rise there’s growing recognition we need to find ways to reverse them. Researchers have now created an air filter that passively captures CO2 from building ventilation systems, offering a low-cost alternative to energy-hungry carbon-capture plants.
The idea of pulling carbon out of the atmosphere to help solve climate change has long been resisted by climate activists, who worry it could be an excuse to take less drastic action.
But with the pace of reductions still well below what’s required to avert the worst impacts of a warming climate, even bodies like the Intergovernmental Panel on Climate Change now concede carbon capture is likely to play a crucial role.
However, conventional direct-air-capture systems are large, expensive, and energy-intensive, and it’s not clear whether the technology can be scaled to meet the challenge ahead.
Now researchers have developed a carbon-capture model that would instead install CO2-absorbing air filters in building ventilation systems. Much like rooftop solar panels, they say, the approach could use existing infrastructure to lower the cost and widen the reach of carbon-capture efforts.
“The massive land use and capital investment of centralized DAC [direct-air-capture] plants and the energy-intensive process of adsorbent regeneration limit its wide employment,” the researchers write in a paper in Science Advances. “By taking advantage of billions of ventilation systems in the world, distributed DAC air filter technology can shift the paradigm.”
Direct-air-capture plants currently under development are large and require significant amounts of land and infrastructure. They typically pull vast quantities of air through chemical sorbents to extract CO2. But because the concentration of CO2 in the atmosphere is relatively low, fans and pumps have to run at high power for long periods to extract even modest amounts of the gas.
The sorbents must then be heated to release the captured carbon. This uses even more energy. To make the process less costly, the plants are often located near sources of waste heat or low-cost electricity generation, such as geothermal, which significantly limits where they can be deployed.
The new approach proposes embedding carbon-capture materials into the heating, ventilation, and air-conditioning (HVAC) systems already installed in homes, offices, and factories. The design relies on a lightweight filter made of carbon nanofibers coated with polyethylenimine (PEI), a polymer that binds with CO2 from the air.
Crucially, the filter requires relatively little energy to release the carbon because the nanofibers absorb sunlight very efficiently. This means they can be regenerated by simply warming them to about 80 degrees Celsius under direct sunlight. A short electrical pulse of one to two seconds can also heat the conductive fibers enough that they release the gas almost instantly. Both methods require far less energy than the amount used in conventional direct-air-capture plants.
The filters also have a negligible impact on airflow, which means they could be added to existing infrastructure without major design changes or increases in fan power.
The researchers calculated that over a filter’s lifetime, it would achieve a net carbon removal efficiency of about 92 percent when regenerated using solar heat. That’s because the system would generate just 0.073 kilograms of CO2 emissions for each kilogram of CO2 it removes—much lower than most current direct-air-capture systems.
They estimated the system would cost $362 per ton of CO2 removed if the filters were regenerated using solar heat or $821 per ton with electricity. Current estimates for large-scale direct-air-capture plants range from $100 to $1,000 a ton, but the researchers note that those lower estimates are only possible with access to rare low-cost energy sources. Factoring in available tax incentives and storage credits, the authors estimate net costs could decrease to between $209 and $668 per ton.
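These figures are easy to sanity-check. A minimal sketch, using only the numbers quoted above (the study’s full lifecycle accounting may differ slightly):

```python
# Figures as reported in the study (solar regeneration case).
emitted_per_removed = 0.073            # kg CO2 emitted per kg CO2 captured
net_fraction = 1 - emitted_per_removed
print(f"Net removal fraction: {net_fraction:.1%}")   # about 92-93%

# Reported cost per ton of CO2 removed, before and after incentives.
gross_cost = {"solar": 362, "electric": 821}   # $/ton
net_cost = {"solar": 209, "electric": 668}     # $/ton after credits
for mode in gross_cost:
    print(mode, "implied credits per ton:", gross_cost[mode] - net_cost[mode])
```

Notably, both regeneration modes imply the same $153 per ton in credits, consistent with a fixed per-ton incentive such as a carbon storage tax credit.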
If deployed widely, the impact could be massive. The researchers estimate the approach could remove around 25 million tons of CO2 each year across the US and as much as 596 million tons globally. The main challenges would be scaling the production of the nanofiber material and working out the logistics of collecting and regenerating filters from so many locations.
Nonetheless, the approach’s low cost suggests it could be a promising way for businesses and homeowners to help chip away at carbon emissions.
The post Scientists Say New Air Filter Transforms Any Building Into a Carbon-Capture Machine appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through October 18)
Self-Improving Language Models Are Becoming Reality With MIT’s Updated SEAL Technique
Carl Franzen | VentureBeat
“Researchers at [MIT] are gaining renewed attention for developing and open sourcing a technique that allows large language models (LLMs)—like those underpinning ChatGPT and most modern AI chatbots—to improve themselves by generating synthetic data to fine-tune upon.”
Biotechnology
95% of Kids With ‘Bubble Boy’ Disease Cured by One-Time Gene Therapy
Paul McClure | New Atlas
“The researchers followed these patients for a median of 7.5 years, totaling 474 patient-years. …The study was the largest and longest follow-up of a gene therapy of this kind to date. Importantly, all 62 children survived to the end of the trial. Of the 62 participants, 59 of them—that’s 95%—were successfully treated.”
Computing
Paralyzed Man Can Feel Objects Through Another Person’s Hand
Carissa Wong | New Scientist ($)
“Keith Thomas, a man in his 40s with no sensation or movement in his hands, is able to feel and move objects by controlling another person’s hand via a brain implant. The technique might one day even allow us to experience another person’s body over long distances.”
Computing
Why Signal’s Post-Quantum Makeover Is an Amazing Engineering Achievement
Dan Goodin | Ars Technica
“Eleven days ago, the nonprofit entity that develops the protocol, Signal Messenger LLC, published a 5,900-word write-up describing its latest updates that make Signal fully quantum-resistant. The complexity and problem-solving required for making the Signal Protocol quantum safe are as daunting as just about any in modern-day engineering.”
Tech
AI Economics Are Brutal. Demand Is the Variable to Watch.
Steven Rosenbush | The Wall Street Journal ($)
“Even the most likely eventual winners in AI are losing billions of dollars right now. It’s hard to predict how this will play out in the financial markets, but here’s a clue. Keep an eye on demand for AI, measured in units of data processed. It’s soaring right now. The entire AI bet may turn on how far and fast it ramps from here.”
Future
California Becomes First State to Regulate AI Companion Chatbots
Rebecca Bellan | TechCrunch
“The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies—from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika—legally accountable if their chatbots fail to meet the law’s standards.”
Future
Meet the Man Building a Starter Kit for Civilization
Tiffany Ng | MIT Technology Review ($)
“[The Global Village Construction Set (GVCS) is] a set of 50 machines—everything from a tractor to an oven to a circuit maker—that are capable of building civilization from scratch and can be reconfigured however you see fit.”
Robotics
Waymo Plans to Launch a Robotaxi Service in London in 2026
Kirsten Korosec | TechCrunch
“Waymo will start with human safety drivers behind the wheel before it launches driverless testing and eventually invites the public to hail its robotaxis, a strategy that it has used in other commercial markets such as Phoenix and San Francisco.”
Computing
Nvidia Sells Tiny New Computer That Puts Big AI on Your Desktop
Benj Edwards | Ars Technica
“On Tuesday, Nvidia announced it will begin taking orders for the DGX Spark, a $4,000 desktop AI computer that wraps one petaflop of computing performance and 128GB of unified memory into a form factor small enough to sit on a desk. Its biggest selling point is likely its large integrated memory that can run larger AI models than consumer GPUs.”
Artificial Intelligence
The AI Industry’s Scaling Obsession Is Headed for a Cliff
Will Knight | Wired ($)
“By mapping scaling laws against continued improvements in model efficiency, the researchers found that it could become harder to wring leaps in performance from giant models whereas efficiency gains could make models running on more modest hardware increasingly capable over the next decade.”
Energy
In Austin, This 100% Geothermal Neighborhood Is Designed to Shrink Utility Bills
Adele Peters | Fast Company
“Heat pumps in each house connect to pipes that loop hundreds of feet underground, making use of the earth’s steady temperature for heating and cooling. The houses are also built to use as little energy as possible, with features like deep eaves that shade the interior and reduce the need for air-conditioning. Solar shingles on the roofs produce enough power to match each home’s expected electricity use.”
Tech
Rise of the Cursor Resistance: Why Some Techies Want to Ignore AI Coding Tools
Rocket Drew | The Information ($)
“The technology has indeed become an inescapable part of Silicon Valley, but the rush to adopt it has brought a backlash among programmers. Partly that’s because the AI coding tools have some obvious technical limitations—sometimes producing error-ridden code, among other problems—and partly it’s because human coders worry any sort of adoption of the tools will hasten their own obsolescence.”
Artificial Intelligence
Why AI Startups Are Taking Data Into Their Own Hands
Russell Brandom | TechCrunch
“Where training sets were once scraped freely from the web or collected from low-paid annotators, companies are now paying top dollar for carefully curated data. With the raw power of AI already established, companies are looking to proprietary training data as a competitive advantage. And instead of farming out the task to contractors, they’re often taking on the work themselves.”
Future
From Slop to Sotheby’s? AI Art Enters a New Phase
Grace Huckins | MIT Technology Review ($)
“Amid all the muck, there are people using AI tools with real consideration and intent. Some of them are finding notable success as AI artists: They are gaining huge online followings, selling their work at auction, and even having it exhibited in galleries and museums.”
Space
Long-Lived Gamma Ray Burst Could Signal a New Kind of Cosmic Catastrophe
Daniel Clery | Science
“Some theorists [are considering] perhaps the most unusual scenario of all: a black hole ripping up another star from within. This model starts with a stellar-mass black hole and a large star orbiting each other. When the star has burned up all its hydrogen fuel, it is left with a dense helium core and an outer envelope that swells up. That envelope can drag on the black hole, causing it to spiral in and ultimately fall into the helium core.”
The post This Week’s Awesome Tech Stories From Around the Web (Through October 18) appeared first on SingularityHub.
Investors Have Poured Nearly $10 Billion Into Fusion Power. Will Their Bet Pay Off?
Some companies claim they’ll be supplying power commercially within a decade. How likely is that?
Over the past five years, private-sector funding for fusion energy has exploded. The total invested is approaching $10 billion, from a combination of venture capital, deep-tech investors, energy corporations, and sovereign governments.
Most of the companies involved (and the cash) are in the United States, though activity is also increasing in China and Europe.
Why has this happened? There are several drivers: increasing urgency for carbon-free power; advances in technology and understanding, such as new materials and control methods using artificial intelligence (AI); a growing ecosystem of private-sector companies; and a wave of capital from tech billionaires. This comes on the back of demonstrated progress in theory and experiments in fusion science.
Some companies are now making aggressive claims to start supplying power commercially within a few years.
What Is Fusion?
Nuclear fusion involves combining light atoms (typically hydrogen and its heavy isotopes, deuterium and tritium) to form a heavier atom, releasing energy in the process. It’s the opposite of nuclear fission (the process used in existing nuclear power plants), in which heavy atoms split into lighter ones.
Taming fusion for energy production is hard. Nature achieves fusion reactions in the cores of stars, at extremely high density and temperature.
The density of the plasma at the sun’s core is 150 times that of water, and the temperature is around 15 million degrees Celsius. Here, ordinary hydrogen atoms fuse to ultimately form helium.
However, each kilogram of hydrogen produces only around 0.3 watts of power because the “cross section of reaction” (how likely the hydrogen atoms are to fuse) is tiny. The sun, though, is enormous, so the total power output (around 10²⁶ watts) and the burn duration (10 billion years) are astronomical.
Fusion of heavier forms of hydrogen (deuterium and tritium) has a much higher cross section of reaction, meaning they are more likely to fuse. The cross-section peaks at a temperature ten times hotter than the core of the sun: around 150 million degrees Celsius.
The only way to continuously contain the plasma at temperatures this high is with an extremely strong magnetic field.
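To get a feel for the scale involved, a quick back-of-envelope calculation helps. It uses the standard energy release of about 17.6 MeV per deuterium-tritium reaction, a textbook physics value not quoted in the article:

```python
# Each D-T fusion reaction releases about 17.6 MeV (standard physics value).
MEV_TO_JOULES = 1.602e-13
energy_per_reaction = 17.6 * MEV_TO_JOULES   # ~2.8e-12 joules

# Reactions per second needed to sustain 500 megawatts of fusion power
# (the scale of device discussed below):
power_watts = 500e6
rate = power_watts / energy_per_reaction
print(f"{rate:.2e} reactions per second")    # ~1.8e20
```

Even though each reaction releases millions of times more energy than a chemical reaction, a power-plant-scale device must sustain an enormous reaction rate continuously.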
Increasing the Output
So far, fusion reactors have struggled to consistently put out more energy than is put in to make the fusion reaction happen.
The most common design for fusion reactors uses a toroidal, or donut-like, shape.
The best result using deuterium–tritium fusion in the donut-like “tokamak” design was achieved at the European JET reactor in 1997, where the energy output was 0.67 times the input. (However, the Japanese JT-60 reactor has achieved a result using only deuterium that suggests it would reach a higher number if tritium were involved.)
Larger gains have been demonstrated in brief pulses. This was first achieved in 1952 in thermonuclear weapons tests, and in a more controlled manner in 2022 using high-powered lasers.
The ITER Project
The public program most likely to demonstrate fusion is the ITER project. ITER, formerly known as the International Thermonuclear Experimental Reactor, is a collaborative project of more than 35 nations that aims to demonstrate the scientific and technological feasibility of fusion as an energy source.
ITER was first conceived in 1985, at a summit between US and Soviet leaders Ronald Reagan and Mikhail Gorbachev. Designing the reactor and selecting a site took around 25 years, with construction commencing at Cadarache in southern France in 2010.
The project has seen some delays, but research operations are now expected to begin in 2034, with deuterium–tritium fusion operation slated for 2039. If all goes according to plan, ITER will produce some 500 megawatts of fusion power, from as little as 50 megawatts of external heating. ITER is a science experiment, and won’t generate electricity. For context, however, 500 megawatts would be enough to power perhaps 400,000 homes in the US.
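The figure of merit behind these numbers is the fusion gain, Q: fusion power out divided by external heating power in. A minimal sketch (the JET input/output wattages are the commonly reported 1997 values, not stated in the article):

```python
# Fusion gain: Q = fusion power produced / external heating power supplied.
def q_factor(p_fusion_mw: float, p_heating_mw: float) -> float:
    return p_fusion_mw / p_heating_mw

print(q_factor(500, 50))   # ITER's design target: Q = 10
print(q_factor(16, 24))    # JET, 1997: roughly 0.67, as cited above
```

A reactor must clear Q = 1 just to break even on heating power, and commercial plants would need substantially more to cover conversion losses and the rest of the plant’s energy needs.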
New Technologies, New Designs
ITER uses superconducting magnets that operate at temperatures close to absolute zero (around –269°C). Some newer designs take advantage of technological advances that allow for strong magnetic fields at higher temperatures, reducing the cost of refrigeration.
One such design is the privately owned Commonwealth Fusion Systems’ SPARC tokamak, which has attracted some $3 billion in investment. SPARC was designed using sophisticated simulations of how plasma behaves, many of which now use AI to speed up calculations. AI may also be used to control the plasma during operations.
Another company, Type One Energy, is pursuing a design called a stellarator, which uses a complex asymmetric system of coils to produce a twisted magnetic field. In addition to high-temperature superconductors and advanced manufacturing techniques, Type One Energy uses high-performance computing to optimize its machine designs for maximum performance.
Both companies claim they will roll out commercial fusion power by the mid-2030s.
In the United Kingdom, a government-sponsored industry partnership is pursuing the Spherical Tokamak for Energy Production, a prototype fusion pilot plant proposed for completion by 2040.
Meanwhile, in China, a state-owned fusion company is building the Burning Plasma Experimental Superconducting Tokamak, which aims to demonstrate a power gain of five. “First plasma” is slated for 2027.
When?
All projects planning to make power from fusion using donut-shaped magnetic fields rely on very large devices, producing on the order of a gigawatt of power. This is for fundamental reasons: Larger devices have better confinement, and more plasma means more power.
Can this be done in a decade? It won’t be easy. For comparison, design, siting, regulatory compliance, and construction of a 1-gigawatt coal-fired power station (a well-understood, mature, but undesirable technology) could take up to a decade. A 2018 Korean study indicated the construction alone of a 1-gigawatt coal-fired plant could take more than 5 years. Fusion is a much harder build.
Private and public-private partnership fusion energy projects with such ambitious timelines would have high returns—but a high risk of failure. Even if they don’t meet their lofty goals, these projects will still accelerate the development of fusion energy by integrating new technology and diversifying risk.
Many private companies will fail. This shouldn’t dissuade the public from supporting fusion. In the long term, we have good reasons to pursue fusion power—and to believe the technology can work.
Disclosure statement: Matthew Hole receives funding from the Australian government through the Australian Research Council and the Australian Nuclear Science and Technology Organization (ANSTO), and the Simons Foundation. He is also affiliated with ANSTO, the ITER Organization as an ITER Science Fellow, and is chair of the Australian ITER Forum.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Investors Have Poured Nearly $10 Billion Into Fusion Power. Will Their Bet Pay Off? appeared first on SingularityHub.
ChatGPT-Like AI Unveils 1,300 Regions in the Mouse Brain—Some Uncharted
The algorithm identified known regions as well as mysterious domains with yet unknown functions.
At the turn of the 20th century, Korbinian Brodmann released one of the most consequential brain maps ever. By studying the humps, grooves, layers, and cells of the cortex—the outermost layer of the brain—he divided the wrinkly tissue into 52 distinct areas.
Brodmann’s map was based solely on microscopic images of the brain. Since then, neuroscientists have added a variety of other data types, including high-resolution brain scans, neuron connectivity, and gene expression. In 2016, the human cortex map received a seminal update that combined multiple datasets. It defined 180 “universal” areas in the human cerebral cortex—far more than Brodmann’s map—many of which were linked to specific brain functions.
Subdividing the brain can drive neuroscience discoveries. By linking specific brain functions in health and disease to smaller, more precise anatomical regions, scientists can better study how the brain changes with age and disease or fine-tune treatments.
Previous maps heavily relied on the keen eyes of human experts to draw out regions. But with increasingly detailed datasets on multiple scales—genes, cells, neural networks—across the entire brain, scientists are increasingly relying on machine minds for help.
Now, thanks to a ChatGPT-like AI, machines may take over brain districting entirely. A recent collaboration between the University of California, San Francisco’s Abbasi Lab and the Allen Institute married AI and neuroanatomy to build one of the most detailed mouse brain maps ever. Dubbed CellTransformer, the AI learned how cells relate to each other using massive datasets detailing which genes are turned on or off throughout the brain.
The AI churned through over 200 mouse brain slices and nine million cells to outline 1,300 brain regions and subregions across multiple mice. It easily discerned well-defined areas such as the hippocampus, the brain’s memory hub. But the algorithm also identified an elusive layer in the motor cortex and mysterious domains with yet unknown functions.
“It’s like going from a map showing only continents and countries to one showing states and cities,” said study author Bosiljka Tasic in a press release. “And based on decades of neuroscience, new regions correspond to specialized brain functions to be discovered.”
An Atlas of Brain Maps
Thanks to increasingly sophisticated microscopy and affordable genetic tools, large-scale brain maps now cover a range of complexities in brain organization.
You can think of the brain’s architecture as a tower. Genes are the foundation. All brain cell types have the same set of genes, but mutations can lead to a multitude of brain diseases. This layer inspires gene therapies, some of which are gaining steam.
The next level up is transcriptomics—that is, which genes are turned on or off. Different brain cells have unique gene expression signatures that hint at their health and function. A powerful tool called spatial transcriptomics captures these signals at the level of single cells in a map across brain slices. This map pinpoints genetic profiles in time and space.
Further up the tower is connectomics—how neurons functionally wire together at both the local and global scales—and behavior. The Machine Intelligence From Cortical Networks (MICrONS) consortium operates at this scale. The group has painstakingly imaged and mapped a cubic millimeter of mouse brain and linked the neural connections to behavior. Finally, brain scans, such as functional MRI, offer a more bird’s-eye view of the brain in action.
Each level gives us a unique perspective on brain regions and how they work. But too much data can be an embarrassment of riches. “Transforming this abundance of data into a useful representation can be difficult, even for fields with a wealth of prior knowledge, such as neuroanatomy,” wrote the authors.
Hello, Neighbor
The new study zeroed in on one level: spatial transcriptomics.
At the heart of CellTransformer is the same type of AI that powers ChatGPT and other popular chatbots. Called a transformer, the algorithm uses artificial neural networks to process data. First introduced in 2017, transformers are a foundation for other AI models, such as large language models, to build upon. Think of them as scaffolding for building a house. The final architectural designs may look vastly different, but they all rely on the same initial framework.
Transformers are especially adept at “understanding” context. For example, they can model how words in sentences relate to each other, allowing chatbots to deliver human-like responses. Rather than training the AI with data scraped from the internet, the authors fed it multiple existing datasets collected from mouse brains. These included the Allen Brain Cell Whole Mouse Brain Atlas for structural information, a spatial transcriptomic atlas called MERFISH, and a single-cell RNA sequencing dataset—which also charts active genes—from millions of cells.
They then asked the AI to find “local neighborhoods” based on any given cell without additional guidance. Similar to finding patterns in words, CellTransformer learned patterns of spatial transcriptomics surrounding cells. Each neighborhood was then marked with a set of “tokens”—building blocks for the AI to analyze—that could accurately predict gene expression and link the results to cell type and tissue information.
“While transformers are often applied to analyze the relationship between words in a sentence, we use CellTransformer to analyze the relationship between cells that are nearby in space,” said study author Reza Abbasi-Asl. “It learns to predict a cell’s molecular features based on its local neighborhood, allowing it to build up a detailed map of the overall tissue organization.”
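The idea of predicting a cell’s molecular features from its spatial neighbors can be sketched in a toy form. This is an illustration only: the real CellTransformer is far more elaborate, and all names and numbers below are invented for the example.

```python
# Toy sketch: estimate a cell's gene expression from an attention-weighted
# average of its nearest spatial neighbors (closer cells get higher weight).
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 200, 8

positions = rng.uniform(0, 100, size=(n_cells, 2))            # x, y on a slice
expression = rng.poisson(3.0, size=(n_cells, n_genes)).astype(float)

def predict_from_neighbors(i, positions, expression, k=10):
    """Estimate cell i's expression profile from its k nearest neighbors."""
    dists = np.linalg.norm(positions - positions[i], axis=1)
    dists[i] = np.inf                       # exclude the cell itself
    neighbors = np.argsort(dists)[:k]
    scores = -dists[neighbors]              # nearer neighbor -> larger score
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over neighbors
    return weights @ expression[neighbors]  # one estimate per gene

pred = predict_from_neighbors(0, positions, expression)
print(pred.shape)  # (8,)
```

A full transformer replaces the fixed distance-based weights with learned attention over cell embeddings, but the neighborhood-to-prediction structure is the same.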
The team first used the AI to analyze complex but well-known brain areas, including the hippocampus, using the Allen Institute’s Common Coordinate Framework, a gold standard for neuroanatomy.
The hippocampus is a seahorse-shaped structure buried deep inside the brain critical for learning and memory. It consists of multiple regions, each with distinct but intertwined jobs and unique gene expression profiles. CellTransformer performed admirably, marking subdivisions similar to previous results. It also excelled at delineating areas in the cortex—for example, those related to sensing and movement—which Brodmann roughly sketched out over a century ago.
Perhaps more excitingly, the AI charted a slew of previously unknown areas. Some centered around a hub in the midbrain, which is known for initiating movement, emotion, and other behaviors. Often destroyed in Parkinson’s disease, the area could be a target for treatment. CellTransformer also found several cellular neighborhoods that intermingled in a grid-like pattern, suggesting they could form a previously undiscovered local neural network.
The AI identified 1,300 brain regions overall. Though to be clear, the results haven’t been experimentally confirmed. The authors also stress the findings shouldn’t be interpreted to mean “the brain is composed of discrete brain regions” but perhaps as a gradient of gene expression differences. Still, the map may help scientists uncover yet unknown functions in small but distinctive brain regions or link specific brain areas to diseases.
The AI isn’t tailored to analyzing just the brain. It could also digitally dissect other tissues—including cancerous ones—and organs into subsections. Similar to the brain, the AI could perhaps find nuanced structures and functions that inspire new targets and treatments.
The post ChatGPT-Like AI Unveils 1,300 Regions in the Mouse Brain—Some Uncharted appeared first on SingularityHub.



