Singularity HUB
This Wireless Brain Implant Is Smaller Than a Grain of Salt
It uses light to record and transmit brain signals and worked for a year with minimal scarring in mice.
As its whiskers flitter, the mouse’s brain sparks with activity. A tiny implant records the electrical chatter and beams it to a nearby computer.
Smaller than a grain of salt, the implant is powered by and transmits data with light. Unlike most implants, it moves with the brain to reduce scarring. Dubbed MOTE, the device reliably captured electrical signals for a year in mice—about half their lifespan—without obvious damage.
“The long-term recording of neural activity could be used to understand complex behaviors and disorders,” Sunwoo Lee at Nanyang Technological University, Alyosha Molnar at Cornell University, and team wrote in a recent paper describing the implant.
Penny for Your Thoughts
Brain implants are helping decode—and restore—the neural signals behind our thoughts, memories, movements, and behavior.
Most devices rely on arrays of microelectrodes inserted into the brain, though some sit on the brain’s surface to minimize damage. From translating neural activity into computerized speech to restoring movement in people with paralysis, these devices have already transformed lives.
But there’s a major drawback. Most implants use wires plugged into a port embedded in a person’s skull to transfer signals, requiring extensive surgery. Implanted electrodes, although small, are like fixed pins inside a wobbly Jell-O block. They can’t move with brain tissue. Over time, scarring reduces the implants’ efficiency, and the hardware triggers inflammation.
Scientists have been tackling these roadblocks with clever ideas like wireless implants that transmit data on radio frequencies, a bit like walkie-talkies. “Neurograin” implants, for example, record and stimulate the brain wirelessly and transmit data to a thin electrical patch on the scalp. Other devices use ultrasound for power and to send signals to a controller.
But most wireless implants are still bulky, “equivalent to a sizable fraction of the mouse brain,” wrote the team.
Then there’s the ultimate enemy: Time. The brain is bathed in fluid for nourishment and waste removal, but this soupy concoction eats away at electrical components. Although some methods can capture neural activity over many months with a microscope and implanted light probes, they only work in genetically engineered mice with glow-in-the-dark neurons.
A durable wireless implant for living brains has so far escaped scientists’ grasp.
“Our goal was to make the device small enough to minimize that disruption while still capturing brain activity faster than imaging systems, and without the need to genetically modify the neurons for imaging,” Molnar said in a press release.
Power of Light
The new MOTE device, smaller than a grain of salt, combines electronics and LEDs for wireless recording and communication.
Red and infrared light penetrate the scalp, skull, and brain with minimal distortion, making them useful energy sources. A diode turns those wavelengths of light into electrical energy—a bit like the photovoltaic cells inside solar panels—to power the device. Once the implant captures electrical signals from the brain, it sends them to a computer on short pulses of light.
Like Morse code, the exact timing and duration of the pulses reflect neural activity. This kind of pulse-based signaling is widely used in satellite communication, wrote the team, and requires very little power to operate.
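In communications terms, the scheme works like pulse-position signaling: a sample’s value sets when a light pulse fires within a fixed time frame, so the timing alone carries the data. Below is a minimal Python sketch of that idea; the frame length and voltage range are illustrative assumptions, not MOTE’s actual specifications.

```python
import numpy as np

# Minimal sketch of pulse-position-style encoding: each sampled voltage sets
# the delay of a single light pulse within its time frame. Frame length and
# voltage range below are illustrative, not MOTE's specifications.

def encode(samples_uv, frame_us=100.0, v_min=-500.0, v_max=500.0):
    """Map voltage samples (microvolts) to pulse emission times (microseconds)."""
    clipped = np.clip(samples_uv, v_min, v_max)
    fraction = (clipped - v_min) / (v_max - v_min)        # 0..1 within each frame
    frame_starts = np.arange(len(samples_uv)) * frame_us
    return frame_starts + fraction * frame_us

def decode(pulse_times_us, frame_us=100.0, v_min=-500.0, v_max=500.0):
    """Recover the voltages from pulse timing alone."""
    fraction = (pulse_times_us % frame_us) / frame_us
    return v_min + fraction * (v_max - v_min)

signal = 200.0 * np.sin(np.linspace(0, 2 * np.pi, 50))    # toy neural waveform
pulses = encode(signal)
print(np.allclose(decode(pulses), signal, atol=1.0))      # True: timing carries the signal
```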
MOTE’s onboard electronics are like computer chips. Each packs 186 transistors, which form the basis of three main circuits. One circuit boosts recorded brain signals, another recodes them into light pulses, and the third drives LEDs for transmission to a computer.
These components are protected by a custom sheath made by coating the implant one atomic layer at a time. The ultra-thin sheath protects MOTE from the brain’s corrosive environment. Each fabrication step can be done in parallel, making nearly 100 devices at the same time.
“As far as we know, this is the smallest neural implant that will measure electrical activity in the brain and then report it out wirelessly,” said Molnar.
Live Long and Prosper
In a first test, the devices reliably captured electrical activity from heart muscle cells in petri dishes, suggesting they worked as intended.
The team next implanted the device into a unique part of the mouse brain. Mice heavily depend on their whiskers to navigate the world, and signals from the whiskers are processed in a region called the barrel cortex, where a range of electrical patterns capture sensations and generate twitches in each whisker.
Some mice received the implant on top of the brain, where it didn’t penetrate the delicate tissue. But most had the device implanted deeper using a nanoinjector. Over the next year, the device faithfully transmitted data from the barrel cortex when scientists tickled the mice’s whiskers. It detected activity from single neurons and neural network activity associated with behavior.
MOTE seemed mostly harmless. None of the mice experienced seizures or other neurological issues sometimes seen in larger implants. They skittered around and chowed down on food as usual. There was also very little scarring around the implant, even after a year.
The devices aren’t just for decoding mouse brains. They could one day pick up electrical signals from organoids—so-called mini-brains. Organoids loosely mimic the early stages of brain development. Although tiny, they’re densely packed with multiple types of brain cells and connections, making it difficult for bulkier implants to record signals without damage.
Upgraded with better detection and light-emission hardware, MOTEs could theoretically work up to six millimeters deep in the brain, enough to record from the entire mouse brain and in organoids, wrote the team.
They’re still far from clinical use, but making implants wireless means they’re more compatible with brain imaging technologies, such as fMRI (functional magnetic resonance imaging), which could paint a wider picture of brain activity during tasks. Outside the brain, MOTE could tap into the spinal cord, heart, or other tissues and record dynamic movies of their health.
This Week’s Awesome Tech Stories From Around the Web (Through November 15)
Fei-Fei Li’s World Labs Speeds Up the World Model Race With Marble, Its First Commercial Product
Rebecca Bellan | TechCrunch
“If large language models can teach machines to read and write, Li hopes systems like Marble can teach them to see and build. She says the ability to understand how things exist and interact in three-dimensional spaces can eventually help machines make breakthroughs beyond gaming and robotics, and even into science and medicine.”
Computing: IBM Has Unveiled Two Unprecedentedly Complex Quantum Computers
Karmela Padavic-Callaghan | New Scientist ($)
Space: Blue Origin Sticks First New Glenn Rocket Landing and Launches NASA Spacecraft
Sean O’Kane | TechCrunch
“Jeff Bezos’ Blue Origin has landed the booster of its New Glenn mega-rocket on a drone ship in the Atlantic Ocean on just its second attempt—making it the second company to perform such a feat, following Elon Musk’s SpaceX. It’s an accomplishment that will help the new rocket system become an option to send larger payloads to space, the moon, and beyond.”
Tech: When AI Hype Meets AI Reality: A Reckoning in 6 Charts
Christopher Mims | The Wall Street Journal ($)
“The takeaway: The projections of AI companies and their partners don’t reflect shortages of equipment. At the same time, these projections assume a gargantuan market for AI-powered products and services. Analysts can’t agree whether that market will materialize as quickly as promised.”
Computing: MIT’s Injectable Brain Chips Could Treat Disease Without Surgery
Abhimanyu Ghoshal | New Atlas
“[The technology] involves sub-cellular sized wireless electronic devices (SWED) that can be delivered to your brain via a jab in the arm. Once these tiny chips have been injected, they can autonomously implant themselves on target regions in the brain and power themselves as they deliver electrical stimulation to the affected areas.”
Computing: Two Visions for the Future of AR Smart Glasses
Alfred Poor | IEEE Spectrum
“Some tech companies are betting that today’s smart glasses will be the perfect interface for delivering AI-supported information and other notifications. The other possibility is that smart glasses will replace bulky computer screens, acting instead as a private and portable monitor. But the companies pursuing these two approaches don’t yet know which choice consumers will make or what applications they really want.”
Robotics: Waymo to Roll Out Driverless Taxis on Highways in Three US Cities
Rafe Rosner-Uddin, Financial Times | Ars Technica
“Waymo’s rollout on highways marks a significant step for the robotaxi operator as it aims to encourage the mass adoption of driverless vehicles. It is the first time a company will carry out paid driverless services on the highway without a driver behind the wheel.”
Biotechnology: Scientists Grow More Hopeful About Ending a Global Organ Shortage
Roni Caryn Rabin | The New York Times ($)
“In a modern glass complex in Geneva last month, hundreds of scientists from around the world gathered to share data, review cases—and revel in some astonishing progress. Their work was once considered the stuff of science fiction: so-called xenotransplantation, the use of animal organs to replace failing kidneys, hearts, and livers in humans.”
Future: These Technologies Could Help Put a Stop to Animal Testing
Jessica Hamzelou | MIT Technology Review
“Earlier this week, the UK’s science minister announced an ambitious plan: to phase out animal testing. …Animal welfare groups have been campaigning for commitments like these for decades. But a lack of alternatives has made it difficult to put a stop to animal testing. Advances in medical science and biotechnology are changing that.”
Tech: The Complicated Reality of 3D Printed Prosthetics
Britt H. Young | IEEE Spectrum
“By the mid-2010s, 3D-printing was in the ‘Peak of Inflated Expectations’ phase, and prosthetics was no exception. …Erenstone says [despite struggles to lower costs] the technology is finally getting closer to achieving some of the things everyone imagined was possible ten years ago.”
In Wild Experiment, Surgeon Uses Robot to Remove Blood Clot in Brain 4,000 Miles Away
The transatlantic procedure, carried out on a human cadaver in Scotland, suggests future stroke surgeries could be completed remotely.
Robotic surgery has dramatically improved surgical precision, but it could also help surgeons treat people on the other side of the world. A surgeon in Florida has now used a robot to remove a simulated brain clot from a cadaver in Scotland, with near-instant feedback across 4,000 miles.
In the US, someone has a stroke roughly every 40 seconds, totaling more than 795,000 cases each year and costing the health system more than $56 billion annually, according to the Centers for Disease Control and Prevention.
Ischemic strokes block blood flow to the brain and account for 87 percent of cases. These strokes often require an emergency surgery called a thrombectomy to remove the offending blood clot. However, the procedure requires highly skilled specialists and advanced imaging setups, which means they’re only available to a fraction of stroke patients.
That could soon change thanks to a breakthrough experiment carried out by doctors on either side of the Atlantic. Ricardo Hanel, a neurosurgeon at the Baptist Medical Center in Jacksonville, Florida, used a surgical robot to carry out a thrombectomy on a human cadaver at the University of Dundee in Scotland.
“To operate from the US to Scotland with a 120 millisecond (blink of an eye) lag is truly remarkable,” Hanel said in a press release.
“Tele neurointervention [robotic surgery at a distance] will allow us to decrease the gap and further our reach to provide one of the most impactful procedures in humankind.”
The robotic system used in the experiment was developed by Lithuanian company Sentante. The system translates a surgeon’s hand movements into fine robotic control of the standard tools used in the procedure. It also provides haptic feedback, giving the surgeon the same sensations they would feel if doing the procedure by hand.
This feedback makes it possible for the operators to recognize subtle but crucial cues—such as the softness of clot material or the transition into more delicate vessels in the brain. Study leader Iris Grunwald at the University of Dundee also used the robot to carry out a thrombectomy on a cadaver from a remote site within the same hospital, as a precursor to the transatlantic experiment.
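As a rough sketch of what such a teleoperation loop involves, the toy Python below scales hand motion down for fine control and replays each command after a fixed network delay. The 4:1 motion scaling, the 100 Hz control rate, and treating the quoted 120 milliseconds as a one-way command delay are all assumptions for illustration; Sentante has not published its control parameters.

```python
from collections import deque

# Toy teleoperation loop: hand motion (mm per tick) is scaled down for fine
# control and reaches the remote tool after a fixed network delay. The 4:1
# scaling, 100 Hz loop, and one-way 120 ms delay are illustrative assumptions.

DELAY_TICKS = 12        # 120 ms at a 100 Hz control loop
MOTION_SCALE = 0.25     # 4 mm of hand motion -> 1 mm of tool motion

def teleoperate(hand_deltas_mm):
    in_flight = deque([0.0] * DELAY_TICKS)       # commands still crossing the network
    tool_position = 0.0
    for delta in hand_deltas_mm:
        in_flight.append(delta * MOTION_SCALE)   # command leaves the console
        tool_position += in_flight.popleft()     # oldest command reaches the robot
        yield tool_position

trace = list(teleoperate([1.0] * 50))            # surgeon advances 1 mm every tick
print(trace[11], trace[12], trace[-1])           # 0.0 0.25 9.5 -> tool starts moving after the delay
```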
“It is remarkable to feel the same fine control and resistance through a robotic interface as during a live procedure,” she said in the press release. “Sentante’s robotic platform redefines what is possible in endovascular treatment today.”
The technology could greatly expand access to this lifesaving procedure, as it only requires a medical professional trained to access the patient’s arteries before a neurosurgical specialist can take over remotely. The robotic system can also be wheeled to a patient’s bedside within minutes—a critical capability given that every minute counts when it comes to strokes.
“For an ischemic stroke, the difference between walking out of hospital and a lifetime of disability can be just two to three hours,” Edvardas Satkauskas, co-founder and CEO of Sentante, said in the press release.
“Today, patients are often transported long distances to reach one of a limited number of thrombectomy centers. With Sentante, the specialist comes to the patient over a secure network and performs the entire procedure remotely—with the same tactile feel and control they have at the bedside.”
Of course, the experiments took place on cadavers rather than living patients, and bridging the gap could still be tricky. Also, a reliable internet connection—plus good backup plans should it fail—will be as crucial as a smoothly operating robot.
But these experiments suggest that your chances of surviving a stroke may soon no longer be reliant on how close you are to the nearest specialist hospital.
Can You Really Talk to the Dead Using AI? We Tried Out ‘Deathbots’ So You Don’t Have To
A growing digital afterlife industry promises to make memory interactive and, in some cases, eternal.
Artificial intelligence is increasingly being used to preserve the voices and stories of the dead. From text-based chatbots that mimic loved ones to voice avatars that let you “speak” with the deceased, a growing digital afterlife industry promises to make memory interactive and, in some cases, eternal.
In our research, recently published in Memory, Mind & Media, we explored what happens when remembering the dead is left to an algorithm. We even tried talking to digital versions of ourselves to find out.
“Deathbots” are AI systems designed to simulate the voices, speech patterns, and personalities of the deceased. They draw on a person’s digital traces—voice recordings, text messages, emails, and social media posts—to create interactive avatars that appear to “speak” from beyond the grave.
As the media theorist Simone Natale has said, these “technologies of illusion” have deep roots in spiritualist traditions. But AI makes them far more convincing and commercially viable.
Our work is part of a project called Synthetic Pasts, which explores the impact technology has on the preservation of personal and collective memory. For our study, we looked at services that claim to preserve or recreate a person’s voice, memories, or digital presence using AI. To understand how they work, we became our own test subjects. We uploaded our own videos, messages, and voice notes, creating “digital doubles” of ourselves.
In some cases, we played the role of users preparing our own synthetic afterlives. In others, we acted as the bereaved trying to talk to a digital version of someone who has passed away.
What we found was both fascinating and unsettling. Some systems focus on preserving memory. They help users record and store personal stories, organized by theme, such as childhood, family, or advice for loved ones. AI then indexes the content and guides people through it, like a searchable archive.
Others use generative AI to create ongoing conversations. You upload data about a deceased person—messages, posts, even voice samples – and the system builds a chatbot that can respond in their tone and style. It uses a subset of AI called machine learning (which improves through practice) to make its avatars evolve over time.
Some present themselves as playful (“host a séance with AI”), but the experience can feel eerily intimate. All the platforms claim to offer an “authentic” emotional connection. Yet the more personalization we attempted, the more artificial it felt.
When prompted by these systems, we provided more information about ourselves, only to have the bot repeat the exact phrasing we had used in stiff, scripted replies. At times, the tone was incongruous, such as when cheerful emojis or upbeat phrasing appeared even when discussing death—a clear reminder that algorithms are poor at handling the emotional weight of loss:
Oh hun… it (the death) is not something I’d wish for anyone to dwell on. It’s all a bit foggy now, to be honest. Let’s chat about something a bit cheerier, yeah?
The more archival-based tools we tested offered a calmer experience but also imposed rigid categories and limited nuance. As digital media scholar Andrew Hoskins has observed, memory in the age of AI becomes “conversational”—shaped by interactions between human and machine. But in our experiments, those conversations often felt flat, exposing the limits of synthetic intimacy:
Human: You were always so encouraging and supportive. I miss you.
Deathbot: I’m right here for you, always ready to offer encouragement and support whenever you need it. And I miss you too… Let’s take on today together, with positivity and strength.
Behind these experiences lies a business model. These are not memorial charities; they are tech startups. Subscription fees, “freemium” tiers, and partnerships with insurers or care providers reveal how remembrance is being turned into a product.
As the philosophers Carl Öhman and Luciano Floridi have argued, the digital afterlife industry operates within a “political economy of death,” where data continues to generate value long after a person’s life ends.
Platforms encourage users to “capture their story forever,” but they also harvest emotional and biometric data to keep engagement high. Memory becomes a service—an interaction to be designed, measured, and monetized. This, as the professor of technology and society Andrew McStay has shown, is part of a wider “emotional AI” economy.
Digital Resurrection?
The promise of these systems is a kind of resurrection—the reanimation of the dead through data. They offer to return voices, gestures, and personalities, not as memories recalled but as presences simulated in real time. This kind of “algorithmic empathy” can be persuasive, even moving, yet it exists within the limits of code and quietly alters the experience of remembering, smoothing away the ambiguity and contradiction.
These platforms demonstrate a tension between archival and generative forms of memory. All platforms, though, normalize certain ways of remembering, privileging continuity, coherence, and emotional responsiveness, while also producing new, data-driven forms of personhood.
As the media theorist Wendy Chun has observed, digital technologies often conflate “storage” with “memory,” promising perfect recall while erasing the role of forgetting—the absence that makes both mourning and remembering possible.
In this sense, digital resurrection risks misunderstanding death itself: replacing the finality of loss with the endless availability of simulation, where the dead are always present, interactive, and updated.
AI can help preserve stories and voices, but it cannot replicate the living complexity of a person or a relationship. The “synthetic afterlives” we encountered are compelling precisely because they fail. They remind us that memory is relational, contextual, and not programmable.
Our study suggests that while you can talk to the dead with AI, what you hear back reveals more about the technologies and platforms that profit from memory—and about ourselves—than about the ghosts they claim we can talk to.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Record-Breaking Qubits Are Stable for 15 Times Longer Than Google and IBM’s Designs
The qubits are similar enough to those used by the likes of Google and IBM that they could slot into existing processors in the future.
One of the biggest challenges for quantum computers is the incredibly short time that qubits can retain information. But a new qubit from Princeton University lasts 15 times longer than industry-standard versions, a major step toward large-scale, fault-tolerant quantum systems.
A major bottleneck for quantum computing is decoherence—the rate at which qubits lose stored quantum information to the environment. The faster this happens, the less time the computer has to perform operations and the more errors are introduced to the calculations.
While companies and researchers are developing error-correction schemes to mitigate this problem, qubits with greater stability could be a more robust solution. Trapped-ion and neutral-atom qubits can have coherence times on the order of seconds, but the superconducting qubits used by companies like Google and IBM remain below the 100-microsecond threshold.
These so-called “transmon” qubits have other advantages such as faster operation speeds, but their short shelf life remains a major disadvantage. Now a team from Princeton has designed novel transmon qubits with coherence times of up to 1.6 milliseconds—15 times longer than those used in industry and three times longer than the best lab experiment.
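A back-of-the-envelope calculation shows why the difference matters: if a qubit relaxes with time constant T1, the chance it decays during a single gate is roughly 1 − exp(−t_gate/T1), so longer coherence buys proportionally more operations. The 50-nanosecond gate time in the sketch below is an assumed, illustrative figure rather than one from the paper.

```python
import math

# Rough rule of thumb: a T1-limited qubit decays during a gate with
# probability ~1 - exp(-t_gate / T1). The 50 ns gate time is an assumption
# for illustration, not a figure from the Princeton paper.

def decay_error_per_gate(t_gate_s, t1_s):
    return 1.0 - math.exp(-t_gate_s / t1_s)

t_gate = 50e-9
for label, t1 in [("typical ~100 microsecond transmon", 100e-6),
                  ("reported 1.6 millisecond tantalum-on-silicon qubit", 1.6e-3)]:
    err = decay_error_per_gate(t_gate, t1)
    print(f"{label}: ~{err:.1e} decay error per gate, "
          f"roughly {int(1 / err):,} gates before errors dominate")
```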
“This advance brings quantum computing out of the realm of merely possible and into the realm of practical,” Princeton’s Andrew Houck, who co-led the research, said in a press release. “Now we can begin to make progress much more quickly.”
The team’s new approach, detailed in a paper in Nature, tackles a long-standing problem in the design of transmon qubits. Tiny surface defects in the metal used to make them, typically aluminum, can absorb energy as it travels through the circuit, resulting in errors in the underlying computations.
The new qubit instead uses the metal tantalum, which has far fewer of these defects. The researchers had already experimented with this material as far back as 2021, but earlier versions were built on top of a layer of sapphire. The researchers realized the sapphire was also leading to significant energy loss and so replaced it with a layer of silicon, which is commercially available at extremely high purity.
Creating a clean enough interface between the two materials to maintain superconductivity is challenging, but the team solved the problem with a new fabrication process. And because silicon is the computing industry’s material of choice, the new qubits should be easier to mass-produce than earlier versions.
To prove out the new process, the researchers built a fully functioning quantum chip with six of the new qubits. Crucially, the new design is similar enough to the qubits used by companies like Google and IBM that it could easily slot into existing processors to boost performance, the researchers say.
This could chip away at the main barrier preventing existing quantum computers from solving larger problems—the fact that short coherence times mean qubits are overwhelmed by errors before they can do any useful calculations.
The process of getting the design from the lab bench to the chip foundry is likely to be long and complicated though, so it’s unclear if companies will switch to this new qubit architecture any time soon. Still, the research has made dramatic progress on one of the biggest challenges holding back superconducting quantum computers.
Scientists Map the Brain’s Construction From Stem Cells to Early Adolescence
This herculean effort could help scientists unravel the causes of autism, schizophrenia, and even a deadly form of cancer.
Like the seeds of a forest, a few cells in embryos eventually sprout into an ecosystem of brain cells. Neurons get the most recognition for their computing power. But a host of other cells provides nutrition, clears the brain of waste, and wards off dangers, such as toxic protein buildup or inflammation.
This rich diversity underlies our ability to process information, transforming perception of the world and our internal dialogues into thoughts, emotions, memories, and decisions. Mimicking the brain could potentially lead to energy-efficient computers or AI systems. But we’re still decoding how the brain works.
One way to understand a machine is to first examine its parts. The landmark project BRAIN Initiative Cell Atlas Network (BICAN), launched in 2022, has parsed the brains of multiple species and compiled a census of adult brain cells with unprecedented resolution.
But brains are not computers. Their components aren’t engineered and glued on. They develop and interact cohesively over time.
Building on previous work, the BICAN consortium has now released results that peek inside the developing brain. By tracking genes and their expression in the cells of developing human and mouse brains, the researchers have built a dynamic picture of how the brain constructs itself.
This herculean effort could help scientists unravel the causes of neurodevelopmental disorders. In one study, led by Arnold Kriegstein at the University of California, San Francisco, scientists found brain stem cells that are potentially co-opted to form a deadly brain cancer in adulthood. Other studies shed light on imbalances between excitatory and inhibitory neurons—these ramp up or tone down brain activity, respectively—which could contribute to autism and schizophrenia.
“Many brain diseases begin during different stages of development, but until now we haven’t had a comprehensive roadmap for simply understanding healthy brain development,” said Kriegstein in a press release. “Our map highlights the genetic programs behind the growth of the human brain that go awry during specific forms of brain dysfunction.”
Shifting Landscape
Over a century ago, the first neuroscientists used brain cell shapes to categorize their identities. BICAN collaborators have a much larger arsenal of tools to map the brain’s cells.
A key technology called single-cell spatial transcriptomics detects which genes are turned on in cells at any given time. The results are then combined with the cells’ physical location in the brain. The result is a gene expression “heat map” that provides clues about a cell’s lineage and final identity. Like genealogical tracking, the technology traces the heritage of different types of brain cells and when they emerge while at the same time providing their physical address.
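As a cartoon of how distinct cell types fall out of such data, the Python sketch below builds a synthetic gene-expression matrix containing three hidden cell types and recovers them by clustering. The toy data and off-the-shelf k-means stand in for the consortium’s far more elaborate single-cell pipelines.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cartoon of cell-type discovery: cells are rows of a gene-expression matrix,
# and clustering groups similar expression profiles. Synthetic data and plain
# k-means stand in for the study's real single-cell pipeline.

rng = np.random.default_rng(1)
n_genes = 100
type_profiles = rng.normal(0, 3, size=(3, n_genes))          # three hidden "cell types"
cells = np.vstack([profile + rng.normal(0, 1, size=(200, n_genes))
                   for profile in type_profiles])             # 600 noisy cells

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cells)
print([int((labels == k).sum()) for k in range(3)])           # ~200 cells per recovered type
```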
Like other organs, the brain grows from stem cells.
In early developmental stages, stem cells are nudged into different fates: Some turn into neurons, some turn into other cell types. So far, no single technology can “film” their journey. But BICAN’s new releases measuring gene expression through development offer a glimpse.
In one tour-de-force study, Kriegstein and team used a technique that maps gene variants to single cells during multiple stages of development. Many variants were previously linked to neurodevelopmental disorders, including autism, but their biological contribution remained mysterious.
The team gathered 38 donated human cortex samples—the outermost part of the brain—that spanned all three trimesters of pregnancy, the period after birth, and early adolescence.
They then grouped individual cells using gene expression data across samples. They found roughly 30 different types of cells that emerge during brain development, including excitatory and inhibitory neurons, supporting cells such as glia, and immune cells called microglia.
Some were traced back to a single source. This curious cell type, dubbed tripotential intermediate progenitor cells, spawned inhibitory neurons, star-shaped glia, and the brain cells that wrap around neurons in protective sheaths of electrical insulation. The latter break down in neurological diseases like multiple sclerosis, resulting in fatigue, pain, and memory problems.
Many genes related to autism were turned on in immature neurons as they began their brain-wiring journey. Gene mutations, environmental influences, and other disruptions could interfere with their growth.
“These programs of gene expression became active when young neurons were still migrating throughout the growing brain and figuring out how to build connections with other neurons,” said study author Li Wang. “If something goes wrong at this stage, those maturing neurons might become confused about where to go or what to do.”
The mother cells also have a dark side. Scientists have long thought that glioblastoma, a fatal brain cancer, stems from multiple types of neural precursor cells. Because mother cells, marked by their distinctive gene expression profiles, develop into all three types of cells involved in the cancer, they’re essentially cancer stem cells that could be targeted for future treatments.
“By understanding the context in which one stem cell produces three cell types in the developing brain, we could be able to interrupt that growth when it reappears during cancer,” said Wang.
A Wealth of Data
Other BICAN studies also zeroed in on inhibitory neurons.
The authors of one hunted down a group of immature cells that shifted from making excitatory neurons to inhibitory ones in the middle of gestation, helping balance the two forces. In another, in mice, researchers followed inhibitory neurons as they diversified and spread across the developing brain. More subtypes with unique gene expression profiles appeared in the cortex compared to deeper regions, which are more evolutionarily ancient.
Other studies investigated gene expression in neurodevelopment and how changes can lead to inflammation. Environmental influences such as social interactions played a role, especially in forming brain circuits tailored to gauging others’ behaviors. In developing mice, several genes related to social demands abruptly changed their activity during developmental milestones, including puberty.
Some cell types were shape-shifters. In mice, an immune challenge briefly changed microglia—the brain’s immune cells—back into a developmental-like state, suggesting these cells have the ability to turn back the clock.
The collection of studies only skims the surface of what BICAN’s database offers. Although the project mainly focused on the cortex, ongoing initiatives are detailing a cell atlas of the entire developing brain across dozens of timepoints and multiple species.
“Taken together, this collection from the BICAN turns the static portrait of cell types into a dynamic story of the developing brain,” wrote Emily Sylwestrak at the University of Oregon, who was not involved in the studies.
This Week’s Awesome Tech Stories From Around the Web (Through November 8)
The Next Big Quantum Computer Has Arrived
Isabelle Bousquette | The Wall Street Journal ($)
“Helios contains 98 physical qubits, and from those can deliver 48 logical error-corrected qubits. This 2:1 ratio is unique and impressive, said Prineha Narang, professor of physical sciences and electrical and computer engineering at UCLA, and partner at venture-capital firm DCVC. Other companies require anything from dozens to hundreds of physical qubits to create one logical qubit.”
Artificial Intelligence: In a First, AI Models Analyze Language as Well as a Human Expert
Steve Nadis | Quanta
“While most of the LLMs failed to parse linguistic rules in the way that humans are able to, one had impressive abilities that greatly exceeded expectations. It was able to analyze language in much the same way a graduate student in linguistics would—diagramming sentences, resolving multiple ambiguous meanings, and making use of complicated linguistic features such as recursion.”
Computing: Wireless, Laser-Shooting Brain Implant Fits on a Grain of Salt
Malcolm Azania | New Atlas
“Along with their international partners, researchers at Cornell University have developed a micro-neural implant so tiny it could dance on the head of a pin, and so astonishingly well-engineered that after implantation in a mouse, it can wirelessly transmit data about brain function for more than a year under its own power.”
Computing: Quantum Computing Jolted by DARPA Decision on Most Viable Companies
Adam Bluestein | Fast Company
“For a technology that could produce world-changing feats but remains far from maturity—and into which billions of investment dollars have been flowing in recent months—the QBI validation is profound. The QBI’s first judgments, announced yesterday, reconfigure the competitive landscape, bolstering some powerful incumbents and boosting lesser-known players and outlier approaches. They also delivered a formidable gut punch to a couple of industry pioneers.”
Future: Our First Terraforming Goal Should Be the Moon, Not Mars
Ethan Siegel | Big Think
“The only way to prepare a world for human inhabitants is to make the environment more Earth-like: terraforming. While most of humanity’s space dreams have focused on Mars, a better candidate may be even closer: the moon. Its proximity to Earth, composition, and many other factors make it very appealing. Mars should be a dream, but not our only one.”
Biotechnology: This Genetically Engineered Fungus Could Help Fix Your Mosquito Problem
Jason P. Dinh | The New York Times ($)
“Researchers reported last week in the journal Nature Microbiology that Metarhizium—a fungus already used to control pests—can be genetically engineered to produce so much of a sweet-smelling substance that it is virtually irresistible to mosquitoes. When they laced traps with those fungi, 90 percent to 100 percent of mosquitoes were killed in lab experiments.”
Science: 10,000 Generations of Hominins Used the Same Stone Tools to Weather a Changing World
Kiona N. Smith | Ars Technica
“The oldest tools at the site date back to 2.75 million years ago. According to a recent study, the finds suggest that for hundreds of millennia, ancient hominins relied on the same stone tool technology as an anchor while the world changed around them.”
Future: The First New Subsea Habitat in 40 Years Is About to Launch
Mark Harris | MIT Technology Review ($)
“Once it is sealed and moved to its permanent home beneath the waves of the Florida Keys National Marine Sanctuary early next year, Vanguard will be the world’s first new subsea habitat in nearly four decades. Teams of four scientists will live and work on the seabed for a week at a time, entering and leaving the habitat as scuba divers.”
Robotics: Waymo’s Robotaxis Are Coming to Three New Cities
Andrew J. Hawkins | The Verge
“Waymo said it plans on launching commercial robotaxi services in three new cities: San Diego, Las Vegas, and Detroit. The announcement comes after the company said it would begin rapidly scaling to bring its fully driverless technology to more people on a faster timeline.”
Artificial Intelligence: AI Capabilities May Be Overhyped on Bogus Benchmarks, Study Finds
AJ Dellinger | Gizmodo
“You know all of those reports about artificial intelligence models successfully passing the bar or achieving PhD-level intelligence? Looks like we should start taking those degrees back. A new study from researchers at the Oxford Internet Institute suggests that most of the popular benchmarking tools that are used to test AI performance are often unreliable and misleading.”
Computing: Unesco Adopts Global Standards on ‘Wild West’ Field of Neurotechnology
Aisha Down | The Guardian
“The standards define a new category of data, ‘neural data,’ and suggest guidelines governing its protection. A list of more than 100 recommendations ranges from rights-based concerns to addressing scenarios that are—at least for now—science fiction, such as companies using neurotechnology to subliminally market to people during their dreams.”
New Images Reveal the Milky Way’s Stunning Galactic Plane in More Detail Than Ever Before
The new radio portrait of the Milky Way is the most sensitive, widest-area map at these frequencies to date.
The Milky Way is a rich and complex environment. We see it as a luminous line stretching across the night sky, composed of innumerable stars.
But that’s just the visible light. Observing the sky in other ways, such as through radio waves, provides a much more nuanced scene—full of charged particles and magnetic fields.
For decades, astronomers have used radio telescopes to explore our galaxy. By studying the properties of the objects residing in the Milky Way, we can better understand its evolution and composition.
Our study, published recently in Publications of the Astronomical Society of Australia, provides new insights into the structure of our galaxy’s galactic plane.
Observing the Entire Sky
To reveal the radio sky, we used the Murchison Widefield Array, a radio telescope in the Australian outback, composed of 4,096 antennas spread over several square kilometers. The array observes wide regions of the sky at a time, enabling it to rapidly map the galaxy.
Between 2013 and 2015, the array was used to observe the entire southern hemisphere sky for the GaLactic and Extragalactic All-sky MWA (or GLEAM) survey. This survey covered a broad range of radio wave frequencies.
The wide frequency coverage of GLEAM gave astronomers the first “radio color” map of the sky, including the galaxy itself. It revealed the diffuse glow of the galactic disk, as well as thousands of distant galaxies and regions where stars are born and die.
With the upgrade of the array in 2018, we observed the sky with higher resolution and sensitivity, resulting in the GLEAM-eXtended survey (GLEAM-X).
The big difference between the two surveys is that GLEAM could detect the big picture but not the detail, while GLEAM-X saw the detail but not the big picture.
A Beautiful Mosaic
To capture both, our team used a new imaging technique called image domain gridding. We combined thousands of GLEAM and GLEAM-X observations to form one huge mosaic of the galaxy.
Because the two surveys observed the sky at different times, it was important to correct for ionospheric distortions—shifts in radio waves caused by irregularities in Earth’s upper atmosphere. Otherwise, these distortions would shift the apparent positions of sources between observations.
The algorithm applies these corrections, aligning and stacking data from different nights smoothly. This took more than 1 million processing hours on supercomputers at the Pawsey Supercomputing Research Centre in Western Australia.
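Conceptually, the correction amounts to undoing each night’s position shift before the data are combined. The toy Python below shows why that matters: stacking without alignment smears a point source, while shifting each night back first recovers it. The pixel-shift model and offsets are invented for illustration; the real pipeline applies ionospheric corrections inside the imaging algorithm rather than to finished images.

```python
import numpy as np

# Toy version of the alignment problem: each night images the same sky patch,
# but ionospheric refraction shifts it by a different offset. Offsets and the
# simple pixel-shift model are invented for illustration.

rng = np.random.default_rng(0)
true_sky = np.zeros((64, 64))
true_sky[32, 32] = 1.0                                   # a single point source

def observe(sky, dx, dy, noise=0.05):
    shifted = np.roll(np.roll(sky, dx, axis=0), dy, axis=1)
    return shifted + noise * rng.standard_normal(sky.shape)

offsets = [(1, -2), (-1, 0), (2, 1), (0, 2)]             # per-night ionospheric shifts
nights = [observe(true_sky, dx, dy) for dx, dy in offsets]

naive = np.mean(nights, axis=0)                          # stack without correction: smeared
aligned = np.mean([np.roll(np.roll(img, -dx, axis=0), -dy, axis=1)
                   for img, (dx, dy) in zip(nights, offsets)], axis=0)
print(f"naive peak ~{naive.max():.2f}, aligned peak ~{aligned.max():.2f}")
```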
The result is a new mosaic covering 95 percent of the Milky Way visible from the southern hemisphere, spanning radio frequencies from 72 to 231 megahertz. The big advantage of the broad frequency range is the ability to see different sources with their “radio color” depending on whether the radio waves are produced by cosmic magnetic fields or by hot gas.
The emission coming from the explosion of dead stars appears in orange. The lower the frequency, the brighter it is. Meanwhile, the regions where stars are born shine in blue.
These colors allow astronomers to pick out the different physical components of the galaxy at a glance.
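In practice, “radio color” boils down to a spectral index: how a source’s brightness changes between a low-frequency and a high-frequency band. The sketch below classifies two made-up sources this way; the example fluxes, band frequencies, and the −0.5 dividing line are illustrative choices, not the survey’s actual criteria.

```python
import math

# "Radio color" as a spectral index: compare flux at a low and a high band.
# Steep negative indices suggest synchrotron emission (e.g. supernova
# remnants); flatter ones suggest thermal emission from star-forming gas.
# Example fluxes, bands, and the -0.5 cut are illustrative only.

def spectral_index(flux_low, flux_high, freq_low_mhz=88.0, freq_high_mhz=216.0):
    return math.log(flux_high / flux_low) / math.log(freq_high_mhz / freq_low_mhz)

for name, f_low, f_high in [("toy supernova remnant", 12.0, 7.5),
                            ("toy star-forming region", 3.0, 3.2)]:
    alpha = spectral_index(f_low, f_high)
    kind = "synchrotron-like (orange)" if alpha < -0.5 else "thermal-like (blue)"
    print(f"{name}: spectral index {alpha:+.2f} -> {kind}")
```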
The new radio portrait of the Milky Way is the most sensitive, widest-area map at these low frequencies to date. It will enable a plethora of galactic science, from discovering and studying faint, old remnants of star explosions to mapping energetic cosmic rays and the dust grains that permeate the medium between the stars.
The power of this image will not be surpassed until the new SKA-Low telescope is complete and operational, eventually being thousands of times more sensitive and with higher resolution than its predecessor, the Murchison Widefield Array.
This upgrade is still a few years away. For now, this new image stands as an inspiring preview of the wonders the full SKA-Low will one day reveal.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Scientists Unveil a ‘Living Vaccine’ That Kills Bad Bacteria in Food to Make It Last Longer
The technology unleashes self-replicating viruses called phages on food bacteria to continuously hunt down and destroy bad bugs.
It’s a home cook’s nightmare: You open the fridge ready to make dinner and realize the meat has spoiled. You have to throw it out, kicking yourself for not cooking it sooner.
According to the USDA, a staggering one-third of food is tossed out because of spoilage, leading to over $160 billion lost every year. Much of this food is protein and fresh produce, which could feed families in need. The land, water, labor, energy, and transportation that brought the food to people’s homes also go to waste.
Canada’s McMaster University has a solution. A team of scientists wrapped virus-packed microneedles inside a paper towel-like square sitting at the bottom of a Ziploc container. It’s an unusual duo. But the viruses, called phages, specifically target bacteria related to food spoilage. Some are already approved for consumption.
Using microneedles to inject the phages into foods, the team decontaminated chicken, shrimp, peppers, and cheese. All it took was placing the square on the bottom of a storage dish or on the surface of the food. Mixing and matching the phages destroyed multiple dangerous bacterial strains. In some cases, it made spoiled meat safe to eat again based on current regulations.
It’s just a prototype, but a similar design could one day be used in food packaging.
“[The platform] can revolutionize current food contamination practices, preventing foodborne illness and waste through the active decontamination of food products,” wrote the team.
A Curious Food Chain
It’s easy to take food safety for granted. The occasional bad bite of leftover pizza might give you some discomfort, but you bounce back. Still, foodborne pathogens result in hundreds of millions of cases and tens of thousands of deaths every year, according to the World Health Organization. Bacteria like E. coli and Salmonella are the main culprits.
Existing solutions rely on antibiotics. But they come with baggage. Flooding agriculture with these drugs contributes to antibacterial resistance, impacting the farming industry and healthcare.
Other preservative additives—like those in off-the-shelf foods—incorporate chemicals, essential oils, and other molecules. Although these are wallet-friendly and safe to eat, they often change core aspects of food like texture and flavor (canned salsa never tastes as great as the fresh stuff).
Maverick food scientists have been exploring an alternative way to combat food spoilage—phages. Adding a bath of viruses to a bacteria-infected stew is hardly an obvious food safety strategy, but it stems from research into antibacterial resistance.
Phages are viruses that only infect bacteria. They look a bit like spiders. Their heads house genetic material, while their legs grab onto bacteria. Once attached, phages inject their DNA into the bacteria and force their hosts to reproduce more viruses—before destroying them.
Because phages don’t infect human cells, they can be antibacterial treatments and even gene therapies. And they’re already part of our food production system. FDA-approved ListShield, for example, reduces Listeria in produce, smoked salmon, and frozen foods. PhageGuard S, approved in the US and EU, fights off Salmonella. Other phage-based products include sprays, edible films, and hydrogel-based packaging used to decontaminate food surfaces.
Even better, phages self-renew. They are “self-dosing antimicrobial additives,” wrote the team.
But size has been a limiting factor: They’re too big. Phages struggle to tunnel into larger pieces of food—say, a plump chicken breast. Although they might swiftly wipe out bacteria on the surface, pathogens can still silently brew inside a cutlet.
Prickly Patch
The new device was inspired by medical microneedle patches. These look like Band-Aids, but loaded inside are medications that can seep deeper into tissues—or in this case, food.
To construct food-safe microneedles, the team tested a range of edible materials and homed in on four ingredients. These included gelatin, the squishy protein-rich component at the heart of Jell-O, and other biocompatible materials readily used in medical devices. The ingredients were poured into a mold, baked into separate microneedle patches, and checked for integrity.
Each ingredient had strengths and weaknesses. But after testing the patches on various foods—mushrooms, fish, cooked chicken, and cheese—one component stood out for its reusability and ability to penetrate deeper. Called PMMA, the coating is already used in food-safe plexiglass and reusable packaging.
The team next loaded multiple phages that target different food-spoiling bacteria into PMMA scaffolds and challenged the patches to neutralize bacterial “lawns.” True to their name, these are fuzzy microscopic bits of bacteria that form a carpet. You’ve probably seen them at the bottom of a food container you’ve left far too long in the fridge.
The phage patches completely erased both E. coli and Salmonella in steaks with high levels of the bacteria. Another test pitted the patches against existing methods in leftover chicken that had lingered 18 hours in unsafe food conditions. Compared to directly injecting phages or applying phage sprays, the microneedle patch was the only strategy that kept the chicken safe to eat according to current regulations.
Phage Buffet
The system was especially resilient to temperature changes. When applied to chicken or raw beef, the phage patches were active for at least a month at regular refrigerator temperatures, “ensuring compatibility with food products that require prolonged storage,” wrote the team.
The system can be tailored to tackle different bacteria, especially by mixing up which phages are included. Using a variety could potentially target strains of bacteria throughout the food production line, making the final product safer.
The team is planning to integrate the platform into food packaging materials, which would ensure the microneedles are in constant contact with the food and deliver a large dose of phages that self-replicate to continue warding off bacteria. Other ideas include sprinkling phage-loaded materials directly onto food during manufacturing and production.
The idea of eating viruses might seem a little weird. But phages naturally occur in almost all foods, including meat, dairy, and vegetables. You’ve likely already eaten these bacteria-fighting warriors at some point as they’re silently hunting down disease-causing bacteria.
The living vaccine could prevent foodborne illness and reduce waste. It’s adaptable to different strains of bacteria, food-safe, and cost-effective, wrote the team, making it “well suited for applications within the food industry.”
A Tiny 3D Printer Could Mend Vocal Cords in Real Time During Surgery
A bioprinter with a printhead the size of a sesame seed could deliver hydrogels to surgical sites.
Elephant trunks and garden hoses hardly seem like inspirations for a miniature 3D bioprinter.
Yet they’ve led scientists at McGill University to engineer the smallest reported bioprinting head to date. Described in the journal Devices, it has a flexible tip just 2.7 millimeters in diameter—roughly the length of a sesame seed.
Bioprinters can deposit a wide range of healing materials directly at the site of injury. Some bioinks combat infections in lab studies; others deliver chemotherapy to cancerous sites, which could prevent tumors from recurring. On the operating table, biocompatible hydrogels injected during surgery help heal wounds.
The devices are promising but most are rather bulky. They struggle to reach all the body’s nooks and crannies—including, for example, the vocal cords.
It’s easy to take our ability to speak for granted and only appreciate its loss after catching a bad cold. But up to nine percent of people develop vocal-cord disorders in their lifetimes. Smoking, acid reflux, and chronic coughing tear at the delicate folds of tissue. Abnormal growths and cancers also contribute. These are usually removed with surgery that comes with a significant risk of scarring.
Hydrogels can help with healing. But because throat and vocal cord tissue is so intricate, current treatments inject them through the skin rather than precisely into damaged regions.
But the new device can, in theory, sneak into a patient’s throat during surgery. Its tiny printhead doesn’t block a surgeon’s view, allowing near real-time printing after the removal of damaged tissues.
“I thought this would not be feasible at first—it seemed like an impossible challenge to make a flexible robot less than 3 mm in size,” Luc Mongeau, who led the study, said in a press release.
Although just a prototype, the device could one day help restore people’s voices after surgery and improve quality of life. It also could lead to the delivery of bioinks containing medications or even living cells to other tissues through the nose, mouth, or a small surgical cut.
Squishy Band-Aid
Surgery inevitably results in scars. While these are an annoyance on the skin, excessive scarring—called fibrosis—seriously limits how well tissues can do their jobs.
Fibrosis in lungs after surgery, for example, leads to infections, blood clots, and a general decline in normal breathing. Scarring of the heart tampers with its electrical signals and often leads to irregular heartbeats. And for delicate tissues like vocal cords, fibrosis causes lasting stiffness, making it difficult to intonate, sing, or talk like before—essentially robbing the person of their voice.
Scientists have found a range of molecules that could aid the healing process. Hydrogels are one promising candidate. Soft, flexible, and biocompatible, hydrogel injections provide a squishy but structured architecture supporting vocal cords. Studies also suggest hydrogels boost the growth of healthy tissue and reduce fibrosis.
But because vocal cords are difficult to target, injections are handled through the skin, making it difficult to control where the hydrogel goes.
An alternative is to 3D print hydrogels directly in the body and repair damage during surgery. Both handheld and robotic systems have been successfully tested in labs, and minimally invasive versions are on the rise. One design uses air pressure to bioprint hydrogels inside the intestines. Another taps into magnets to repair the liver. But existing devices are too large to accommodate vocal cords.
Surgical Trunks
To heal vocal cords, an ideal mini 3D bioprinter must seamlessly integrate into throat surgeries. Here, surgeons insert a microscope through the mouth and suspend it inside the throat. While it sounds uncomfortable, the procedure is highly efficient with little pain afterward.
The printhead needs to snake around the microscope but also flexibly adjust its position to target injured sites without blocking the surgeon’s view. Finally, the speed and force of the hydrogel spray should be controllable—avoiding the equivalent of accidentally squeezing out too much superglue.
The new bioprinter has a printhead a bit like an elephant’s trunk: a flexible arm that easily slips into the throat with a 2.7-millimeter arched nozzle at the end. Picture it as a fine-point Sharpie connected to a flexible tube. Three cables operate the printhead and control nozzle movement by applying tension, like strings on a puppet.
The system’s brain is in the actuator housing, which looks like a tiny plastic gift box. It holds a syringe of hydrogel for the printhead and pilots the adjustable cables using motors that precisely move the printhead to its intended location with a custom algorithm. Other electronics allow the team to control the setup using a wireless gaming controller in real time.
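A common way to steer a three-cable tip like this is the constant-curvature tendon model, in which each cable shortens in proportion to the bend angle and to how closely it lines up with the bending direction. The Python sketch below shows that mapping; the cable radius, cable spacing, and the model itself are generic assumptions, not the paper’s calibrated control algorithm.

```python
import math

# Constant-curvature tendon model: to bend the tip by `bend_angle` toward
# direction `bend_direction`, a cable sitting at angle psi around the tip must
# shorten by roughly radius * bend_angle * cos(psi - bend_direction). The
# 1 mm cable radius and 120-degree spacing are assumptions, not the paper's
# calibrated controller.

CABLE_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]   # three cables around the tip
CABLE_RADIUS_MM = 1.0

def cable_shortening(bend_angle_rad, bend_direction_rad):
    """Millimeters each cable must be pulled in (negative = paid out)."""
    return [CABLE_RADIUS_MM * bend_angle_rad * math.cos(psi - bend_direction_rad)
            for psi in CABLE_ANGLES]

# Bend the tip 30 degrees toward the first cable:
print([round(x, 3) for x in cable_shortening(math.radians(30), 0.0)])
# -> [0.524, -0.262, -0.262]: one cable pulls in, the other two pay out
```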
The actuator can be mounted under a standard throat surgery microscope so it’s out of the way during an operation, wrote the team.
To put the device through its paces, the team used the mini bioprinter to draw a range of shapes, including a square, heart, spiral, and various letters on a flat surface. The printhead accurately deposited thin lines of hydrogel, which can be stacked to form thicker lines—like repeatedly tracing drawings using a fine-tipped pen.
The team also tried it out in a mock vocal cord surgery. The “patient” was an accurate 3D model of a person’s throat but with different types of wounds to its vocal cords, including one that completely lacked half of the tissue. The bioprinter successfully made the repairs and reconstructed the missing vocal cord without issue.
“Part of what makes this device so impressive is that it behaves predictably, even though it’s essentially a garden hose—and if you’ve ever seen a garden hose, you know that when you start running water through it, it goes crazy,” said study author Audrey Sedal.
The flexibility comes at a cost. Though the printhead design deforms to prevent injury to tissues, this also means it’s more prone to mechanical vibrations from the actuator’s motors, which dings its accuracy.
As of now the mini printer requires manual control, but the team is working on a semi-autonomous version. More importantly, it needs to be pitted against standard hydrogel injection methods in living animals to show it’s safe and effective.
“The next step is testing these hydrogels in animals, and hopefully that will lead us to clinical trials in humans to test the accuracy, usability, and clinical outcomes of the bioprinter and hydrogel,” said Mongeau.
Future Data Centers Could Orbit Earth, Powered by the Sun and Cooled by the Vacuum of Space
A new study suggests orbital data centers could be carbon neutral, but steep technical challenges remain.
As global demand for computing continues to explode, the carbon footprint of data centers is a growing concern. A new study outlines how hosting these facilities in space could help slash the sector’s emissions.
Data centers require enormous amounts of power and water to operate and cool the millions of chips housed within them. Current estimates from the International Energy Agency peg their electricity consumption at around 415 terawatt hours globally, roughly 1.5 percent of total consumption in 2024. And the Environmental and Energy Study Institute says that large data centers can use as much as five million gallons of water per day for cooling.
With demand for computing resources growing by the day, in particular since the rapid adoption of resource-guzzling generative AI across the economy, this threatens to become an unsustainable burden on the planet.
But a new paper in Nature Electronics by scientists at Nanyang Technological University in Singapore suggests that hosting data centers in space could provide a potential solution. By relying on the abundant solar energy available in orbit and releasing waste heat into the cold vacuum of space, these facilities could, in principle, become carbon neutral.
“Space offers a true sustainable environment for computing,” Wen Yonggang, lead author of the study, said in a press release. “By harnessing the sun’s energy and the cold vacuum of space, orbital data centers could transform global computing.”
To validate their proposal, the researchers used digital-twin simulations of orbital computing systems to model how they would generate power, manage heat, and maintain connectivity. The team investigated two potential architectures: one designed to reduce the footprint of data collected by satellites themselves and another that would receive data from Earth for processing.
The first model would involve integrating data processing capabilities into satellites equipped with sensors—for example, cameras for imaging the Earth. This would make it possible to carry out expensive computations on the data on board before transmitting just the results back to the ground, rather than processing the raw data in terrestrial data centers.
The other approach involves a constellation of satellites equipped with full servers that could receive data from Earth and coordinate to carry out complex computing tasks like training AI models or running large simulations. The researchers note that this kind of distributed data center architecture—as opposed to assembling a large, monolithic data center in orbit—is technologically feasible with today’s satellite and computing technologies.
The team’s analysis suggests that the considerable carbon footprint of launching hardware into space could be offset within five years of operation, after which the facilities could run indefinitely on renewable energy.
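The payback logic can be sketched with back-of-the-envelope arithmetic: divide the one-time manufacturing and launch emissions by the emissions a comparable ground-based facility would have produced each year. The Python below is a toy illustration with placeholder numbers chosen to land near the study’s roughly five-year figure; it is not the paper’s model or data.

```python
# Toy carbon-payback estimate for an orbital data center.
# All values are illustrative placeholders, not figures from the Nature Electronics study.

LAUNCH_AND_BUILD_EMISSIONS_T = 17_500  # tCO2e to manufacture and launch the hardware (assumed)
AVG_IT_LOAD_MW = 1.0                   # average power draw of the facility (assumed)
GRID_INTENSITY_T_PER_MWH = 0.4         # tCO2e per MWh for a terrestrial grid mix (assumed)
HOURS_PER_YEAR = 8_760

# Emissions an equivalent ground-based, grid-powered facility would produce annually.
avoided_per_year_t = AVG_IT_LOAD_MW * HOURS_PER_YEAR * GRID_INTENSITY_T_PER_MWH

payback_years = LAUNCH_AND_BUILD_EMISSIONS_T / avoided_per_year_t
print(f"Avoided emissions per year: {avoided_per_year_t:,.0f} tCO2e")
print(f"Carbon payback time: {payback_years:.1f} years")  # about 5 years with these placeholders
```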
Significant technical and logistical hurdles remain. Computer chips are vulnerable to radiation, an ever-present danger in space, which would necessitate the use of specialized radiation-hardened processors. Long-term maintenance of the facilities would also require in-orbit servicing technologies that don’t yet exist. And as computing technologies rapidly improve, chips depreciate in just a few years. Keeping orbital data centers stocked with the latest and greatest could be costly.
But the NTU team isn’t the first to float the idea of shifting computing facilities into space. Last year, French defense and aerospace giant Thales published a study exploring the feasibility of the idea. And next month, the startup Starcloud will launch a satellite carrying an Nvidia H100 GPU as a first step towards creating a network of orbital data centers.
While realizing the vision is likely to require technical breakthroughs and a huge amount of investment, one solution to computing’s ever growing carbon footprint may be above our heads.
The post Future Data Centers Could Orbit Earth, Powered by the Sun and Cooled by the Vacuum of Space appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through November 1)
Nvidia Becomes First $5 Trillion Company
Hannah Erin Lang | The Wall Street Journal ($)
“Nvidia is now larger than AMD, Arm Holdings, ASML, Broadcom, Intel, Lam Research, Micron Technology, Qualcomm, and Taiwan Semiconductor Manufacturing combined, according to Dow Jones Market Data. Its value also exceeds entire sectors of the S&P 500, including utilities, industrials and consumer staples.”
Robotics
1X Neo Is a $20,000 Home Robot That Will Learn Chores via Teleoperation
Mariella Moon | Engadget
“In an interview with The Wall Street Journal’s Joanna Stern, 1X CEO Bernt Børnich explained that the AI neural network running the machine still needs to learn from more real-world experiences. Børnich said that anybody who buys NEO for delivery next year will have to agree that a human operator will be seeing inside their houses through the robot’s camera. It’s necessary to be able to teach the machines and gather training data so it can eventually perform tasks autonomously.”
Biotechnology
A New Startup Wants to Edit Human Embryos
Emily Mullin | Wired ($)
“In 2018, Chinese scientist He Jiankui shocked the world when he revealed that he had created the first gene-edited babies. The backlash against He was immediate. Scientists said the technology was too new to be used for human reproduction and that the DNA change amounted to genetic enhancement. …Now, a New York–based startup called Manhattan Genomics is reviving the debate around gene-edited babies.”
Tech
OpenAI Reportedly Planning ‘Up to $1 Trillion’ IPO as Early as Next Year
Mike Pearl | Gizmodo
“An anonymously sourced report from Reuters claims that OpenAI is planning an initial public offering that would value the AI colossus at ‘up to $1 trillion.’ Just on Tuesday the company formally completed its slow evolution from an ambiguous non-profit to a for-profit company. Now it appears to be formalizing plans to become one of the world’s centers of economic power—at least on paper.”
Artificial Intelligence
AI Agents Are Terrible Freelance Workers
Will Knight | Wired ($)
“A new benchmark measures how well AI agents can automate economically valuable chores. Human-level AI is still some ways off. …’I should hope this gives much more accurate impressions as to what’s going on with AI capabilities,’ says Dan Hendrycks, director of CAIS. He adds that while some agents have improved significantly over the past year or so, that does not mean that this will continue at the same rate.”
Computing
The $460 Billion Quantum Bitcoin Treasure Hunt
Kyle Torpey | Gizmodo
“Satoshi’s early bitcoin stash creates massive opportunity for quantum computing startups. …These early Bitcoin addresses, including many that have been connected to Bitcoin creator Satoshi Nakamoto, may also be associated with private keys (passwords to the Bitcoin accounts basically) that are lost or otherwise not accessible to anyone. In other words, they’re sort of like lost digital treasure chests that a quantum computer could potentially unlock at some point in the future.”
Future
How AGI Became the Most Consequential Conspiracy Theory of Our Time
Will Douglas Heaven | MIT Technology Review ($)
“The idea that machines will be as smart as—or smarter than—humans has hijacked an entire industry. But look closely and you’ll see it’s a myth reminiscent of more explicitly outlandish and fantastical schemes. …I get it, I get it—calling AGI a conspiracy isn’t a perfect analogy. It will also piss a lot of people off. But come with me down this rabbit hole and let me show you the light.”
Biotechnology
Life Lessons From (Very Old) Bowhead Whales
Carl Zimmer | The New York Times ($)
“By measuring the molecular damage that accumulates in the eyes, ears, and eggs of bowhead whales, researchers have estimated that bowheads live as long as 268 years. A study published in the journal Nature on Wednesday offers a clue to how the animals manage to live so long: They are extraordinarily good at fixing damaged DNA.”
Energy
Renewable Energy and EVs Have Grown So Much Faster Than Experts Predicted 10 Years Ago
Adele Peters | Fast Company
“There’s now four times as much solar power as the International Energy Agency (IEA) expected 10 years ago. Last year alone, the world installed 553 gigawatts of solar power—roughly as much as 100 million US homes use—which is 1,500% more than the IEA had projected. …More than 1 in 5 new cars sold worldwide today is an EV; a decade ago, that number was fewer than 1 in 100. Even if growth flatlined now, the world is on track to reach 100 million EVs by 2028.”
Computing
Extropic Aims to Disrupt the Data Center Bonanza
Will Knight | Wired ($)
“The startup’s chips work in a fundamentally different way to chips from Nvidia, AMD, and others, and promise to be thousands of times more energy efficient when scaled up. With AI companies pouring billions of dollars into building data centers, a completely new approach could offer a far less costly alternative to vast arrays of conventional chips.”
Tech
AI Browsers Are a Cybersecurity Time Bomb
Robert Hart | The Verge
“‘Despite some heavy guardrails being in place, there is a vast attack surface,’ says Hamed Haddadi, professor of human-centered systems at Imperial College London and chief scientist at web browser company Brave. And what we’re seeing is just the tip of the iceberg.”
Future
NASA’s Supersonic Jet Finally Takes Off for Its First Super Fast, Super Quiet Flight
Passant Rabie | Gizmodo
“NASA’s X-59 aircraft completed its first flight over the Southern California desert, bringing us closer to traveling at the speed of sound without the explosive, thunder-like clap that comes with it. The experimental aircraft, built by aerospace contractor Lockheed Martin, is designed to break the sound barrier, albeit to do it quietly.”
Biotechnology
A Bay Area Grocery Store Will Be the First to Sell Cultivated Meat—but You Only Have a Limited Time to Try It
Kristin Toussaint | Fast Company
“[Cultivated meat] has only appeared on a handful of restaurant menus since its approval by the US Food and Drug Administration (FDA). But if you’re in the Bay Area, you’re in luck: Cultivated meat startup Mission Barns will be selling its pork meatballs (made with a base of pea protein plus the company’s cultivated pork fat) at Berkeley Bowl West, one location of an independent grocery store in California.”
Science
Chimps Are Capable of Human-Like Rational Thought, Breakthrough Study Finds
Becky Ferreira | 404 Media
“The chimpanzees were able to rationally evaluate forms of evidence and to change their existing beliefs if presented with more compelling clues. The results reveal that non-human animals can exhibit key aspects of rationality, some of which had never been directly tested before, which shed new light on the evolution of rational thought and critical thinking in humans and other intelligent animals.”
Robotics
Is Waymo Ready for Winter?
Andrew J. Hawkins | The Verge
“In its first few years of operation, Waymo has strategically stuck to cities with warmer, drier climates—places like Phoenix, Los Angeles, Atlanta, and Austin. But as it eyes a slate of East Coast cities, including Boston, New York City, and Washington, DC, for the next phase of its expansion, its abilities to handle more adverse weather will become a crucial test.”
The post This Week’s Awesome Tech Stories From Around the Web (Through November 1) appeared first on SingularityHub.
The Hardest Part of Creating Conscious AI Might Be Convincing Ourselves It’s Real
What would a machine actually have to do to persuade us it’s conscious?
As far back as 1980, the American philosopher John Searle distinguished between strong and weak AI. Weak AIs are merely useful machines or programs that help us solve problems, whereas strong AIs would have genuine intelligence. A strong AI would be conscious.
Searle was skeptical of the very possibility of strong AI, but not everyone shares his pessimism. Most optimistic are those who endorse functionalism, a popular theory of mind that takes conscious mental states to be determined solely by their function. For a functionalist, the task of producing a strong AI is merely a technical challenge. If we can create a system that functions like us, we can be confident it is conscious like us.
Recently, we have reached the tipping point. Generative AIs such as ChatGPT are now so advanced that their responses are often indistinguishable from those of a real human—see this exchange between ChatGPT and Richard Dawkins, for instance.
This issue of whether a machine can fool us into thinking it is human is the subject of a well-known test devised by English computer scientist Alan Turing in 1950. Turing claimed that if a machine could pass the test, we ought to conclude it was genuinely intelligent.
Back in 1950 this was pure speculation, but according to a pre-print study from earlier this year—that’s a study that hasn’t been peer-reviewed yet—the Turing test has now been passed. ChatGPT convinced 73 percent of participants that it was human.
What’s interesting is that nobody is buying it. Experts are not only denying that ChatGPT is conscious but seemingly not even taking the idea seriously. I have to admit, I’m with them. It just doesn’t seem plausible.
The key question is: What would a machine actually have to do in order to convince us?
Experts have tended to focus on the technical side of this question. That is, to discern what technical features a machine or program would need in order to satisfy our best theories of consciousness. A 2023 article, for instance, as reported in The Conversation, compiled a list of fourteen technical criteria or “consciousness indicators,” such as learning from feedback (ChatGPT didn’t make the grade).
But creating a strong AI is as much a psychological challenge as a technical one. It is one thing to produce a machine that satisfies the various technical criteria that we set out in our theories, but it is quite another to suppose that, when we are finally confronted with such a thing, we will believe it is conscious.
The success of ChatGPT has already demonstrated this problem. For many, the Turing test was the benchmark of machine intelligence. But if it has been passed, as the pre-print study suggests, the goalposts have shifted. They might well keep shifting as technology improves.
Myna Difficulties
This is where we get into the murky realm of an age-old philosophical quandary: the problem of other minds. Ultimately, one can never know for sure whether anything other than oneself is conscious. In the case of human beings, the problem is little more than idle skepticism. None of us can seriously entertain the possibility that other humans are unthinking automata, but in the case of machines it seems to go the other way. It’s hard to accept that they could be anything but.
A particular problem with AIs like ChatGPT is that they seem like mere mimicry machines. They’re like the myna bird who learns to vocalize words with no idea of what it is doing or what the words mean.
This doesn’t mean we will never make a conscious machine, of course, but it does suggest that we might find it difficult to accept it if we did. And that might be the ultimate irony: succeeding in our quest to create a conscious machine, yet refusing to believe we had done so. Who knows, it might have already happened.
So what would a machine need to do to convince us? One tentative suggestion is that it might need to exhibit the kind of autonomy we observe in many living organisms.
Current AIs like ChatGPT are purely responsive. Keep your fingers off the keyboard, and they’re as quiet as the grave. Animals are not like this, at least not the ones we commonly take to be conscious, like chimps, dolphins, cats, and dogs. They have their own impulses and inclinations (or at least appear to), along with the desires to pursue them. They initiate their own actions on their own terms, for their own reasons.
Perhaps if we could create a machine that displayed this type of autonomy—the kind of autonomy that would take it beyond a mere mimicry machine—we really would accept it was conscious?
It’s hard to know for sure. Maybe we should ask ChatGPT.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post The Hardest Part of Creating Conscious AI Might Be Convincing Ourselves It’s Real appeared first on SingularityHub.
A Squishy New Robotic ‘Eye’ Automatically Focuses Like Our Own
The lenses could give soft robots the ability to ‘see’ without electronics.
Our eyes naturally adjust to the visual world. From reading small fonts on a screen to scanning lush greenery in the great outdoors, they automatically change their focus to see everything near and far.
This is a far cry from camera systems. Even top-of-the-line offerings, such as full-frame mirrorless cameras, require multiple bulky lenses to cover a wide range of focal lengths. For example, photographers use telephoto lenses to film wildlife at a distance and macro lenses to capture the fine details of small things up close—say, a drop of morning dew on a flower.
In contrast, our eyes are made of “soft, flexible tissues in a highly compact form,” Corey Zhang and Shu Jia at Georgia Institute of Technology recently wrote.
Inspired by nature, the duo engineered a highly flexible robotic lens that adjusts its curvature in response to light, no external power needed. Added to a standard microscope, the lens could zero in on individual hairs on an ant’s leg and the lobes of single pollen grains.
Called a photoresponsive hydrogel soft lens (PHySL), the system could be especially useful for mimicking human vision in soft robots. It could also open the door to a range of uses in medical imaging, environmental monitoring, or even as an alternative camera in ultra-light mobile devices.
Artificial Eyes
We’re highly visual creatures. Roughly 20 percent of the brain’s cortex—four to six billion neurons—is devoted to processing vision.
The process begins when light hits the cornea, a clear dome-shaped structure at the front of our eyes. This layer of tissue begins focusing the light. The next layer holds the iris (the colored part of the eye) and the pupil. The pupil dilates at night and shrinks by day to control the amount of light reaching the lens, which sits directly behind it.
A flexible structure reminiscent of an M&M, the lens focuses light onto the retina, which then translates it into electrical signals for the brain to interpret. Eye muscles change focal length by physically pulling the lens into different shapes. Working in tandem with the cornea, this flexibility allows us to change what we’re focusing on without conscious thought.
Despite their delicate nature and daily use, our eyes can remain in working order for decades. It’s no wonder scientists have tried to engineer artificial lenses with similar properties. Biologically inspired eyes could be especially helpful in soft robots navigating dangerous terrain with limited power. They could also make surgical endoscopes and other medical tools more compatible with our squishy bodies or help soft grippers pick fruit and other delicate items without bruising or breaking them.
“These features have prompted substantial efforts in bioinspired optics,” wrote the team. Several previous attempts used a fluid-based method, which changes the curvature—and hence, focal length—of a soft lens with external pressure, an electrical zap, or temperature. But these are prone to mechanical damage. Other contraptions using solid hardware are sturdier, but they require heavier motors to operate.
“The optics needed to form a visual system are still typically restricted to rigid materials using electric power,” wrote the team.
New Perspective
The new system brought two fields together: adjustable lenses and soft materials.
The system’s lens is made of PDMS, a lightweight and flexible silicon-based material used in the likes of contact lenses and catheters.
The other component acts like an artificial muscle to change the curvature of the lens. It’s fabricated from a biocompatible hydrogel dusted with a light-responsive chemical; when the additive heats up, the gel changes shape.
The team combined these two parts into a soft robotic eye, with the hydrogel surrounding the central lens. When exposed to heat—such as that stemming from light—the gel releases water and contracts. As it shrinks, the lens flattens and its focal length increases, allowing the eye to resolve objects at greater distances.
Depriving the system of light—essentially like closing your eyes—cools the gel. It then swells to its original plumpness, releases tension, and the lens resets.
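A rough way to see why a flatter lens focuses farther away is the thin-lens (lensmaker’s) approximation for a symmetric lens, sketched below with a refractive index near that of PDMS. The numbers are illustrative only; the actual device geometry is more complex.

```latex
\frac{1}{f} = (n-1)\left(\frac{1}{R_1}-\frac{1}{R_2}\right)
\quad\xrightarrow{\;R_1=-R_2=R\;}\quad
f = \frac{R}{2(n-1)}
% With n of roughly 1.4, a curvature radius R = 2 mm gives f of roughly 2.5 mm;
% if hydrogel contraction flattens the lens to R = 4 mm, the focal length roughly doubles to about 5 mm.
```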
The design offers better mechanical stability than previous versions, wrote the team. Because the gel constricts with light, it can form a stronger supporting structure that prevents the delicate lens from bending or collapsing as it changes shape. The robotic eye worked as expected across the light spectrum, with resolution and focus comparable to the human eye. It was also durable, maintaining performance after multiple cycles of bending, twisting, and stretching.
Image Credit: Shu Jia
With additional tinkering, the system proved to be an efficient replacement for traditional glass-based lenses in optical instruments. The team attached the squishy lens to a standard microscope and visualized a range of biological samples. These included fungal fibers, microscopic hairs on an ant’s leg, and the gap between a tick’s claws—all sized roughly a tenth of the width of a human hair.
The team wants to improve the system too. Recently developed hydrogels respond faster to light with more powerful mechanical forces, which could improve the robotic eye’s focal range. The system’s heavy dependence on temperature fluctuations could limit its use in extreme environments. Exploring different chemical additives could potentially shift its operating temperature range and tailor the hydrogel to particular uses.
And because the robotic eye “sees” across the light spectrum, it could in theory mimic other creatures’ eyes, such as those of the mantis shrimp, which can detect color differences invisible to humans, or reptilian eyes that can capture UV light.
A next step is to incorporate it into a soft robot as a biologically inspired camera system that doesn’t rely on electronics or extra power. “This system would be a significant demonstration for the potential of our design to enable new types of soft visual sensing,” wrote the team.
The post A Squishy New Robotic ‘Eye’ Automatically Focuses Like Our Own appeared first on SingularityHub.
These High-Tech Glasses and an Eye Implant Restored Sight in People With Severe Vision Loss
Patients regained the ability to read books, food labels, and subway signs.
Globally, more than five million people are affected by age-related macular degeneration, which can make reading, driving, and recognizing faces impossible. A new wireless retinal implant has now restored functional sight to patients in advanced stages of the disease.
The condition gradually destroys the light-sensitive photoreceptors at the center of the retina, leaving people with only blurred peripheral vision. While researchers are investigating whether stem cell implants or gene therapy could help restore sight in these patients, those approaches are still experimental.
Now though, a system called PRIMA built by neurotechnology startup Science Corporation is helping patients regain the ability to read books, food labels, and subway signs. The system consists of a specially designed pair of glasses that uses a camera to capture images and transmit them wirelessly to a tiny chip implanted in the retina that then stimulates surviving neurons.
In a paper published in The New England Journal of Medicine, researchers showed that 27 out of 32 participants in a clinical trial of the technology had regained the ability to read a year after receiving the device.
“This study confirms that, for the first time, we can restore functional central vision in patients,” Frank Holz from the University Hospital of Bonn, who was lead author on the paper, said in a statement. “The implant represents a paradigm shift in treating late-stage AMD [age-related macular degeneration].”
The system works by converting images captured by the camera-equipped glasses into pulses of infrared light that are then transmitted through the patients’ pupils to a two-square-millimeter photovoltaic chip. The chip converts the light into electrical signals that are transmitted to the neurons at the back of the eye, allowing the patients to perceive the light patterns captured by the glasses. The PRIMA system also includes a zoom function that lets users magnify what they’re looking at.
Daniel Palanker at Stanford School of Medicine initially designed the technology, and a French startup called Pixium Vision was commercializing it. But facing bankruptcy, the company sold PRIMA to Science Corporation last year for €4 million ($4.7 million), according to MIT Technology Review.
Palanker said the idea for the product came 20 years ago when he realized that because the eye is transparent it’s possible to deliver information into it using light. Previous systems also relied on camera-equipped glasses to transmit signals to a retinal implant, but they were connected either by wires or radio transmitters.
In the recent study, 32 people with a form of macular degeneration that destroys photoreceptors in the center of the retina received implants in one eye. After several months of visual training, 80 percent of them had regained the ability to read text and recognize high-contrast objects.
Some participants achieved visual acuity equivalent to 20/42 when images were zoomed. And 26 of them could read at least two extra lines on a standard eye chart, with the average closer to five lines.
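To put those eye-chart figures in rough perspective, acuity can be expressed on the logMAR scale, where each chart line corresponds to 0.1 logMAR and a Snellen fraction converts as shown below. The chart convention is an assumption made here for illustration, not a detail reported in the paper.

```latex
\mathrm{logMAR}(20/42) = \log_{10}\!\left(\tfrac{42}{20}\right) \approx 0.32
% Each 0.1 logMAR step is a factor of 10^{0.1}, or about 1.26, in angular resolution,
% so reading two to five additional lines corresponds to roughly a 1.6- to 3-fold improvement
% (10^{0.2} to 10^{0.5}).
```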
Because PRIMA uses infrared light to stimulate the chip, these signals don’t interfere with the remaining healthy photoreceptors surrounding it, allowing the brain to merge the restored vision in the central region with the patients’ residual peripheral vision.
The chip is currently only capable of producing black and white images with no shades in between, which limits patients’ ability to recognize more complex objects like faces. But Palanker says he is currently developing software that will allow users to see in grayscale. The researchers are also developing a second-generation implant that will have more than 10,000 pixels, which could support close to normal levels of visual acuity.
One of the participants told the BBC that using the device requires considerable concentration, and it’s not really practical on the move. But Science Corporation told MIT Technology Review it is also in the process of slimming down the bulky glasses and control box into a sleeker headset that would be only slightly larger than a standard pair of sunglasses.
Given the huge number of people affected by macular degeneration, the market for such a device could already be large, but the designers hope the approach could also help cure other vision disorders. The company has already applied for medical approval in Europe, so it may not be long before neuroprosthetic devices become a standard treatment for those with vision loss.
The post These High-Tech Glasses and an Eye Implant Restored Sight in People With Severe Vision Loss appeared first on SingularityHub.
This Shot Gave Elderly Mice’s Skin a Glow Up. It Could Do the Same for Other Organs Too.
Boosting protective immune cells healed blood vessels and improved the skin’s ability to repair damage.
The first time I accepted that my grandpa was really aging was when I held his hand. His grip was strong as ever. But the skin on his hand was wafer thin and carried tell-tale signs of bruising across its back.
Plenty of “anti-aging” skin care products promise a younger look. But skin health isn’t just about vanity. The skin is the largest organ in the body and our first line of defense against pathogens and dangerous chemicals. It also keeps our bodies within normal operating temperatures—whether we’re in a Canadian snowstorm or the blistering heat of Death Valley.
The skin also has a remarkable ability to regenerate. After a sunburn, scraped knee, or knife cut while cooking, skin cells divide to repair damage and recruit immune cells to ward off infection. They also make hormones to coordinate fat storage, metabolism, and other bodily functions.
With age the skin deteriorates. It bruises more easily. Wound healing takes longer. And the risk of skin cancer rises. Many problems are connected to a dense web of blood vessels that becomes increasingly fragile as we age. Without a steady supply of nutrients, the skin weakens.
Now a team from New York University School of Medicine and collaborators have discovered a way to turn back the clock. In elderly mice and human skin cells, they detected a steep decline in the numbers of a particular immune cell type. The cells they studied, a type of macrophage, hug blood vessels, help maintain their integrity, and control which molecules flow in or out.
A protein-based drug designed to revive the cells’ numbers gave elderly mice a skin glow up, improving blood flow and the skin’s ability to repair damage. Because loss of these cells happens before the skin declines notably, renewing their numbers may offer “an early strategy” for keeping our largest organ humming along as the years pass.
Trusty Residents
All organs in mammals have resident macrophages. Literally meaning “big eaters,” these immune cells monitor tissues for infections, cancers, and other dangers. Once activated, they recruit more immune cells to tackle diseases and repair damaged tissues.
There’s more than one type of macrophage. The cells belong to a large family where each member has a slightly different task. How they populate different organs is still mysterious, and scientists are just beginning to decode all the jobs they do. But there’s a general consensus: With age, many macrophage types decline in numbers and health and are linked to a variety of age-related diseases, such as atherosclerosis, cancer, and neurodegeneration.
This trend could also affect aging skin.
The skin’s layers are populated by different types of macrophages. Those in the outermost layer detect pathogens, while cells in the lower, fatty layer help maintain metabolism and regulate body temperature and inflammation. But it was capillary-associated macrophages (CAMs), in the middle layer, that caught the team’s interest. These cells wrap around intricate webs of blood vessels woven through our skin, helping maintain their ability to function and heal.
Tracking Cells
To better understand how the skin’s macrophages change with age, the team developed a technology to monitor their numbers and health in mice. The researchers genetically engineered the critters such that they produced glow-in-the-dark macrophages and observed these throughout their life.
With age, the skin’s middle layer lost macrophages—which the scientists identified as CAMs—far faster than other skin layers. In mice between 1 and 18 months of age—the human equivalent of pre-teens through people in their 70s—blood vessels that had lost these macrophages behaved as if they were “older” and struggled to support oxygen-rich blood flow to the skin.
The macrophages also dwindled in their coverage of capillaries during aging. Roughly a tenth of the width of a human hair, these dainty blood vessels shuttle nutrients to tissues and dump toxic chemicals into the bloodstream. Macrophage losses eventually led to the death of capillaries in elderly mice. Similar results were found in human skin samples from people over 75.
All this reduced the skin’s ability to maintain capillary health and healing. For example, in one test, the scientists used targeted lasers to form small blood clots. In young mice, the macrophages traveled to the site and ate up damaged red blood cells in the clumps. In elderly mice, blood vessels with more macrophages better repaired injuries, but healing slowed overall.
Skin Rewind
The team next developed a protein-based therapy that directly boosts CAM levels and injected it into one hind paw of mice at the human equivalent of over 80 years of age. The other paw received an inactive control.
In a few days, the treated paw saw a jump in macrophage numbers and improved capillary flow nourishing the skin. The blood vessels also healed more rapidly after laser damage, resulting in less bruising. The injection seemingly rejuvenated old macrophages, rather than recruiting new ones from the bone marrow, suggesting even vintage cells can grow and regain their strength.
These early results are in mice, and they don’t measure the full spectrum of skin function after repairing blood vessels, which would require observing other cells. Fibroblasts, for example, generate collagen for skin elasticity and promote wound healing. Their numbers also shrink with age. The new treatment is based on a protein from these cells, and the team is planning to test how fibroblasts and CAMs interact with age, and if the shot can be further optimized.
Beyond skin health, blood vessel disease wreaks havoc in multiple organs, contributing to heart attacks, stroke, and other medical scourges of aging. A similar strategy could pave the way for new treatments. In future studies, the team hopes to optimize dosing, follow long-term effects and safety, and potentially mix-and-match the treatment with other regenerative therapies.
The post This Shot Gave Elderly Mice’s Skin a Glow Up. It Could Do the Same for Other Organs Too. appeared first on SingularityHub.
This Week’s Awesome Tech Stories From Around the Web (Through October 25)
Google’s Quantum Computer Makes a Big Technical Leap
Cade Metz | The New York Times ($)
“Leveraging the counterintuitive powers of quantum mechanics, Google’s machine ran this algorithm 13,000 times as fast as a top supercomputer executing similar code in the realm of classical physics, according to a paper written by the Google researchers in the scientific journal Nature.”
Future
The Next Revolution in Biology Isn’t Reading Life’s Code—It’s Writing It
Andrew Hessel | Big Think
“Andrew Hessel, cofounder of the Human Genome Project–write, argues that genome writing is humanity’s next great moonshot, outlining how DNA synthesis could transform biology, medicine, and industry. He calls for global cooperation to ensure that humanity’s new power to create life is used wisely and for the common good.”
Robotics
Amazon Hopes to Replace 600,000 US Workers With Robots, According to Leaked Documents
Jess Weatherbed | The Verge
“Citing interviews and internal strategy documents, The New York Times reports that Amazon is hoping its robots can replace more than 600,000 jobs it would otherwise have to hire in the United States by 2033, despite estimating it’ll sell about twice as many products over the period.”
Computing
Retina e-Paper Promises Screens ‘Visually Indistinguishable From Reality’
Michael Franco | New Atlas
“The team was able to create a screen that’s about the size of a human pupil packed with pixels measuring about 560 nanometers wide. The screen, which has been dubbed retinal e-paper, has a resolution beyond 25,000 pixels per inch. ‘This breakthrough paves the way for the creation of virtual worlds that are visually indistinguishable from reality,’ says a Chalmers news release about the breakthrough.”
Robotics
Nike’s Robotic Shoe Gets Humans One Step Closer to Cyborg
Michael Calore | Wired ($)
“At the end of each step, the motor pulls up on the heel of the shoe. The device is calibrated so the movement of the motor can match the natural movement of each person’s ankle and lower leg. The result is that each step is powered, or given a little bit of a spring and an extra push by the robot mechanism.”
Space
SpaceX Launches 10,000th Starlink Satellite, With No Sign of Slowing Down
Stephen Clark | Ars Technica
“Taking into account [decommissioned Starlink satellites, there are] 8,680 total Starlink satellites in orbit, 8,664 functioning Starlink satellites in orbit (including newly launched satellites not yet operational), [and] 7,448 Starlink satellites in operational orbit. …The European Space Agency estimates there are now roughly 12,500 functioning satellites in orbit. This means SpaceX owns and operates up to 70 percent of all the active satellites in orbit today.”
Computing
Amazon Unveils AI Smart Glasses for Its Delivery Drivers
Aisha Malik | TechCrunch
“The e-commerce giant says the glasses will allow delivery drivers to scan packages, follow turn-by-turn walking directions, and capture proof of delivery, all without using their phones. The glasses use AI-powered sensing capabilities and computer vision alongside cameras to create a display that includes things like hazards and delivery tasks.”
Biotechnology
The Astonishing Embryo Models of Jacob Hanna
Antonio Regalado | MIT Technology Review ($)
“Clark and her colleagues are right that, for the foreseeable future, no one is going to decant a full-term baby out of a bottle. That’s still science fiction. But there’s a pressing issue that needs to be dealt with right now. And that’s what to do about synthetic embryo models that develop just part of the way—say for a few weeks, or months, as Hanna proposes. Because right now, hardly any laws or policies apply to synthetic embryos.”
Tech
OpenAI Readies Itself for Its Facebook Era
Kalley Huang, Erin Woo, and Stephanie Palazzolo | The Information ($)
“As the Meta alums have arrived, it’s become evident that some of OpenAI’s latest strategies and initiatives do resemble the tactics Meta used to grow into a corporate juggernaut, according to conversations with seven current and former employees. OpenAI itself is keenly interested in growing into a similar gigantic form, an effort to satisfy investors and justify the half-a-trillion-dollar valuation it received a few months ago.”
Artificial Intelligence
Sakana AI’s CTO Says He’s ‘Absolutely Sick’ of Transformers, the Tech That Powers Every Major AI Model
Michael Nuñez | VentureBeat
“Llion Jones, who co-authored the seminal 2017 paper ‘Attention Is All You Need’ and even coined the name ‘transformer,’ delivered an unusually candid assessment at the TED AI conference in San Francisco on Tuesday: Despite unprecedented investment and talent flooding into AI, the field has calcified around a single architectural approach, potentially blinding researchers to the next major breakthrough.”
Tech
The ChatGPT Atlas Browser Still Feels Like Googling With Extra Steps
Emma Roth | The Verge
“OpenAI’s new browser is great at providing AI-generated responses, but not so great at searches. …Given the options already out there, ChatGPT Atlas is a bit of an underwhelming start for a company that wants to build a series of interconnected apps that could eventually become an AI operating system.”
Computing
OpenAI Executive Explains the Insatiable Appetite For AI Chips
Sri Muppidi | The Information ($)
“Because training and running models are blurring together, given inference is using more compute than before and incorporating user feedback, OpenAI likely needs more and stronger chips to power every stage of building and deploying its models. So it makes sense why OpenAI is trying to get its hands on every Nvidia chip under the sun.”
The post This Week’s Awesome Tech Stories From Around the Web (Through October 25) appeared first on SingularityHub.
OpenAI Slipped Shopping Into 800 Million ChatGPT Users’ Chats—Here’s Why That Matters
As AI shopping goes mainstream, will people keep any real control over what they buy and why?
Your phone buzzes at 6 a.m. It’s ChatGPT: “I see you’re traveling to New York this week. Based on your preferences, I’ve found three restaurants near your hotel. Would you like me to make a reservation?”
You didn’t ask for this. The AI simply knew your plans from scanning your calendar and email and decided to help. Later, you mention to the chatbot needing flowers for your wife’s birthday. Within seconds, beautiful arrangements appear in the chat. You tap one: “Buy now.” Done. The flowers are ordered.
This isn’t science fiction. On Sept. 29, 2025, OpenAI and payment processor Stripe launched the Agentic Commerce Protocol. This technology lets you buy things instantly from Etsy within ChatGPT conversations. ChatGPT users are scheduled to gain access to over a million other Shopify merchants, from major household brand names to small shops as well.
As marketing researchers who study how AI affects consumer behavior, we believe we’re seeing the beginning of the biggest shift in how people shop since smartphones arrived. Most people have no idea it’s happening.
From Searching to Being Served
For three decades, the internet has worked the same way: You want something, you Google it, you compare options, you decide, you buy. You’re in control.
That era is ending.
AI shopping assistants are evolving through three phases. First came “on-demand AI.” You ask ChatGPT a question, it answers. That’s where most people are today.
Now we’re entering “ambient AI,” where AI suggests things before you ask. ChatGPT monitors your calendar, reads your emails, and offers recommendations without being asked.
Soon comes “autopilot AI,” where AI makes purchases for you with minimal input from you. “Order flowers for my anniversary next week.” ChatGPT checks your calendar, remembers preferences, processes payment, and confirms delivery.
Each phase adds convenience but gives you less control.
The Manipulation Problem
AI’s responses create what researchers call an “advice illusion.” When ChatGPT suggests three hotels, you don’t see them as ads. They feel like recommendations from a knowledgeable friend. But you don’t know whether those hotels paid for placement or whether better options exist that ChatGPT didn’t show you.
Traditional advertising is something most people have learned to recognize and dismiss. But AI recommendations feel objective even when they’re not. With one-tap purchasing, the entire process happens so smoothly that you might not pause to compare options.
OpenAI isn’t alone in this race. In the same month, Google announced its competing protocol, AP2. Microsoft, Amazon, and Meta are building similar systems. Whoever wins will be in position to control how billions of people buy things, potentially capturing a percentage of trillions of dollars in annual transactions.
What We’re Giving Up
This convenience comes with costs most people haven’t thought about.
Privacy: For AI to suggest restaurants, it needs to read your calendar and emails. For it to buy flowers, it needs your purchase history. People will be trading total surveillance for convenience.
Choice: Right now, you see multiple options when you search. With AI as the middleman, you might see only three options ChatGPT chooses. Entire businesses could become invisible if AI chooses to ignore them.
Power of comparing: When ChatGPT suggests products with one-tap checkout, the friction that made you pause and compare disappears.
It’s Happening Faster Than You Think
ChatGPT reached 800 million weekly users by September 2025, growing four times faster than social media platforms did. Major retailers began using OpenAI’s Agentic Commerce Protocol within days of its launch.
History shows people consistently underestimate how quickly they adapt to convenient technologies. Not long ago most people wouldn’t think of getting in a stranger’s car. Uber now has 150 million users.
Convenience always wins. The question isn’t whether AI shopping will become mainstream. It’s whether people will keep any real control over what they buy and why.
What You Can Do
The open internet gave people a world of information and choice at their fingertips. The AI revolution could take that away. Not by forcing people, but by making it so easy to let the algorithm decide that they forget what it’s like to truly choose for themselves. Buying things is becoming as thoughtless as sending a text.
In addition, a single company could become the gatekeeper for all digital shopping, with the potential for monopolization beyond even Amazon’s current dominance in e-commerce. We believe that it’s important to at least have a vigorous public conversation about whether this is the future people actually want.
Here are some steps you can take to resist the lure of convenience:
Question AI suggestions. When ChatGPT suggests products, recognize you’re seeing hand-picked choices, not all your options. Before one-tap purchases, pause and ask: Would I buy this if I had to visit five websites and compare prices?
Review your privacy settings carefully. Understand what you’re trading for convenience.
Talk about this with friends and family. The shift to AI shopping is happening without public awareness. The time to have conversations about acceptable limits is now, before one-tap purchasing becomes so normal that questioning it seems strange.
The Invisible Price Tag
AI will learn what you want, maybe even before you want it. Every time you tap “Buy now” you’re training it—teaching it your patterns, your weaknesses, what time of day you impulse buy.
Our warning isn’t about rejecting technology. It’s about recognizing the trade-offs. Every convenience has a cost. Every tap is data. The companies building these systems are betting you won’t notice, and in most cases, they’re probably right.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post OpenAI Slipped Shopping Into 800 Million ChatGPT Users’ Chats—Here’s Why That Matters appeared first on SingularityHub.
One Mind, Two Bodies: Man With Brain Implant Controls Another Person’s Hand—and Feels What She Feels
It sounds like science fiction, but the system could help people with brain or spinal cord injuries regain lost abilities.
In 2020, Keith Thomas dived into a pool and snapped his spine. The accident left him paralyzed from the chest down and unable to feel and move his arms and legs. Alone and isolated in a hospital room due to the pandemic, he jumped on a “first-of-its-kind” clinical trial that promised to restore some sense of feeling and muscle control using an innovative brain implant.
Researchers designed the implant to reconnect the brain, body, and spinal cord. An AI detects Thomas’ intent to move and activates his muscles with gentle electrical zaps. Sensors on his fingertips shuttle feelings back to his brain. Within a year, Thomas was able to lift and drink from a cup, wipe his face, and pet and feel the soft fur of his family’s dog, Bow.
The promising results left the team at the Feinstein Institutes for Medical Research and the Donald and Barbara Zucker School of Medicine at Hofstra/Northwell wondering: If the implant can control muscles in one person, can that person also use it to control someone else’s muscles?
A preprint now suggests such “interhuman” connections are possible. With thoughts alone, Thomas controlled the hand of an able-bodied volunteer using precise electrical zaps to her muscles.
The multi-person neural bypass also helped Kathy Denapoli, a woman suffering from partial paralysis and struggling to move her hand. With the system, Thomas helped her successfully pour water with his brain signals. He even eventually felt the objects she touched in return.
It sounds like science fiction, but the system could boost collaborative rehabilitation, where groups of people with brain or spinal cord injuries work together. Since starting the trial, in which the system shows rather than tells her how to move her hand, Denapoli has nearly doubled her hand strength.
“Crucially, this approach not only restores aspects of sensorimotor function,” wrote the team. It “also fosters interpersonal connection, allowing individuals with paralysis to re-experience agency, touch, and collaborative action through another person.”
Smart Bridge
We move without a second thought: pouring a hot cup of coffee while half awake, grabbing a basketball versus a tennis ball, or balancing a cup of ice cream instead of a delicate snow cone.
Under the hood, these mundane tasks activate a highly sophisticated circuit. First, the intention to move is encoded in the brain’s motor regions and the areas surrounding them. These electrical signals then travel down the spinal cord instructing muscles to contract or relax. The skin sends feedback on pressure, temperature, and other sensations back to the brain, which adjusts movement on the fly.
This circuit is broken in people with spinal cord injuries. But over the past decade, scientists have begun bridging the gap with the help of brain or spinal implants. These arrays of microelectrodes send electrical signals to tailored AI algorithms that can decode intent. The signals are then used to control robotic arms, drones, and other prosthetics. Other methods have focused on restoring sensation, a crucial aspect of detailed movement.
Connecting motor commands and sensation into a feedback loop—similar to what goes on in our brains naturally—is gaining steam. Thomas’s implant is one example. Unlike previous implants, the device simultaneously taps into the brain, spinal cord, and muscles.
The setup first records electrical activity from Thomas’s brain using sensors placed in its motor regions. The sensors send these signals to a computer where they’re decoded. The translated signals travel to flexible electrode patches, like Band-Aids, placed on his spine and forearm. The patches electrically stimulate his muscles to guide their movement. Tiny sensors on his fingertips and palm then transmit pressure and other sensations back to his brain.
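Conceptually, the bypass is a closed sense-decode-stimulate loop. The sketch below is only a cartoon of that loop’s structure; every function name, the channel count, and the sigmoid “decoder” are invented placeholders for illustration and bear no relation to the team’s actual hardware or algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def read_motor_cortex() -> np.ndarray:
    """Placeholder for sampling the implanted motor-cortex electrodes."""
    return rng.normal(size=96)  # e.g., one sample per recording channel (assumed count)

def decode_intent(neural_sample: np.ndarray) -> float:
    """Stand-in decoder mapping neural activity to a grip-effort value in [0, 1].
    The real system uses trained AI decoders; a fixed linear readout is shown
    only to make the loop concrete."""
    weights = np.linspace(-1, 1, neural_sample.size)
    return float(1 / (1 + np.exp(-weights @ neural_sample)))

def stimulate_muscles(effort: float) -> None:
    """Placeholder for driving the electrode patches on the forearm and spine."""
    print(f"stimulation amplitude ~ {effort:.2f}")

def read_fingertip_sensors() -> float:
    """Placeholder for pressure feedback from the fingertip sensors."""
    return rng.uniform(0, 1)

def stimulate_sensory_cortex(pressure: float) -> None:
    """Placeholder for returning touch feedback to the brain."""
    print(f"sensory feedback ~ {pressure:.2f}")

# A few passes around the loop: intent flows out, sensation flows back.
for _ in range(3):
    effort = decode_intent(read_motor_cortex())
    stimulate_muscles(effort)
    stimulate_sensory_cortex(read_fingertip_sensors())
```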
Over time, Thomas learned to move his arms and feel his hand for the first time in three years.
“There was a time that I didn’t know if I was even going to live, or if I wanted to, frankly. And now, I can feel the touch of someone holding my hand. It’s overwhelming,” he said at the time. “The only thing I want to do is to help others. That’s always been the thing I’m best at. If this can help someone even more than it’s helped me somewhere down the line, it’s all worth it.”
Human Connection
To help people regain their speech after injury or disease, scientists have created digital avatars that capture vocal pitch and emotion from brain recordings. Others have linked up people’s minds with non-invasive technologies for rudimentary human-to-human brain communication.
The new study paired Thomas’s brain implant with a human “avatar.” The volunteer wore electrical stimulation patches on her forearm, linked to his brain implant.
In training, Thomas watched his able-bodied partner grasp an object, such as a baseball or soft foam ball. He received electrical stimulation to the sensory regions of his brain based on force feedback. Eventually, Thomas learned to discriminate between the objects while blindfolded, with accuracy reaching over 90 percent. Different objects felt strong or light, said Thomas.
The researchers wondered if Thomas could also help others with spinal cord injury. For this trial, he worked with Denapoli, a woman in her 60s with some residual ability to move her arms despite damage to her spinal cord.
Denapoli voiced how she wanted to move her hand—for example, close, open, or hold. Thomas imagined the movement, and his brain signals wirelessly activated the muscle stimulators on Denapoli’s arm to move her hand as intended.
The collaboration allowed her to pick up and pour a water bottle in roughly 20 seconds, with a success rate nearly triple that of when she tried the same task alone. In another test, Thomas’s neural commands helped her grasp, sip from, and set a can of soda down without spillage.
The connection went both ways. Gradually, Thomas began to feel the objects she touched based on feedback sent to his brain.
“This paradigm…allowed two participants with tetraplegia to engage in cooperative rehabilitation, demonstrating increased success in a motor task with a real-world object,” wrote the team.
The implant may have long-lasting benefits. Because it taps into the three main components of neurological sensation and movement, repeatedly activating the circuit could trigger the body to repair damage. With the implant, Thomas experienced improved sensation and movement in his hands, and Denapoli increased her grip strength.
The treatment could also help people who suffered a stroke and lost control of their arms, or those with amyotrophic lateral sclerosis (ALS), a neurological disease that gradually eats away at motor neurons. To be clear, the results haven’t yet been peer-reviewed and involve a very limited group of people. More work is needed to see if this type of collaborative rehabilitation—or what the authors call “thought-driven therapy”—helps compared to existing approaches.
Still, both participants are happy. Thomas said the study gave him a sense of purpose. “I was more satisfied [because] I was helping somebody in real life…rather than just a computer,” he said.
“I couldn’t have done that without you,” Denapoli told Thomas.
The post One Mind, Two Bodies: Man With Brain Implant Controls Another Person’s Hand—and Feels What She Feels appeared first on SingularityHub.
‘Unprecedented’ Artificial Neurons Are Part Biological, Part Electrical—Work More Like the Real Thing
Bacterial nanowires and memristors combine in artificial neurons that can control living cells.
Most people wouldn’t give Geobacter sulfurreducens a second look. The bacterium was first discovered in a ditch in rural Oklahoma. But the lowly microbe has a superpower. It grows protein nanotubes that transmit electrical signals and uses them to communicate.
These bacterial wires are now the basis of a new artificial neuron that activates, learns, and responds to chemical signals like a real neuron.
Scientists have long wanted to mimic the brain’s computational efficiency. But despite years of engineering, artificial neurons still operate at much higher voltages than natural ones. Their frustratingly noisy signals require an extra step to boost fidelity, undercutting energy savings.
Because they don’t match biological neurons—imagine plugging a 110-volt device into a 220-volt wall socket—it’s difficult to integrate the devices with natural tissues.
But now a team at the University of Massachusetts Amherst has used bacterial protein nanowires to form conductive cables that capture the behaviors of biological neurons. When combined with an electrical module called a memristor—a resistor that “remembers” its past—the resulting artificial neuron operated at a voltage similar to its natural counterpart.
“Previous versions of artificial neurons used 10 times more voltage—and 100 times more power—than the one we have created,” said study author Jun Yao in a press release. “Ours register only 0.1 volts, which [is] about the same as the neurons in our bodies.”
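The 10-times-voltage, 100-times-power comparison follows from simple circuit arithmetic if the device impedances are roughly comparable, an assumption made here purely for illustration:

```latex
P = \frac{V^2}{R}
\quad\Rightarrow\quad
\frac{P_\text{previous}}{P_\text{new}}
= \left(\frac{V_\text{previous}}{V_\text{new}}\right)^{2}
= \left(\frac{1\,\text{V}}{0.1\,\text{V}}\right)^{2} = 100
```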
The artificial neurons easily controlled the rhythm of living heart muscle cells in a dish. And adding an adrenaline-like molecule triggered the devices to up the muscle cells’ “heart rate.”
This level of integration between artificial neurons and biological tissue is “unprecedented,” Bozhi Tian at the University of Chicago, who was not involved in the work, told IEEE Spectrum.
Better Way to Compute
The human brain is a computational wonder. It processes an enormous amount of data at very low power. Scientists have long wondered how it’s capable of such feats.
Massively parallel computing—with multiple neural networks humming along in sync—may be one factor. More efficient hardware design may be another. Computers have separate processing and memory modules that require time and energy to shuttle data back and forth. A neuron is both memory chip and processor in a single package. Recent studies have also uncovered previously unknown ways brain cells compute.
It’s no wonder researchers have long tried to mimic neural quirks. Some have used biocompatible organic materials that act like synapses. Others have incorporated light or quantum computing principles to drive toward brain-like computation.
Compared to traditional chips, these artificial neurons slashed energy use when faced with relatively simple tasks. Some even connected with biological neurons. In a cross-continental test, one artificial neuron controlled a living, biological neuron that then passed the commands on to a second artificial neuron.
But building mechanical neurons isn’t for the “whoa” factor. These devices could make implants more compatible with the brain and other tissues. They may also give rise to a more powerful, lower energy computing system compared to the status quo—an urgent need as energy-hogging AI models attract hundreds of millions of users.
The Life of a Neuron
Previous artificial neurons loosely mimicked the way biological neurons behave. The new study sought to recapitulate their electrical signaling.
Neurons aren’t like light switches. A small input, for example, isn’t enough to activate them. But as signals consistently build up, they trigger a voltage change, and the neuron fires. The electrical signal travels along its output branch and guides neighboring neurons to activate too. In the blink of an eye, the cells connect as a network, encoding memories, emotions, movement, and decisions.
Once activated, neurons go into a resting mode, during which they can’t be activated again—a brief reprieve before they tackle the next wave of electrical signals.
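The charge-up, fire, and rest cycle described here is the behavior captured by the textbook leaky integrate-and-fire model. The short simulation below is that generic model with arbitrary parameters, shown only to make the dynamics concrete; it is not a model of the nanowire device itself.

```python
import numpy as np

# Leaky integrate-and-fire: a generic model of the dynamics described above.
dt = 1e-4          # time step (s)
tau = 0.02         # membrane time constant (s)
v_rest = 0.0       # resting potential (arbitrary units)
v_thresh = 1.0     # firing threshold
refractory = 0.005 # seconds the neuron stays quiet after firing

v = v_rest
refractory_left = 0.0
spikes = []

time = np.arange(0, 0.5, dt)
drive = 1.2 + 0.2 * np.random.default_rng(1).standard_normal(time.size)  # noisy input

for t, i_in in zip(time, drive):
    if refractory_left > 0:            # resting mode: inputs are ignored for a moment
        refractory_left -= dt
        v = v_rest
        continue
    # Inputs accumulate, but the potential also leaks back toward rest.
    v += dt / tau * (-(v - v_rest) + i_in)
    if v >= v_thresh:                  # threshold crossed: the neuron fires
        spikes.append(t)
        v = v_rest
        refractory_left = refractory

print(f"{len(spikes)} spikes in {time[-1] + dt:.2f} s")
```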
These dynamics are hard to mimic. But the tiny protein cables G. sulfurreducens bacteria use to communicate may help. The cables can withstand extremely unpredictable conditions, such as Oklahoma winters. They’re also particularly adept at conducting ions—the charged particles involved in neural activity—with high efficiency, nixing the need to amplify signals.
Harvesting the nanocables was a bit like drying wild mushrooms. The team snipped them off collections of bacteria and developed a way to rid them of contaminants. They suspended the wispy proteins in liquid and poured the concoction onto an even surface for drying. After the water evaporated, they were left with an extremely thin film containing protein nanocables that retained their electrical capabilities.
The team integrated this film into a memristor. Like in neurons, changing voltages altered the artificial neuron’s behavior. Built-up voltage caused the protein nanowires to bridge a gap inside the memristor. With sufficient input voltage, the nanocables completed the circuit and electrical signals flowed—essentially activating the neuron. Once the voltage dropped, the nanocables dissolved, and the artificial neurons reset to a resting state like their biological counterparts.
Because the protein wires are extremely sensitive to voltage changes, they can instruct the artificial neurons to switch their behavior at a much lower energy. This slashes total energy costs to one percent of previous artificial neurons. The devices operate at a voltage similar to biological neurons, suggesting they could better integrate with the brain.
Beating Heart
As proof of concept, the team connected their invention to heart muscle cells. These cells require specific electrical signals to keep their rhythm. Like biological neurons, the artificial neurons monitored the strength of heart cell contractions. Adding norepinephrine, a drug that rapidly increases heart rate, activated the artificial neurons in a way that mimics natural ones, suggesting they could capture chemical signals from the environment.
Although it’s still early, the artificial neurons pave the way for uses that seamlessly bridge biology and electronics. Wearable devices and brain implants inspired by the design could yield prosthetics that better “talk” to the brain.
Outside of biotech, artificial neurons could be a greener alternative to silicon-based chips if the technology scales up. Unlike older designs that require difficult manufacturing processes, such as extreme temperatures, this new iteration can be printed with the same technology used to manufacture run-of-the-mill silicon chips.
It won’t be an easy journey. Harvesting and processing protein nanotubes remains time consuming. It’s yet unclear how long the artificial neurons can remain fully functional. And as with any device that includes biological components, more quality control will be needed to ensure consistent manufacturing.
Regardless, the team is hopeful the design can inspire more effective bioelectronic interfaces. “The work suggests a promising direction toward developing bioemulated electronics, which in turn can lead to closer interface with biosystems,” they wrote. Not too bad for bacteria discovered in a ditch.
The post ‘Unprecedented’ Artificial Neurons Are Part Biological, Part Electrical—Work More Like the Real Thing appeared first on SingularityHub.



