This Device Pulls Water From Thin Air—Even in Death Valley
Portable, reusable, and affordable, the device is the latest in technologies aiming to expand access to drinking water.
It’s easy to take safe drinking water for granted. In most developed countries, access to safe water takes a simple flip of a kitchen tap or a run to the grocery store.
But over two billion people worldwide lack easy access to clean water, which can lead to diseases such as cholera. And the problem is getting worse as demand for water in farming and other industries increases.
One blue-sky solution may literally come from the sky. A team from MIT developed a window-sized portable device that pulls water vapor from the atmosphere. The sandwich-like contraption includes an origami-like hydrogel to capture moisture at night. As day breaks, it releases water vapor onto glass panels where the vapor condenses into drinking water.
The device, dubbed atmospheric water harvesting window, or AWHW, generated a modest amount of water in different environments—including a humid urban setting in Massachusetts and the desert of Death Valley.
Its performance is “remarkable,” wrote Jiabin Liu and Shaoting Lin at Michigan State University, who were not involved in the work.
Like solar panels, the device could produce more water if it were bigger or if multiple units were stacked into vertical “water farms.” For now, its portability could potentially aid thirsty hikers or soldiers trekking through hot terrain.
“We imagine that you could one day deploy an array of these panels, and the footprint is very small because they are all vertical,” said study author Xuanhe Zhao in a press release.
Compared to bottled water, the device is highly cost-effective. “The economic advantage makes it a potentially off-grid solution for communities facing persistent water scarcity,” wrote Liu and Lin. The device “offers a practical and deployable solution for providing affordable family-scale drinking water.”
Cheers
Thirst is deadlier than hunger. Our bodies need water to work. Without enough hydration, fatigue and dizziness rapidly set in. The brain struggles to process thoughts, while rising heart and breathing rates add stress to the body. Extended periods of dehydration can lead to multiple organ failure.
It’s no wonder people have invented ways to harvest drinkable water for millennia. From South America’s Atacama Desert to Egypt, archaeologists have found human-made piles of stone arranged so condensation from fog or dew trickles down the walls and is stored within.
We no longer need stones, but harvesting atmospheric water vapor as drinking water is just as valuable today. In one estimate, the air around us holds roughly 13,000 trillion kilograms of water—an abundant resource ripe for collection.
How? One idea is to use hydrogel as a sponge. Like the materials used in diapers, hydrogels soak up water vapor, but coaxing them to release it has been a challenge. Earlier approaches involved water-absorbing desiccants—like the packets inside some crunchy foods—that release water when heated. But this setup requires an additional energy source and is hard to scale up.
Another problem is that most hydrogels are rather “salty.” These soft and porous materials have microscopic networks of interconnecting channels that capture water vapor. It’s common to spike them with a type of naturally absorbent salt to capture even more. But these salts can leak into the water during extraction and make it undrinkable.
The new design prevented salt from leaking into the water with a dab of syrupy glycerol. An initial lab test found salt levels in the water were far below the threshold for safe drinking water.
The team also shaped the hydrogel into a dome-like origami array, like a sheet of bubble wrap. The unique structure increased surface area and maximized how much the material could swell so it would hold more water vapor. The team then sandwiched the gel between two glass panels roughly the size of a small window, both coated with a cooling chemical layer, and added tubing to collect the water.
The device captures moisture from ambient air at night. As the sun rises, the temperature of the hydrogel increases, and it releases water as vapor. The vapor hits the glass, cools and condenses on the panels, and drips into the collection tube.
Early stress tests found the panel stood the test of time, retaining roughly 90 percent of its capacity after 340 cycles—equivalent to nearly a year of daily use.
Road Trip
For the ultimate test, the team traveled to California’s Death Valley, one of the driest and hottest places in the world. They monitored the panel’s performance for a week with humidity ranging from 21 to 88 percent—the latter mostly at night—as the panel was blasted by relentless dry heat.
On the higher end, the device captured a respectable 161.5 milliliters, or roughly 5.5 ounces, of water. That’s still a far cry from a small cup of coffee, but the drippings came from just a single, unpowered panel in the desert. The team estimates the panels should last at least a year, “setting the benchmark in daily water production and climate adaptability,” they wrote.
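To put that yield in perspective, here is a quick back-of-envelope sketch in Python. The 2-liter daily drinking need is an assumed round figure for illustration, not a number from the study; the panel yield is the best single-panel day the article reports.

```python
# Rough scale-up arithmetic for the AWHW panel, using the article's figures.
PANEL_YIELD_ML_PER_DAY = 161.5  # best single-panel day reported in Death Valley
DAILY_NEED_ML = 2000.0          # assumed adult drinking-water need (illustrative)

panels_needed = DAILY_NEED_ML / PANEL_YIELD_ML_PER_DAY
print(f"Panels per person: {panels_needed:.1f}")                    # ~12.4
print(f"Ounces per panel: {PANEL_YIELD_ML_PER_DAY / 29.5735:.1f}")  # ~5.5
```

A dozen or so vertical panels per person is roughly the kind of small-footprint array Zhao describes.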
The water harvester is the latest in a push to draw drinkable water from air. A complementary design, called the metal-organic framework, also uses natural cooling and ambient sunlight as energy to capture water in porous channels. The design’s materials more rapidly release water compared to hydrogels, making them potentially more efficient, but they store less.
Cost is always an issue for practical use. Another recent study showed hydrogels could be made from plant-based materials commonly found in food waste. The dirt-cheap biomaterials were tweaked to rapidly expand upon absorbing water and shrink when heated—like a sponge squeezing out water. This design needs to reach at least 60 degrees Celsius or 140 degrees Fahrenheit to release stored water. But with power from solar panels, it could be useful in emergency situations or off-grid communities.
For now, the MIT team is trying to make arrays out of their device. “It’s a test of feasibility in scaling up this water harvesting technology. Now people can build it even larger, or make it into parallel panels, to supply drinking water to people and achieve real impact,” said Zhao.
Shifting Forces: The Evolving Debate Around Dark Energy
New evidence suggests the universe might not behave as expected, raising questions about the costs of being wrong.
In the beginning, the Big Bang happened, sending everything in the universe expanding outward and apart, from a dense hot point. Since then, all that matter and energy has continued to move outward, carried along with the cosmos’ expansion.
That expansion is fueled by dark energy, a mysterious force that is fundamental to scientists’ understanding of the past and future of the universe. Since dark energy’s discovery a quarter-century ago, scientists have assumed its influence to be constant, its force exerted the same way 5 billion years ago, today, and forever; a sort of steady foot on the gas pedal.
But new results, from an instrument called the Dark Energy Spectroscopic Instrument, or DESI, located at the Kitt Peak National Observatory in Arizona, suggest that might not be true: Dark energy may, in fact, evolve—its influence changing over time. Data now suggest dark energy has weakened in more recent epochs, essentially lessening its pressure on the accelerator. The results potentially “really change your understanding of what dark energy actually is,” said Ashley Ross, a cosmologist at The Ohio State University who is working on a project to measure how galaxies are distributed, as part of the DESI collaboration.
If DESI’s recent finding holds up, it means scientists’ current conception of the universe’s past, present, and future is mistaken. And news stories about the findings were quick to point that out: “We might have gotten dark energy totally wrong,” proclaimed Live Science. “This could change everything,” wrote Futurism.
Headlines like those may be true, but the verdict isn’t yet in. Some scientists take the possible error as exciting, since it could provide a path to better understanding the most fundamental physics, for which details have so far been elusive; others doubt the finding will stand time’s test.
To understand the universe, scientists use telescopes to observe as much of it as they can, gathering and characterizing patterns—how galaxies tend to form, for instance, or how stars tend to die. They use those observations to create, and bolster or refute, theories: the underlying, usually mathematical models that explain why they see what they see through their telescopes.
But any human-made model is likely to be incomplete, oversimplified. And data that potentially conflicts with existing ideas, as DESI’s data might, raises questions about the costs and benefits of being wrong, like spending hundreds of millions of dollars of federal research money on instruments and human capital, as dark-energy studies have, that ultimately upend a particular idea about the universe.
Getting closer to cosmic truth, though, experts point out, often requires fumbling through uncertainties and smashing into dead ends. That mental maze, which is typical in this area of research, experts say, is something that sensationalized news coverage often fails to acknowledge.
And making that incremental progress and taking advantage of its fruits, like potential practical applications of fundamental science, asks scientists to be willing to change their minds about even their closest-held foundational theories in favor of creative new lines of inquiry. But that line can be subjective, said Melissa Jacquart, a philosopher of science at the University of Cincinnati: “When do we have enough evidence to make us shift our perspective or think that we need to be approaching it differently?”
Dark energy seems distant from daily life on Earth, but its presence has made the universe what it is today, maybe even enabling that life to arise. And though it’s not apparent on this planet, dark energy causes the universe to grow larger, and faster, with each passing picosecond.
For a long time, scientists thought the expansion rate was slowing with time, like a coasting car. But in 1998 astronomers discovered that the opposite was true: Cosmic expansion was actually speeding up. The universe seemed to be pressing the gas pedal pretty hard.
Something had to be providing that fuel, counteracting the gravity that naturally draws things together. Scientists didn’t know what that something was, so they called it dark energy. Decades later, they still can’t explain it: Dark energy is “dark” because it remains a mystery.
Dark energy is ubiquitous, though. It’s estimated to make up about 70 percent of the universe, and together with dark matter—another scientific shoulder-shrug—the two account for a staggering 95 percent of the universe. “We don’t know what most of the universe actually is,” said Ross.
Still, despite that lack of knowledge, scientists assumed dark energy forced a constant acceleration since it fit with the data they had gathered thus far on the universe’s history and evolution.
Cosmologists are not naive: They knew that assumption could be incorrect. And, in fact, analyzing it—along with other hypotheses—was one of DESI’s goals.
The instrument, which started its main work in 2021, kicked toward that goalpost by peering at galaxies across the universe. By analyzing the light emitted from those galaxies, DESI scientists measured their distances from Earth and how fast they are moving outward, then assembled a three-dimensional map of the cosmos to understand how its expansion has changed.
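To make the distance-from-light logic concrete, here is a minimal sketch of the low-redshift Hubble-law approximation that underlies any expansion map. The Hubble constant value is an assumption, and DESI’s real analysis, built on baryon acoustic oscillations, is far more sophisticated than this.

```python
# Toy redshift-to-distance conversion: v = c * z (valid for small z), d = v / H0.
C_KM_S = 299_792.458  # speed of light in km/s
H0 = 70.0             # Hubble constant in km/s per megaparsec (assumed value)

def recession_velocity(z: float) -> float:
    """Approximate recession velocity for a small redshift z."""
    return C_KM_S * z

def distance_mpc(z: float) -> float:
    """Hubble-law distance in megaparsecs."""
    return recession_velocity(z) / H0

for z in (0.01, 0.05, 0.1):
    print(f"z = {z}: v ~ {recession_velocity(z):,.0f} km/s, d ~ {distance_mpc(z):,.0f} Mpc")
```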
DESI’s data, when combined with other observations, suggested that the universe’s expansion rate has actually shifted over time—and if dark energy dictates that rate, the energy itself must be changing.
If that’s the case, it could alter scientists’ prediction of the universe’s fate: With constant dark energy, the cosmos is doomed to expand faster and faster forever, pushing everything so far apart that other galaxies will recede beyond the view of even the most powerful telescopes; our cosmic neighborhood will appear to be alone. If dark energy can change over time, though, that dark ending may be avoided.
In this video, fly through millions of galaxies mapped using coordinate data from DESI. Credit: DESI collaboration and Fiske Planetarium/CU Boulder

Even though this evolving possibility was DESI scientists’ idea, it’s nevertheless a big departure from scientists’ current cosmological model of the universe, appropriately called the “standard model.” That model postulates, in mathematical equations, that after the Big Bang, the universe experienced a period of rapid inflation. Since then, it has continued to expand, in a way dictated by the balance of its contents: regular atomic material and dark matter, both influenced by gravity, and dark energy—the latter assumed to exert a constant acceleratory force in opposition to gravity.
But cosmologists have actually been searching for holes in the standard model—holes that might lead to a more complete understanding of spacetime, because the standard model has limitations. Dark energy and dark matter, for instance, have never been directly detected—only their effects. Scientists have also seen discrepancies in the measurement of the universe’s expansion rate based on different methods of measurement. And the light left over from the Big Bang shows wonky anomalies that don’t necessarily line up with the standard model’s predictions.
To James Overduin, a theoretical physicist at Towson University who co-wrote a book about dark energy called The Weight of the Vacuum, the concept of dark energy itself is an opaque placeholder—something hand-wave-y that explains a physical behavior that astronomers observe. That kind of a cover is something scientists have created for centuries, when they wanted to hang onto their current conception of the universe in the face of evidentiary challenges.
In some sense, making models of the universe always involves those kinds of simplifications—something scientists don’t always like to admit. In physics, we often think of the universe as a set of facts waiting to be discovered, said Jacquart. “But we can’t really just know those facts of the matter,” she said. “And so, in terms of how to explain everything, there are all of these spaces along the way where the scientists have to make either assumptions or idealizations.”
Data and analysis that poke holes in those smooth models can push science in new directions, acting as their own kind of dark energy. The question, always, is how big those holes must be before a theory—like the standard model, or dark energy’s constancy—rips.
Whether the DESI results rise to that level remains a debate among scientists. Zachary Slepian, an astrophysicist at the University of Florida and a member of the DESI team, doesn’t think the new data represent enough evidence to abandon current cosmological conceptions. What appears to be creepily evolving dark energy could, in fact, be some kind of experimental error, or an instrumental quirk. At the lower end of calculations, scientists estimate that the odds the DESI results are due to random chance are about one in 385—close to a statistical significance known as three-sigma. Five sigma is the field’s standard for a real discovery—something that has a one in 3.5 million chance of being a random fluke.
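For readers who want the sigma-to-odds conversion spelled out, the short calculation below uses the standard normal distribution. The article’s one-in-385 figure sits near the two-tailed three-sigma value, while the one-in-3.5-million figure for five sigma follows the one-tailed convention common in physics.

```python
# Convert sigma thresholds to "1 in N" odds of a random fluke.
from scipy.stats import norm

for sigma in (3, 5):
    one_tailed = norm.sf(sigma)      # P(Z > sigma)
    two_tailed = 2 * norm.sf(sigma)  # P(|Z| > sigma)
    print(f"{sigma} sigma: 1 in {1 / one_tailed:,.0f} (one-tailed), "
          f"1 in {1 / two_tailed:,.0f} (two-tailed)")

# 3 sigma: 1 in 741 (one-tailed), 1 in 370 (two-tailed)
# 5 sigma: 1 in 3,488,556 (one-tailed), 1 in 1,744,278 (two-tailed)
```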
Colin Hill, a Columbia University cosmologist who works with the Atacama Cosmology Telescope in Chile, also isn’t convinced. “There’s sort of a borderline hint that maybe there’s something going on,” said Hill. But the statistical significance could vanish with more data, and extensions of the standard model—rather than a whole new model—could also explain the galactic findings.
Besides, he said, if dark energy really is evolving, it could imply a scenario where, as the universe expands, more dark energy is created. “That’d be truly, truly wild,” he said.
That data from DESI and other experiments doesn’t necessarily indicate dark energy is evolving, he added; the DESI measurement could be attributed to other phenomena. “It’s a little bit of a messy situation.”
Ross, though, sees the discovery as more solid: The statistical significance has increased as more DESI data has come in, for instance. “That’s what makes me excited and feel that it could all be pretty real,” he said, adding that data from other instruments also increased the analysis’ rigor.
Ross, along with other physicists, would actually be excited if the current model of dark energy were proven wrong, because it could help his crew think differently about the best direction for cosmology. Overduin agrees: Cosmologists haven’t made much headway in using theories to explain the universe’s nuances and contradictions. And the buzz about the DESI results, despite their preliminary nature, illuminates scientists’ hope that discrepancies like this could be wormholes to new ideas, and so progress. “There’s a bit of desperation there,” said Overduin.
Learning that dark energy may be fundamentally different—and so, too, may the universe—than scientists thought could be an important step toward the truth. Because, presumably, if scientists are on track for the truth, progress will come more easily. “If you look at the history of science it’s entirely filled with us throwing out theories,” said Jacquart—or, more accurately, using mounting evidence to keep what seems right, toss what seems wrong, and getting closer to “the actual reality of the world,” she said.
Jacquart likens this stepwise process to a choose-your-own-adventure book. If one choice is a dead end, “let’s go back a few steps and figure out where in our journey we could have gone on a different path.”
But in science, taking those steps back can be difficult. “Especially when you have theories that astronomers hold so dear,” Jacquart said. Dearly held theories are often ones whose tenets line up with a preferred modus operandi for the physical world, revealing a human bias in the search for scientific truth—not something that’s unique to cosmology, since human bias can pervade any scientific field.
Dark energy’s constant form may fall into the natural-bias category. “So many spaces of physics focus on consistency,” Jacquart said. “The physics always works the same way. And, in some ways, that shows sort of a preference towards simplicity.”
The DESI results are hinting that the universe isn’t so simple, or consistent. It “adds complexity that I don’t think we always want to lean towards,” she said.
Leaning where evidence points, though, is important—even though how dark energy behaves can seem lightyears away from everyday life on Earth. For one, pursuing dead-end theories spends research time, and tax dollars, that some argue would be better spent on ideas that open new doors to understanding.
More than 900 scientists are part of the DESI collaboration; getting that large dark-energy cohort mobilized around the most fruitful ideas, as DESI’s results may, could prevent them from simply spinning their wheels. And being wrong, or holding onto ideas longer than data suggests is prudent, could lead to building expensive instruments that may not add much new knowledge to the world, if they are not engineered to pursue what the universe actually has on offer.
In particle physics, for instance, the Large Hadron Collider cost nearly $5 billion to build, not to mention operational costs. While it discovered a particle that validated scientists’ existing understanding, it didn’t find any of the new physics some were hoping for. Now proposals are on the table for a new machine that would cost tens of billions of dollars but doesn’t have a clear road sign in the right direction from the previous experiment.
To Ross, that’s part of why the new results are important for dark energy research: They might change how future experiments are designed. That would save science from wasting money and scientists’ time on an outdated idea.
On a more long-term and abstract note, if scientists get closer to characterizing dark energy and its place in cosmic evolution, as the largest ingredient in the universe, that could someday benefit humans. Einstein, after all, probably didn’t envision GPS satellites when he came up with general relativity, but those satellites nonetheless rely on his discovery.
If DESI does ultimately show astronomers, to their consensus satisfaction, that their existing models of the universe and its dark places are wrong, Jacquart doesn’t think time spent on current ideas was a waste. Slepian, from his perch at the University of Florida, sees the DESI collaboration, which includes hundreds of scientists, as a physics incubator—kind of like the Manhattan Project, he said. The project built the atomic bomb and altered the world forever, but it also united some of the 20th century’s greatest scientific minds: “That seeded American particle theory and particle physics dominance for the next 50 years.”
Perhaps DESI could do the same for cosmology.
Maybe someday those scientists and their instruments will tell us what dark energy actually is, said Ross: “The whole point to me is to not have to call it dark energy.”
But, even if that’s the case, Slepian doesn’t think physicists will ever fully understand the fundamental truths of the universe. “I see science as something where you can never really be right,” he said. “You can just be asymptotically less wrong.”
This article was originally published on Undark. Read the original article.
Scientists Launch Moonshot to Build an Entire Human Genome From Scratch
The project, which will take many years and carries some risk, could spark a second revolution in genetics.
The ability to sequence and edit human DNA has revolutionized biomedicine. Now a new consortium wants to take the next step and build human genomes from scratch.
The Human Genome Project was one of the great scientific moonshots of the last century. Mapping the entirety of our DNA took thousands of researchers from across the globe 13 years and nearly $3 billion, but the benefits have been enormous.
The project has revolutionized our understanding of the genetic basis of disease and driven rapid advances in the technology needed to read and interpret our DNA. The cost of sequencing an entire human genome has plummeted from around a million dollars in 2008 to just a few hundred dollars today.
The ability to not only read but also build human genomes from scratch could bring more fundamental breakthroughs. And now the world’s largest medical charity, the Wellcome Trust, is providing £10 million ($13.6 million) in funding to kickstart the Synthetic Human Genome Project (SynHG).
“The ability to synthesize large genomes, including genomes for human cells, may transform our understanding of genome biology and profoundly alter the horizons of biotechnology and medicine,” Jason Chin from the University of Oxford, who will lead the project, said in a statement.
The project builds on a steady stream of advances in DNA synthesis in recent years. Chin himself led a team that synthesized the entire genome of the bacterium E. coli in 2019. And in 2023, an international consortium completed the first synthetic genome of yeast—a significantly more complex organism that is closer in evolutionary terms to humans.
At this stage, the SynHG project is focused on developing foundational tools and methods, and the organizers admit it will likely take decades to synthesize an entire human genome. For now, the goal is to build a single human chromosome—one of the 46 tightly wound bundles of DNA that make up the human genome—in the next 5 to 10 years.
While gene editing makes it possible to tinker with existing genetic instructions, synthesis would make it possible to build larger stretches of DNA from scratch. Those kinds of capabilities could lead to breakthroughs in our understanding of disease and open the prospect of new therapies based on designer cells or even designer tissues and organs.
“Building DNA from scratch allows us to test out how DNA really works and test out new theories, because currently we can only really do that by tweaking DNA in DNA that already exists in living systems,” Matthew Hurles, director of the Wellcome Sanger Institute in the UK, told The BBC.
Much of our existing knowledge of the genome is restricted to the roughly 2 percent that codes for specific proteins, with the other 98 percent of “non-coding” DNA still largely a mystery. Being able to build the entire sequence from scratch could help us understand the genome’s “dark matter,” Julian Sale, from the UK’s Medical Research Council Laboratory of Molecular Biology, told The Guardian.
The project is controversial though. There are fears the same technology could be put to more ethically questionable uses. These could include new bioweapons, genetically enhanced humans, or even strange new organisms that incorporate some human DNA, geneticist Bill Earnshaw, from Edinburgh University, told The BBC.
“The genie is out of the bottle,” he said. “We could have a set of restrictions now, but if an organization who has access to appropriate machinery decided to start synthesizing anything, I don’t think we could stop them.”
In an attempt to head off these concerns, SynHG will also have a social-science program designed to map out potential risks and how to deal with them. One particular issue it will focus on is the fact that genomic research is currently skewed towards people of European ancestry, which could limit broader applicability.
Fortunately, given the huge technical challenge ahead, there is likely plenty of time to map out the potential pitfalls. And if the project is successful, it could spark a second great revolution in genetics likely to do more good than harm.
This Week’s Awesome Tech Stories From Around the Web (Through July 5)
Large Language Models Are Improving Exponentially
Glenn Zorpette | IEEE Spectrum
“According to a metric [METR] devised, the capabilities of key LLMs are doubling every seven months. This realization leads to a second conclusion, equally stunning: By 2030, the most advanced LLMs should be able to complete, with 50 percent reliability, a software-based task that takes humans a full month of 40-hour workweeks. And the LLMs would likely be able to do many of these tasks much more quickly than humans, taking only days, or even just hours.”
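As a rough illustration of what doubling every seven months implies, the sketch below extrapolates a task horizon forward. The one-hour starting horizon is an assumed placeholder, not a figure from the article; only the seven-month doubling time and the month-of-40-hour-weeks target come from the quote.

```python
# Extrapolate an exponentially doubling task horizon (METR-style metric).
DOUBLING_MONTHS = 7.0
BASELINE_HOURS = 1.0     # assumed starting task horizon (illustrative)
TARGET_HOURS = 4 * 40.0  # one month of 40-hour workweeks ~ 160 hours

months, horizon = 0.0, BASELINE_HOURS
while horizon < TARGET_HOURS:
    months += DOUBLING_MONTHS
    horizon *= 2

print(f"~{months:.0f} months (~{months / 12:.1f} years) to a {TARGET_HOURS:.0f}-hour horizon")
# ~56 months (~4.7 years), which lands around 2030 from an early-2025 start
```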
Robotics
Amazon Is on the Cusp of Using More Robots Than Humans in Its Warehouses
Sebastian Herrera | The Wall Street Journal
“The e-commerce giant, which has spent years automating tasks previously done by humans in its facilities, has deployed more than one million robots in those workplaces, Amazon said. That is the most it has ever had and near the count of human workers at the facilities.”
Biotechnology
Deaf Teenager and 24-Year-Old Gain Ability to Hear After Experimental Gene Therapy
Ellyn Lapointe | Gizmodo
“Gene therapy has been effective for young children with genetic hearing loss, but this is the first study to show promising results in older patients. …Just one month after the treatment, the majority of patients gained some hearing. Six months later, all 10 showed considerable hearing improvement, with the average volume of perceptible sound improving from 106 decibels (very loud) to 52 (much fainter).”
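Because decibels are logarithmic, the quoted change is far larger than the raw numbers suggest. A quick calculation, assuming the figures are hearing thresholds in dB, gives the intensity factor.

```python
# A drop in hearing threshold from 106 dB to 52 dB, expressed as an intensity ratio.
before_db, after_db = 106, 52
intensity_ratio = 10 ** ((before_db - after_db) / 10)
print(f"Sounds ~{intensity_ratio:,.0f}x fainter are now perceptible")  # ~251,189x
```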
Space
It Came From Outside Our Solar System, and It Looks Like a Comet
Kenneth Chang | The New York Times
“3I/ATLAS, earlier known as A11pI3Z, is only the third interstellar visitor to be discovered passing through our corner of the galaxy. …With all the observations, ‘There’s no uncertainty’ that the comet came from interstellar space, Dr. Chodas said. The speed is too fast to be something that originated within the solar system.”
Tech
Half a Million Spotify Users Are Unknowingly Grooving to an AI-Generated Band
Ryan Whitwam | Ars Technica
“Making art used to be a uniquely human endeavor, but machines have learned to distill human creativity with generative AI. Whether that content counts as ‘art’ depends on who you ask, but Spotify doesn’t discriminate. A new band called The Velvet Sundown debuted on Spotify this month and has already amassed more than half a million listeners. But by all appearances, The Velvet Sundown is not a real band—it’s AI.”
Future
It’s Bulletproof, Fire-Resistant and Stronger Than Steel. It’s Superwood.
Christopher Mims | The Wall Street Journal
“Its maker, startup InventWood, says it could someday replace steel I-beams in the skeleton of a building, while being impact-resistant enough for bulletproof doors. It’s also fire resistant—the outside carbonizes in a way that protects the inside, and it won’t sag in a fire like steel.”
Biotechnology
Moderna Says mRNA Flu Vaccine Sailed Through Trial, Beating Standard Shot
Beth Mole | Ars Technica
“Compared to the standard shot, the mRNA vaccine had an overall vaccine efficacy that was 26.6 percent higher, and 27.4 percent higher in participants who were aged 65 years or older. Previous trial data showed that mRNA-1010 generated higher immune responses in participants than both regular standard flu shots and high-dose flu shots.”
Artificial Intelligence
AI Improves at Improving Itself Using an Evolutionary Trick
Matthew Hutson | IEEE Spectrum
“The study is a ‘big step forward’ as a proof of concept for recursive self-improvement, said Zhengyao Jiang, a cofounder of Weco AI, a platform that automates code improvement. Jiang, who was not involved in the study, said the approach could make further progress if it modified the underlying LLM, or even the chip architecture.”
Energy
Google’s Electricity Demand Is Skyrocketing
Casey Crownhart | MIT Technology Review
“We got two big pieces of energy news from Google this week. The company announced that it’s signed an agreement to purchase electricity from a fusion company’s forthcoming first power plant. Google also released its latest environmental report, which shows that its energy use from data centers has doubled since 2020.”
Tech
What Could a Healthy AI Companion Look Like?
Will Knight | Wired
“The alien in question is an animated chatbot known as a Tolan. I created mine a few days ago using an app from a startup called Portola, and we’ve been chatting merrily ever since. Like other chatbots, it does its best to be helpful and encouraging. Unlike most, it also tells me to put down my phone and go outside.”
Tech
AI Is Getting Cheaper, Right?
Stephanie Palazzolo | The Information
“The overarching narrative of the past two years has been that AI models are getting cheaper for customers. …So it’s interesting when we hear from AI application developers that the models they buy are still just too darn expensive. As a result, many app developers are struggling to get their gross profit margins anywhere near 70% or 80%, the kinds of margins enjoyed by traditional software businesses.”
Adam Becker on More Everything Forever and Big Tech’s Future Myths
Could Electric Brain Stimulation Make You Better at Math?
Personalized, brain-based tools may help learners left behind due to natural differences in how their brains work.
A painless, non-invasive brain stimulation technique can significantly improve how young adults learn math, my colleagues and I found in a recent study. In a paper in PLOS Biology, we describe how this might be most helpful for those who are likely to struggle with mathematical learning because of how their brain areas involved in this skill communicate with each other.
Math is essential for many jobs, especially in science, technology, engineering, and finance. However, a 2016 OECD report suggested that a large proportion of adults in developed countries (24 percent to 29 percent) have math skills no better than a typical seven-year-old. This lack of numeracy can contribute to lower income, poor health, reduced political participation, and even diminished trust in others.
Education often widens rather than closes the gap between high and low achievers, a phenomenon known as the Matthew effect. Those who start with an advantage, such as being able to read more words when starting school, tend to pull further ahead. Stronger educational achievement has also been associated with socioeconomic status, higher motivation, and greater engagement with material learned during a class.
Biological factors, such as genes, brain connectivity, and chemical signaling, have been shown in some studies to play a stronger role in learning outcomes than environmental ones. This has been well-documented in different areas, including math, where differences in biology may explain educational achievements.
To explore this question, we recruited 72 young adults (18–30 years old) and taught them new math calculation techniques over five days. Some received a placebo treatment. Others received transcranial random noise stimulation (tRNS), which delivers gentle electrical currents to the brain. It is painless and often imperceptible, unless you focus hard to try and sense it.
It is possible tRNS may cause long-term side effects, but in previous studies, my team assessed participants for cognitive side effects and found no evidence of any.
Participants who received tRNS were randomly assigned to receive it in one of two different brain areas. Some received it over the dorsolateral prefrontal cortex, a region critical for memory, attention, and the acquisition of new cognitive skills. Others had tRNS over the posterior parietal cortex, which processes math information, mainly once learning has been accomplished.
Before and after the training, we also scanned their brains and measured levels of key neurochemicals such as gamma-aminobutyric acid (GABA), which we showed previously, in a 2021 study, plays a role in brain plasticity and learning, including math.
Some participants started with weaker connections between the prefrontal and parietal brain regions, a biological profile that is associated with poorer learning. The study results showed these participants made significant gains in learning when they received tRNS over the prefrontal cortex.
Stimulation helped them catch up with peers who had stronger natural connectivity. This finding shows the critical role of the prefrontal cortex in learning and could help reduce educational inequalities that are grounded in neurobiology.
How does this work? One explanation lies in a principle called stochastic resonance. This is when a weak signal becomes clearer when a small amount of random noise is added.
In the brain, tRNS may enhance learning by gently boosting the activity of underperforming neurons, helping them get closer to the point at which they become active and send signals. This is a point known as the “firing threshold,” especially in people whose brain activity is suboptimal for a task like math learning.
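A toy simulation makes the principle concrete: a signal too weak to cross a firing threshold on its own becomes detectable once moderate noise is added, while heavy noise drowns it out again. This is a minimal sketch of stochastic resonance in general, not a model of tRNS or of real neurons.

```python
# Stochastic resonance demo: threshold crossings track a subthreshold signal
# best at a moderate noise level.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
signal = 0.8 * np.sin(2 * np.pi * 0.5 * t)  # weak signal, peak 0.8
THRESHOLD = 1.0                             # "firing threshold" above the peak

for noise_level in (0.0, 0.3, 2.0):
    noisy = signal + rng.normal(0, noise_level, size=t.shape)
    crossings = noisy > THRESHOLD
    # Correlation between crossings and the clean signal: zero noise gives no
    # crossings at all; moderate noise lets the signal's rhythm show through;
    # heavy noise buries it.
    corr = np.corrcoef(crossings, signal)[0, 1] if crossings.any() else 0.0
    print(f"noise = {noise_level:.1f}: {crossings.sum():4d} crossings, "
          f"correlation with signal = {corr:.2f}")
```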
It is important to note what this technique does not do. It does not make the best learners even better. That is what makes this approach promising for bridging gaps, not widening them. This form of brain stimulation helps level the playing field.
Our study focused on healthy, high-performing university students. But in similar studies on children with math learning disabilities (2017) and with attention-deficit/hyperactivity disorder (2023), my colleagues and I found tRNS seemed to improve their learning and performance in cognitive training.
I argue our findings could open a new direction in education. The biology of the learner matters, and with advances in knowledge and technology, we can develop tools that act on the brain directly, not just work around it. This could give more people the chance to get the best benefit from education.
In time, perhaps personalized, brain-based interventions like tRNS could support learners who are being left behind not because of poor teaching or personal circumstances, but because of natural differences in how their brains work.
Of course, very often education systems aren’t operating to their full potential because of inadequate resources, social disadvantage, or systemic barriers. And so any brain-based tools must go hand-in-hand with efforts to tackle these obstacles.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
New Google AI Will Work Out What 98% of Our DNA Actually Does for the Body
AlphaGenome predicts how long stretches of DNA “dark matter” affect gene expression and a host of other important properties.
Vast swathes of the human genome remain a mystery to science. A new AI from Google DeepMind is helping researchers understand how these stretches of DNA impact the activity of other genes.
While the Human Genome Project produced a complete map of our DNA, we still know surprisingly little about what most of it does. Roughly 2 percent of the human genome encodes specific proteins, but the purpose of the other 98 percent is much less clear.
Historically, scientists called this part of the genome “junk DNA.” But there’s growing recognition these so-called “non-coding” regions play a critical role in regulating the expression of genes elsewhere in the genome.
Teasing out these interactions is a complicated business. But now a new Google DeepMind model called AlphaGenome can take long stretches of DNA and make predictions about how different genetic variants will affect gene expression, as well as a host of other important properties.
“We have, for the first time, created a single model that unifies many different challenges that come with understanding the genome,” Pushmeet Kohli, a vice president for research at DeepMind, told MIT Technology Review.
The so-called “sequence to function” model uses the same transformer architecture as the large language models behind popular AI chatbots. The model was trained on public databases of experimental results testing how different sequences impact gene regulation. Researchers can enter a DNA sequence of up to one million letters, and the model will then make predictions about a wide range of molecular properties impacting the sequence’s regulatory activity.
These include things like where genes start and end, which sections of the DNA are accessible or blocked by certain proteins, and how much RNA is being produced. RNA is the messenger molecule responsible for carrying the instructions contained in DNA to the cell’s protein factories, or ribosomes, as well as regulating gene expression.
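For a sense of what feeding a DNA sequence into such a model involves, here is a minimal sketch of the one-hot encoding that sequence models typically use as input. The function is illustrative only, not DeepMind’s actual code.

```python
# One-hot encode a DNA string into a (length, 4) matrix for a sequence model.
import numpy as np

BASES = "ACGT"
BASE_INDEX = {b: i for i, b in enumerate(BASES)}

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA string; unknown bases (e.g. N) stay all-zero."""
    out = np.zeros((len(seq), len(BASES)), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in BASE_INDEX:
            out[pos, BASE_INDEX[base]] = 1.0
    return out

print(one_hot("ACGTN").shape)  # (5, 4)
# At AlphaGenome's one-million-letter limit, one input is a (1_000_000, 4) matrix.
```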
AlphaGenome can also assess the impact of mutations in specific genes by comparing variants, and it can make predictions about RNA “splicing”—a process where RNA molecules are chopped up and packaged before being sent off to a ribosome. Errors in this process are responsible for rare genetic diseases, such as spinal muscular atrophy and some forms of cystic fibrosis.
Predicting the impact of different genetic variants could be particularly useful. In a blog post, the DeepMind researchers report they used the model to predict how mutations other scientists had discovered in leukemia patients probably activated a nearby gene known to play a role in cancer.
“This system pushes us closer to a good first guess about what any variant will be doing when we observe it in a human,” Caleb Lareau, a computational biologist at Memorial Sloan Kettering Cancer Center granted early access to AlphaGenome, told MIT Technology Review.
The model will be free for noncommercial purposes, and DeepMind has committed to releasing full details of how it was built in the future. But it still has limitations. The company says the model can’t make predictions about the genomes of individuals, and its predictions don’t fully explain how genetic variations lead to complex traits or diseases. Further, it can’t accurately predict how non-coding DNA impacts genes that are located more than 100,000 letters away in the genome.
Anshul Kundaje, a computational genomicist at Stanford University in Palo Alto, California, who had early access to AlphaGenome, told Nature that the new model is an exciting development and significantly better than previous models, but not a slam dunk. “This model has not yet ‘solved’ gene regulation to the same extent as AlphaFold has, for example, protein 3D-structure prediction,” he says.
Nonetheless, the model is an important breakthrough in the effort to demystify the genome’s “dark matter.” It could transform our understanding of disease and supercharge synthetic biologists’ efforts to re-engineer DNA for our own purposes.
This Ozempic-Like Drug Slashed Migraines by Half in a Small Trial
The drug helped people who couldn’t get relief from existing treatments.
It starts with flashes of light. Zig-zag lines float across your vision. You feel a slight tingling in your cheeks and limbs. Then comes a stabbing headache so intense you forget everything—what you were doing, where you are, and how to compose yourself.
Scientists still don’t fully understand why migraines happen. Unlike dull headaches during a cold or pulsing headaches after a night of overindulgence, migraines are debilitating and strike at seemingly random times. Stress, lack of sleep, and bright lights could spark an attack—but the triggers vary between people, making them hard to predict.
Despite decades of research, few medications are available. But a surprising newcomer may change the field. Called liraglutide, the drug is in the same family as the blockbuster weight-loss drugs Ozempic and Wegovy, which have taken the world by storm.
In a small trial of 31 people with chronic migraines who didn’t respond to other treatments, liraglutide slashed the number of days they experienced migraines by over half. The drug worked remarkably fast, with most participants feeling relief within the first week.
Although the volunteers were obese—which increases the chance of migraines—subsequent analysis showed the drug lowered migraines even with minimal weight loss.
“Liraglutide may operate via different mechanisms [than weight loss], and represent a promising new approach to migraine prevention,” wrote the team.
Headache on Headache
Migraine has been a headache to study for decades. Although it affects nearly 15 percent of people worldwide, its origins in the brain remain mostly mysterious. The condition isn’t just a severe headache—people also experience nausea, dizziness, and sensitivity to light, sound, and smell.
Scientists originally thought migraines occurred because of blood vessel problems in the brain and treated the headaches with standard pain medications. But these don’t work well. Recent studies paint a far more complex picture of the condition. Migraines seem to stem from dysfunctional neural networks in certain brain regions, where neurons release messengers called neuropeptides that spark inflammation and dilate blood vessels in the brain.
These chemicals potentially increase intracranial pressure—that is, the brain pressing against the skull—and could act as a trigger for migraines.
Scientists investigating neuropeptides have already designed a migraine treatment. Called anti-CGRP drugs, these medications can be injected into the bloodstream to treat or prevent chronic migraine attacks. One controlled clinical study in 667 patients found that those who received injections experienced fewer days of head-splitting pain.
Although these drugs are effective and have relatively mild side effects, they’re expensive. This motivated the team to look for another way to lower brain pressure.
Chemical Polymath
Enter GLP-1 agonists. Most famously represented by Ozempic, these drugs skyrocketed to fame for their ability to slash weight, manage diabetes, and lower the risk of heart disease.
That’s not all they can do. The drugs target proteins called GLP-1 receptors, which are dotted on the surfaces of multiple cell types, including neurons, suggesting that beyond managing weight, they could regulate the brain too. One study found that daily injections of a GLP-1 drug slowed cognitive decline in people with mild Alzheimer’s disease. Another trial suggested the drugs could tackle alcohol addiction. How they work is still under investigation, but these clinical trials suggest GLP-1 drugs can impact the brain through chemical signaling, or perhaps pressure.
Previous studies found the drugs tinker with the amount of fluid in the brain. The organ is bathed in a nutritious soup called cerebrospinal fluid, which cushions it and removes waste. But the fluid can build up and increase intracranial pressure—potentially leading to migraines.
GLP-1 drugs might lower that pressure. An early study found a drug normalized dangerously high brain pressure in rats, like deflating an overblown balloon. A small randomized clinical trial in people with high intracranial pressure found the drug nearly restored it to normal.
These promising results led Simone Braca at the University of Naples Federico II and colleagues to test liraglutide, a GLP-1 drug, as a treatment for chronic migraine. All 31 participants in the study were roughly middle-aged, obese, and had already tried at least two other drugs without any improvement in their symptoms.
“Obesity can worsen migraine by increasing headache frequency and reducing response to standard preventive treatments,” wrote the team.
Each participant received a daily injection of the drug for 12 weeks. They also kept a “headache diary” to track their migraines and log any potential side effects.
Almost everyone reported fewer days with migraines. On average, their headaches dropped from 20 days a month to roughly 11 days. Some people reported that the days they had headaches fell by roughly 75 percent. One participant remained completely migraine-free after the first injection and for the rest of the test period. Others weren’t so lucky: Four people didn’t respond to the treatment, suggesting it’s not a universal cure-all.
Those who benefited, however, said the drug improved their quality of life in just a week, despite minor side effects. Roughly 40 percent of participants experienced nausea or constipation, both of which are common side effects for those taking GLP-1 drugs. The symptoms eventually went away.
As expected, the participants dropped a few pounds, but additional statistical analysis found the weight loss didn’t contribute to migraine frequency. This suggests the effect of GLP-1 drugs on migraine “is independent of their weight loss effect,” wrote the authors.
At the Beginning
The team is just starting to untangle how GLP-1 drugs fight migraines. Because they lower intracranial pressure, the shots might reduce the amount of the neuropeptide CGRP pumped out in the brain. Existing anti-CGRP migraine drugs lower inflammation and reduce intracranial pressure, and liraglutide might have similar effects.
GLP-1 drugs can also alter sodium and potassium levels in the brain, which control how neurons activate. Tinkering with these levels could potentially alter a neuron’s ability to fire, changing the brain’s capacity to release CGRP and other neuropeptides.
The study also has limitations worth noting. Each participant knew they were receiving the drug, so placebo effects may have colored the results. Although they experienced benefits for 12 weeks, a longer follow-up period could better gauge whether the benefits last. And because the trial only recruited people with obesity, the results may not generalize to a broader population.
The team is already planning a large randomized controlled trial. “As an exploratory pilot study, these findings provide a foundation for larger-scale trials” that examine the role GLP-1 drugs may play in migraine management, wrote the authors.
Scientists Genetically Engineer Tobacco Plants to Pump Out a Popular Cancer Drug
Newly discovered genes could make powerful drug, Taxol, cheaper and more sustainable to produce.
Stroll through ancient churchyards in England, and you’ll likely see yew trees with bright green leaves and stunning ruby red fruits guarding the graves. These coniferous trees are known in European folklore as a symbol of death and doom.
They’re anything but. The Pacific yew naturally synthesizes paclitaxel—commonly known as Taxol, a chemotherapy drug widely used to fight multiple types of aggressive cancer. In the late 1990s, it was FDA-approved for breast, ovarian, and lung cancer and, since then, has been used off-label for roughly a dozen other malignancies. It’s a modern success story showing how we can translate plant biology into therapeutic drugs.
But because Taxol is produced in the tree’s bark, harvesting the life-saving chemical kills its host. Yew trees are slow-growing with very long lives, making them an unsustainable resource. If scientists can unravel the genetic recipe for Taxol, they can recreate the steps in other plants—or even in yeast or bacteria—to synthesize the molecule at scale without harming the trees.
A new study in Nature takes us closer to that goal. Taxol is made from a precursor chemical, called baccatin III, which is just a few chemical steps removed from the final product and is produced in yew needles. After analyzing thousands of yew tree cells, the team mapped a 17-gene pathway leading to the production of baccatin III.
They added these genes to tobacco plants—which don’t naturally produce baccatin III—and found the plants readily pumped out the chemical at similar levels to yew tree needles.
The results are “a breakthrough in our understanding of the genes responsible for the biological production of this drug,” wrote Jakob Franke at Leibniz University Hannover, who was not involved in the study. “The findings are a major leap forward in efforts to secure a reliable supply of paclitaxel.”
A Garden of Medicine
Humans have long used plants as therapeutic drugs.
More than 3,500 years ago, Egyptians found that willow bark can lower fevers and reduce pain. We’ve since boosted its efficacy, and its key component is now sold in every drugstore as aspirin. Germany has approved a molecule from lavender flowers for anxiety disorders, and some compounds from licorice root may help protect the liver, according to early clinical trials.
The yew tree first caught scientists’ attention in the late 1960s, when they were screening a host of plant extracts for potential anticancer drugs. Most were duds or too toxic. Taxol stood out for its unique effects against tumors. The molecule blocks cancers from building a “skeleton-like” structure in new cells and kneecaps their ability to grow.
Taxol was a blockbuster success, but the medical community worried natural yew trees couldn’t meet clinical demand. Scientists soon began trying to artificially synthesize the drug. The discovery of baccatin III, which can be turned into Taxol after some chemical tinkering, was a game-changer in their quest. This Taxol precursor occurs in much larger quantities in the needles of various yew species and can be harvested without killing the trees. But the process requires multiple chemical steps and is highly costly.
Making either baccatin III or Taxol from scratch using synthetic biology—that is, transferring the necessary genes into other plants or microorganisms—would be a more efficient alternative and could boost production at an industrial scale. For the idea to work, however, scientists would need to trace the entire pathway of genes involved in the chemicals’ production.
Two teams recently sorted through yew trees’ nearly 50,000 genes and discovered a minimal set of genes needed to make baccatin III. While this was a “breakthrough” achievement, wrote Franke, adding the genes to tobacco plants yielded very low amounts of the chemical.
Unlike bacterial genomes, where genes that work together are often located near one another, related genes in plants are often sprinkled throughout the genome. This confetti-like organization makes it easy to miss critical genes involved in the production of chemicals.
A Holy Grail
The new study employed a simple but “highly innovative strategy,” Franke wrote.
Yew plants produce more baccatin III as a defense mechanism when under attack. By stressing yew needles, the team reasoned, they could identify which genes activate at the same time. Scientists already knew several genes involved in baccatin III production, so these could be used to fish out the genes still missing from the recipe.
The team dunked freshly clipped yew needles into plates lined with wells containing water and fertilizer—picture mini succulent trays. To these, they added stressors such as salts, hormones, and bacteria to spur baccatin III production. The setup simultaneously screened hundreds of combinations of stressors.
The team then sequenced mRNA—a proxy for gene expression—from more than 17,000 single cells to track which genes were activated together and under what conditions.
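The underlying logic is guilt by association: candidate genes are ranked by how tightly their expression tracks known pathway genes across thousands of cells. The sketch below runs that idea on synthetic data; all gene names and numbers are invented for illustration.

```python
# Co-expression screen on synthetic single-cell data.
import numpy as np

rng = np.random.default_rng(1)
n_cells = 500

# Known pathway genes share a hidden "stress response" signal across cells.
stress = rng.normal(size=n_cells)
known = [stress + rng.normal(0, 0.5, n_cells) for _ in range(3)]
candidates = {
    "candidate_A": stress + rng.normal(0, 0.6, n_cells),  # co-regulated
    "candidate_B": rng.normal(size=n_cells),               # unrelated
}

known_profile = np.mean(known, axis=0)
for name, expression in candidates.items():
    r = np.corrcoef(expression, known_profile)[0, 1]
    print(f"{name}: correlation with known pathway = {r:.2f}")
# candidate_A scores high and would be flagged for testing; candidate_B would not.
```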
The team found eight new genes involved in Taxol synthesis. One, dubbed FoTO1, was especially critical for boosting the yield of multiple essential precursors, including baccatin III. The gene has “never before been implicated in such biochemical pathways” and “would have been almost impossible to find by conventional approaches,” wrote Franke.
They spliced 17 genes essential to baccatin III production into tobacco plants, a species commonly used to study plant genetics. The upgraded tobacco produced the molecule at similar—or sometimes even higher—levels compared to yew tree needles.
From Plant to Microbes
Although the work is an important step, relying on tobacco plants has its own problems. The added genes can’t be passed down to offspring, meaning every generation has to be engineered. This makes the technology hard to scale up. Alternatively, scientists might use microbes instead, which are easy to grow at scale and already used to make pharmaceuticals.
“Theoretically, with a little more tinkering, we could really make a lot of this and no longer need the yew at all to get baccatin,” said study author Conor McClune in a press release.
The end goal, however, is to produce Taxol from beginning to end. Although the team mapped the entire pathway for baccatin III synthesis—and discovered one gene that converts it to Taxol—the recipe is still missing two critical enzymes.
Surprisingly, a separate group at the University of Copenhagen nailed down genes encoding those enzymes this April. Piecing the two studies together makes it theoretically possible to synthesize Taxol from scratch, which McClune and colleagues are ready to try.
“Taxol has been the holy grail of biosynthesis in the plant natural products world,” said study author Elizabeth Sattely.
The team’s approach could also benefit other scientists eager to explore a universe of potential new medicines in plants. Chinese, Indian, and indigenous cultures in the Americas have long relied on plants as a source of healing. Modern technologies are now beginning to unravel why.
Nikola Danaylov Keynote Speaker Reel: Why You Should Watch — and Why It Matters
How Was the Wheel Invented? Computer Simulations Reveal Its Unlikely Birth Nearly 6,000 Years Ago
The wheel changed the course of history for all of humanity. But its invention is shrouded in mystery.
Imagine you’re a copper miner in southeastern Europe in the year 3900 BCE. Day after day, you haul copper ore through the mine’s sweltering tunnels.
You’ve resigned yourself to the grueling monotony of mining life. Then one afternoon, you witness a fellow worker doing something remarkable.
With an odd-looking contraption, he casually transports the equivalent of three times his body weight on a single trip. As he returns to the mine to fetch another load, it suddenly dawns on you that your chosen profession is about to get far less taxing and much more lucrative.
What you don’t realize: You’re witnessing something that will change the course of history—not just for your tiny mining community, but for all of humanity.
An AI-generated illustration of what the original mine carts used in the Carpathian Mountains may have looked like in 3900 BCE. Image Credit: Kai James via DALL·E
Despite the wheel’s immeasurable impact, no one is certain who invented it, or when and where it was first conceived. The hypothetical scenario described above is based on a 2015 theory that miners in the Carpathian Mountains—in present-day Hungary—first invented the wheel nearly 6,000 years ago as a means to transport copper ore.
The theory is supported by the discovery of more than 150 miniature wagons by archaeologists working in the region. These pint-sized, four-wheeled models were made from clay, and their outer surfaces were engraved with a wickerwork pattern reminiscent of the basketry used by mining communities at the time. Carbon dating later revealed these wagons to be the earliest known depictions of wheeled transport.
This theory also raises a question of particular interest to me, an aerospace engineer who studies the science of engineering design. How did an obscure, scientifically naive mining society discover the wheel, when highly advanced civilizations, such as the ancient Egyptians, did not?
A Controversial Idea
It has long been assumed that wheels evolved from simple wooden rollers. But until recently no one could explain how or why this transformation took place. What’s more, beginning in the 1960s, some researchers started to express strong doubts about the roller-to-wheel theory.
After all, for rollers to be useful, they require flat, firm terrain and a path free of inclines and sharp curves. Furthermore, once the cart passes them, used rollers need to be continually brought around to the front of the line to keep the cargo moving. For all these reasons, the ancient world used rollers sparingly. According to the skeptics, rollers were too rare and too impractical to have been the starting point for the evolution of the wheel.
But a mine—with its enclosed, human-made passageways—would have provided favorable conditions for rollers. This factor, among others, compelled my team to revisit the roller hypothesis.
Key stages in the evolution of the first wheels, beginning from simple rollers and eventually arriving at a wheel-and-axle structure in which a slender axle is connected to large solid discs, or wheels, on both ends. Image Credit: Kai James
A Turning Point
The transition from rollers to wheels requires two key innovations. The first is a modification of the cart that carries the cargo. The cart’s base must be outfitted with semicircular sockets, which hold the rollers in place. This way, as the operator pulls the cart, the rollers are pulled along with it.
This innovation may have been motivated by the confined nature of the mine environment, where having to periodically carry used rollers back around to the front of the cart would have been especially onerous.
The discovery of socketed rollers represented a turning point in the evolution of the wheel and paved the way for the second and most important innovation. This next step involved a change to the rollers themselves. To understand how and why this change occurred, we turned to physics and computer-aided engineering.
Simulating the Wheel’s Evolution
To begin our investigation, we created a computer program designed to simulate the evolution from a roller to a wheel. Our hypothesis was that this transformation was driven by a phenomenon called “mechanical advantage.” This same principle allows pliers to amplify a user’s grip strength by providing added leverage. Similarly, if we could modify the shape of the roller to generate mechanical advantage, this would amplify the user’s pushing force, making it easier to advance the cart.
Our algorithm worked by modeling hundreds of potential roller shapes and evaluating how each one performed, both in terms of mechanical advantage and structural strength. The latter was used to determine whether a given roller would break under the weight of the cargo. As predicted, the algorithm ultimately converged upon the familiar wheel-and-axle shape, which it determined to be optimal.
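To give a flavor of how such a search works, here is a toy version in Python. It is not our research code: the load, wheel radius, and wood-strength values are invented, and the real study evaluated detailed structural models rather than a single shape parameter. But it captures the loop of proposing a tweaked design, rejecting any that would snap, and keeping those that offer more mechanical advantage.

```python
import random

WHEEL_RADIUS = 0.25    # meters; radius of the outer discs (assumed)
LOAD = 2000.0          # newtons of cargo weight per roller (assumed)
MAX_STRESS = 40e6      # pascals; rough failure stress for hardwood (assumed)

def mechanical_advantage(neck_radius):
    # Wheel-and-axle leverage: the thinner the neck, the more the
    # operator's pushing force is amplified.
    return WHEEL_RADIUS / neck_radius

def bending_stress(neck_radius):
    # Crude beam model: stress in a circular shaft grows as
    # load / radius^3 (geometric constants folded into LOAD).
    return LOAD / neck_radius**3

def fitness(neck_radius):
    # Designs that would snap under the cargo score zero.
    if bending_stress(neck_radius) > MAX_STRESS:
        return 0.0
    return mechanical_advantage(neck_radius)

# Hill-climbing: start from a uniform roller (neck as thick as the
# wheel) and keep random perturbations that improve the design.
neck = WHEEL_RADIUS
for _ in range(10_000):
    candidate = neck * (1 + random.uniform(-0.05, 0.05))
    if 0 < candidate <= WHEEL_RADIUS and fitness(candidate) > fitness(neck):
        neck = candidate

print(f"neck radius: {neck * 100:.1f} cm, "
      f"mechanical advantage: {mechanical_advantage(neck):.1f}x")
```

Under these made-up numbers, the search settles on the thinnest neck that will not break under the load: a slender axle capped by large discs.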
A computer simulation of the evolution from a roller to a wheel-and-axle structure. Each image represents a design evaluated by the algorithm. The search ultimately converges upon the familiar wheel-and-axle design. Image Credit: Kai James
During the execution of the algorithm, each new design performed slightly better than its predecessor. We believe a similar evolutionary process played out with the miners 6,000 years ago.
It is unclear what initially prompted the miners to explore alternative roller shapes. One possibility is that friction at the roller-socket interface caused the surrounding wood to wear away, leading to a slight narrowing of the roller at the point of contact. Another theory is that the miners began thinning out the rollers so that their carts could pass over small obstructions on the ground.
Either way, thanks to mechanical advantage, this narrowing of the axle region made the carts easier to push. As time passed, better-performing designs were repeatedly favored over the others, and new rollers were crafted to mimic these top performers.
Consequently, the rollers became more and more narrow, until all that remained was a slender bar capped on both ends by large discs. This rudimentary structure marks the birth of what we now refer to as “the wheel.”
According to our theory, there was no precise moment at which the wheel was invented. Rather, just like the evolution of species, the wheel emerged gradually from an accumulation of small improvements.
This is just one of the many chapters in the wheel’s long and ongoing evolution. More than 5,000 years after the contributions of the Carpathian miners, a Parisian bicycle mechanic invented radial ball bearings, which once again revolutionized wheeled transportation.
Ironically, ball bearings are conceptually identical to rollers, the wheel’s evolutionary precursor. Ball bearings form a ring around the axle, creating a rolling interface between the axle and the wheel hub, thereby circumventing friction. With this innovation, the evolution of the wheel came full circle.
This example also shows how the wheel’s evolution, much like its iconic shape, traces a circuitous path—one with no clear beginning, no end, and countless quiet revolutions along the way.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post How Was the Wheel Invented? Computer Simulations Reveal Its Unlikely Birth Nearly 6,000 Years Ago appeared first on SingularityHub.
Scientists Are Smuggling Large Drugs Into the Brain—Opening a New World of Possible Therapies
New molecular shuttles can carry antibodies and enzymes to treat cancer and other brain diseases.
Our brain is a fortress. Its delicate interior is completely surrounded by a protective wall called the blood-brain barrier.
True to its name, the barrier separates contents in the blood and the brain. It keeps bacteria and other pathogens in the blood away from delicate brain tissue, while allowing oxygen and some nutrients to flow through.
The protection comes at a cost: Some medications, made up of large molecules, can’t enter the brain. These include antibodies that block the formation of protein clumps in Alzheimer’s disease, immunotherapies that destroy deadly brain tumors, and missing enzymes that could rescue inherited developmental diseases.
For decades, scientists have tried to smuggle these larger drugs into the brain without damaging the barrier. Now, thanks to a new crop of molecular shuttles, they’re on the brink of success. A top contender is based on transferrin, a protein that supplies iron to the brain—an element needed for myriad critical chemical reactions to ensure healthy brain function.
Although still in its infancy, the shuttle has already changed lives. One example is Hunter syndrome, a rare and incurable genetic disease in which brain cells lack a crucial enzyme. Kids with the syndrome begin losing their language, hearing, and movement as young toddlers. In severe cases, their lives are tragically cut short at 10 to 20 years of age.
Early results in clinical trials using the transferrin shuttle are showing promise. After the missing enzyme was delivered into the brains of kids and adults with Hunter syndrome via a shot into a vein, patients gradually regained their ability to speak, walk, and run around. Without the shuttle, the enzyme is too big to pass through the blood-brain barrier.
If the shuttle passes further safety testing and can be adapted for different cargo, it could potentially ferry a wide range of large biotherapeutics—including gene therapies—directly into the brain with a simple jab in the arm. From cancer to neurodegenerative disorders and other common brain diseases such as stroke, it would open a new world of therapeutic possibilities.
Hitching a Ride
We often talk about the body and the brain as separate entities. In a way, they are. The blood-brain barrier, a tightly woven sheet of cells that lines vessels throughout the brain, regulates which molecules can enter. The cells are built like a brick wall—the molecules holding them together are literally called “tight junctions.”
But they’re not impenetrable. Small molecules, such as oxygen and caffeine, can drift past the barrier, giving us that morning hit of energy with a good cup of coffee. Once inside the brain, these molecules can easily spread throughout the organ to feed energy-hungry tissues. Other molecules, such as glucose (sugar) or iron, require special protein “transporters” dotted along the surfaces of the barrier cells to enter.
Transporters are very specific about their cargo and can usually only grab onto one type of molecule. Once loaded up, the proteins pull the molecule into the interior of barrier cells by forming a fatty bubble around it, like a spaceship. The ship drifts across the barrier—with cargo protected inside—and releases its contents into the brain. In other words, it’s possible to temporarily open the barrier and transport larger molecules from the blood to the brain.
Some clever ideas are already being tested.
One of these is inspired by viruses that naturally infect the brain, such as HIV. After examining HIV’s protein sequence, scientists found a short—and safe—section called TAT that helps the virus tunnel through the barrier. They can then attach small peptides (just a dozen or so protein “letters” long) to the TAT shuttle. Clinical trials are already underway using the system to reduce damage from stroke with just an injection. But the tiny carrier struggles with larger proteins such as antibodies or enzymes.
More recently, scientists took a cue from the transport mechanisms already embedded in the barrier—that is, how and why it lets some larger proteins in. The idea came from trials for Alzheimer’s disease. Breaking up the protein clumps characteristic of the disease with antibodies has shown promise in slowing symptoms, but the antibodies are hard to deliver into the brain with an intravenous shot.
In most cases, roughly 0.1 percent of the treatment actually penetrates the brain, meaning that much higher doses are needed, adding expense and increasing the risk of side effects. The antibodies also crowd around blood vessels inside the brain rather than moving deeper.
Iron Shuttle
One transporter, in particular, caught scientists’ eyes: transferrin. This large protein—picture a four-leaf clover—captures iron in the blood and then attaches itself to the barrier. Transferrin’s “stem” acts like a beacon, telling the barrier the cargo is safe to be shuttled into the brain. Barrier cells encapsulate transferrin for the voyage across and release it on the other side.
Rather than trying to engineer the entire protein, scientists synthesized only its stem—the most important part—which can then be connected to almost any large cargo. Multiple studies have found that the shuttle is relatively safe and doesn’t jeopardize normal iron processing in the brain. Cargos retained their function after the journey and once inside the brain.
Across the Divide
Transferrin-based shuttles are being investigated for a wide range of brain disorders.
In Hunter syndrome, a shuttle carrying a missing enzyme has shown early success. The therapy is effective, in part, because the shuttles end up inside a cell’s waste factories, or lysosomes. These acid-filled pouches are natural parking spots for the shuttle and its cargo—they’re also where the enzyme needs to go, making the condition a perfect use case.
Scientists are also eyeing other brain disorders such as Alzheimer’s disease, in which toxic clumps of a protein called amyloid beta gradually build up inside the brain. A shuttle could increase the number of therapeutic antibodies reaching the brain, making the therapy more efficient. Other teams are testing the method as a way to carry cancer-destroying antibodies that target brain tumors stemming from metastasized breast cancer.
It’s still early days for these brain shuttles, but efforts are underway to engineer other blood-brain barrier transporters into carriers too. These have different properties compared to transferrin-based ones. Some release their cargo more slowly, for example, making them potentially useful for slow-release drugs with longer therapeutic effects.
Shuttles that can carry gene therapies or gene editors could also change how we treat inherited neurological diseases. Transferrin-based shuttles have already carried antisense oligonucleotides—molecules that block gene function—into the brains of mice and macaque monkeys and delivered functional CRISPR components into mice.
With increasingly powerful AI models that can predict and dream up protein sequences, researchers could further develop more efficient protein shuttles based on natural ones—massively expanding what’s possible for treating brain diseases.
The post Scientists Are Smuggling Large Drugs Into the Brain—Opening a New World of Possible Therapies appeared first on SingularityHub.
The Dream of an AI Scientist Is Closer Than Ever
The number of scientific papers relying on AI has quadrupled, and the scope of problems AI can tackle expands by the day.
Modern artificial intelligence is a product of decades of painstaking scientific research. Now, it’s starting to pay that effort back by accelerating progress across academia.
Ever since the emergence of AI as a field of study, researchers have dreamed of creating tools smart enough to accelerate humanity’s endless drive to acquire new knowledge. With the advent of deep learning in the 2010s, this goal finally became a realistic possibility.
Between 2012 and 2022, the proportion of scientific papers relying on AI in some way quadrupled to almost 9 percent. Researchers are using neural networks to analyze data, conduct literature reviews, or model complex processes across every scientific discipline. And as the technology advances, the scope of problems they can tackle is expanding by the day.
The poster boy for AI’s use in science is undoubtedly Google DeepMind’s AlphaFold, whose inventors won the 2024 Nobel Prize in Chemistry. The model used advances in transformers—the architecture that powers large language models—to solve the “protein folding problem” that had bedeviled scientists for decades.
A protein’s structure determines its function, but previously the only way to discover its shape was with complex imaging techniques like X-ray crystallography and cryo-electron microscopy. AlphaFold, in comparison, could predict the shape of a protein from nothing more than the sequence of amino acids making it up, something computer scientists had been trying and failing to do for years.
This made it possible to predict the shape of every protein known to science in just two years, a feat with potentially transformative impact on biomedical research. AlphaFold 3, released in 2024, goes even further. It can predict both the structure and interactions of proteins, as well as DNA, RNA, and other biomolecules.
Google has also turned its AI loose on another area of the life sciences, working with Harvard researchers to create the most detailed map of human brain connections to date. The team took ultra-thin slices from a 1-millimeter cube of human brain and used AI-based imaging technology to map the roughly 50,000 cells and 150 million synaptic connections within.
This is by far the most detailed “connectome” of the human brain produced to date, and the data is now freely available, providing scientists a vital tool for exploring neuronal architecture and connectivity. This could boost our understanding of neurological disorders and potentially provide insights into core cognitive processes like learning and memory.
AI is also revolutionizing the field of materials science. In 2023, Google DeepMind released a graph neural network called GNoME that predicted 2.2 million novel inorganic crystal structures, including 380,000 stable ones that could potentially form the basis of new technologies.
Not to be outdone, other big AI developers have also jumped into this space. Last year, Meta released and open sourced its own transformer-based materials discovery models and, crucially, a dataset with more than 110 million materials simulations that it used to train them, which should allow other researchers to build their own materials science AI models.
Earlier this year Microsoft released MatterGen, which uses a diffusion model—the same architecture used in many image and video generation models—to produce novel inorganic crystals. After fine-tuning, the researchers showed it could be prompted to produce materials with specific chemical, mechanical, electronic, and magnetic properties.
One of AI’s biggest strengths is its ability to model systems far too complex for conventional computational techniques. This makes it a natural fit for weather forecasting and climate modeling, which currently rely on enormous physical simulations running on supercomputers.
Google DeepMind’s GraphCast model was the first to show the promise of the approach, using graph neural networks to generate 10-day forecasts in one minute and at higher accuracy than existing gold-standard approaches that take several hours.
AI forecasting is so effective that it has already been deployed by the European Center for Medium-Range Weather Forecasts, whose Artificial Intelligence Forecasting System went live earlier this year. The model is faster, 1,000 times more energy efficient, and roughly 20 percent more accurate than its predecessor.
Microsoft has created what it calls a “foundation model for the Earth system” named Aurora that was trained on more than a million hours of geophysical data. It outperforms existing approaches at predicting air quality, ocean waves, and the paths of tropical cyclones while using orders of magnitude less computation.
AI is also contributing to fundamental discoveries in physics. When the Large Hadron Collider smashes particle beams together, it produces millions of collisions a second. Sifting through all this data to find interesting phenomena is a monumental task, but now researchers are turning to AI to do it for them.
Similarly, researchers in Germany have been using AI to pore over gravitational wave data for signs of neutron star mergers. This helps scientists detect mergers in time to point a telescope at them.
Perhaps most exciting, though, is the promise of AI taking on the role of scientist itself. Combining lab automation technology, robotics, and machine learning, it’s becoming possible to create “self-driving labs.” These take a high-level objective from a researcher, such as achieving a particular yield from a chemical reaction, and then autonomously run experiments until they hit that goal.
Others are going further and actually involving AI in the planning and design of experiments. In 2023, Carnegie Mellon University researchers showed that their AI “Coscientist,” powered by OpenAI’s GPT-4, could autonomously plan and carry out the chemical synthesis of known compounds.
Google has created a multi-agent system powered by its Gemini 2.0 reasoning model that can help scientists generate hypotheses and propose new research projects. And another “AI scientist” developed by Sakana AI wrote a machine learning paper that passed the peer-review process for a workshop at a prestigious AI conference.
Exciting as all this is, though, AI’s takeover of science could have downsides. Neural networks are black boxes whose internal workings are hard to decipher, which can make results challenging to interpret. And many researchers are not familiar enough with the technology to catch common pitfalls that can distort results.
Nonetheless, the incredible power of these models to crunch through data and model things at scales far beyond human comprehension makes them a vital tool. With judicious application, AI could massively accelerate progress in a wide range of fields.
The post The Dream of an AI Scientist Is Closer Than Ever appeared first on SingularityHub.
John von Neumann and the Original Vision of the Technological Singularity
Cancer-Killing Immune Cells Can Now Be Engineered in the Body—With a Vaccine-Like Shot of mRNA
Scientists are converting immune cells into super-soldiers that can hunt down and destroy cancer cells.
CAR T therapy has been transformative in the battle against deadly cancers. Scientists extract a patient’s own immune cells, genetically engineer them to target a specific type of cancer, and infuse the boosted cells back into the body to hunt down their prey.
Six therapies have been approved by the FDA for multiple types of blood cancer. Hundreds of other clinical trials are in the works to broaden the immunotherapy’s scope. These include trials aimed at recurrent cancers and autoimmune diseases, such as lupus and systemic sclerosis, in which the body’s immune system destroys its own organs.
But making CAR T cells is a long and expensive process. It requires genetic tinkering in the lab, and patients must first have their existing immune cells depleted with chemotherapy to make room for the engineered ones. While effective, the treatment takes a massive toll on the body and mind.
It would be faster and potentially more effective to make CAR T cells inside the body. Previous studies have tried to shuttle genes that would do just that into immune cells using viruses or fatty bubbles. But these carriers tend to accumulate in the liver rather than reach their target cells. The approach could also result in hyper-aggressive cells that spark life-threatening immune responses.
Inspired by Covid-19 vaccines, a new study shuttled a different kind of genetic cargo into the body. Instead of gene editors, the method turned to mRNA, the biomolecule that carries DNA’s instructions to a cell’s protein-making machinery. The new method is more targeted—skipping the liver and heading straight for immune cells—and doesn’t change a cell’s DNA blueprint, potentially making it safer than previous approaches. In rodents and monkeys, a few jabs converted T cells to CAR T cells within hours, and these went on to kill cancer cells. The effects “reset” the animals’ immune systems and lasted roughly a month with few side effects.
“The achievement has implications for treating” certain cancers and autoimmune disorders, and moves “immunotherapy with CAR T cells to wider clinical use,” wrote Vivek Peche and Stephen Gottschalk at St. Jude Children’s Research Hospital, who were not involved in the study.
Immune Civil War
Our immune system is a double-edged sword. When working in tandem, immune cells fight off bacteria and viruses and nip cancer cells in the bud. But sometimes one immune-cell type, called a B cell, goes rogue.
Normally, B cells produce antibodies to ward off pathogens. But they can also turn into multiple types of aggressive blood cancer and wreak havoc. Cancerous versions of these sneaky cells develop ways to escape the body’s other immune cell types—like T cells, which are constantly on the lookout for unwanted intruders.
Cancer cells aren’t completely invisible. Tumors have unique proteins dotted all over their surfaces, a sort of “fingerprint” that separates them from healthy cells. In classic CAR T therapy, scientists extract T cells from the patient and genetically engineer them to produce protein “hooks”—dubbed chimeric antigen receptors (CAR)—that grab onto those cancer cell proteins. When infused back into the patient, the cells readily hunt down and destroy cancer cells.
CAR T therapy has saved lives. But it has drawbacks. Genetically engineering cells to produce the hook protein could damage their genome and potentially trigger secondary tumors. The manufacturing process also takes time—sometimes too long for the patient to survive.
In My Blood
An alternative is to directly convert a person’s T cells into CAR T cells inside their body with a shot. There have already been successes using DNA-carrying viruses.
The team wondered if they could achieve the same results with mRNA. Unlike DNA, mRNA molecules don’t integrate into the genome, reducing “the risk of damaging DNA in T cells,” wrote Peche and Gottschalk. The idea is similar to how mRNA vaccines for Covid work. These vaccines are loaded with mRNA instructions for fighting off the virus. Once inside cells, these mRNA snippets direct the cells to produce proteins that trigger an immune defense. But mRNA can help cells battle other intruders too, like bacteria or even cancer.
There’s a problem, though. The fatty shuttles used to deliver the mRNA cargo—known as lipid nanoparticles—tend to collect in liver cells, not T cells. In the new study, the team tweaked the shuttles so they would be drawn toward T cells instead. Compared to conventional nanoparticles, these rarely lingered in the liver and more often found their targets.
Each shuttle contained a soup of mRNA molecules encoding a CAR—the “super soldier” protein that helps T cells seek and destroy cancer cells.
When injected into the bloodstreams of mice, rats, and monkeys, the shot converted T cells into CAR Ts in their blood, spleen, and lymph nodes in a few hours, suggesting that the mRNA instructions worked as expected. The therapy went on to destroy cancers in mice with B cell leukemia and lowered B cell levels in monkeys, with effects lasting at least a month.
The shots also seemed to “reset” the body’s immune system. In monkeys, doses of CAR T initially tanked their B cell levels as expected. But these levels eventually rebounded to normal within weeks—with no signs of the new cells turning cancerous.
Compared to clinical studies that used CAR T cells manufactured in labs, these results “should be sufficient to bring about substantial therapeutic benefits,” wrote Peche and Gottschalk.
Not Throwing Away My Shot
The study is the latest to engineer CAR T cells inside the body. But there are caveats.
Compared to directly tinkering with T cell DNA, mRNA is theoretically safer because it doesn’t change the cell’s genetic blueprint. But the method requires functional T cells with the metabolic capacity to act on the added molecular instructions—which isn’t always possible in certain cancers or other diseases, because the cells break down.
However, the system has promise for a myriad of other diseases. Because mRNA doesn’t last long inside the body, it could lower the risk of side effects while still having long-term impact. And because of the B cell “reset,” it’s possible for the immune system to rebuild itself and once again fight off pathogens.
The team is planning a Phase 1 clinical trial to test the therapy. A similar method could also be used to strengthen other immune cell types or ferry other kinds of therapeutic mRNA into the body. It’s “engineering immunotherapy from within,” wrote Peche and Gottschalk.
The post Cancer-Killing Immune Cells Can Now Be Engineered in the Body—With a Vaccine-Like Shot of mRNA appeared first on SingularityHub.
Will AI Take Your Job? It Depends on These 4 Key Advantages AI Has Over Humans
This framework can help you understand where AI provides value.
If you’ve worried that AI might take your job, deprive you of your livelihood, or maybe even replace your role in society, it probably feels good to see the latest AI tools fail spectacularly. If AI recommends glue as a pizza topping, then you’re safe for another day.
But the fact remains that AI already has definite advantages over even the most skilled humans, and knowing where these advantages arise—and where they don’t—will be key to adapting to the AI-infused workforce.
AI will often not be as effective as a human doing the same job. It won’t always know more or be more accurate. And it definitely won’t always be fairer or more reliable. But it may still be used whenever it has an advantage over humans in one of four dimensions: speed, scale, scope, and sophistication. Understanding these dimensions is the key to understanding AI-human replacement.
Speed
First, speed. There are tasks that humans are perfectly good at but are not nearly as fast as AI. One example is restoring or upscaling images: taking pixelated, noisy, or blurry images and making a crisper and higher-resolution version. Humans are good at this; given the right digital tools and enough time, they can fill in fine details. But they are too slow to efficiently process large images or videos.
AI models can do the job blazingly fast, a capability with important industrial applications. AI-based software is used to enhance satellite and remote sensing data, to compress video files, to make video games run better with cheaper hardware and less energy, to help robots make the right movements, and to model turbulence to help build better internal combustion engines.
Real-time performance matters in these cases, and the speed of AI is necessary to enable them.
Scale
The second dimension of AI’s advantage over humans is scale. AI will increasingly be used in tasks that humans can do well in one place at a time, but that AI can do in millions of places simultaneously. A familiar example is ad targeting and personalization. Human marketers can collect data and predict what types of people will respond to certain advertisements. This capability is important commercially; advertising is a trillion-dollar market globally.
AI models can do this for every single product, TV show, website, and internet user. This is how the modern ad-tech industry works. Real-time bidding markets price the display ads that appear alongside the websites you visit, and advertisers use AI models to decide when they want to pay that price—thousands of times per second.
Scope
Next, scope. AI can be advantageous when it does more things than any one person could, even when a human might do better at any one of those tasks. Generative AI systems such as ChatGPT can engage in conversation on any topic, write an essay espousing any position, create poetry in any style and language, write computer code in any programming language, and more. These models may not be superior to skilled humans at any one of these things, but no single human could outperform top-tier generative models across them all.
It’s the combination of these competencies that generates value. Employers often struggle to find people with talents in disciplines such as software development and data science who also have strong prior knowledge of the employer’s domain. Organizations are likely to continue to rely on human specialists to write the best code and the best persuasive text, but they will increasingly be satisfied with AI when they just need a passable version of either.
Sophistication
Finally, sophistication. AIs can consider more factors in their decisions than humans can, and this can endow them with superhuman performance on specialized tasks. Computers have long been used to keep track of a multiplicity of factors that compound and interact in ways more complex than a human could trace. Chess-playing computer systems of the 1990s, such as Deep Blue, succeeded by thinking a dozen or more moves ahead.
Modern AI systems use a radically different approach: Deep learning systems built from many-layered neural networks take account of complex interactions—often many billions—among many factors. Neural networks now power the best chess-playing models and most other AI systems.
Chess is not the only domain where eschewing conventional rules and formal logic in favor of highly sophisticated and inscrutable systems has generated progress. The stunning advance of AlphaFold 2, the AI model of structural biology whose creators Demis Hassabis and John Jumper were recognized with the Nobel Prize in Chemistry in 2024, is another example.
This breakthrough replaced traditional physics-based systems for predicting how sequences of amino acids would fold into three-dimensional shapes with a 93 million-parameter model, even though it doesn’t account for physical laws. That lack of real-world grounding is not desirable: No one likes the enigmatic nature of these AI systems, and scientists are eager to understand better how they work.
But the sophistication of AI is providing value to scientists, and its use across scientific fields has grown exponentially in recent years.
Context Matters
Those are the four dimensions where AI can excel over humans. Accuracy still matters. You wouldn’t want to use an AI that makes graphics look glitchy or targets ads randomly. Yet accuracy isn’t the differentiator: The AI doesn’t need superhuman accuracy. It’s enough for AI to be merely good and fast, or adequate and scalable. Increasing scope often comes with an accuracy penalty, because AI can generalize poorly to truly novel tasks. The 4 S’s are sometimes at odds. With a given amount of computing power, you generally have to trade off scale for sophistication.
Even more interestingly, when an AI takes over a human task, the task can change. Sometimes the AI is just doing things differently. Other times, AI starts doing different things. These changes bring new opportunities and new risks.
For example, high-frequency trading isn’t just computers trading stocks faster; it’s a fundamentally different kind of trading that enables entirely new strategies, tactics, and associated risks. Likewise, AI has developed more sophisticated strategies for the games of chess and Go. And the scale of AI chatbots has changed the nature of propaganda by allowing artificial voices to overwhelm human speech.
It is this “phase shift,” when changes in degree may transform into changes in kind, where AI’s impact on society is likely to be most keenly felt. All of this points to the places AI can have a positive impact. When a system has a bottleneck related to speed, scale, scope, or sophistication, or when one of these factors poses a real barrier to accomplishing a goal, it makes sense to think about how AI could help.
Equally, when speed, scale, scope, and sophistication are not primary barriers, it makes less sense to use AI. This is why AI auto-suggest features for short communications such as text messages can feel so annoying. They offer little speed advantage and no benefit from sophistication, while sacrificing the sincerity of human communication.
Many deployments of customer service chatbots also fail this test, which may explain their unpopularity. Companies invest in them because of their scalability, and yet the bots often become a barrier to support rather than a speedy or sophisticated problem solver.
Where the Advantage Lies
Keep this in mind when you encounter a new application for AI or consider AI as a replacement for or an augmentation to a human process. Looking for bottlenecks in speed, scale, scope, and sophistication provides a framework for understanding where AI provides value, and equally where the unique capabilities of the human species give us an enduring advantage.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The post Will AI Take Your Job? It Depends on These 4 Key Advantages AI Has Over Humans appeared first on SingularityHub.
Above the Law: Big Tech’s Bid to Block AI Oversight
This Week’s Awesome Tech Stories From Around the Web (Through June 21)
This AI Model Never Stops Learning (Will Knight | Wired)
“The work is a step toward building artificial intelligence models that learn continually—a long-standing goal of the field and something that will be crucial if machines are to ever more faithfully mimic human intelligence. In the meantime, it could give us chatbots and other AI tools that are better able to incorporate new information including a user’s interests and preferences.”
Tech: SoftBank Proposes $1 Trillion Facility for AI and Robotics (Rocket Drew | The Information)
“The project aims to replicate the thriving tech hub of Shenzhen, China, possibly by manufacturing AI-powered industrial robots. To this end, SoftBank has compiled a list of robotics companies in its portfolio, such as Agile Robots SE, that could set up shop in the Arizona hub, according to the report.”
Biotech: The FDA Just Approved a Long-Lasting Injection to Prevent HIV (Jorge Garay | Wired)
“Clinical trials have shown that six-monthly injections of lenacapavir are almost 100 percent protective against becoming infected with HIV. But big questions remain over the drug’s affordability.”
Computing: Microsoft Lays Out Its Path to Useful Quantum Computing (John Timmer | Ars Technica)
“While [Microsoft is] describing the [error-correction] scheme in terms of mathematical proofs and simulations, it hasn’t shown that it works using actual hardware yet. But one of its partners, Atom Computing, is accompanying the announcement with a description of how its machine is capable of performing all the operations that will be needed.”
Computing: Meta’s Oakley Smart Glasses Have 3K Video—Watch Out Ray-Ban (Verity Burns | Wired)
“[The glasses include] a 50 percent longer battery life, with a fully charged pair of Oakley Meta HSTN lasting up to eight hours of typical use compared with four hours on the Ray-Ban Meta. …That’s perhaps all the more surprising when you hear that the Oakley Meta also have a higher resolution camera, allowing you to share video in 3K, up from full HD in the Ray-Ban Metas.”
Artificial Intelligence: Study: Meta AI Model Can Reproduce Almost Half of Harry Potter Book (Timothy B. Lee | Ars Technica)
“In its December 2023 lawsuit against OpenAI, The New York Times Company produced dozens of examples where GPT-4 exactly reproduced significant passages from Times stories. In its response, OpenAI described this as a ‘fringe behavior’ and a ‘problem that researchers at OpenAI and elsewhere work hard to address.’ But is it actually a fringe behavior? And have leading AI companies addressed it?”
Robotics: Waymo Has Set Its Robotaxi Sights on NYC (Kirsten Korosec | TechCrunch)
“Of course, New York City has other challenges beyond regulations. The city is chock-a-block with cars, trucks, delivery vans, bicycles, buses, and, importantly, people, all of whom are scuttling about. San Francisco, one of the markets that Waymo operates in today, is also a bustling city with many of the same challenges. NYC takes that complexity to a factor of 10.”
Biotechnology: Scientists Discover the Key to Axolotls’ Ability to Regenerate Limbs (Anna Lagos | Wired)
“‘The axolotl has cellular properties that we want to understand at the deepest level,’ says Monaghan. ‘While regeneration of a complete human limb is still in the realm of science fiction, each time we discover a piece of this genetic blueprint, such as the role of CYP26B1 and Shox, we move one step closer to understanding how to orchestrate complex tissue repair in humans.'”
Space: SpaceX’s Next Starship Just Blew Up on Its Test Stand in South Texas (Stephen Clark | Ars Technica)
“SpaceX’s next Starship rocket exploded during a ground test in South Texas late Wednesday, dealing another blow to a program already struggling to overcome three consecutive failures in recent months. The late-night explosion at SpaceX’s rocket development complex in Starbase, Texas, destroyed the bullet-shaped upper stage that was slated to launch on the next Starship test flight.”
Tech: The Entire Internet Is Reverting to Beta (Matteo Wong | The Atlantic)
“[Generative AI] tools can be legitimately helpful for many people when used in a measured way, with human verification; I’ve reported on scientific work that has advanced as a result of the technology, including revolutions in neuroscience and drug discovery. But these success stories bear little resemblance to the way many people and firms understand and use the technology; marketing has far outpaced innovation.”
Future: The Future of Weather Forecasting Is Hyperlocal (Thomas E. Weber | The Wall Street Journal)
“NOAA’s High-Resolution Rapid Refresh system (HRRR, usually pronounced “hurr”), can zero in on an area of 1.8 miles. Contrast that with the Comprehensive Bespoke Atmospheric Model, or CBAM, developed by Tomorrow.io, a hyperlocal weather startup. Tomorrow.io says the CBAM can be run at resolutions as small as tens of meters, effectively predicting how the weather will differ from one city block to another.”
Space: Mars Trips Could Be Cut in Half With Nuclear Power (Mark Thompson | IEEE Spectrum)
“Here’s how it works: Instead of burning fuel with oxygen, a nuclear reactor heats up a propellant like hydrogen. The super-heated propellant then shoots out of the rocket nozzle, pushing the spacecraft forward. This method is much more efficient than chemical rockets.”
The post This Week’s Awesome Tech Stories From Around the Web (Through June 21) appeared first on SingularityHub.
‘Cyborg Tadpoles’ With Super Soft Neural Implants Shine Light on Early Brain Development
Tofu-like probes capture the activity of individual neurons in tadpole embryos as they grow.
Early brain development is a biological black box. While scientists have devised multiple ways to record electrical signals in adult brains, these techniques don’t work for embryos.
A team at Harvard has now managed to peek into the box—at least when it comes to amphibians and rodents. They developed an electrical array using a flexible, tofu-like material that seamlessly embeds into the early developing brain. As the brain grows, the implant stretches and shifts, continuously recording individual neurons without harming the embryo.
“There is just no ability currently to measure neural activity during early neural development. Our technology will really enable an uncharted area,” said study author Jia Liu in a press release.
The mesh array not only records brain activity, it can also stimulate nerve regeneration in axolotl embryos with electrical zaps. Axolotls are adorable amphibians known for their ability to regrow tissues, and research on them could inspire ideas for how we might heal damaged nerves, such as those in spinal cord injuries.
Amphibians and rodents have much smaller brains than ours. Due to obvious ethical concerns, the team didn’t try the device in human embryos. But they did use it to capture single-neuron activity in brain organoids. These “mini-brains” are derived from human cells and loosely mimic developing brains. Studying them could help pin down genes or other molecular changes specific to neurodevelopmental disorders. “Autism, bipolar disorder, schizophrenia—these all could happen at early developmental stages,” said Liu.
Probing the Brain
Recording electrical chatter from the developing brain allows scientists to understand how neurons self-assemble into a mighty computing machine capable of learning and cognition. But capturing these short sparks of activity throughout the brain is difficult.
Current technologies mostly focus on mature brains. Functional magnetic resonance imaging, for example, is used to scan the entire brain as it computes specific tasks. This doesn’t require surgery and can help scientists stitch together brain-wide activity maps. But the approach lacks resolution and is laggy.
Molecular imaging is another way to record brain activity. Here, animals such as zebrafish are genetically engineered to grow neurons that light up under the microscope when activated. These provide real-time insight into each individual neuron’s activity. But the method only works for translucent animals.
Neural implants are the newest kid on the block. These microelectrode arrays are directly implanted into brain tissue and can capture electrical signals from large populations of neurons with millisecond precision. With the help of AI, such implants have already restored speech and movement and untangled neural networks for memory and cognition in people.
But they’re unsuitable for developing brains.
“The brain is very soft, like a piece of tofu. Traditional electronics are very rigid; when you put them into the brain, any movement of the electronics can cut the brain at the micrometer scale,” Liu told Nature. Over time, the devices cause scarring, which degrades the signals.
The problem is acute during development, as the brain dramatically changes shape and size. Rigid probes can’t continuously monitor single neurons as the brain grows and could damage the nascent organ.
Opening the Box
Picture the brain and a walnut-shaped structure etched with grooves likely comes to mind. But the organ begins life as a flat single-cell layer in the embryo.
Called the neural plate, this layer of cells lines the embryo’s surface before eventually folding into a tube-like shape. As brain cells expand and migrate, they generate tissues that eventually fold into the brain’s final 3D structure. This dimensional transition makes it impossible to monitor single neurons with rigid probes. But stretchable electronics may do the job.
In 2015, Liu and colleagues developed an ultra-flexible probe that could integrate into adult rodent brains and human brain organoids. The mesh-like implant had a stiffness similar to brain tissue and minimized scarring. The team used a class of materials called fluorinated elastomers, which are stretchy like gum but have the toughness of Teflon—and are 10,000 times softer than conventional flexible implants made of plastic-like materials. Implants made of the material captured single-neuron activity in mice for months and were relatively easy to manufacture.
Because of the probe’s stretchiness, the team wondered if it could also monitor developing embryonic brains as they folded up from 2D to 3D. They picked tadpoles as a test case because the embryos grow fast and are easy to monitor.
The first try failed. “It turns out tadpole embryos are much softer than human stem cell-derived tissue,” said Liu. “We ultimately had to change everything, including developing new electronic materials.”
The team came up with a new meshy material that can be embedded with electrodes and is less than a micrometer thick. They then fabricated a “holding” device to support tadpole embryos and gently placed the mesh onto the tadpoles’ neural plates during early brain formation.
“You need a very stable hand” for the procedure, said Liu.
The tadpoles’ developing brains treated the mesh as another layer of their own biology as they folded themselves into 3D structures, essentially stretching the device across their brains. The implant reliably captured neural activity throughout development on millisecond scales across multiple brain regions. The cyborg tadpoles grew into healthy frogs, which acted normally in behavioral tests and showed no signs of brain damage or stress.
The implant picked up different brain-activity dynamics as the tadpoles developed. Early brain cells synchronized into patterns of slow activity as the neural plate folded into a tube. But as the brain matured and developed different regions, each of these established its own unique electrical fingerprint with faster neural activity.
By observing these dynamics, scientists can potentially decipher how the brain wires itself into such a powerful computing machine and detect when things go awry.
Rebuilding Connections
The human nervous system has limited regenerative capabilities. Axolotls, not so much. A type of salamander, these cartoonish-looking creatures can rebuild nearly any part of their bodies, including their nerves. How this happens is still mysterious, but if we can discover their secret, we might use it to develop treatments for spinal cord injuries or nerve diseases.
In one test, the team implanted the recording mesh in an axolotl tadpole with a damaged tail. The critter’s brain activity spiked during regeneration. When they added carefully timed zaps from external electrodes mimicking post-injury neural patterns, the regeneration sped up, suggesting brain activity could play a role in tissue regeneration (at least in some species).
“We found that the brain activity goes back to its early [embryo] development stage, so this is maybe a unique reason why this creature has this regeneration ability,” said Liu.
The team is giving the technology to other researchers to further probe life’s beginnings, especially in mammals such as rodents. “Preliminary tests confirmed that the devices’ mechanical properties are compatible with mouse embryos and neonatal rats,” they wrote.
Liu is clear the method isn’t ready for implantation in human embryos. Using it in frogs, axolotls, and human brain organoids is already yielding insights into brain development. But ultimately, his team hopes to help people with neurodevelopmental conditions.
“We have this foundation of stretchable electronics that could be directly translated to the neonatal or developing brain,” said Liu.
The post ‘Cyborg Tadpoles’ With Super Soft Neural Implants Shine Light on Early Brain Development appeared first on SingularityHub.
Honda Surprises Space Industry by Launching and Landing a New Reusable Rocket
Honda’s been quietly working on a side hustle.
The private space race has been dominated by SpaceX for years. But Japanese carmaker Honda may be about to throw its hat in the ring after demonstrating a reusable rocket.
Space rockets might seem like a strange side hustle for a company better known for building motorcycles, fuel-efficient cars, and humanoid robots. But the company’s launch vehicle program has been ticking away quietly in the background for a number of years.
In 2021, officials announced that they had been working on a small-satellite rocket for two years and had already developed an engine. But the company has been relatively tight-lipped about the project since then.
Now, it’s taken the aerospace community by surprise after successfully launching a prototype reusable rocket to an altitude of nearly 900 feet and then landing it again just 15 inches from its designated target.
“We are pleased that Honda has made another step forward in our research on reusable rockets with this successful completion of a launch and landing test,” Honda’s global CEO Toshihiro Mibe said in a statement. “We believe that rocket research is a meaningful endeavor that leverages Honda’s technological strengths. Honda will continue to take on new challenges.”
The test vehicle is modest compared to commercial launch vehicles, standing just 21 feet tall and weighing only 1.4 tons fully fueled. It features four retractable legs and aerodynamic fins near the nose, similar to those on SpaceX’s Falcon 9, which are presumably responsible for steering and stabilizing the rocket during descent.
Honda said the development of the rocket was built on core technologies the company has developed in combustion, control systems, and self-driving vehicles. While it didn’t reveal details about the engine, Stephen Clark of Ars Technica writes that the video suggests the rocket burns liquid cryogenic fuels—potentially methane and liquid oxygen.
Honda says the goal of the test flight, which took place on Tuesday in Taiki, Hokkaido, and lasted just under a minute, was to demonstrate the key technologies required for a reusable rocket, including flight stabilization during ascent and descent and the ability to land smoothly.
In a video of the launch shared by Honda, the rocket lifts off, retracts its four legs, and then rises smoothly to 890 feet. It then hovers briefly and extends its fins before returning to the launch platform, deploying its legs just before touchdown.
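For a sense of what “flight stabilization” involves, here is a toy, one-dimensional sketch in Python (emphatically not Honda’s flight software) of the kind of feedback loop a hover-and-land test requires: a proportional-derivative controller adjusts engine thrust around the hover point to follow a climb, hover, and descent profile. The mass, thrust limit, gains, and timings are invented numbers.

```python
MASS = 1400.0           # kg; roughly the fully fueled test vehicle
G = 9.81                # m/s^2
MAX_THRUST = 25_000.0   # N; assumed limit (~1.8x the vehicle's weight)
DT = 0.02               # s; control-loop period
KP, KD = 800.0, 2000.0  # proportional and damping gains (hand-tuned)

alt, vel = 0.0, 0.0
for step in range(int(60 / DT)):                  # a one-minute hop
    t = step * DT
    if t < 15.0:                                  # ramp up to ~890 ft (271 m)
        target = 271.0 * t / 15.0
    elif t < 30.0:                                # hover at altitude
        target = 271.0
    else:                                         # ramp back down to the pad
        target = max(271.0 - 18.0 * (t - 30.0), 0.0)
    # Gravity feedforward plus a PD correction on the altitude error.
    thrust = MASS * G + KP * (target - alt) - KD * vel
    thrust = min(max(thrust, 0.0), MAX_THRUST)    # an engine can't pull down
    vel += (thrust / MASS - G) * DT               # integrate the dynamics
    alt = max(alt + vel * DT, 0.0)
    if alt == 0.0:
        vel = max(vel, 0.0)                       # the pad stops any sink

print(f"final altitude: {alt:.2f} m, vertical speed: {vel:.2f} m/s")
```

A real vehicle layers many such loops (attitude, engine gimbal, throttle) on top of navigation estimates, but the climb, hover, and descent profile in the video is the sort of trajectory this kind of controller tracks.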
With this successful test flight, Honda joins an elite club of companies that have managed to land a reusable rocket, including SpaceX, Blue Origin, and a handful of Chinese startups. It has also beaten Japan’s space agency, JAXA, to the milestone. The agency is developing a reusable rocket called Callisto alongside the French and German space agencies, but it has yet to conduct a test flight.
The company is currently targeting a suborbital launch—where the spacecraft reaches space but doesn’t enter Earth orbit—by 2029. But Honda says it has yet to decide whether it will commercialize the technology.
Nonetheless, the company noted the technology could have synergies with its existing business by making it possible to launch satellite constellations that support the “connected car” features of its vehicles. And it is already developing other space technologies, including renewable-energy systems and robots designed to work in space.
Whatever the company decides, this launch shows the barriers to space are falling rapidly as a growing number of companies develop the capabilities necessary to push into Earth orbit and beyond.
The post Honda Surprises Space Industry by Launching and Landing a New Reusable Rocket appeared first on SingularityHub.
