Transhumanism

A Universal Vaccine Against Any Viral Variant? A New Study Suggests It’s Possible

Singularity Hub - 22 April 2024 - 22:28

From Covid boosters to annual flu shots, most of us are left wondering: Why so many, so often?

There’s a reason to update vaccines. Viruses rapidly mutate, which can help them escape the body’s immune system, putting previously vaccinated people at risk of infection. Using AI modeling, scientists have increasingly been able to predict how viruses will evolve. But they mutate fast, and we’re still playing catch up.

An alternative strategy is to break the cycle with a universal vaccine that can train the body to recognize a virus despite mutation. Such a vaccine could eradicate new flu strains, even if the virus has transformed into nearly unrecognizable forms. The strategy could also finally bring a vaccine for the likes of HIV, which has so far notoriously evaded decades of efforts.

This month, a team from the University of California, Riverside, led by Dr. Shou-Wei Ding, designed a vaccine that unleashed a surprising component of the body’s immune system against invading viruses.

In baby mice without functional immune cells to ward off infections, the vaccine defended against lethal doses of a deadly virus. The protection lasted at least 90 days after the initial shot.

The strategy relies on a controversial theory. Most plants and fungi have an innate defense against viruses that chops up their genetic material. Scientists have long debated whether the same mechanism, called RNA interference (RNAi), also exists in mammals—including humans.

“It’s an incredible system because it can be adapted to any virus,” Dr. Olivier Voinnet at the Swiss Federal Institute of Technology, who championed the theory with Ding, told Nature in late 2013.

A Hidden RNA Universe

RNA molecules are usually associated with the translation of genes into proteins.

But they’re not just biological messengers. A wide array of small RNA molecules roam our cells. Some shuttle protein building blocks to the ribosome as genetic instructions are translated into proteins. Others change how genes are expressed and may even act as a method of inheritance.

But fundamental to immunity are small interfering RNA molecules, or siRNAs. In plants and invertebrates, these molecules are vicious defenders against viral attacks. To replicate, viruses need to hijack the host cell’s machinery to copy their genetic material—often, it’s RNA. The invaded cells recognize the foreign genetic material and automatically launch an attack.

During this attack, called RNA interference, the cell chops the invading virus’s RNA genome into tiny chunks of siRNA. The cell then spews these viral siRNA molecules into the body to alert the immune system. The molecules also grab directly onto the invading virus’s genome, blocking it from replicating.

Here’s the kicker: Vaccines based on antibodies usually target one or two locations on a virus, making them vulnerable to mutation should those locations change their makeup. RNA interference generates thousands of siRNA molecules that cover the entire genome—even if one part of a virus mutates, the rest is still vulnerable to the attack.
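To make that contrast concrete, here is a toy back-of-the-envelope model (purely illustrative; the per-site escape probability and site counts are made-up numbers, not figures from the study). It assumes each targeted site independently mutates enough to escape recognition with some small probability, and that a virus must escape every targeted site to fully evade the response.

# Toy escape-probability model (illustrative assumptions only, not from the study).
p_site = 0.01  # hypothetical chance a single targeted site mutates enough to escape

def full_escape_probability(num_targeted_sites: int, p: float = p_site) -> float:
    # Escape requires evading every targeted site independently.
    return p ** num_targeted_sites

print(full_escape_probability(2))     # antibody-style vaccine, ~2 epitopes: 1e-4
print(full_escape_probability(1000))  # genome-wide siRNA coverage: effectively zero

Under these assumptions, covering the whole genome makes complete escape astronomically unlikely, which is the intuition behind the kicker above.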

This powerful defense system could launch a new generation of vaccines. There’s just one problem. While it’s been observed in plants and flies, whether it exists in mammals has been highly controversial.

“We believe that RNAi has been antiviral for hundreds of millions of years,” Ding told Nature in 2013. “Why would we mammals dump such an effective defense?”

Natural Born Viral Killers

In a 2013 study in Science, Ding and colleagues suggested mammals also have an antiviral siRNA mechanism—it’s just being repressed by a gene carried by most viruses. Dubbed B2, the gene acts like a “brake,” smothering any RNA interference response from host cells by destroying their ability to make siRNA snippets.

Getting rid of B2 should kick RNA interference back into gear. To prove the theory, the team genetically engineered a virus without a functioning B2 gene and used it to infect hamster cells and immunocompromised baby mice. The virus they chose, Nodamura virus, is transmitted by mosquitoes in the wild and is often deadly.

But without B2, even a lethal dose of the virus lost its infectious power. The baby mice rapidly generated a hefty dose of siRNA molecules to clear out the invaders. As a result, the infection never took hold, and the critters—even when already immunocompromised—survived.

“I truly believe that the RNAi response is relevant to at least some viruses that infect mammals,” said Ding at the time.

New-Age Vaccines

Many vaccines contain either a dead or a living but modified version of a virus to train the immune system. When faced with the virus again, the body produces T cells to kill off the target, B cells that pump out antibodies, and other immune “memory” cells to alert against future attacks. But their effects don’t always last, especially if a virus mutates.

Rather than rallying T and B cells, triggering the body’s siRNA response offers another type of immune defense. This can be done by deleting the B2 gene in live viruses. These viruses can be formulated into a new type of vaccine, which the team has been working to develop, relying on RNA interference to ward off invaders. The resulting flood of siRNA molecules triggered by the vaccine would, in theory, also provide some protection against future infection.

“If we make a mutant virus that cannot produce the protein to suppress our RNAi [RNA interference], we can weaken the virus. It can replicate to some level, but then loses the battle to the host RNAi response,” Ding said in a press release about the most recent study.  “A virus weakened in this way can be used as a vaccine for boosting our RNAi immune system.”

In the study, his team tried the strategy against Nodamura virus by removing its B2 gene.

The team vaccinated baby and adult mice, both genetically engineered so they couldn’t mount T cell or B cell defenses. In just two days, the single shot fully protected the mice against a deadly dose of the virus, and the effect lasted over three months.

Viruses are most harmful to vulnerable populations—infants, the elderly, and immunocompromised individuals. Because of their weakened immune systems, current vaccines aren’t always as effective. Triggering siRNA could be a life-saving alternative strategy.

Although it works in mice, whether humans respond similarly remains to be seen. But there’s much to look forward to. The B2 “brake” protein has also been found in lots of other common viruses, including dengue, flu, and a family of viruses that causes fever, rash, and blisters.

The team is already working on a new flu vaccine, using live viruses without the B2 protein. If successful, the vaccine could potentially be made as a nasal spray—forget the needle jab. And if their siRNA theory holds up, such a vaccine might fend off the virus even as it mutates into new strains. The playbook could also be adapted to tackle new Covid variants, RSV, or whatever nature next throws at us.

This vaccine strategy is “broadly applicable to any number of viruses, broadly effective against any variant of a virus, and safe for a broad spectrum of people,” study author Dr. Rong Hai said in the press release. “This could be the universal vaccine that we have been looking for.”

Image Credit: Diana Polekhina / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through April 20)

Singularity Hub - 20 April 2024 - 16:00
ARTIFICIAL INTELLIGENCE

15 Graphs That Explain the State of AI in 2024
Eliza Strickland | IEEE Spectrum
“Each year, the AI Index lands on virtual desks with a louder virtual thud—this year, its 393 pages are a testament to the fact that AI is coming off a really big year in 2023. For the past three years, IEEE Spectrum has read the whole damn thing and pulled out a selection of charts that sum up the current state of AI.”

NEUROSCIENCE

The Next Frontier for Brain Implants Is Artificial Vision
Emily Mullin | Wired
“Elon Musk’s Neuralink and others are developing devices that could provide blind people with a crude sense of sight. …’This is not about getting biological vision back,’ says Philip Troyk, a professor of biomedical engineering at Illinois Tech, who’s leading the study Bussard is in. ‘This is about exploring what artificial vision could be.'”

DIGITAL MEDIA

Microsoft’s VASA-1 Can Deepfake a Person With One Photo and One Audio Track
Benj Edwards | Ars Technica
“On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.”

TECH

Meta Is Already Training a More Powerful Successor to Llama 3
Will Knight | Wired
“On Thursday morning, Meta released its latest artificial intelligence model, Llama 3, touting it as the most powerful to be made open source so that anyone can use it. The same afternoon, Yann LeCun, Meta’s chief AI scientist, said an even more powerful successor to Llama is in the works. He suggested it could potentially outshine the world’s best closed AI models, including OpenAI’s GPT-4 and Google’s Gemini.”

COMPUTING

Intel Reveals World’s Biggest ‘Brain-Inspired’ Neuromorphic Computer
Matthew Sparkes | New Scientist
“Hala Point contains 1.15 billion artificial neurons across 1152 Loihi 2 chips, and is capable of 380 trillion synaptic operations per second. Mike Davies at Intel says that despite this power it occupies just six racks in a standard server case—a space similar to that of a microwave oven. Larger machines will be possible, says Davies. ‘We built this scale of system because, honestly, a billion neurons was a nice round number,’ he says. ‘I mean, there wasn’t any particular technical engineering challenge that made us stop at this level.’”

AUTOMATION

US Air Force Confirms First Successful AI Dogfight
Emma Roth | The Verge
“Human pilots were on board the X-62A with controls to disable the AI system, but DARPA says the pilots didn’t need to use the safety switch ‘at any point.’ The X-62A went against an F-16 controlled solely by a human pilot, where both aircraft demonstrated ‘high-aspect nose-to-nose engagements’ and got as close as 2,000 feet at 1,200 miles per hour. DARPA doesn’t say which aircraft won the dogfight, however.”

CULTURE

What If Your AI Girlfriend Hated You?
Kate Knibbs | Wired
“It seems as though we’ve arrived at the moment in the AI hype cycle where no idea is too bonkers to launch. This week’s eyebrow-raising AI project is a new twist on the romantic chatbot—a mobile app called AngryGF, which offers its users the uniquely unpleasant experience of getting yelled at via messages from a fake person.”

NEUROSCIENCE

Insects and Other Animals Have Consciousness, Experts Declare
Dan Falk | Quanta
“For decades, there’s been a broad agreement among scientists that animals similar to us—the great apes, for example—have conscious experience, even if their consciousness differs from our own. In recent years, however, researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems.”

SCIENCE

Two Lifeforms Merge in Once-in-a-Billion-Years Evolutionary Event
Michael Irving | New Atlas
“Scientists have caught a once-in-a-billion-years evolutionary event in progress, as two lifeforms have merged into one organism that boasts abilities its peers would envy. Last time this happened, Earth got plants. …A species of algae called Braarudosphaera bigelowii was found to have engulfed a cyanobacterium that lets them do something that algae, and plants in general, can’t normally do—’fixing’ nitrogen straight from the air, and combining it with other elements to create more useful compounds.”

Image Credit: Shubham Dhage / Unsplash

Cell Therapies Now Beat Back Once Untreatable Blood Cancers. Scientists Are Making Them Even Deadlier.

Singularity Hub - 19 April 2024 - 19:03

Dubbed “living drugs,” CAR T cells are bioengineered from a patient’s own immune cells to make them better able to hunt and destroy cancer.

The treatment is successfully tackling previously untreatable blood cancers. Six therapies are already approved by the FDA. Over a thousand clinical trials are underway. These aren’t limited to cancer—they cover a range of difficult medical problems such as autoimmune diseases, heart conditions, and viral infections including HIV. They may even slow down the biological processes that contribute to aging.

But CAR T has an Achilles heel.

Once injected into the body, the cells often slowly dwindle. Called “exhaustion,” this process erodes therapeutic effect over time and has dire medical consequences. According to Dr. Evan Weber at the University of Pennsylvania, more than 50 percent of people who respond to CAR T therapies eventually relapse. This may also be why CAR T cells have struggled to fight off solid tumors in breast, pancreatic, or deadly brain cancers.

This month, two teams found a potential solution—make CAR T cells more like stem cells. Known for their regenerative abilities, stem cells easily repopulate the body. Both teams identified the same protein “master switch” to make engineered cells resemble stem cells.

One study, led by Weber, found that adding the protein, called FOXO1, revved up metabolism and health in CAR T cells in mice. Another study from a team at the Peter MacCallum Cancer Centre in Australia found FOXO1-boosted cells appeared genetically similar to immune stem cells and were better able to fend off solid tumors.

While still early, “these findings may help improve the design of CAR T cell therapies and potentially benefit a wider range of patients,” said Weber in a press release.

I Remember

Here’s how CAR T cell therapy usually works.

The approach focuses on T cells, a particular type of immune cell that naturally hunts down and eliminates infections and cancers inside the body. Enemy cells are dotted with a specific set of proteins, a kind of cellular fingerprint, that T cells recognize and latch onto.

Tumors also have a unique signature. But they can be sneaky, with some eventually developing ways to evade immune surveillance. In solid cancers, for example, they can pump out chemicals that fight off immune cell defenders, allowing the cancer to grow and spread.

CAR T cells are designed to override these barriers.

To make them, medical practitioners remove T cells from the body and genetically engineer them to produce tailor-made protein hooks targeting a particular protein on tumor cells. The supercharged T cells are then grown in petri dishes and transfused back into the body.

In the beginning, CAR T was a last-resort blood cancer treatment, but now it’s a first-line therapy. Keeping the engineered cells around inside the body, however, has been a struggle. With time, the cells stop dividing and become dysfunctional, potentially allowing the cancer to relapse.

The Translator

To tackle cell exhaustion, Weber’s team found inspiration in the body itself.

Our immune system has a cellular ledger tracking previous infections. The cells making up this ledger are called memory T cells. They’re a formidable military reserve, a portion of which resemble stem cells. When the immune system detects an invader it’s seen before—a virus, bacteria, or cancer cell—these reserve cells rapidly proliferate to fend off the attack.

CAR T cells don’t usually have this ability. In many cancers, they eventually die off—allowing the cancer to return. Why?

In 2012, Dr. Crystal Mackall at Stanford University found several changes in gene expression that lead to CAR T cell exhaustion. In the new study, together with Weber, the team discovered a protein, FOXO1, that could lengthen CAR T’s effects.

In one test, a drug that inhibited FOXO1 caused CAR T cells to rapidly fail and eventually die in petri dishes. Erasing genes encoding FOXO1 also hindered the cells and increased signs of CAR T exhaustion. When infused into mice with leukemia, CAR T cells without FOXO1 couldn’t treat the cancer. By contrast, increasing levels of FOXO1 helped the cells readily fight it off.

Analyzing genes related to FOXO1, the team found they were mostly connected to immune cell memory. It’s likely that adding the gene encoding FOXO1 to CAR T cells promotes a stable memory for the cells, so they can easily recognize potential harm—be it cancer or pathogen—long after the initial infection.

When treating mice with leukemia, a single dose of the FOXO1-enhanced cells decreased cancer growth and increased survival up to five-fold compared to standard CAR T therapy. The enhanced treatment also tackled a type of bone cancer in mice, which is often hard to treat without surgery and chemotherapy.

An Immune Link

Meanwhile, the Australian team also zeroed in on FOXO1. Led by Drs. Junyun Lai, Paul Beavis, and Phillip Darcy, the team was looking for protein candidates to enhance CAR T longevity.

The idea was that, like their natural counterparts, engineered CAR T cells need a healthy metabolism to thrive and divide.

They started by analyzing a protein previously shown to enhance CAR T metabolism, potentially lowering the chances of exhaustion. Mapping the epigenome and transcriptome in CAR T cells—both of which tell us how genes are expressed—they found that FOXO1 also regulates CAR T cell longevity.

As a proof of concept, the team induced exhaustion in the engineered cells by increasingly restricting their ability to divide.

In mice with cancer, cells supercharged with FOXO1 lasted months longer than those that hadn’t been boosted. The critters’ liver and kidney functions remained normal, and they didn’t lose weight during the treatment, a marker of overall health. The FOXO1 boost also changed how genes were expressed in the cells—they looked younger, as if in a stem cell-like state.

The new recipe also worked in T cells donated by six people with cancer who had undergone standard CAR T therapy. Adding a dose of FOXO1 to these cells increased their metabolism.

Multiple CAR T clinical trials are ongoing. But “the effects of such cells are transient and do not provide long-term protection against exhaustion,” wrote Darcy and team. In other words, durability is key for CAR T cells to live up to their full potential.

A FOXO1 boost offers a way—although it may not be the only way.

“By studying factors that drive memory in T cells, like FOXO1, we can enhance our understanding of why CAR T cells persist and work more effectively in some patients compared to others,” said Weber.

Image Credit: Gerardo Sotillo, Stanford Medicine

Scientists Create Atomically Thin Gold With Century-Old Japanese Knife Making Technique

Singularity Hub - 18 April 2024 - 19:36

Graphene has been hailed as a wonder material, but it also set off a rush to find other promising atomically thin materials. Now researchers have managed to create a 2D version of gold they call “goldene,” which could have a host of applications in chemistry.

Scientists had speculated about the possibility of creating layers of carbon just a single atom thick for many decades. But it wasn’t until 2004 that a team from the University of Manchester in the UK first produced graphene sheets using the remarkably simple technique of peeling them off a lump of graphite with common sticky tape.

The resulting material’s high strength, high conductivity, and unusual optical properties set off a stampede to find applications for it. But it also spurred researchers to investigate what kinds of exotic capabilities other ultra-thin materials could have.

Gold is one material scientists have long been eager to make as thin as graphene, but efforts have so far been in vain. Now, though, researchers from Linköping University in Sweden have borrowed an old Japanese forging technique to create ultra-thin flakes of what they’re calling “goldene.”

“If you make a material extremely thin, something extraordinary happens,” Shun Kashiwaya, who led the research, said in a press release. “The same thing happens with gold.”

Making goldene has proven tough in the past because its atoms tend to clump together. So, even if you can create a 2D sheet of gold atoms, they quickly roll up to create nanoparticles instead.

The researchers got around this by taking a ceramic called titanium silicon carbide, which features ultra-thin layers of silicon between layers of titanium carbide, and coating it with gold. They then heated it in a furnace, which caused the gold to diffuse into the material and replace the silicon layers in a process known as intercalation.

This created atomically thin layers of gold embedded in the ceramic. To get it out, they had to borrow a century-old technique developed by Japanese knife makers. They used a chemical formulation known as Murakami’s reagent, which etches away carbon residue, to slowly reveal the gold sheets.

The researchers had to experiment with different concentrations of the reagent and various etching times. They also had to add a detergent-like chemical called a surfactant that protected the gold sheets from the etching liquid and prevented them from curling up. The gold flakes could then be sieved out of the solution to be examined more closely.

In a paper in Nature Synthesis, the researchers describe how they used an electron microscope to confirm that the gold layers were indeed just one atom thick. They also showed that the goldene flakes were semiconductors.

It’s not the first time someone has claimed to have created goldene, notes Nature. But previous attempts have involved creating the ultra-thin sheets sandwiched between other materials, and the Linköping team say their effort is the first to create a “free-standing 2D metal.”

The material could have a range of use cases, the researchers say. Gold nanoparticles already show promise as catalysts that can turn plastic waste and biomass into valuable materials, they note in their paper, and they have properties that could prove useful for energy harvesting, creating photonic devices, or even splitting water to create hydrogen fuel.

It will take work to tweak the synthesis method so it can produce commercially useful amounts of the material, a challenge that has delayed the full arrival of graphene as a widely used product too. But the team is also investigating whether similar approaches can be applied to other useful catalytic metals. Graphene might not be the only wonder material in town for long.

Image Credit: Nature Synthesis (CC BY 4.0)

Boston Dynamics Says Farewell to Its Humanoid Atlas Robot—Then Brings It Back Fully Electric

Singularity Hub - 18 April 2024 - 00:29

Yesterday, Boston Dynamics announced it was retiring its hydraulic Atlas robot. Atlas has long been the standard bearer of advanced humanoid robots. Over the years, the company was known as much for its research robots as it was for slick viral videos of them working out in military fatigues, forming dance mobs, and doing parkour. Fittingly, the company put together a send-off video of Atlas’s greatest hits and blunders.

But there were clues this wasn’t really the end, not least of which was the specific inclusion of the word “hydraulic” and the last line of the video, “‘Til we meet again, Atlas.” It wasn’t a long hiatus. Today, the company released hydraulic Atlas’s successor—electric Atlas.

The new Atlas is notable for several reasons. First, and most obviously, Boston Dynamics has finally done away with hydraulic actuators in favor of electric motors. To be clear, Atlas has long had an onboard battery pack—but now it’s fully electric. The advantages of going electric include lower cost, noise, weight, and complexity. It also allows for a more polished design. From the company’s own Spot robot to a host of other humanoid robots, fully electric models are the norm these days. So, it’s about time Atlas made the switch.

Without a mess of hydraulic hoses to contend with, the new Atlas can now also contort itself in new ways. As you’ll note in the release video, the robot rises to its feet—a crucial skill for a walking robot—in a very, let’s say, special way. It folds its legs up along its torso and impossibly, for a human at least, pivots up through its waist (no hands). Once standing, Atlas swivels its head 180 degrees, then does the same thing at each hip joint and the waist. It takes a few watches to really appreciate all the weirdness there.

The takeaway is that while Atlas looks like us, it’s capable of movements we aren’t and therefore has more flexibility in how it completes future tasks.

This theme of same-but-different is evident in its head too. Instead of opting for a human-like head that risks slipping into the uncanny valley, the team chose a featureless (for now) lighted circle. In an interview with IEEE Spectrum, Boston Dynamics CEO Robert Playter said the human-like designs they tried seemed “a little bit threatening or dystopian.”

“We’re trying to project something else: a friendly place to look to gain some understanding about the intent of the robot,” he said. “The design borrows from some friendly shapes that we’d seen in the past. For example, there’s the old Pixar lamp that everybody fell in love with decades ago, and that informed some of the design for us.”

While most of these upgrades are improvements, there is one area where it’s not totally clear how well the new form will fare: strength and power.

Hydraulics are known to provide both, and Atlas pushed its hydraulics to their limits carrying heavy objects, executing backflips, and doing 180-degree, in-air twists. According to the press release and Playter’s interviews, little has been lost in this category. In fact, they say, electric Atlas is stronger than hydraulic Atlas. Still, as with all things robotics, the ultimate proof of how capable it is will likely be in video form, which we’ll eagerly await.

Despite big design updates, the company’s messaging is perhaps more notable. Atlas used to be a research robot. Now, the company intends to sell them commercially.

This isn’t terribly surprising. There are now a number of companies competing in the humanoid robot space, including Agility, 1X, Tesla, Apptronik, and Figure—which just raised $675 million at a $2.6 billion valuation. Several are making rapid progress, with a heavy focus on AI, and have kicked off real-world pilots.

Where does Boston Dynamics fit in? With Atlas, the company has been the clear leader for years. So, it’s not starting from the ground floor. Also, thanks to its Spot and Stretch robots, the company already has experience commercializing and selling advanced robots, from identifying product-market fit to dealing with logistics and servicing. But AI was, until recently, less of a focus. Now, they’re folding reinforcement learning into Spot, have begun experimenting with generative AI too, and promise more is coming.

Hyundai acquired Boston Dynamics for $1.1 billion in 2021. This may prove advantageous, as Boston Dynamics now has access to a world-class manufacturer along with its resources and expertise in producing and selling machines at scale. It’s also an opportunity to pilot Atlas in real-world situations and perfect it for future customers. Plans are already in motion to put Atlas to work at Hyundai next year.

Still, it’s worth noting that, although humanoid robots are attracting attention, drawing big-time investment, and being tried out in commercial contexts, there’s likely a ways to go before they reach the kind of generality some companies are touting. Playter says Boston Dynamics is going for multi-purpose, but still niche, robots in the near term.

“It definitely needs to be a multi-use case robot. I believe that because I don’t think there’s very many examples where a single repetitive task is going to warrant these complex robots,” he said. “I also think, though, that the practical matter is that you’re going to have to focus on a class of use cases, and really making them useful for the end customer.”

Humanoid robots that tidy your house and do the dishes may not be imminent, but the field is hot, and AI is bringing a degree of generality not possible a year ago. Now that Boston Dynamics has thrown its hat into the ring, things will only get more interesting from here. We’ll be keeping a close eye on YouTube to see what new tricks Atlas has up its sleeve.

Image Credit: Boston Dynamics

Exploding Stars Are Rare—but if One Was Close Enough, It Could Threaten Life on Earth

Singularity Hub - 16 April 2024 - 19:37

Stars like the sun are remarkably constant. They vary in brightness by only 0.1 percent over years and decades, thanks to the fusion of hydrogen into helium that powers them. This process will keep the sun shining steadily for about 5 billion more years, but when stars exhaust their nuclear fuel, their deaths can lead to pyrotechnics.

The sun will eventually die by growing large and then condensing into a type of star called a white dwarf. But stars over eight times more massive than the sun die violently in an explosion called a supernova.

Supernovae happen across the Milky Way only a few times a century, and these violent explosions are usually remote enough that people here on Earth don’t notice. For a dying star to have any effect on life on our planet, it would have to go supernova within 100 light years from Earth.

I’m an astronomer who studies cosmology and black holes.

In my writing about cosmic endings, I’ve described the threat posed by stellar cataclysms such as supernovae and related phenomena such as gamma-ray bursts. Most of these cataclysms are remote, but when they occur closer to home they can pose a threat to life on Earth.

The Death of a Massive Star

Very few stars are massive enough to die in a supernova. But when one does, it briefly rivals the brightness of billions of stars. At a rate of roughly one supernova per galaxy every 50 years, and with 100 billion galaxies in the universe, somewhere in the universe a supernova explodes every hundredth of a second.
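As a rough consistency check of that figure, take the two numbers above (about one supernova per galaxy every 50 years, and roughly 100 billion galaxies):

\[
R \approx \frac{10^{11}\ \text{galaxies}}{50\ \text{yr}}
  = 2\times10^{9}\ \text{yr}^{-1}
  \approx \frac{2\times10^{9}}{3.15\times10^{7}\ \text{s}}
  \approx 60\ \text{s}^{-1},
\]

that is, tens of supernovae per second across the universe, in line with the figure above.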

The dying star emits high-energy radiation as gamma rays. Gamma rays are a form of electromagnetic radiation with wavelengths much shorter than light waves, meaning they’re invisible to the human eye. The dying star also releases a torrent of high-energy particles in the form of cosmic rays: subatomic particles moving at close to the speed of light.

Supernovae in the Milky Way are rare, but a few have been close enough to Earth that historical records discuss them. In 185 AD, a star appeared in a place where no star had previously been seen. It was probably a supernova.

Observers around the world saw a bright star suddenly appear in 1006 AD. Astronomers later matched it to a supernova 7,200 light years away. Then, in 1054 AD, Chinese astronomers recorded a star visible in the daytime sky that astronomers subsequently identified as a supernova 6,500 light years away.

Johannes Kepler, the astronomer who observed what was likely a supernova in 1604. Image Credit: Kepler-Museum in Weil der Stadt

Johannes Kepler observed the last supernova in the Milky Way in 1604, so in a statistical sense, the next one is overdue.

At 600 light years away, the red supergiant Betelgeuse in the constellation of Orion is the nearest massive star getting close to the end of its life. When it goes supernova, it will shine as bright as the full moon for those watching from Earth, without causing any damage to life on our planet.

Radiation Damage

If a star goes supernova close enough to Earth, the gamma-ray radiation could damage some of the planetary protection that allows life to thrive on Earth. There’s a time delay due to the finite speed of light. If a supernova goes off 100 light years away, it takes 100 years for us to see it.

Astronomers have found evidence of a supernova 300 light years away that exploded 2.5 million years ago. Radioactive atoms trapped in seafloor sediments are the telltale signs of this event. Gamma rays from the explosion eroded the ozone layer, which protects life on Earth from the sun’s harmful ultraviolet radiation. This event would have cooled the climate, leading to the extinction of some ancient species.

Safety from a supernova comes with greater distance. Gamma rays and cosmic rays spread out in all directions once emitted from a supernova, so the fraction that reaches the Earth decreases with greater distance. For example, imagine two identical supernovae, with one 10 times closer to Earth than the other. Earth would receive radiation that’s about a hundred times stronger from the closer event.
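That factor of one hundred is just the inverse-square law: the radiation spreads over an ever-larger sphere, so the flux received falls off with the square of the distance.

\[
F \propto \frac{1}{d^{2}}
\quad\Longrightarrow\quad
\frac{F_{\text{near}}}{F_{\text{far}}}
= \left(\frac{d_{\text{far}}}{d_{\text{near}}}\right)^{2}
= 10^{2} = 100.
\]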

A supernova within 30 light years would be catastrophic, severely depleting the ozone layer, disrupting the marine food chain and likely causing mass extinction. Some astronomers guess that nearby supernovae triggered a series of mass extinctions 360 to 375 million years ago. Luckily, these events happen within 30 light years only every few hundred million years.

When Neutron Stars Collide

But supernovae aren’t the only events that emit gamma rays. Neutron star collisions cause high-energy phenomena ranging from gamma rays to gravitational waves.

Left behind after a supernova explosion, neutron stars are city-size balls of matter with the density of an atomic nucleus, about 300 trillion times denser than the sun. These collisions created much of the gold and other precious metals on Earth. The intense pressure caused by two ultradense objects colliding forces neutrons into atomic nuclei, which creates heavier elements such as gold and platinum.

A neutron star collision generates an intense burst of gamma rays. These gamma rays are concentrated into a narrow jet of radiation that packs a big punch.

If the Earth were in the line of fire of a gamma-ray burst within 10,000 light years, or 10 percent of the diameter of the galaxy, the burst would severely damage the ozone layer. It would also damage the DNA inside organisms’ cells, at a level that would kill many simple life forms like bacteria.

That sounds ominous, but neutron stars do not typically form in pairs, so there is only one collision in the Milky Way about every 10,000 years. They are 100 times rarer than supernova explosions. Across the entire universe, there is a neutron star collision every few minutes.

Gamma-ray bursts may not pose an imminent threat to life on Earth, but over very long time scales, bursts will inevitably hit the Earth. The odds of a gamma-ray burst triggering a mass extinction are 50 percent in the past 500 million years and 90 percent in the 4 billion years since there has been life on Earth.

By that math, it’s quite likely that a gamma-ray burst caused one of the five mass extinctions in the past 500 million years. Astronomers have argued that a gamma-ray burst caused the first mass extinction 440 million years ago, when 60 percent of all marine creatures disappeared.

A Recent Reminder

The most extreme astrophysical events have a long reach. Astronomers were reminded of this in October 2022, when a pulse of radiation swept through the solar system and overloaded all of the gamma-ray telescopes in space.

It was the brightest gamma-ray burst to occur since human civilization began. The radiation caused a sudden disturbance to the Earth’s ionosphere, even though the source was an explosion nearly two billion light years away. Life on Earth was unaffected, but the fact that it altered the ionosphere is sobering—a similar burst in the Milky Way would be a million times brighter.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA, ESA, Joel Kastner (RIT)

A New Photonic Computer Chip Uses Light to Slash AI Energy Costs

Singularity Hub - 15 April 2024 - 22:45

AI models are power hogs.

As the algorithms grow and become more complex, they’re increasingly taxing current computer chips. Multiple companies have designed chips tailored to AI to reduce power draw. But they’re all based on one fundamental rule—they use electricity.

This month, a team from Tsinghua University in China switched up the recipe. They built a neural network chip that uses light rather than electricity to run AI tasks at a fraction of the energy cost of NVIDIA’s H100, a state-of-the-art chip used to train and run AI models.

Called Taichi, the chip combines two types of light-based processing into its internal structure. Compared to previous optical chips, Taichi is far more accurate for relatively simple tasks such as recognizing hand-written numbers or other images. Unlike its predecessors, the chip can generate content too. It can make basic images in a style based on the Dutch artist Vincent van Gogh, for example, or classical musical numbers inspired by Johann Sebastian Bach.

Part of Taichi’s efficiency is due to its structure. The chip is made of multiple components called chiplets. Similar to the brain’s organization, each chiplet performs its own calculations in parallel, the results of which are then integrated with the others to reach a solution.

Faced with the challenging problem of sorting images into 1,000 categories, Taichi was successful nearly 92 percent of the time, matching current chip performance but slashing energy consumption more than a thousand-fold.

For AI, “the trend of dealing with more advanced tasks [is] irreversible,” wrote the authors. “Taichi paves the way for large-scale photonic [light-based] computing,” leading to more flexible AI with lower energy costs.

Chip on the Shoulder

Today’s computer chips don’t mesh well with AI.

Part of the problem is structural. Processing and memory on traditional chips are physically separated. Shuttling data between them takes up enormous amounts of energy and time.

While efficient for solving relatively simple problems, the setup is incredibly power hungry when it comes to complex AI, like the large language models powering ChatGPT.

The main problem is how computer chips are built. Each calculation relies on transistors, which switch on or off to represent the 0s and 1s used in calculations. Engineers have dramatically shrunk transistors over the decades so they can cram ever more onto chips. But current chip technology is cruising towards a breaking point where we can’t go smaller.

Scientists have long sought to revamp current chips. One strategy inspired by the brain relies on “synapses”—the biological “dock” connecting neurons—that compute and store information at the same location. These brain-inspired, or neuromorphic, chips slash energy consumption and speed up calculations. But like current chips, they rely on electricity.

Another idea is to use a different computing mechanism altogether: light. “Photonic computing” is “attracting ever-growing attention,” wrote the authors. Rather than using electricity, it may be possible to hijack light particles to power AI at the speed of light.

Let There Be Light

Compared to electricity-based chips, light uses far less power and can simultaneously tackle multiple calculations. Tapping into these properties, scientists have built optical neural networks that use photons—particles of light—for AI chips, instead of electricity.

These chips can work two ways. In one, the chip scatters light signals into engineered channels that eventually combine the rays to solve a problem. This approach, called diffraction, packs artificial neurons closely together and minimizes energy costs. But such optical neural networks can’t be easily changed, meaning they can only work on a single, simple problem.

A different setup depends on another property of light called interference. Like ocean waves, light waves combine and cancel each other out. When inside micro-tunnels on a chip, they can collide to boost or inhibit each other—these interference patterns can be used for calculations. Chips based on interference can be easily reconfigured using a device called an interferometer. Problem is, they’re physically bulky and consume tons of energy.

Then there’s the problem of accuracy. Even in the sculpted channels often used for interference experiments, light bounces and scatters, making calculations unreliable. For a single optical neural network, the errors are tolerable. But with larger optical networks and more sophisticated problems, noise rises exponentially and becomes untenable.

This is why light-based neural networks can’t be easily scaled up. So far, they’ve only been able to solve basic tasks, such as recognizing numbers or vowels.

“Magnifying the scale of existing architectures would not proportionally improve the performances,” wrote the team.

Double Trouble

The new chip, Taichi, combined the two approaches to push optical neural networks towards real-world use.

Rather than configuring a single neural network, the team used a chiplet method, which delegated different parts of a task to multiple functional blocks. Each block had its own strengths: One was set up to analyze diffraction, which could compress large amounts of data in a short period of time. Another block was embedded with interferometers to provide interference, allowing the chip to be easily reconfigured between tasks.

Compared to deep learning, Taichi took a “shallow” approach whereby the task is spread across multiple chiplets.

With standard deep learning structures, errors tend to accumulate over layers and time. This setup nips problems that come from sequential processing in the bud. When faced with a problem, Taichi distributes the workload across multiple independent clusters, making it easier to tackle larger problems with minimal errors.
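For readers who think in code, here is a minimal NumPy sketch of that general idea: several independent shallow blocks process the same input in parallel, and their partial results are integrated into one answer rather than chained through a deep sequential stack. It is a conceptual illustration only; the sizes, weights, and combination rule below are made up and bear no relation to Taichi’s actual photonic hardware or software.

import numpy as np

rng = np.random.default_rng(0)

def shallow_block(x, w):
    # One "chiplet": a single shallow layer rather than a deep stack.
    return np.tanh(x @ w)

# Toy setup: a 64-dimensional input, 4 independent blocks, 10 output classes.
x = rng.normal(size=(1, 64))
blocks = [rng.normal(scale=0.1, size=(64, 10)) for _ in range(4)]

# Each block computes its partial result independently (in parallel, in
# principle); the partial results are then summed into a single prediction
# instead of being fed through a long sequential chain where errors could
# accumulate.
partials = [shallow_block(x, w) for w in blocks]
combined = np.sum(partials, axis=0)
print("predicted class:", int(np.argmax(combined)))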

The strategy paid off.

Taichi has the computational capacity of 4,256 total artificial neurons, with nearly 14 million parameters mimicking the brain connections that encode learning and memory. When sorting images into 1,000 categories, the photonic chip was nearly 92 percent accurate, comparable to “currently popular electronic neural networks,” wrote the team.

The chip also excelled in other standard AI image-recognition tests, such as identifying hand-written characters from different alphabets.

As a final test, the team challenged the photonic AI to grasp and recreate content in the style of different artists and musicians. When trained with Bach’s repertoire, the AI eventually learned the pitch and overall style of the musician. Similarly, images from van Gogh or Edvard Munch—the artist behind the famous painting, The Scream—fed into the AI allowed it to generate images in a similar style, although many looked like a toddler’s recreation.

Optical neural networks still have much further to go. But if used broadly, they could be a more energy-efficient alternative to current AI systems. Taichi is over 100 times more energy efficient than previous iterations. But the chip still requires lasers for power and data transfer units, which are hard to condense.

Next, the team is hoping to integrate readily available mini lasers and other components into a single, cohesive photonic chip. Meanwhile, they hope Taichi will “accelerate the development of more powerful optical solutions” that could eventually lead to “a new era” of powerful and energy-efficient AI.

Image Credit: spainter_vfx / Shutterstock.com

This Week’s Awesome Tech Stories From Around the Web (Through April 13)

Singularity Hub - 13 April 2024 - 16:00
ROBOTICS

Is Robotics About to Have Its Own ChatGPT Moment?
Melissa Heikkilä | MIT Technology Review
“For decades, roboticists have more or less focused on controlling robots’ ‘bodies’—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes.”

ARTIFICIAL INTELLIGENCE

Humans Forget. AI Assistants Will Remember Everything
Boone Ashworth | Wired
“Human brains, Gruber says, are really good at story retrieval, but not great at remembering details, like specific dates, names, or faces. He has been arguing for digital AI assistants that can analyze everything you do on your devices and index all those details for later reference.”

BIOTECH

The Effort to Make a Breakthrough Cancer Therapy Cheaper
Cassandra Willyard | MIT Technology Review
“CAR-T therapies are already showing promise beyond blood cancers. Earlier this year, researchers reported stunning results in 15 patients with lupus and other autoimmune diseases. CAR-T is also being tested as a treatment for solid tumors, heart disease, aging, HIV infection, and more. As the number of people eligible for CAR-T therapies increases, so will the pressure to reduce the cost.”

ETHICS

Students Are Likely Writing Millions of Papers With AI
Amanda Hoover | Wired
“A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in 20 percent of its content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing.”

SCIENCE

Physicists Capture First-Ever Image of an Electron Crystal
Isaac Schultz | Gizmodo
“Electrons are typically seen flitting around their atoms, but a team of physicists has now imaged the particles in a very different state: nestled together in a quantum phase called a Wigner crystal, without a nucleus at their core. The phase is named after Eugene Wigner, who predicted in 1934 that electrons would crystallize in a lattice when certain interactions between them are strong enough. The recent team used high-resolution scanning tunneling microscopy to directly image the predicted crystal.”

GADGETS

Review: Humane Ai Pin
Julian Chokkattu | Wired
“Humane has potential with the Ai Pin. I like being able to access an assistant so quickly, but right now, there’s nothing here that makes me want to use it over my smartphone. Humane says this is just version 1.0 and that many of the missing features I’ve mentioned will arrive later. I’ll be happy to give it another go then.”

SPACE

The Moon Likely Turned Itself Inside Out 4.2 Billion Years Ago
Passant Rabie | Gizmodo
“A team of researchers from the University of Arizona found new evidence that supports one of the wildest formation theories for the moon, which suggests that Earth’s natural satellite may have turned itself inside out a few million years after it came to be. In a new study published Monday in Nature Geoscience, the researchers looked at subtle variations in the moon’s gravitational field to provide the first physical evidence of a sinking mineral-rich layer.”

TECH

How Tech Giants Cut Corners to Harvest Data for AI
Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson, and Nico Grant | The New York Times
“The race to lead AI has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times.”

ENERGY

Artificial Intelligence’s ‘Insatiable’ Energy Needs Not Sustainable, Arm CEO Says
Peter Landers | The Wall Street Journal
“In a January report, the International Energy Agency said a request to ChatGPT requires 2.9 watt-hours of electricity on average—equivalent to turning on a 60-watt lightbulb for just under three minutes. That is nearly 10 times as much as the average Google search. The agency said power demand by the AI industry is expected to grow by at least 10 times between 2023 and 2026.”

FUTURE

Someday, Earth Will Have a Final Total Solar Eclipse
Katherine Kornei | The New York Times
“The total solar eclipse visible on Monday over parts of Mexico, the United States and Canada was a perfect confluence of the sun and the moon in the sky. But it’s also the kind of event that comes with an expiration date: At some point in the distant future, Earth will experience its last total solar eclipse. That’s because the moon is drifting away from Earth, so our nearest celestial neighbor will one day, millions or even billions of years in the future, appear too small in the sky to completely obscure the sun.”

Image Credit: Tim Foster / Unsplash

Elon Musk Doubles Down on Mars Dreams and Details What’s Next for SpaceX’s Starship

Singularity Hub - 12 April 2024 - 21:13

Elon Musk has long been open about his dreams of using SpaceX to spread humanity’s presence further into the solar system. And last weekend, he gave an updated outline of his vision for how the company’s rockets could enable the colonization of Mars.

The serial entrepreneur has been clear for a number of years that the main motivation for founding SpaceX was to make humans a multiplanetary species. For a long time, that seemed like the kind of aspirational goal one might set to inspire and motivate engineers rather than one with a realistic chance of coming to fruition.

But following the successful launch of the company’s mammoth Starship vehicle last month, the idea is beginning to look less far-fetched. And in a speech at the company’s facilities in South Texas, Musk explained how he envisions using Starship to deliver millions of tons of cargo to Mars over the next couple of decades to create a self-sustaining civilization.

“Starship is the first design of a rocket that is actually capable of making life multiplanetary,” Musk said. “No rocket before this has had the potential to extend life to another planet.”

In a slightly rambling opening to the speech, Musk explained that making humans multiplanetary could be an essential insurance policy in case anything catastrophic happens to Earth. The red planet is the most obvious choice, he said, as it’s neither too close nor too far from Earth and has many of the raw ingredients required to support a functioning settlement.

But he estimates it will require us to deliver several million tons of cargo to the surface to get that civilization up and running. Starship is central to those plans, and Musk outlined the company’s roadmap for the massive rocket over the coming years.

Key to the vision is making the vehicle entirely reusable. That means the first hurdle is proving SpaceX can land and reuse both the Super Heavy first stage rocket and the Starship spacecraft itself. The second of those challenges will be tougher, as the vehicle must survive reentry to the atmosphere—in the most recent test, it broke up on its way back to Earth.

Musk says they plan to demonstrate the ability to land and reuse the Super Heavy booster this year, which he thinks has an 80 to 90 percent chance of success. Assuming they can get Starship to survive the extreme heat of reentry, they are also going to attempt landing the vehicle on a mock launch pad out at sea in 2024, with the aim of being able to land and reuse it by next year.

Proving the rocket works and is reusable is just the very first step in Musk’s Mars ambitions though. To achieve his goal of delivering a million people to the red planet in the next 20 years, SpaceX will have to massively ramp up its production and launch capabilities.

The company is currently building a second launch tower at its base in South Texas and is also planning to build two more at Cape Canaveral in Florida. Musk said the Texas sites would be mostly used for test launches and development work, with the Florida ones being the main hub for launches once Starship begins commercial operations.

SpaceX plans to build six Starships this year, according to Musk, but it is also building what he called a “giant factory” that will enable it to massively ramp up production of the spacecraft. The long-term goal is to produce multiple Starships a day. That’s crucial, according to Musk, because Starships initially won’t return from Mars and will instead be used as raw materials to construct structures on the surface.

The company also plans to continue development of Starship, boosting its carrying capacity from around 100 tons today to 200 tons in the future and enabling it to complete multiple launches in a day. SpaceX also hopes to demonstrate ship-to-ship refueling in orbit next year. It will be necessary to replenish the fuel used up by Starship on launch so it has a full tank as it sets off for Mars.

Those missions will depart when the orbits of Earth and Mars bring them close together, an alignment that only happens every 26 months. As such, Musk envisions entire armadas of Starships setting off together whenever these windows arrive.
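The 26-month cadence is the synodic period of Mars, which follows directly from the two planets’ orbital periods of roughly 365 and 687 days:

\[
\frac{1}{S} = \frac{1}{P_{\text{Earth}}} - \frac{1}{P_{\text{Mars}}}
= \frac{1}{365.25\ \text{d}} - \frac{1}{687\ \text{d}}
\;\Longrightarrow\;
S \approx 780\ \text{d} \approx 26\ \text{months}.
\]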

SpaceX has done some early work on what needs to happen once Starships arrive at the red planet. They’ve identified promising landing sites and the infrastructure that will need setting up, including power generation, ice-mining facilities, propellant factories, and communication networks. But Musk admits they’ve yet to start development of any of these.

One glaring omission in the talk was any detail on who’s going to be paying for all of this. While the goal of making humankind multiplanetary is a noble one, it’s far from clear how the endeavor would make money for those who put up the funds to make it possible.

Musk estimates that the cost of each launch could eventually fall to just $2 to $3 million. And he noted that profits from the company’s Starlink satellites and Falcon 9 launch vehicle are currently paying for Starship’s development. But those revenue streams are unlikely to cover the thousands of launches a year required to make his Mars dreams a reality.

Still, the very fact that the questions these days are more about economics than technical feasibility is testament to the rapid progress SpaceX has made. The dream of becoming a multiplanetary species may not be science fiction for much longer.

Image Credit: SpaceX

This Company Is Growing Mini Livers Inside People to Fight Liver Disease

Singularity Hub - 11 April 2024 - 23:10

Growing a substitute liver inside a human body sounds like science fiction.

Yet a patient with severe liver damage just received an injection that could grow an additional “mini liver” directly inside their body. If all goes well, it’ll take up the failing liver’s job of filtering toxins from the blood.

For people with end-stage liver disease, a transplant is the only solution. But matching donor organs are hard to come by. Across the globe, two million people die from liver failure each year.

The new treatment, helmed by biotechnology company LyGenesis, offers an unusual solution. Rather than transplanting a whole new liver, the team is injecting healthy donor liver cells into lymph nodes in the patient’s upper abdomen. In a few months, it’s hoped the cells will gradually replicate and grow into a functional miniature liver.

The patient is part of a Phase 2a clinical trial, a stage that begins to gauge whether a therapy is effective. In up to 12 people with end-stage liver disease, the trial will test multiple doses to find the “Goldilocks” zone of treatment—effective with minimal side effects.

If successful, the therapy could sidestep the transplant organ shortage problem, not just for liver disease, but potentially also for kidney failure or diabetes. The math also works in favor of patients. Instead of one donor organ per recipient, healthy cells from one person could help multiple people in need of new organs.

A Living Bioreactor

Most of us don’t think about lymph nodes until we catch a cold, and they swell up painfully under the chin. These structures are dotted throughout the body. Like tiny cellular nurseries, they help immune cells proliferate to fend off invading viruses and bacteria.

They also have a dark side. Lymph nodes aid the spread of breast and other types of cancers. Because they’re highly connected to a highway of lymphatic vessels, cancer cells tunnel into them and take advantage of nutrients in the blood to grow and spread across the body.

What seems like a biological downfall may benefit regenerative medicine. If lymph nodes can support both immune cells and cancer growth, they may also be able to incubate other cell types and grow them into tissues—or even replacement organs.

The idea diverges from usual regenerative therapies, such as stem cell treatments, which aim to revive damaged tissues at the spot of injury. This is a hard ask: When organs fail, they often scar and spew out toxic chemicals that prevent engrafted cells from growing.

Lymph nodes offer a way to skip these cellular cesspools entirely.

Growing organs inside lymph nodes may sound far-fetched, but over a decade ago, LyGenesis’ chief scientific officer and co-founder, Dr. Eric Lagasse, showed it was possible in mice. In one test, his team injected liver cells directly into a lymph node inside a mouse’s belly. They found the grafted cells stayed in the “nursery,” rather than roaming the body and causing unexpected side effects.

In a mouse model of lethal liver failure, an infusion of healthy liver cells into the lymph node grew into a mini liver in just twelve weeks. The transplanted cells took over their host, developing into cube-like cells characteristic of normal liver cells and leaving behind just a sliver of normal lymph node cells.

The graft could support immune system growth and grew cells to shuttle bile and other digestive chemicals. It also extended the mice’s survival: Without treatment, most mice died within 10 weeks of the start of the study, while most mice injected with liver cells survived past 30 weeks.

A similar strategy worked in dogs and pigs with damaged livers. Injecting donor cells into lymph nodes formed mini livers in less than two months in pigs. Under the microscope, the newborn structures resembled the liver’s intricate architecture, including “highways” for bile to easily flow along instead of accumulating, which causes even more damage and scarring.

The body has over 500 lymph nodes. Injecting cells into lymph nodes elsewhere in the body also grew mini livers, but they weren’t as effective.

“It’s all about location, location, location,” said Lagasse at the time.

A Daring Trial

With prior experience guiding their clinical trial, LyGenesis dosed a first patient in late March.

The team used a technique called endoscopic ultrasound to direct the cells into the designated lymph node. In the procedure, a thin, flexible tube with a small ultrasound device is inserted through the mouth into the digestive tract. The ultrasound generates an image of the surrounding tissue and helps guide the tube to the target lymph node for injection.

The procedure may sound difficult, but compared to a liver transplant, it’s minimally invasive. In an interview with Nature, Dr. Michael Hufford, CEO of LyGenesis, said the patient is recovering well and has already been discharged from the clinic.

The company aims to enroll all 12 patients by mid 2025 to test the therapy’s safety and efficacy.

Many questions remain. The transplanted cells could grow into mini livers of different sizes, based on chemical signals from the body. Although not a problem in mice and pigs, could they potentially overgrow in humans? Meanwhile, patients receiving the treatment will need to take a hefty dose of medications to suppress their immune systems. How these will interact with the transplants is also unknown.

Another question is dosage. Lymph nodes are plentiful. The trial will inject liver cells into up to five lymph nodes to see if multiple mini livers can grow and function without side effects.

If successful, the therapy has wider reach.

In diabetic mice, seeding lymph nodes with pancreatic cellular clusters restored their blood sugar levels. A similar strategy could combat Type 1 diabetes in humans. The company is also looking into whether the technology can revive kidney function or even combat aging.

But for now, Hufford is focused on helping millions of people with liver damage. “This therapy will potentially be a remarkable regenerative medicine milestone by helping patients with ESLD [end-stage liver disease] grow new functional ectopic livers in their own body,” he said.

Image Credit: A solution with liver cells in suspension / LyGenesis

Kategorie: Transhumanismus

Harvard’s New Programmable Liquid Shifts Its Properties on Demand

Singularity HUB - 11 Duben, 2024 - 00:37

We’re surrounded by ingenious substances: a menu of metal alloys that can wrap up leftovers or skin rockets, paints in any color imaginable, and ever-morphing digital displays. Virtually all of these exploit the natural properties of the underlying materials.

But an emerging class of materials is more versatile, even programmable.

Known as metamaterials, these substances are meticulously engineered such that their structural makeup—as opposed to their composition—determines their properties. Some metamaterials might make long-distance wireless power transfer practical, while others could bring “invisibility cloaks” or futuristic materials that respond to brainwaves.

But most examples are solid metamaterials—a Harvard team wondered if they could make a metafluid. As it turns out, yes, absolutely. The team recently described their results in Nature.

“Unlike solid metamaterials, metafluids have the unique ability to flow and adapt to the shape of their container,” Katia Bertoldi, a professor in applied mechanics at Harvard and senior author of the paper, said in a press release. “Our goal was to create a metafluid that not only possesses these remarkable attributes but also provides a platform for programmable viscosity, compressibility, and optical properties.”

The team’s metafluid is made up of hundreds of thousands of tiny, stretchy spheres—each between 50 and 500 microns across—suspended in oil. The spheres change shape depending on the pressure of the surrounding oil. At higher pressures, they deform, one hemisphere collapsing inward into a kind of half moon shape. They then resume their original spherical shape when the pressure is relieved.

The metafluid’s properties—such as viscosity or opacity—change depending on which of these shapes its constituent spheres assume. The fluid’s properties can be fine-tuned based on how many spheres are in the liquid and how big or thick they are.

Greater pressure causes the spheres to collapse. When the pressure is relieved, they resume their spherical shape. Credit: Adel Djellouli/Harvard SEAS

As a proof of concept, the team filled a hydraulic robotic gripper with their metafluid. Robots usually have to be programmed to sense objects and adjust grip strength. The team showed the gripper could automatically adapt to a blueberry, a glass, and an egg without additional sensing or programming required. The pressure of each object “programmed” the liquid to adjust, allowing the gripper to pick up all three, undamaged, with ease.

The team also showed the metafluid could switch from opaque, when its constituents were spherical, to more transparent, when they collapsed. The latter shape, the researchers said, functions like a lens focusing light, while the former scatters light.

The metafluid obscures the Harvard logo then becomes more transparent as the capsules collapse. Credit: Adel Djellouli/Harvard SEAS

Also of note, the metafluid behaves like a Newtonian fluid when its components are spherical, meaning its viscosity only changes with shifts in temperature. When they collapse, however, it becomes a non-Newtonian fluid, where its viscosity changes depending on the shear forces present. The greater the shear force—that is, parallel forces pushing in opposite directions—the more easily the metafluid flows.
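
One common way to picture that distinction is a power-law viscosity model from textbook rheology; the sketch below uses made-up parameters purely for illustration and is not fitted to the Harvard team’s measurements:

```python
# Illustrative power-law (Ostwald-de Waele) viscosity model; parameters are invented, not from the paper.
def apparent_viscosity(shear_rate, consistency_k, flow_index_n):
    """eta = K * gamma_dot**(n - 1): n = 1 is Newtonian, n < 1 is shear-thinning."""
    return consistency_k * shear_rate ** (flow_index_n - 1)

for shear_rate in (0.1, 1.0, 10.0, 100.0):
    newtonian = apparent_viscosity(shear_rate, consistency_k=1.0, flow_index_n=1.0)
    shear_thinning = apparent_viscosity(shear_rate, consistency_k=1.0, flow_index_n=0.5)
    # The Newtonian viscosity stays flat; the shear-thinning one drops as shear rate rises.
    print(f"shear rate {shear_rate:6.1f} 1/s -> Newtonian: {newtonian:.2f}, shear-thinning: {shear_thinning:.2f}")
```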

Next, the team will investigate additional properties—such as how their creation’s acoustics and thermodynamics change with pressure—and look into commercialization. Making the elastic spheres themselves is fairly straightforward, and they think metafluids like theirs might be useful in robots, as “intelligent” shock absorbers, or in color-changing e-inks.

“The application space for these scalable, easy-to-produce metafluids is huge,” said Bertoldi.

Of course, the team’s creation is still in the research phase. There are plenty of hoops yet to navigate before it shows up in products we all might enjoy. Still, the work adds to a growing list of metamaterials—and shows the promise of going from solid to liquid.

Image Credit: Adel Djellouli/Harvard SEAS

Kategorie: Transhumanismus

3 Body Problem: Is the Universe Really a ‘Dark Forest’ Full of Hostile Aliens in Hiding?

Singularity HUB - 9 Duben, 2024 - 20:17

We have no good reason to believe that aliens have ever contacted Earth. Sure, there are conspiracy theories, and some rather strange reports about harm to cattle, but nothing credible. Physicist Enrico Fermi found this odd. His formulation of the puzzle, proposed in the 1950s and now known as the Fermi Paradox, is still key to the search for extraterrestrial intelligence (SETI) and messaging extraterrestrial intelligence (METI) by sending signals into space.

The Earth is about 4.5 billion years old, and life is at least 3.5 billion years old. The paradox states that, given the scale of the universe, favorable conditions for life are likely to have occurred many, many times. So where is everyone? We have good reasons to believe that there must be life out there, but nobody has come to call.

This is an issue that the character Ye Wenjie wrestles with in the first episode of Netflix’s 3 Body Problem. Working at a radio observatory, she does finally receive a message from a member of an alien civilization—telling her they are a pacifist and urging her not to respond to the message or Earth will be attacked.

The series will ultimately offer a detailed, elegant solution to the Fermi Paradox, but we will have to wait until the second season.

Or you can read the second book in Cixin Liu’s series, The Dark Forest. Without spoilers, the explanation set out in the books runs as follows: “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound.”

Ultimately, everybody is hiding from everyone else. Differential rates of technological progress make an ongoing balance of power impossible, leaving the most rapidly progressing civilizations in a position to wipe out anyone else.

In this ever-threatening environment, those who play the survival game best are the ones who survive longest. We have joined a game which has been going on before our arrival, and the strategy that everyone has learned is to hide. Nobody who knows the game will be foolish enough to contact anyone—or to respond to a message.

Liu has depicted what he calls “the worst of all possible universes,” continuing a trend within Chinese science fiction. He is not saying that our universe is an actual dark forest, with one survival strategy of silence and predation prevailing everywhere, but that such a universe is possible and interesting.

Liu’s dark forest theory is also sufficiently plausible to have reinforced a trend in the scientific discussion in the west—away from worries about mutual incomprehensibility, and towards concerns about direct threat.

We can see its potential influence in the protocol for what to do on first contact that was proposed in 2020 by the prominent astrobiologists Kelly Smith and John Traphagan. “First, do nothing,” they conclude, because doing something could lead to disaster.

In the case of alien contact, Earth should be notified using pre-established signaling rather than anything improvised, they argue. And we should avoid doing anything that might disclose information about who we are. Defensive behavior would show our familiarity with conflict, so that would not be a good idea. Returning messages would give away the location of Earth—also a bad idea.

Again, Smith and Traphagan’s point is not that the dark forest theory is correct. Benevolent aliens really could be out there. The point is simply that first contact would involve a high civilization-level risk.

This differs from the assumptions of much Soviet-era Russian literature about space, which suggested that advanced civilizations would necessarily have progressed beyond conflict and would therefore share a comradely attitude. This no longer seems to be regarded as a plausible guide to protocols for contact.

Misinterpreting Darwin

The interesting thing is that the dark forest theory is almost certainly wrong. Or at least, it is wrong in our universe. It sets up a scenario in which there is a Darwinian process of natural selection, a competition for survival.

Charles Darwin’s account of competition for survival is evidence-based. By contrast, we have absolutely no evidence about alien behavior, or about competition within or between other civilizations. This makes for entertaining guesswork rather than good science, even if we accept the idea that natural selection could operate at group level, at the level of civilizations.

Even if you were to assume the universe did operate in accordance with Darwinian evolution, the argument is questionable. No actual forest is like the dark one. Real forests are noisy places where co-evolution occurs.

Creatures evolve together, in mutual interdependence, and not alone. Parasites depend upon hosts, flowers depend upon birds for pollination. Every creature in a forest depends upon insects. Mutual connection does lead to encounters which are nasty, brutish and short, but it also takes other forms. That is how forests in our world work.

Interestingly, Liu acknowledges this interdependence as a counterpoint to the dark forest theory. The viewer, and the reader, are told repeatedly that “in nature, nothing exists alone”—a quote from Rachel Carson’s Silent Spring (1962). This is a text which tells us that bugs can be our friends and not our enemies.

There are many galaxies out there, and potentially plenty of life. Image Credit: X-ray: NASA/CXC/SAO

In Liu’s story, this is used to explain why some humans immediately go over to the side of the aliens, and why the urge to make contact is so strong, in spite of all the risks. Ye Wenjie ultimately replies to the alien warning.

The Carson allusions do not reinstate the old Russian idea that aliens will be advanced and therefore comradely. But they do help to paint a more varied and realistic picture than the dark forest theory.

For this reason, the dark forest solution to the Fermi Paradox is unconvincing. The fact that we do not hear anyone is just as likely to indicate that they are too far off, or we are listening in all the wrong ways, or else that there is no forest and nothing else to be heard.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESO/A. Ghizzi Panizza (www.albertoghizzipanizza.com)

Kategorie: Transhumanismus

Your Brain Breaks Its Own DNA to Form Memories That Can Last a Lifetime

Singularity HUB - 8 Duben, 2024 - 21:55

Some memories last a lifetime. The awe of seeing a full solar eclipse. The first smile you shared with your partner. The glimpse of a beloved pet who just passed away in their sleep.

Other memories, not so much. Few of us remember what we had for lunch a week ago. Why do some memories last, while others fade away?

Surprisingly, the answer may be broken DNA and inflammation in the brain. On the surface, these processes sound utterly detrimental to brain function. Broken DNA strands are usually associated with cancer, and inflammation is linked to aging.

But a new study in mice suggests that breaking and repairing DNA in neurons paves the way for long-lasting memories.

We form memories when electrical signals zap through neurons in the hippocampus, a seahorse-shaped region deep inside the brain. The electrical pulses wire groups of neurons together into networks that encode memories. The signals only capture brief snippets of a treasured experience, yet some can be replayed over and over for decades (although they do gradually decay like a broken record).

Scientists have long thought that, like the artificial neural networks powering most of today’s AI, the brain rewires its connections quickly and that these connections are prone to change. But the new study found a subset of neurons that alter their connections to encode long-lasting memories.

To do this, strangely, the neurons recruit proteins that normally fend off bacteria and cause inflammation.

“Inflammation of brain neurons is usually considered to be a bad thing, since it can lead to neurological problems such as Alzheimer’s and Parkinson’s disease,” said study author Dr. Jelena Radulovic at Albert Einstein College of Medicine in a press release. “But our findings suggest that inflammation in certain neurons in the brain’s hippocampal region is essential for making long-lasting memories.”

Should I Stay or Should I Go?

We all have a mental scrapbook for our lives. When playing a memory—the whens, wheres, whos, and whats—our minds transport us through time to relive the experience.

The hippocampus is at the heart of this ability. In the 1950s, a man known as H.M. had his hippocampus removed to treat epilepsy. After the surgery, he retained old memories, but could no longer form new ones, suggesting that the brain region is a hotspot for encoding memories.

But what does DNA have to do with the hippocampus or memory?

It comes down to how brain cells are wired. Neurons connect with each other through little bumps called synapses. Like docks between two opposing shores, synapses pump out chemicals to transmit messages from one neuron to another. Depending on the signals, synapses can form a strong connection to their neighboring neurons, or they can dial down communications.

This ability to rewire the brain is called synaptic plasticity. Scientists have long thought it’s the basis of memory. When learning something new, electrical signals flow through neurons triggering a cascade of molecules. These stimulate genes that restructure the synapse to either bump up or decrease their connection with neighbors. In the hippocampus, this “dial” can rapidly change overall neural network wiring to record new memories.

Synaptic plasticity comes at a cost. Synapses are made up of a collection of proteins produced from DNA inside cells. With new learning, electrical signals from neurons cause temporary snips to DNA inside neurons.

DNA damage isn’t always detrimental. It’s been associated with memory formation since 2021. One study found breakage of our genetic material is widespread in the brain and was surprisingly linked to better memory in mice. After learning a task, mice had more DNA breaks in multiple types of brain cells, hinting that the temporary damage may be part of the brain’s learning and memory process.

But the results were only for brief memories. Do similar mechanisms also drive long-term ones?

“What enables brief experiences, encoded over just seconds, to be replayed again and again during a lifetime remains a mystery,” Drs. Benjamin Kelvington and Ted Abel at the Iowa Neuroscience Institute, who were not involved in the work, wrote in Nature.

The Memory Omelet

To find an answer, the team used a standard method for assessing memory. They hosted mice in different chambers: Some were comfortable; others gave the critters a tiny electrical zap to the paws, just enough that they disliked the habitat. The mice rapidly learned to prefer the comfortable room.

The team then compared gene expression from mice with a recent memory—roughly four days after the test—to those nearly a month after the stay.

Surprisingly, genes involved in inflammation flared up in addition to those normally associated with synaptic plasticity. Digging deeper, the team found a protein called TLR9. Usually known as part of the body’s first line of defense against dangerous bacteria, TLR9 boosts the body’s immune response against DNA fragments from invading bacteria. Here, however, the gene became highly active in neurons inside the hippocampus—especially those with persistent DNA breaks that last for days.

What does it do? In one test, the team deleted the gene encoding TLR9 in the hippocampus. When challenged with the chamber test, these mice struggled to remember the “dangerous” chamber in a long-term memory test compared to peers with the gene intact.

Interestingly, the team found that TLR9 could sense DNA breakage. Deleting the gene prevented mouse cells from recognizing DNA breaks, causing not just loss of long-term memory, but also overall genomic instability in their neurons.

“One of the most important contributions of this study is the insight into the connection between DNA damage…and the persistent cellular changes associated with long-term memory,” wrote Kelvington and Abel.

Memory Mystery

How long-term memories persist remains a mystery. Immune responses are likely just one aspect.

In 2021, the same team found that net-like structures around neurons are crucial for long-term memory. The new study pinpointed TLR9 as a protein that helps form these structures, providing a molecular link between the different brain components that support lasting memories.

The results suggest “we are using our own DNA as a signaling system,” Radulovic told Nature, so that we can “retain information over a long time.”

Lots of questions remain. Does DNA damage predispose certain neurons to the formation of memory-encoding networks? And perhaps more pressing, inflammation is often associated with neurodegenerative disorders, such as Alzheimer’s disease. TLR9, which helped the mice remember dangerous chambers in this study, was previously involved in triggering dementia when expressed in microglia, the brain’s immune cells.

“How is it that, in neurons, activation of TLR9 is crucial for memory formation, whereas, in microglia, it produces neurodegeneration—the antithesis of memory?” asked Kelvington and Abel. “What separates detrimental DNA damage and inflammation from that which is essential for memory?”

Image Credit: geralt / Pixabay

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through April 6)

Singularity HUB - 6 Duben, 2024 - 16:00
COMPUTING

To Build a Better AI Supercomputer, Let There Be Light
Will Knight | Wired
“Lightmatter wants to directly connect hundreds of thousands or even millions of GPUs—those silicon chips that are crucial to AI training—using optical links. Reducing the conversion bottleneck should allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers of extraordinary scale.”

ROBOTICS

Apple Has Been Secretly Building Home Robots That Could End up as a New Product Line, Report Says
Aaron Mok | Business Insider
“Apple is in the early stages of looking into making home robots, a move that appears to be an effort to create its ‘next big thing’ after it killed its self-driving car project earlier this year, sources familiar with the matter told Bloomberg. Engineers are looking into developing a robot that could follow users around their houses, Bloomberg reported. They’re also exploring a tabletop at-home device that uses robotics to rotate the display, a more advanced project than the mobile robot.”

SPACE

A Tantalizing ‘Hint’ That Astronomers Got Dark Energy All Wrong
Dennis Overbye | The New York Times
“On Thursday, astronomers who are conducting what they describe as the biggest and most precise survey yet of the history of the universe announced that they might have discovered a major flaw in their understanding of dark energy, the mysterious force that is speeding up the expansion of the cosmos. Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away.”

COMPUTING

How ASML Took Over the Chipmaking Chessboard
Mat Honan and James O’Donnell | MIT Technology Review
“When asked what he thought might eventually cause Moore’s Law to finally stall out, van den Brink rejected the premise entirely. ‘There’s no reason to believe this will stop. You won’t get the answer from me where it will end,’ he said. ‘It will end when we’re running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.'”

TRANSPORTATION

The Very First Jet Suit Grand Prix Takes Off in Dubai
Mike Hanlon | New Atlas
“A new sport kicked away this month when the first ever jet-suit race was held in Dubai. Each racer wore an array of seven 130-hp jet engines (two on each arm and three in the backpack for a total 1,050 hp) that are controlled by hand-throttles. After that, the pilots use the three thrust vectors to gain lift, move forward and try to stay above ground level while negotiating the course…faster than anyone else.”

ROBOTICS

Toyota’s Bubble-ized Humanoid Grasps With Its Whole Body
Evan Ackerman | IEEE Spectrum
“Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as is pointed out in the video above, using just our hands outstretched in front of us to lift things is not how humans do it, because using other parts of our bodies to provide extra support makes lifting easier.”

FUTURE

‘A Brief History of the Future’ Offers a Hopeful Antidote to Cynical Tech Takes
Devin Coldewey | TechCrunch
“The future, he said, isn’t just what a Silicon Valley publicist tells you, or what ‘Big Dystopia’ warns you of, or even what a TechCrunch writer predicts. In the six-episode series, he talks with dozens of individuals, companies and communities about how they’re working to improve and secure a future they may never see. From mushroom leather to ocean cleanup to death doulas, Wallach finds people who see the same scary future we do but are choosing to do something about it, even if that thing seems hopelessly small or naïve.”

TECH

This AI Startup Wants You to Talk to Houses, Cars, and Factories
Steven Levy | Wired
“We’ve all been astonished at how chatbots seem to understand the world. But what if they were truly connected to the real world? What if the dataset behind the chat interface was physical reality itself, captured in real time by interpreting the input of billions of sensors sprinkled around the globe? That’s the idea behind Archetype AI, an ambitious startup launching today. As cofounder and CEO Ivan Poupyrev puts it, ‘Think of ChatGPT, but for physical reality.'”

FUTURE

How One Tech Skeptic Decided AI Might Benefit the Middle Class
Steve Lohr | The New York Times
“David Autor seems an unlikely AI optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years. But Mr. Autor is now making the case that the new wave of technology—generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans’ voices and writing—could reverse that trend.”

Image Credit: Harole Ethan / Unsplash

Kategorie: Transhumanismus

Life’s Origins: How Fissures in Hot Rocks May Have Kickstarted Biochemistry

Singularity HUB - 5 Duben, 2024 - 21:17

How did the building blocks of life originate?

The question has long vexed scientists. Early Earth was dotted with pools of water rich in chemicals—a primordial soup. Yet biomolecules supporting life emerged from the mixtures, setting the stage for the appearance of the first cells.

Life was kickstarted when two components formed. One was a molecular carrier, such as DNA, to pass along and remix genetic blueprints. The other was made up of proteins, the workhorses and structural elements of the body.

Both biomolecules are highly complex. In humans, DNA has four different chemical “letters,” called nucleotides, whereas proteins are made of 20 types of amino acids. The components have distinct structures, and their creation requires slightly different chemistries. The building blocks also need to accumulate in large enough amounts to be strung together into DNA or proteins.

Scientists can purify the components in the lab using additives. But that raises the question: How did it happen on early Earth?

The answer, suggests Dr. Christof Mast, a researcher at Ludwig Maximilians University of Munich, may be cracks in rocks like those found in the volcanic and geothermal systems that were abundant on early Earth. It’s possible that temperature differences along the cracks naturally separate and concentrate biomolecule components, providing a passive system to purify them.

Inspired by geology, the team developed heat flow chambers roughly the size of a bank card, each containing minuscule fractures with a temperature gradient. When given a mixture of amino acids or nucleotides—a “prebiotic mix”—the components readily separated.

Adding more chambers further concentrated the chemicals, even those that were similar in structure. The network of fractures also enabled amino acids to bond, the first step towards creating a functional protein.

“Systems of interconnected thin fractures and cracks…are thought to be ubiquitous in volcanic and geothermal environments,” wrote the team. By enriching the prebiotic chemicals, such systems could have “provided a steady driving force for a natural origins-of-life laboratory.”

Brewing Life

Around four billion years ago, Earth was a hostile environment, pummeled by meteorites and rife with volcanic eruptions. Yet somehow among the chaos, chemistry generated the first amino acids, nucleotides, fatty lipids, and other building blocks that support life.

Which chemical processes contributed to these molecules is up for debate. When each came along is also a conundrum. Like a “chicken or egg” problem, DNA and RNA direct the creation of proteins in cells—but both genetic carriers also require proteins to replicate.

One theory suggests sulfidic anions, molecules that were abundant in early Earth’s lakes and rivers, could be the link. Generated in volcanic eruptions and dissolved into pools of water, they can speed up chemical reactions that convert prebiotic molecules into RNA. Dubbed the “RNA world” hypothesis, the idea suggests that RNA was the first biomolecule to grace Earth because it can carry genetic information and speed up some chemical reactions.

Another idea is that meteor impacts on early Earth generated nucleotides, lipids, and amino acids simultaneously, through a process that includes two abundant chemicals—one from meteors and another from Earth—and a dash of UV light.

But there’s one problem: Each set of building blocks requires a different chemical reaction. Depending on slight differences in structure or chemistry, it’s possible one geographic location might have skewed towards one type of prebiotic molecule over another.

How? The new study, published in Nature, offers an answer.

Tunnel Networks

Lab experiments mimicking early Earth usually start with well-defined ingredients that have already been purified. Scientists also clean up intermediate side-products, especially for multiple chemical reaction steps.

The process often results in “vanishingly small concentrations of the desired product,” or its creation can even be completely inhibited, wrote the team. The reactions also require multiple spatially separated chambers, which hardly resembles Earth’s natural environment.

The new study took inspiration from geology. Early Earth had complex networks of water-filled cracks found in a variety of rocks in volcanos and geothermal systems. The cracks, generated by overheating rocks, formed natural “straws” that could potentially filter a complex mix of molecules using a heat gradient.

Each molecule favors a preferred temperature based on its size and electrical charge. When exposed to different temperatures, it naturally moves towards its ideal pick. Called thermophoresis, the process separates a soup of ingredients into multiple distinct layers in one step.

The team mimicked a single thin rock fracture using a heat flow chamber. Roughly the size of a bank card, the chamber had tiny cracks 170 micrometers across, about the width of a human hair. To create a temperature gradient, one side of the chamber was heated to 104 degrees Fahrenheit and the other end chilled to 77 degrees Fahrenheit.

In a first test, the team added a mix of prebiotic compounds that included amino acids and DNA nucleotides into the chamber. After 18 hours, the components separated into layers like tiramisu. For example, glycine—the smallest of amino acids—became concentrated towards the top, whereas other amino acids with higher thermophoretic strength stuck to the bottom. Similarly, DNA letters and other life-sustaining chemicals also separated in the cracks, with some enriched by up to 45 percent.
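
The underlying physics can be sketched with the textbook steady state of thermophoresis, in which drift along the temperature gradient balances ordinary diffusion. The Soret coefficients below are arbitrary illustrative values, not measurements from the study:

```python
import math

# Toy Soret-equilibrium picture of thermophoresis; coefficients are assumed, not from the Nature paper.
# At steady state, concentration scales roughly as c ~ exp(-S_T * (T - T_cold)), so species with a
# larger Soret coefficient S_T accumulate more strongly at the cold side of the crack.
t_cold, t_hot = 25.0, 40.0  # degrees C, roughly the 77 F / 104 F gradient used in the chambers
assumed_soret = {"glycine (weak thermophoresis)": 0.02, "larger amino acid (strong thermophoresis)": 0.10}

for name, s_t in assumed_soret.items():
    cold_to_hot_ratio = math.exp(s_t * (t_hot - t_cold))
    print(f"{name}: ~{cold_to_hot_ratio:.1f}x more concentrated at the cold side")
```

Even modest differences in thermophoretic strength, sustained across a gradient, are enough to sort a mixed soup into layers.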

Although promising, the system didn’t resemble early Earth, which had highly interconnected cracks varying in size. To better mimic natural conditions, the team next strung up three chambers, with the first branching into two others. This was roughly 23 times more efficient at enriching prebiotic chemicals than a single chamber.

Using a computer simulation, the team then modeled the behavior of a 20-by-20 interlinked chamber system with a realistic flow rate of prebiotic chemicals. The chambers further enriched the brew, with glycine enriched more than 2,000-fold relative to other amino acids.
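
To see why linking chambers helps, you can treat each chamber as multiplying a target molecule’s relative concentration by a modest factor, so a deep network compounds the gain. The per-chamber factors below are invented for illustration; they are not the paper’s measured values:

```python
# Illustrative compounding of enrichment across a chain of connected fracture chambers.
# Per-chamber factors are assumptions, not measured values from the study.
def net_enrichment(per_chamber_factor, n_chambers):
    """If each chamber multiplies the target's relative concentration, gains compound geometrically."""
    return per_chamber_factor ** n_chambers

for factor in (1.2, 1.45):
    for n_chambers in (1, 3, 20):
        print(f"{factor}x per chamber, {n_chambers:2d} chambers deep -> ~{net_enrichment(factor, n_chambers):,.1f}x overall")
# A small per-chamber gain repeated across a 20-chamber-deep path reaches enrichment in the
# tens to thousands, the same ballpark as the >2,000-fold glycine enrichment in the simulation.
```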

Chemical Reactions

Cleaner ingredients are a great start for the formation of complex molecules. But many chemical reactions require additional chemicals, which also need to be enriched. Here, the team zeroed in on a reaction stitching two glycine molecules together.

At the heart of this reaction is trimetaphosphate (TMP), which helps guide it. TMP is especially interesting for prebiotic chemistry but was scarce on early Earth, explained the team, which “makes its selective enrichment critical.” A single chamber enriched TMP even when it was mixed with other chemicals.

In a computer simulation, combining enriched TMP and glycine increased the final product—two glycines bonded together—by five orders of magnitude.

“These results show that otherwise challenging prebiotic reactions are massively boosted” with heat flows that selectively enrich chemicals in different regions, wrote the team.

In all, they tested over 50 prebiotic molecules and found the fractures readily separated them. Because each crack can have a different mix of molecules, it could explain the rise of multiple life-sustaining building blocks.

Still, how life’s building blocks came together to form organisms remains mysterious. Heat flows and rock fissures are likely just one piece of the puzzle. The ultimate test will be to see if, and how, these purified prebiotics link up to form a cell.

Image Credit: Christof B. Mast

Kategorie: Transhumanismus

Quantum Computers Take a Major Step With Error Correction Breakthrough

Singularity HUB - 4 Duben, 2024 - 23:26

For quantum computers to go from research curiosities to practically useful devices, researchers need to get their errors under control. New research from Microsoft and Quantinuum has now taken a major step in that direction.

Today’s quantum computers are stuck firmly in the “noisy intermediate-scale quantum” (NISQ) era. While companies have had some success stringing large numbers of qubits together, they are highly susceptible to noise which can quickly degrade their quantum states. This makes it impossible to carry out computations with enough steps to be practically useful.

While some have claimed that these noisy devices could still be put to practical use, the consensus is that quantum error correction schemes will be vital for the full potential of the technology to be realized. But error correction is difficult in quantum computers because reading the quantum state of a qubit causes it to collapse.

Researchers have devised ways to get around this using error correction codes that spread each bit of quantum information across multiple physical qubits to create what is known as a logical qubit. This provides redundancy and makes it possible to detect and correct errors in the physical qubits without impacting the information in the logical qubit.
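
A classical three-bit repetition code gives the flavor of how this redundancy suppresses errors; it is a deliberately simplified stand-in for the stabilizer codes used on real quantum hardware, and the error rate below is arbitrary:

```python
import random

# Classical analogy only: real quantum error correction uses stabilizer codes and syndrome
# measurements rather than directly reading out the encoded state.
def noisy_copy(bit, p):
    """Flip a bit with probability p, mimicking a faulty physical qubit."""
    return bit ^ (random.random() < p)

def logical_readout(bit, p, copies=3):
    """Encode one logical bit in several noisy physical copies, then majority-vote."""
    votes = [noisy_copy(bit, p) for _ in range(copies)]
    return int(sum(votes) > copies // 2)

trials, physical_error_rate = 200_000, 0.01  # arbitrary illustrative values
logical_errors = sum(logical_readout(0, physical_error_rate) != 0 for _ in range(trials))
print(f"physical error rate: {physical_error_rate}")
print(f"logical error rate:  {logical_errors / trials:.5f}")  # around 3 * p**2, i.e. ~0.0003
```

Because two of the three copies must fail at once for the vote to go wrong, the logical error rate falls to roughly 3p², and larger codes push it down further.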

The challenge is that, until recently, it was assumed it would take roughly 1,000 physical qubits to create each logical qubit. Today’s largest quantum processors only have around that many physical qubits in total, suggesting that creating enough logical qubits for meaningful computations was still a distant goal.

That changed last year when researchers from Harvard and startup QuEra showed they could generate 48 logical qubits from just 280 physical ones. And now the collaboration between Microsoft and Quantinuum has gone a step further by showing that they can not only create logical qubits but can actually use them to suppress error rates by a factor of 800 and carry out more than 14,000 experimental routines without a single error.

“What we did here gives me goosebumps,” Microsoft’s Krysta Svore told New Scientist. “We have shown that error correction is repeatable, it is working, and it is reliable.”

The researchers were working with Quantinuum’s H2 quantum processor, which relies on trapped-ion technology and is relatively small at just 32 qubits. But by applying error correction codes developed by Microsoft, they were able to generate four logical qubits that only experienced an error every 100,000 runs.

One of the biggest achievements, the Microsoft team notes in a blog post, was the fact that they were able to diagnose and correct errors without destroying the logical qubits. This is thanks to an approach known as “active syndrome extraction” which is able to read information about the nature of the noise impacting qubits, rather than their state, Svore told IEEE Spectrum.
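
The parity-check version of the same toy repetition code hints at what syndrome extraction buys you: the checks reveal where an error sits without ever exposing the encoded value. This is only a loose classical analogy for the stabilizer measurements performed on Quantinuum’s hardware:

```python
# Loose classical analogy for syndrome extraction on a 3-bit repetition code.
def correct_single_flip(bits):
    """Use pairwise parities (the 'syndrome') to locate and undo one flipped bit
    without reading the encoded logical value directly."""
    b0, b1, b2 = bits
    s1, s2 = b0 ^ b1, b1 ^ b2   # syndrome bits: parities between neighboring copies
    if s1 and not s2:
        b0 ^= 1                 # syndrome (1, 0): the first bit flipped
    elif s1 and s2:
        b1 ^= 1                 # syndrome (1, 1): the middle bit flipped
    elif s2:
        b2 ^= 1                 # syndrome (0, 1): the last bit flipped
    return [b0, b1, b2]

print(correct_single_flip([1, 1, 1]))  # no error       -> [1, 1, 1]
print(correct_single_flip([1, 0, 1]))  # middle flipped -> [1, 1, 1]
```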

However, the error correction scheme had a shelf life. When the researchers carried out multiple operations on a logical qubit, followed by error correction, they found that by the second round the error rates were only half of those found in the physical qubits and by the third round there was no statistically significant impact.

And impressive as the results are, the Microsoft team points out in their blog post that creating truly powerful quantum computers will require logical qubits that make errors only once every 100 million operations.

Regardless, the result marks a massive jump in capabilities for error correction, which Quantinuum claimed in a press release represents the beginning of a new era in quantum computing. While that might be jumping the gun slightly, it certainly suggests that people’s timelines for when we will achieve fault-tolerant quantum computing may need to be updated.

Image Credit: Quantinuum H2 quantum computer / Quantinuum

Kategorie: Transhumanismus

Environmental DNA Is Everywhere. Scientists Are Gathering It All.

Singularity HUB - 2 Duben, 2024 - 21:18

In the late 1980s, at a federal research facility in Pensacola, Florida, Tamar Barkay used mud in a way that proved more revolutionary than she could have imagined at the time: a crude version of a technique that is now shaking up many scientific fields. Barkay had collected several samples of mud—one from an inland reservoir, another from a brackish bayou, and a third from a low-lying saltwater swamp. She put these sediment samples in glass bottles in the lab, and then added mercury, creating what amounted to toxic sludge.

At the time, Barkay worked for the Environmental Protection Agency and she wanted to know how microorganisms in mud interact with mercury, an industrial pollutant, which required an understanding of all the organisms in a given environment—not just the tiny portion that could be successfully grown in petri dishes in the lab. But the underlying question was so basic that it remains one of those fundamental driving queries across biology. As Barkay, who is now retired, put it in a recent interview from Boulder, Colorado: “Who is there?” And, just as important, she added: “What are they doing there?”

Such questions are still relevant today, asked by ecologists, public health officials, conservation biologists, forensic practitioners, and those studying evolution and ancient environments—and they drive shoe-leather epidemiologists and biologists to far-flung corners of the world.

The 1987 paper Barkay and her colleagues published in the Journal of Microbiological Methods outlined a method—“Direct Environmental DNA Extraction”—that would allow researchers to take a census. It was a practical tool, albeit a rather messy one, for detecting who was out there. Barkay used it for the rest of her career.

Today, the study gets cited as an early glimpse of eDNA, or environmental DNA, a relatively inexpensive, widespread, potentially automated way to observe the diversity and distribution of life. Unlike previous techniques, which could identify DNA from, say, a single organism, the method also collects the swirling cloud of other genetic material that surrounds it. In recent years, the field has grown significantly. “It’s got its own journal,” said Eske Willerslev, an evolutionary geneticist at the University of Copenhagen. “It’s got its own society, scientific society. It has become an established field.”


eDNA serves as a surveillance tool, offering researchers a means of detecting the seemingly undetectable. By sampling eDNA, or mixtures of genetic material—that is, fragments of DNA, the blueprint of life—in water, soil, ice cores, cotton swabs, or practically any environment imaginable, even thin air, it is now possible to search for a specific organism or assemble a snapshot of all the organisms in a given place. Instead of setting up a camera to see who crosses the beach at night, eDNA pulls that information out of footprints in the sand. “We’re all flaky, right?” said Robert Hanner, a biologist at the University of Guelph in Canada. “There’s bits of cellular debris sloughing off all the time.”

As a method for confirming the presence of something, eDNA isn’t failproof. For instance, the organism detected in eDNA might not actually live in the location where the sample was collected; Hanner gave the example of a passing bird, a heron, that ate a salamander and then pooped out some of its DNA, which could be one reason signals of the amphibian are present in some areas where they’ve never been physically found.

Still, eDNA has the ability to help sleuth out genetic traces, some of which slough off in the environment, offering a thrilling—and potentially chilling—way to collect information about organisms, including humans, as they go about their everyday business.

The conceptual basis for eDNA—pronounced EE-DEE-EN-AY, not ED-NUH—dates back a hundred years, before the advent of so-called molecular biology, and it is often attributed to Edmond Locard, a French criminologist working in the early 20th century. In a series of papers published in 1929, Locard proposed a principle: Every contact leaves a trace. In essence, eDNA brings Locard’s principle to the 21st century.

For the first several decades, the field that became eDNA—Barkay’s work in the 1980s included—focused largely on microbial life. Looking back at its evolution, eDNA appeared slow to claw its way out of the proverbial mud.

It wasn’t until 2003 that the method turned up a vanished ecosystem. Led by Willerslev, the 2003 study pulled ancient DNA from less than a teaspoon of sediment, demonstrating for the first time the feasibility of detecting larger organisms with the technique, including plants and woolly mammoths. In the same study, sediment collected in a New Zealand cave (which notably had not been frozen) revealed an extinct bird: the moa. What is perhaps most remarkable is that these applications for studying ancient DNA stemmed from a prodigious amount of dung dropped on the ground hundreds of thousands of years ago.

Willerslev had first come up with the idea a few years earlier while contemplating a more recent pile of dung: In between his master’s degree and Ph.D. in Copenhagen, he found himself at loose ends, struggling to obtain bones, skeletal remains, or other physical specimens to study. But one autumn, he gazed out the window at “a dog taking a crap on the street,” he recalled. The scene prompted him to think about the DNA in feces, and how it washed away with rain, leaving no visible trace. But Willerslev wondered, “‘Could it be that the DNA could survive?’ That’s what I then set up to try to find out.”

The paper demonstrated the remarkable persistence of DNA, which, he said, survives in the environment for much longer than previous estimates suggested. Willerslev has since analyzed eDNA in frozen tundra in modern-day Greenland, dating back 2 million years, and he is working on samples from Angkor Wat, the enormous temple complex in Cambodia believed to have been built in the 12th century. “It should be the worst DNA preservation you can imagine,” he said. “I mean, it’s hot and humid.”

But, he said, “we can get DNA out.”


Willerslev is now hardly alone in seeing a potential tool with seemingly limitless applications—especially now as advances enable researchers to sequence and analyze larger quantities of genetic information. “It’s an open window for many, many things,” he said, “and much more than I can think of, I’m sure.” It was not just ancient mammoths; eDNA could reveal present-day organisms hiding in our midst.

Scientists use eDNA to track creatures of all shapes and sizes, be it a single species, such as tiny bits of invasive algae, eels in Loch Ness, or a sightless sand-dwelling mole that hasn’t been seen in nearly 90 years; researchers sample entire communities, say, by looking at the eDNA found on wildflower blossoms or the eDNA blowing in the wind as a proxy for all the visiting birds and bees and other animal pollinators.

The next evolutionary leap forward in eDNA’s history took shape around the search for organisms currently living in Earth’s aquatic environments. In 2008, a headline appeared: “Water retains DNA memory of hidden species.” It came not from a supermarket tabloid but from the respected trade publication Chemistry World, describing work by French researcher Pierre Taberlet and his colleagues. The group sought out brown-and-green bullfrogs, which can weigh more than 2 pounds and, because they mow down everything in their path, are considered an invasive species in western Europe. Finding bullfrogs usually involved skilled herpetologists scanning shorelines with binoculars, then returning after sunset to listen for calls. The 2008 paper suggested an easier way—a survey that required far fewer personnel.

“You could get DNA from that species directly out of the water,” said Philip Thomsen, a biologist at Aarhus University (who was not involved in the study). “And that really kickstarted the field of environmental DNA.”

Frogs can be hard to detect, and they are not, of course, the only species that eludes more traditional, boots-on-the-ground detection. Thomsen began work on another organism that notoriously confounds measurement: fish. Counting fish is sometimes said to vaguely resemble counting trees—except they’re free-roaming, in dark places, and fish counters are doing their tally while blindfolded. Environmental DNA dropped the blindfold. One review of published literature on the technology—though it came with caveats, including imperfect and imprecise detections or details on abundance—found that eDNA studies on freshwater and marine fish and amphibians outnumbered terrestrial counterparts 7:1.

In 2011, Thomsen, then a Ph.D. candidate in Willerslev’s lab, published a paper demonstrating that the method could detect rare and threatened species, such as those in low abundance in Europe, including amphibians, mammals like the otter, crustaceans, and dragonflies. “We showed that only, like, a shot glass of water really was enough to detect these organisms,” he told Undark. It was clear: The method had direct applications in conservation biology for the detection and monitoring of species.

In 2012, the journal Molecular Ecology published a special issue on eDNA, and Taberlet and several colleagues outlined a working definition of eDNA as any DNA isolated from environmental samples. The definition covers two similar but slightly different approaches. The first answers a yes-or-no question: Is the bullfrog (or whatever species you seek) present or not? It does so by scanning a metaphoric barcode, a short stretch of DNA particular to a species or family, using short synthetic sequences called primers to target it; the checkout scanner is a common technique called quantitative real-time polymerase chain reaction, or qPCR.


Another approach, commonly known as DNA metabarcoding, essentially spits out a list of organisms present in a given sample. “You sort of ask the question, what is here?” Thomsen said. “And then you get all of the known things, but you also get some surprises, right? Because there were some species that you didn’t know were actually present.”

One aims to find the needle in a haystack; the other attempts to reveal the whole haystack. Either way, eDNA differs from more traditional sampling techniques, in which organisms like fish are caught, manipulated, stressed, and sometimes killed. The data obtained are objective, standardized, and unbiased.
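
A toy sketch of the difference between the two approaches, using invented barcodes and reads (real pipelines rely on qPCR fluorescence curves and sequence classifiers rather than exact string matching):

```python
# Toy contrast between targeted detection and metabarcoding; all sequences are invented.
REFERENCE_BARCODES = {
    "American bullfrog": "ACGTTGCA",
    "European otter":    "TTGACCGA",
    "Atlantic herring":  "GGCATTAC",
}

def detect_species(reads, species):
    """qPCR-style question: is this one species' barcode present in the sample?"""
    barcode = REFERENCE_BARCODES[species]
    return any(barcode in read for read in reads)

def metabarcode(reads):
    """Metabarcoding-style question: which known species show up in the sample at all?"""
    return sorted(name for name, barcode in REFERENCE_BARCODES.items()
                  if any(barcode in read for read in reads))

water_sample = ["CCACGTTGCATT", "AGGCATTACGA"]              # invented eDNA fragments from one sample
print(detect_species(water_sample, "American bullfrog"))    # True  -> the needle in the haystack
print(metabarcode(water_sample))                            # ['American bullfrog', 'Atlantic herring']
```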

“eDNA, one way or the other, is going to stay as one of the important methodologies in biological sciences,” said Mehrdad Hajibabaei, a molecular biologist at University of Guelph, who pioneered the metabarcoding approach, and who traced fish some 9,800 feet under the Labrador Sea. “Every day I see something bubbling up that didn’t occur to me.”

In recent years, the field of eDNA has expanded. The method’s sensitivity allows researchers to sample previously out-of-reach environments, for example, capturing eDNA from the air—an approach that highlights eDNA’s promises and its potential pitfalls. Airborne eDNA appears to circulate on a global dust belt, suggesting its abundance and omnipresence, and it can be filtered and analyzed to monitor plants and terrestrial animals. But eDNA blowing in the wind can lead to inadvertent contamination.

In 2019, Thomsen, for instance, left two bottles of ultra-pure water out in the open—one in a grassland, and the other near a marine harbor. After a few hours, the water contained detectable eDNA associated with birds and herring, suggesting that traces of species from the surrounding environment had settled into the samples; the organisms obviously did not inhabit the bottles. “So it must come from the air,” Thomsen told Undark. The results suggest a two-fold problem. For one, trace evidence can move around: two organisms that come into contact can end up toting around each other’s DNA. And just because certain DNA is present doesn’t mean the species is actually there.

Moreover, there’s also no guarantee that the presence of eDNA indicates that a species is alive, and field surveys are still needed, he said, to understand a species’ breeding success, its health, or the status of its habitat. So far, then, eDNA does not necessarily replace physical observations or collections. In another study, in which Thomsen’s group collected eDNA on flowers to look for pollinating birds, more than half of the eDNA reported in the paper came from humans, contamination that potentially muddied the results and made it harder to detect the pollinators in question.

Similarly, in May 2023, a University of Florida team that had previously studied sea turtles through the eDNA traces they leave as they crawl along the beach published a paper that turned up human DNA. The samples were intact enough to detect key mutations that might someday be used to identify individual people, raising unanswered questions about ethical testing on humans and informed consent. If eDNA served as a seine net, then it indiscriminately swept up information about biodiversity and inevitably ended up with, as the UF team’s paper put it, “human genetic by-catch.”

While the privacy issues around footprints in the sand so far appear to exist mostly in the realm of the hypothetical, the use of eDNA in litigation relating to wildlife is not only possible but already a reality. It’s also being used in criminal investigations: In 2021, for instance, a group of Chinese researchers reported that eDNA collected off a suspected murderer’s pants had, contrary to his claims, revealed that he’d likely been to the muddy canal where a dead body had been found.

The concerns about off-target eDNA, in terms of accuracy and its reach into human medicine and forensics, highlight another, much broader, shortcoming. As Hanner at the University of Guelph described the problem: “Our regulatory frameworks and policy tend to lag at least a decade or more behind the science.”


Today, there are countless potential regulatory applications: water quality monitoring, evaluating environmental impact (from offshore wind farms and oil and gas drilling to more run-of-the-mill strip mall development), species management, and enforcement of the Endangered Species Act. In a civil court case filed in 2021, the US Fish and Wildlife Service evaluated whether an imperiled fish existed in a particular watershed, using eDNA and more traditional sampling, and found that it did not. The courts said the agency’s lack of protections for that watershed was justified. The issue does not seem to be whether eDNA stood up in court; it did. “But you really can’t say that something does not exist in an environment,” said Hajibabaei.

He recently highlighted the issue of validation: eDNA infers a result, but needs more established criteria for confirming that these results are actually true (that an organism is actually present or absent, or in a certain quantity). A series of special meetings for scientists worked to address these issues of standardization, which he said include protocols, chain of custody, and criteria for data generation and analysis. In a review of eDNA studies, Hajibabaei and his colleagues found that the field is saturated with one-offs, or proof-of-concept studies attempting to show that eDNA analyses work. Research remains overwhelmingly siloed in academia.

As such, practitioners hoping to use eDNA in applied contexts sometimes ask for the moon. Does a species exist in a certain location? For instance, Hajibabaei said, someone recently asked him if he could totally refute the presence of a parasite, proving that it had not appeared in an aquaculture farm. “And I say, ‘Look, there is no way that I can say that is 100 percent.’”

Even with a rigorous analytic framework, he said, the issues with false negatives and false positives are particularly difficult to resolve without doing one of the things eDNA obviates—more traditional collection and manual inspection. Despite the limitations, a handful of companies are already starting to commercialize the technique. For instance, future applications could help a company confirm whether the bridge it is building will harm any locally endangered animals; an aquaculture outfit determine if the waters where it farms its fish are infested with sea lice; or a landowner find out whether new plantings are attracting a wider range of native bees.

The problem is rather fundamental given eDNA’s reputation as an indirect way of detecting the undetectable—or as a workaround in contexts when it’s simply not possible to dip a net and catch all the organisms in the sea.

“It is very hard to validate some of these scenarios,” Hajibabaei said. “And that’s basically the nature of the beast.”

eDNA opens up a lot of possibilities, answering a question originally posed by Barkay (and no doubt many others): “Who is there?” But increasingly it’s providing hints that get at the “What are they doing there?” question, too. Elizabeth Clare, a professor of biology at York University in Toronto, studies biodiversity. She said she has observed bats roosting in one spot during the day, but, by collecting airborne eDNA, she could also infer where bats socialize at night. In another study, domesticated dog eDNA turned up in red fox scat. The two canids did not appear to be interbreeding, but researchers did wonder if their closeness had led to confusion, or cross-contamination, before ultimately settling on another explanation: Foxes apparently ate dog poop.

So while eDNA does not inherently reveal animal behavior, by some accounts the field is making strides towards providing clues as to what an organism might be doing, and how it’s interacting with other species, in a given environment—gleaning information about health without directly observing behavior.

Take another possibility: large-scale biomonitoring. Indeed, for the last three years, more people than ever before have participated in a bold experiment that is already up and running: the collection of environmental samples from public sewers to track viral Covid-19 particles and other pathogens that infect humans. Technically, wastewater sampling involves a related approach called eRNA, because some viruses store their genetic information as RNA rather than DNA. Still, the same principles apply. (Studies also suggest RNA, which indicates which proteins an organism is expressing, could be used to assess ecosystem health; healthy organisms may express entirely different proteins compared to those that are stressed.) In addition to monitoring the prevalence of diseases, wastewater surveillance demonstrates how an existing infrastructure designed to do one thing—sewers were built to collect waste—could be fashioned into a powerful tool for studying something else, like detecting pathogens.

Clare has a habit of doing just that. “I personally am one of those people who tends to use tools—not the way they were intended,” she said. Clare was among the researchers who noticed a gap in the research: far less eDNA work had been done on terrestrial organisms. So she began working with what might be called a natural filter: leeches, worms that suck blood from mammals. “It’s a lot easier to collect 1,000 leeches than it is to find the animals. But they have blood-meals inside them and the blood carries the DNA of the animals they interacted with,” she said. “It’s like having a bunch of field assistants out surveying for you.” Then one of her students had the same idea for dung beetles, which are even easier to collect.

Clare is now spearheading a new application for another continuous monitoring system: leveraging existing air-quality monitors that measure pollutants, such as fine particulate matter, while simultaneously vacuuming eDNA out of the sky. In late 2023, she had only a small sample set, but she had already found that, as a byproduct of routine air quality monitoring, these preexisting tools doubled as filters for the material she is after. It was, more or less, a regulated, transcontinental network collecting samples in a very consistent way over long periods of time. “You could then use it to build up time series and high-resolution data on entire continents,” she said.

In the UK alone, Clare said, there are an estimated 150 different sites sucking in a known quantity of air, every week, all year long, which amounts to some 8,000 measurements a year. Clare and her co-authors recently analyzed a tiny subset of these—17 measurements from two locations—and were able to identify more than 180 different taxonomic groups, including more than 80 kinds of plants and fungi, 26 species of mammals, 34 species of birds, and at least 35 kinds of insects.

Certainly, other long-term ecological research sites exist. The US has a network of such facilities. But their scope does not include a globally distributed infrastructure that measures biodiversity constantly, from the passage of migrating birds overhead to the expansion and contraction of species’ ranges with climate change. Arguably, eDNA will complement, rather than supplant, the distributed network of people who record real-time, high-resolution, spatiotemporal observations on websites such as eBird or iNaturalist. Like a fuzzy image of an entirely new galaxy coming into view, the current resolution remains low.

“It’s sort of a generalized collection system, which is pretty much unheard of in biodiversity science,” said Clare. She was referring to the capacity to pull eDNA signals out of thin air, but the sentiment spoke to the method as a whole: “It’s not perfect,” she said, “but there’s nothing else that really does that.”

This article was originally published on Undark. Read the original article.

Image Credit: Undark + DALL-E

Kategorie: Transhumanismus

This Robot Predicts When You’ll Smile—Then Grins Back Right on Cue

Singularity HUB - 1 Duben, 2024 - 23:19

Comedy clubs are my favorite weekend outings. Rally some friends, grab a few drinks, and when a joke lands for us all—there’s a magical moment when our eyes meet, and we share a cheeky grin.

Smiling can turn strangers into the dearest of friends. It spurs meet-cute Hollywood plots, repairs broken relationships, and is inextricably linked to fuzzy, warm feelings of joy.

At least for people. For robots, their attempts at genuine smiles often fall into the uncanny valley—close enough to resemble a human, but causing a touch of unease. Logically, you know what they’re trying to do. But gut feelings tell you something’s not right.

It may be a matter of timing. Robots are trained to mimic the facial expression of a smile. But they don’t know when to turn the grin on. When humans connect, we genuinely smile in tandem without any conscious planning. Robots take time to analyze a person’s facial expressions before reproducing a grin. To a human, even milliseconds of delay raise the hairs on the back of the neck—like in a horror movie, something feels manipulative and wrong.

Last week, a team at Columbia University showed off an algorithm that teaches robots to share a smile with their human operators. The AI analyzes slight facial changes to predict its operators’ expressions about 800 milliseconds before they happen—just enough time for the robot to grin back.

The team trained a soft robotic humanoid face called Emo to anticipate and match the expressions of its human companion. With a silicone face tinted in blue, Emo looks like a 60s science fiction alien. But it readily grinned along with its human partner on the same “emotional” wavelength.

Humanoid robots are often clunky and stilted when communicating with humans, wrote Dr. Rachael Jack at the University of Glasgow, who was not involved in the study. ChatGPT and other large language models can already make an AI’s speech sound human, but non-verbal communication is hard to replicate.

Programming social skills—at least for facial expression—into physical robots is a first step toward helping “social robots to join the human social world,” she wrote.

Under the Hood

From robotaxis to robo-servers that bring you food and drinks, autonomous robots are increasingly entering our lives.

In London, New York, Munich, and Seoul, autonomous robots zip through chaotic airports offering customer assistance—checking in, finding a gate, or recovering lost luggage. In Singapore, several seven-foot-tall robots with 360-degree vision roam an airport flagging potential security problems. During the pandemic, robot dogs enforced social distancing.

But robots can do more. For dangerous jobs—such as cleaning the wreckage of destroyed houses or bridges—they could pioneer rescue efforts and increase safety for first responders. With an increasingly aging global population, they could help nurses to support the elderly.

Current humanoid robots are cartoonishly adorable. But the main ingredient for robots to enter our world is trust. As scientists build robots with increasingly human-like faces, we want their expressions to match our expectations. It’s not just about mimicking a facial expression. A genuine shared “yeah I know” smile over a cringe-worthy joke forms a bond.

Non-verbal communications—expressions, hand gestures, body postures—are tools we use to express ourselves. With ChatGPT and other generative AI, machines can already “communicate in video and verbally,” said study author Dr. Hod Lipson to Science.

But when it comes to the real world—where a glance, a wink, and smile can make all the difference—it’s “a channel that’s missing right now,” said Lipson. “Smiling at the wrong time could backfire. [If even a few milliseconds too late], it feels like you’re pandering maybe.”

Say Cheese

To get robots into non-verbal action, the team focused on one aspect—a shared smile. Previous studies have pre-programmed robots to mimic a smile. But because the response isn’t spontaneous, there’s a slight but noticeable delay that makes the grin look fake.

“There’s a lot of things that go into non-verbal communication” that are hard to quantify, said Lipson. “The reason we need to say ‘cheese’ when we take a photo is because smiling on demand is actually pretty hard.”

The new study focused on timing.

The team engineered an algorithm that anticipates a person’s smile and makes a human-like animatronic face grin in tandem. Called Emo, the robotic face has 26 gears—think artificial muscles—enveloped in a stretchy silicone “skin.” Each gear is attached to the main robotic “skeleton” with magnets to move its eyebrows, eyes, mouth, and neck. Emo’s eyes have built-in cameras to record its environment and control its eyeball movements and blinking motions.

By itself, Emo can track its own facial expressions. The goal of the new study was to help it interpret others’ emotions. The team used a trick any introverted teenager might know: They asked Emo to look in the mirror to learn how to control its gears and form a perfect facial expression, such as a smile. The robot gradually learned to match its expressions with motor commands—say, “lift the cheeks.” The team then removed any programming that could stretch the face too much and injure the robot’s silicone skin.
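
To make that self-modeling loop concrete, here is a minimal sketch of how a “look in the mirror” phase might be wired up, assuming a made-up simulator in place of the real robot. It is not the team’s implementation: the function observe_face, the motor and landmark counts, and the random-search inversion are all illustrative assumptions.

```python
# Hedged sketch of mirror-based self-modeling (not the authors' code). The robot
# is replaced by a fake simulator so the script runs on its own.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
N_MOTORS, N_LANDMARKS = 26, 20            # 26 "artificial muscles", 2D landmarks

# Stand-in for the mirror: a hidden mapping from motor commands to the facial
# landmark positions the robot's cameras would observe.
true_mix = rng.normal(size=(N_MOTORS, N_LANDMARKS * 2))
def observe_face(motor_cmd):
    return motor_cmd @ true_mix + 0.01 * rng.normal(size=N_LANDMARKS * 2)

# 1) Motor babbling: try random commands and record the resulting expressions.
commands = rng.uniform(-1, 1, size=(2000, N_MOTORS))
faces = np.array([observe_face(c) for c in commands])

# 2) Fit a forward self-model: motor commands -> predicted landmark positions.
self_model = Ridge(alpha=1e-2).fit(commands, faces)

# 3) Invert the model by random search: find the command whose predicted face
#    best matches a target expression (e.g., a recorded smile).
target_smile = observe_face(rng.uniform(-1, 1, N_MOTORS))    # hypothetical target
candidates = rng.uniform(-1, 1, size=(5000, N_MOTORS))
errors = np.linalg.norm(self_model.predict(candidates) - target_smile, axis=1)
best_cmd = candidates[np.argmin(errors)]
print("best motor command (first 5 of 26):", np.round(best_cmd[:5], 2))
```

In practice the mapping would be learned from the robot’s own camera feed rather than a simulator, and a smarter inverse model would replace the brute-force search, but the babble, model, invert structure is the same.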

“Turns out…[making] a robot face that can smile was incredibly challenging from a mechanical point of view. It’s harder than making a robotic hand,” said Lipson. “We’re very good at spotting inauthentic smiles. So we’re very sensitive to that.”

To counteract the uncanny valley, the team trained Emo to predict facial movements using videos of humans laughing, surprised, frowning, crying, and making other expressions. Emotions are universal: When you smile, the corners of your mouth curl into a crescent moon. When you cry, the brows furrow together.

The AI analyzed facial movements of each scene frame-by-frame. By measuring distances between the eyes, mouth, and other “facial landmarks,” it found telltale signs that correspond to a particular emotion—for example, an uptick of the corner of your mouth suggests a hint of a smile, whereas a downward motion may descend into a frown.

Once trained, the AI took less than a second to recognize these facial landmarks. When it powered Emo, the robotic face could anticipate a smile based on a human interaction within a second, grinning along with its partner.
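
To give a feel for the timing trick, here is a hedged sketch, not the published model, of predicting a smile a fixed horizon ahead from a short history of a landmark-derived feature. The frame rate, window length, synthetic mouth-corner signal, and the simple logistic-regression predictor are all assumptions for illustration.

```python
# Hedged sketch: predict whether a smile will appear ~800 ms ahead from a short
# window of a synthetic landmark-distance feature. Not the study's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
FPS = 30
LEAD_FRAMES = int(0.8 * FPS)         # ~800 ms prediction horizon (24 frames)
WINDOW = 6                           # frames of recent history per prediction

# Fake per-frame feature: a "mouth-corner distance" that creeps up starting
# about 40 frames before each smile becomes visible.
n_frames = 5000
signal = 0.05 * rng.normal(size=n_frames)
smile_onsets = rng.choice(np.arange(100, n_frames - 100), size=60, replace=False)
for t in smile_onsets:
    signal[t - 40 : t] += np.linspace(0, 1, 40)

labels = np.zeros(n_frames, dtype=int)
for t in smile_onsets:
    labels[t : t + 10] = 1           # frames during which the smile is visible

# Build (recent feature window) -> (smile LEAD_FRAMES ahead?) training pairs.
X, y = [], []
for t in range(WINDOW, n_frames - LEAD_FRAMES):
    X.append(signal[t - WINDOW : t])
    y.append(labels[t + LEAD_FRAMES])
X, y = np.array(X), np.array(y)

model = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
print("held-out accuracy:", round(model.score(X[4000:], y[4000:]), 3))
```

A real system would extract many landmark distances per frame from video and use a far richer model, but the core idea is the same: learn how subtle pre-smile movements map onto an expression that has not yet appeared.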

To be clear, the AI doesn’t “feel.” Rather, it behaves as a human would when chuckling at a funny stand-up set with a genuine-seeming smile.

Facial expressions aren’t the only cues we notice when interacting with people. Subtle head shakes, nods, raised eyebrows, and hand gestures all make a mark. Regardless of culture, “ums,” “ahhs,” and “likes”—or their equivalents—are integrated into everyday interactions. For now, Emo is like a baby who has learned how to smile. It doesn’t yet understand other contexts.

“There’s a lot more to go,” said Lipson. We’re just scratching the surface of non-verbal communications for AI. But “if you think engaging with ChatGPT is interesting, just wait until these things become physical, and all bets are off.”

Image Credit: Yuhang Hu, Columbia Engineering via YouTube

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through March 30)

Singularity HUB - 30 Březen, 2024 - 16:00
COMPUTING

The Best Qubits for Quantum Computing Might Just Be Atoms
Philip Ball | Quanta
“In the search for the most scalable hardware to use for quantum computers, qubits made of individual atoms are having a breakout moment. …’We believe we can pack tens or even hundreds of thousands in a centimeter-scale device,’ [Mark Saffman, a physicist at the University of Wisconsin] said.”

ARTIFICIAL INTELLIGENCE

AI Chatbots Are Improving at an Even Faster Rate Than Computer Chips
Chris Stokel-Walker | New Scientist
“Besiroglu and his colleagues analyzed the performance of 231 LLMs developed between 2012 and 2023 and found that, on average, the computing power required for subsequent versions of an LLM to hit a given benchmark halved every eight months. That is far faster than Moore’s law, a computing rule of thumb coined in 1965 that suggests the number of transistors on a chip, a measure of performance, doubles every 18 to 24 months.”

FUTURE

How AI Could Explode the Economy
Dylan Matthews | Vox
“Imagine everything humans have achieved since the days when we lived in caves: wheels, writing, bronze and iron smelting, pyramids and the Great Wall, ocean-traversing ships, mechanical reaping, railroads, telegraphy, electricity, photography, film, recorded music, laundry machines, television, the internet, cellphones. Now imagine accomplishing 10 times all that—in just a quarter century. This is a very, very, very strange world we’re contemplating. It’s strange enough that it’s fair to wonder whether it’s even possible.”

DIGITAL MEDIA

What’s Next for Generative Video
Will Douglas Heaven | MIT Technology Review
“The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long. Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. …As we continue to get to grips with what’s ahead—good and bad—here are four things to think about.”

SENSORS

Salt-Sized Sensors Mimic the Brain
Gwendolyn Rak | IEEE Spectrum
“To gain a better understanding of the brain, why not draw inspiration from it? At least, that’s what researchers at Brown University did, by building a wireless communications system that mimics the brain using an array of tiny silicon sensors, each the size of a grain of sand. The researchers hope that the technology could one day be used in implantable brain-machine interfaces to read brain activity.”

ROBOTICS

Understanding Humanoid Robots
Brian Heater | TechCrunch
“A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.”

INTERNET

Blazing Bits Transmitted 4.5 Million Times Faster Than Broadband
Michael Franco | New Atlas
“An international research team has sent an astounding amount of data at a nearly incomprehensible speed. It’s the fastest data transmission ever using a single optical fiber and shows just how speedy the process can get using current materials.”

COMPUTING

How We’ll Reach a 1 Trillion Transistor GPU
Mark Liu and HS Philip Wong | IEEE Spectrum
“We forecast that within a decade a multichiplet GPU will have more than 1 trillion transistors. We’ll need to link all these chiplets together in a 3D stack, but fortunately, industry has been able to rapidly scale down the pitch of vertical interconnects, increasing the density of connections. And there is plenty of room for more. We see no reason why the interconnect density can’t grow by an order of magnitude, and even beyond.”

SPACE

Astronomers Watch in Real Time as Epic Supernova Potentially Births a Black Hole
Isaac Schultz | Gizmodo
“‘Calculations of the circumstellar material emitted in the explosion, as well as this material’s density and mass before and after the supernova, create a discrepancy, which makes it very likely that the missing mass ended up in a black hole that was formed in the aftermath of the explosion—something that’s usually very hard to determine,’ said study co-author Ido Irani, a researcher at the Weizmann Institute.”

ARTIFICIAL INTELLIGENCE

Large Language Models’ Emergent Abilities Are a Mirage
Stephen Ornes | Wired
“[In some tasks measured by the BIG-bench project, LLM] performance remained near zero for a while, then performance jumped. Other studies found similar leaps in ability. The authors described this as ‘breakthrough’ behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. …[But] a new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM’s performance. The abilities, they argue, are neither unpredictable nor sudden.”

Image Credit: Aedrian / Unsplash

Kategorie: Transhumanismus

A New Treatment Rejuvenates Aging Immune Systems in Elderly Mice

Singularity HUB - 29 Březen, 2024 - 19:32

Our immune system is like a well-trained brigade.

Each unit has a unique specialty. Some cells directly kill invading foes; others release protein “markers” to attract immune cell types to a target. Together, they’re a formidable force that fights off biological threats—both pathogens from outside the body and cancer or senescent “zombie” cells from within.

With age, the camaraderie breaks down. Some units flare up, causing chronic inflammation that wreaks havoc in the brain and body. These cells increase the risk of dementia and heart disease and gradually sap muscle. Other units that battle novel pathogens—such as a new strain of flu—slowly dwindle, making it harder to ward off infections.

All these cells come from a single source: a type of stem cell in bone marrow.

This week, in a study published in Nature, scientists say they restored the balance between the units in aged mice, reverting their immune systems to a youthful state. Using an antibody, the team targeted a subpopulation of stem cells that eventually develops into the immune cells underlying chronic inflammation. The antibodies latched onto their targets and rallied other immune cells to wipe them out.

In elderly mice, the one-shot treatment reinvigorated their immune systems. When challenged with a vaccine, the mice generated a stronger immune response than non-treated peers and readily fought off later viral infections.

Rejuvenating the immune system isn’t just about tackling pathogens. An aged immune system increases the risk of common age-related medical problems, such as dementia, stroke, and heart attacks.

“Eliminating the underlying drivers of aging is central to preventing several age-related diseases,” wrote stem cell scientists Drs. Yasar Arfat Kasu and Robert Signer at the University of California, San Diego, who were not involved in the study. The intervention “could thus have an outsized impact on enhancing immunity, reducing the incidence and severity of chronic inflammatory diseases and preventing blood disorders.”

Stem Cell Succession

All blood cells arise from a single source: hematopoietic stem cells, or blood stem cells, that reside in bone marrow.

Some of these stem cells eventually become “fighter” white blood cells, including killer T cells that—true to their name—directly destroy cancerous cells and infections. Others become B cells that pump out antibodies to tag invaders for elimination. This unit of the immune system is dubbed “adaptive” because it can tackle new intruders the body has never seen.

Still more blood stem cells transform into myriad other immune cell types—including those that literally eat their foes. These cells form the innate immune unit, which is present at birth and the first line of defense throughout our lifetime.

Unlike their adaptive comrades, which more precisely target invaders, the innate unit uses a “burn it all” strategy to fight off infections by increasing local inflammation. It’s a double-edged sword. While useful in youth, with age the unit becomes dominant, causing chronic inflammation that gradually damages the body.

The reason for this can be found in the immune system’s stem cell origins.

Blood stem cells come in multiple types. Some produce both immune units equally; others are biased towards the innate unit. With age, the latter gradually take over, increasing chronic inflammation while lowering protection against new pathogens. This is, in part, why elderly people are advised to get new flu shots, and why they were first in line for vaccination against Covid-19.

The new study describes a practical approach to rebalancing the aged immune system. Using an antibody-based therapy, the scientists directly obliterated the population of stem cells that lead to chronic inflammation.

Blood Bath

Like most cells, blood stem cells have a unique fingerprint—a set of proteins that dot their surfaces. A subset of these cells, dubbed my-HSCs (myeloid-biased hematopoietic stem cells), is more likely to produce cells of the innate immune system, the branch that triggers chronic inflammation with age.

By mining multiple gene expression datasets from blood stem cells, the team found three protein markers they could use to identify and target my-HSCs in aged mice. They then engineered an antibody to target the cells for elimination.
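
As a rough illustration of how candidate surface markers might be nominated from expression data (not the study’s actual pipeline), the sketch below ranks synthetic “surface genes” by how enriched they are in one cell subset versus another. Every gene name, sample count, and value is a placeholder.

```python
# Hedged sketch: nominate surface markers enriched in an unwanted cell subset by
# comparing expression between two groups of samples. All data is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
surface_genes = [f"SurfaceGene{i}" for i in range(200)]   # hypothetical labels

# Fake expression matrices: rows = samples, columns = surface genes.
balanced_hsc = rng.normal(5.0, 1.0, size=(30, 200))       # young/balanced HSCs
my_hsc = rng.normal(5.0, 1.0, size=(30, 200))             # myeloid-biased HSCs
my_hsc[:, :3] += 2.5            # pretend three markers are truly enriched

# Rank genes by enrichment in my-HSCs: log2 fold change plus a two-sample t-test.
log2_fc = np.log2(my_hsc.mean(axis=0) / balanced_hsc.mean(axis=0))
_, p_values = stats.ttest_ind(my_hsc, balanced_hsc, axis=0)

ranked = sorted(zip(surface_genes, log2_fc, p_values), key=lambda g: (g[2], -g[1]))
print("top candidate antibody targets:")
for gene, fc, p in ranked[:3]:
    print(f"  {gene}: log2FC={fc:.2f}, p={p:.1e}")
```

Real marker discovery would also have to confirm that the protein, not just its transcript, actually sits on the cell surface where an antibody can reach it.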

Just a week after infusing it into elderly mice, the antibody had reduced the number of my-HSCs in their bone marrow without harming other blood stem cells. A genetic screen confirmed the mice’s immune profile was more like that of young mice.

The one-shot treatment lasted “strikingly” long, wrote Kasu and Signer. A single injection reduced the troublesome stem cells for at least two months—roughly a twelfth of a mouse’s lifespan. With my-HSCs no longer dominant, healthy blood stem cells gained ground inside the bone marrow. For at least four months, the treated mice produced more cells in the adaptive immune unit than their similarly aged peers, while having less overall inflammation.

As an ultimate test, the team challenged elderly mice with a difficult virus. To beat the infection, multiple components of the adaptive immune system had to rev up and work in concert.

Some elderly mice received a vaccine and the antibody treatment. Others only received the vaccine. Those treated with the antibody mounted a larger protective immune response. When given a dose of the virus, their immune systems rapidly recruited adaptive immune cells, and fought off the infection—whereas those receiving only the vaccine struggled.

Restoring Balance

The study shows that not all blood stem cells are alike. Eliminating those that cause inflammation directly changes the biological “age” of the entire immune system, allowing it to better tackle damaging changes in the body and fight off infections.

Like a leaking garbage can, innate immune cells can dump inflammatory molecules into their neighborhood. By cleaning up the source, the antibody could have also changed the environment the cells live in, so they are better able to thrive during aging.

Additionally, the immune system is an “eye in the sky” for monitoring cancer. Reviving immune function could restore the surveillance systems needed to eliminate cancer cells. The antibody treatment here could potentially tag-team with CAR T therapy or classic anti-cancer therapies, such as chemotherapy, as a one-two punch against the disease.

But it isn’t coming to clinics soon. Barring unexpected setbacks or regulatory hiccups, the team estimates it will take three to five years before testing in people. As a next step, they’re looking to expand the therapy to tackle other disorders related to a malfunctioning immune system.

Image Credit: Volker Brinkmann

Kategorie: Transhumanismus