Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity Group

This Week’s Awesome Tech Stories From Around the Web (Through April 27)

27 April, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

Meta’s Open Source Llama 3 Is Already Nipping at OpenAI’s Heels
Will Knight | Wired
“OpenAI changed the world with ChatGPT, setting off a wave of AI investment and drawing more than 2 million developers to its cloud APIs. But if open source models prove competitive, developers and entrepreneurs may decide to stop paying to access the latest model from OpenAI or Google and use Llama 3 or one of the other increasingly powerful open source models that are popping up.”

BIOTECH

‘Real Hope’ for Cancer Cure as Personal mRNA Vaccine for Melanoma Trialed
Andrew Gregory | The Guardian
“Experts are testing new jabs that are custom-built for each patient and tell their body to hunt down cancer cells to prevent the disease ever coming back. A phase 2 trial found the vaccines dramatically reduced the risk of the cancer returning in melanoma patients. Now a final, phase 3, trial has been launched and is being led by University College London Hospitals NHS Foundation Trust (UCLH). Dr Heather Shaw, the national coordinating investigator for the trial, said the jabs had the potential to cure people with melanoma and are being tested in other cancers, including lung, bladder and kidney.”

DIGITAL MEDIA

An AI Startup Made a Hyperrealistic Deepfake of Me That’s So Good It’s Scary
Melissa Heikkilä | MIT Technology Review
“Until now, all AI-generated videos of people have tended to have some stiffness, glitchiness, or other unnatural elements that make them pretty easy to differentiate from reality. Because they’re so close to the real thing but not quite it, these videos can make people feel annoyed or uneasy or icky—a phenomenon commonly known as the uncanny valley. Synthesia claims its new technology will finally lead us out of the valley.”

ENERGY

Nuclear Fusion Experiment Overcomes Two Key Operating Hurdles
Matthew Sparkes | New Scientist
“A nuclear fusion reaction has overcome two key barriers to operating in a ‘sweet spot’ needed for optimal power production: boosting the plasma density and keeping that denser plasma contained. The milestone is yet another stepping stone towards fusion power, although a commercial reactor is still probably years away.”

FUTURE

Daniel Dennett: ‘Why Civilization Is More Fragile Than We Realized’
Tom Chatfield | BBC
“[Dennett’s] warning was not of a takeover by some superintelligence, but of a threat he believed that nonetheless could be existential for civilization, rooted in the vulnerabilities of human nature. ‘If we turn this wonderful technology we have for knowledge into a weapon for disinformation,’ he told me, ‘we are in deep trouble.’ Why? ‘Because we won’t know what we know, and we won’t know who to trust, and we won’t know whether we’re informed or misinformed. We may become either paranoid and hyper-skeptical, or just apathetic and unmoved. Both of those are very dangerous avenues. And they’re upon us.'”

ENVIRONMENT

California Just Went 9.25 Hours Using Only Renewable Energy
Adele Peters | Fast Company
“Last Saturday, as 39 million Californians went about their daily lives—taking showers, doing laundry, or charging their electric cars—the whole state ran on 100% clean electricity for more than nine hours. The same thing happened on Sunday, as the state was powered without fossil fuels for more than eight hours. It was the ninth straight day that solar, wind, hydropower, geothermal, and battery storage fully powered the electric grid for at least some portion of the time. Over the last six and a half weeks, that’s happened nearly every day. In some cases, it’s just for 15 minutes. But often it’s for hours at a time.”


TECH

AI Hype Is Deflating. Can AI Companies Find a Way to Turn a Profit?
Gerrit De Vynck | The Washington Post
“Some once-promising start-ups have cratered, and the suite of flashy products launched by the biggest players in the AI race—OpenAI, Microsoft, Google and Meta—have yet to upend the way people work and communicate with one another. While money keeps pouring into AI, very few companies are turning a profit on the tech, which remains hugely expensive to build and run. The road to widespread adoption and business success is still looking long, twisty and full of roadblocks, say tech executives, technologists and financial analysts.”

ARTIFICIAL INTELLIGENCE

Apple Releases Eight Small AI Language Models Aimed at On-Device Use
Benj Edwards | Ars Technica
“In the world of AI, what might be called ‘small language models’ have been growing in popularity recently because they can be run on a local device instead of requiring data center-grade computers in the cloud. On Wednesday, Apple introduced a set of tiny source-available AI language models called OpenELM that are small enough to run directly on a smartphone. They’re mostly proof-of-concept research models for now, but they could form the basis of future on-device AI offerings from Apple.”

SPACE

If Starship Is Real, We’re Going to Need Big Cargo Movers on the Moon and Mars
Eric Berger | Ars Technica
“Unloading tons of cargo on the Moon may seem like a preposterous notion. During Apollo, mass restrictions were so draconian that the Lunar Module could carry two astronauts, their spacesuits, some food, and just 300 pounds (136 kg) of scientific payload down to the lunar surface. By contrast, Starship is designed to carry 100 tons, or more, to the lunar surface in a single mission. This is an insane amount of cargo relative to anything in spaceflight history, but that’s the future that [Jaret] Matthews is aiming toward.”

Image Credit: CARTIST / Unsplash

Category: Transhumanism

How Quantum Computers Could Illuminate the Full Range of Human Genetic Diversity

26 April, 2024 - 19:26

Genomics is revolutionizing medicine and science, but current approaches still struggle to capture the breadth of human genetic diversity. Pangenomes that incorporate many people’s DNA could be the answer, and a new project thinks quantum computers will be a key enabler.

When the Human Genome Project published its first reference genome in 2001, it was based on DNA from just a handful of humans. While less than one percent of our DNA varies from person to person, this can still leave important gaps and limit what we can learn from genomic analyses.

That’s why the concept of a pangenome has become increasingly popular. This refers to a collection of genomic sequences from many different people that have been merged to cover a much greater range of human genetic possibilities.

Assembling these pangenomes is tricky though, and their size and complexity make carrying out computational analyses on them daunting. That’s why the University of Cambridge, the Wellcome Sanger Institute, and the European Molecular Biology Laboratory’s European Bioinformatics Institute have teamed up to see if quantum computers can help.

“We’ve only just scratched the surface of both quantum computing and pangenomics,” David Holland of the Wellcome Sanger Institute said in a press release. “So to bring these two worlds together is incredibly exciting. We don’t know exactly what’s coming, but we see great opportunities for major new advances.”

Pangenomes could be crucial for discovering how different genetic variants impact human biology, or that of other species. The current reference genome is used as a guide to assemble genetic sequences, but due to the variability of human genomes there are often significant chunks of DNA that don’t match up. A pangenome would capture a lot more of that diversity, making it easier to connect the dots and giving us a more complete view of possible human genomes.

Despite their power, pangenomes are difficult to work with. While the genome of a single person is just a linear sequence of genetic data, a pangenome is a complex network that tries to capture all the ways in which its constituent genomes do and don’t overlap.

These so-called “sequence graphs” are challenging to construct and even more challenging to analyze. And it will require high levels of computational power and novel techniques to make use of the rich representation of human diversity contained within.
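To make the idea concrete, a sequence graph can be modeled as shared DNA segments (nodes), adjacencies between them (edges), and one path per individual genome. The toy Python sketch below is purely illustrative and not drawn from the project itself:

```python
from collections import defaultdict

class SequenceGraph:
    """Toy pangenome sequence graph: segments, adjacencies, per-genome paths."""

    def __init__(self):
        self.nodes = {}                # node id -> DNA segment
        self.edges = defaultdict(set)  # node id -> successor node ids
        self.paths = {}                # genome name -> list of node ids

    def add_node(self, nid, seq):
        self.nodes[nid] = seq

    def add_path(self, name, node_ids):
        self.paths[name] = node_ids
        for a, b in zip(node_ids, node_ids[1:]):
            self.edges[a].add(b)

    def reconstruct(self, name):
        # A linear genome is recovered by concatenating its path's segments.
        return "".join(self.nodes[n] for n in self.paths[name])

g = SequenceGraph()
g.add_node(1, "ACGT")     # segment shared by everyone
g.add_node(2, "TTG")      # variant A
g.add_node(3, "TCG")      # variant B
g.add_node(4, "GATTACA")  # shared tail

g.add_path("genome_A", [1, 2, 4])
g.add_path("genome_B", [1, 3, 4])

print(g.reconstruct("genome_A"))  # ACGTTTGGATTACA
print(g.reconstruct("genome_B"))  # ACGTTCGGATTACA
```

Variant sites show up as branch points: genome_A and genome_B share nodes 1 and 4 but take different middle segments, and each linear genome is recovered by walking its path.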

That’s where this new project sees quantum computers lending a hand. Relying on the quirks of quantum mechanics, they can tackle certain computational problems that are near impossible for classical computers.

While there’s still considerable uncertainty about what kinds of calculations quantum computers will actually be able to run, many hope they will dramatically improve our ability to solve problems relating to complex systems with large numbers of variables. This new project is aimed at developing quantum algorithms that speed up both the production and analysis of pangenomes, though the researchers admit it’s early days.

“We’re starting from scratch because we don’t even know yet how to represent a pangenome in a quantum computing environment,” David Yuan from the European Bioinformatics Institute said in the press release. “If you compare it to the first moon landings, this project is the equivalent of designing a rocket and training the astronauts.”

The project has been awarded $3.5 million, which will be used to develop new algorithms and then test them on simulated quantum hardware using supercomputers. The researchers think the tools they develop could lead to significant breakthroughs in personalized medicine. They could also be applied to pangenomes of viruses and bacteria, improving our ability to track and manage disease outbreaks.

Given its exploratory nature and the difficulty of getting quantum computers to do anything practical, it could be some time before the project bears fruit. But if they succeed, the researchers could significantly expand our ability to make sense of the genes that shape our lives.

Image Credit: Gerd Altmann / Pixabay

Category: Transhumanism

This AI Just Designed a More Precise CRISPR Gene Editor for Human Cells From Scratch

25 April, 2024 - 22:25

CRISPR has revolutionized science. AI is now taking the gene editor to the next level.

Thanks to its ability to accurately edit the genome, CRISPR tools are now widely used in biotechnology and across medicine to tackle inherited diseases. In late 2023, a therapy using the Nobel Prize-winning tool gained approval from the FDA to treat sickle cell disease. CRISPR has also enabled CAR T cell therapy to battle cancers and been used to lower dangerously high cholesterol levels in clinical trials.

Outside medicine, CRISPR tools are changing the agricultural landscape, with projects ongoing to engineer hornless bulls, nutrient-rich tomatoes, and livestock and fish with more muscle mass.

Despite its real-world impact, CRISPR isn’t perfect. The tool snips both strands of DNA, which can cause dangerous mutations. It also can inadvertently nip unintended areas of the genome and trigger unpredictable side effects.

CRISPR was first discovered in bacteria as a defense mechanism, suggesting that nature hides a bounty of CRISPR components. For the past decade, scientists have screened different natural environments—for example, pond scum—to find other versions of the tool that could potentially increase its efficacy and precision. While successful, this strategy depends on what nature has to offer. Some benefits, such as a smaller size or greater longevity in the body, often come with trade-offs like lower activity or precision.

Rather than relying on evolution, can we fast-track better CRISPR tools with AI?

This week, Profluent, a startup based in California, outlined a strategy that uses AI to dream up a new universe of CRISPR gene editors. Based on large language models—the technology behind the popular ChatGPT—the AI designed several new gene-editing components.

In human cells, the components meshed to reliably edit targeted genes. The efficiency matched classic CRISPR, but with far more precision. The most promising editor, dubbed OpenCRISPR-1, could also precisely swap out single DNA letters—a technology called base editing—with an accuracy that rivals current tools.

“We demonstrate the world’s first successful editing of the human genome using a gene editing system where every component is fully designed by AI,” wrote the authors in a blog post.

Match Made in Heaven

CRISPR and AI have had a long romance.

The CRISPR recipe has two main parts: A “scissor” Cas protein that cuts or nicks the genome and a “bloodhound” RNA guide that tethers the scissor protein to the target gene.

By varying these components, the system becomes a toolbox, with each setup tailored to perform a specific type of gene editing. Some Cas proteins cut both strands of DNA; others give just one strand a quick snip. Alternative versions can also cut RNA, a type of genetic material found in viruses, and can be used as diagnostic tools or antiviral treatments.
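As a heavily simplified example of how the "scissor plus bloodhound" setup finds its target: the widely used Cas9 from Streptococcus pyogenes requires a short "NGG" motif (the PAM) immediately next to the 20-letter stretch matched by the RNA guide. The toy scan below is for illustration only, not production bioinformatics:

```python
import re

def find_cas9_targets(dna):
    """List (position, protospacer) pairs: 20-letter sites followed by an NGG PAM."""
    targets = []
    # Lookahead so overlapping candidate sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna):
        targets.append((m.start(), m.group(1)))
    return targets

# Toy sequence: 20-letter target, then an "AGG" PAM.
dna = "TTT" + "ACGTACGTACGTACGTACGT" + "AGG" + "CCCC"
print(find_cas9_targets(dna))  # [(3, 'ACGTACGTACGTACGTACGT')]
```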

Different versions of Cas proteins are often found by searching natural environments or made through a process called directed evolution. Here, scientists swap out some parts of the Cas protein to potentially boost efficacy.

It’s a highly time-consuming process. Which is where AI comes in.

Machine learning has already helped predict off-target effects in CRISPR tools. It’s also homed in on smaller Cas proteins to make downsized editors easier to deliver into cells.

Profluent used AI in a novel way: Rather than boosting current systems, they designed CRISPR components from scratch using large language models.

The basis of ChatGPT and DALL-E, these models launched AI into the mainstream. They learn from massive amounts of text, images, music, and other data to distill patterns and concepts. It’s how the algorithms generate images from a single text prompt—say, “unicorn with sunglasses dancing over a rainbow”—or mimic the music style of a given artist.

The same technology has also transformed the protein design world. Like words in a book, proteins are strung from individual molecular “letters” into chains, which then fold in specific ways to make the proteins work. By feeding protein sequences into AI, scientists have already fashioned antibodies and other functional proteins unknown to nature.
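A toy example of the language-model framing, treating protein sequences as strings over the 20-letter amino-acid alphabet. Real systems such as ProGen2 are large transformers; the bigram counter below, with made-up training sequences, only illustrates the idea of learning which letters tend to follow which:

```python
from collections import defaultdict, Counter
import random

# Made-up toy sequences standing in for a protein training corpus.
training_proteins = ["MKTAYIAK", "MKVLAAGK", "MKTLLLAK"]

follows = defaultdict(Counter)  # letter -> counts of the letters that follow it
for seq in training_proteins:
    for a, b in zip(seq, seq[1:]):
        follows[a][b] += 1

def generate(start="M", length=8, seed=0):
    # Sample one letter at a time from the learned "what follows what" counts.
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and follows[out[-1]]:
        letters, counts = zip(*follows[out[-1]].items())
        out.append(rng.choices(letters, weights=counts)[0])
    return "".join(out)

print(generate())  # a new 8-letter "protein" echoing the training patterns
```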

“Large generative protein language models capture the underlying blueprint of what makes a natural protein functional,” wrote the team in the blog post. “They promise a shortcut to bypass the random process of evolution and move us towards intentionally designing proteins for a specific purpose.”

Do AIs Dream of CRISPR Sheep?

All large language models need training data. The same is true for an algorithm that generates gene editors. Unlike text, images, or videos that can be easily scraped online, a CRISPR database is harder to find.

The team first screened over 26 terabytes of data about current CRISPR systems and built a CRISPR-Cas atlas—the most extensive to date, according to the researchers.

The search revealed millions of CRISPR-Cas components. The team then trained their ProGen2 language model—which was fine-tuned for protein discovery—using the CRISPR atlas.

The AI eventually generated four million protein sequences with potential Cas activity. After filtering out obvious deadbeats with another computer program, the team zeroed in on a new universe of Cas “protein scissors.”

The algorithm didn’t just dream up proteins like Cas9. Cas proteins come in families, each with its own quirks in gene-editing ability. The AI also designed proteins resembling Cas13, which targets RNA, and Cas12a, which is more compact than Cas9.

Overall, the results expanded the universe of potential Cas proteins nearly five-fold. But do any of them work?

Hello, CRISPR World

For the next test, the team focused on Cas9, because it’s already widely used in biomedical and other fields. They trained the AI on roughly 240,000 different Cas9 protein structures from multiple types of animals, with the goal of generating similar proteins to replace natural ones—but with higher efficacy or precision.

The initial results were surprising: The generated sequences, roughly a million of them, were totally different from natural Cas9 proteins. But using DeepMind’s AlphaFold2, a protein structure prediction AI, the team found the generated protein sequences could adopt similar shapes.

Cas proteins can’t function without a bloodhound RNA guide. With the CRISPR-Cas atlas, the team also trained AI to generate an RNA guide when given a protein sequence.

The result is a CRISPR gene editor with both components—Cas protein and RNA guide—designed by AI. Dubbed OpenCRISPR-1, its gene editing activity was similar to classic CRISPR-Cas9 systems when tested in cultured human kidney cells. Surprisingly, the AI-generated version slashed off-target editing by roughly 95 percent.

With a few tweaks, OpenCRISPR-1 could also perform base editing, which can change single DNA letters. Compared to classic CRISPR, base editing is likely more precise as it limits damage to the genome. In human kidney cells, OpenCRISPR-1 reliably converted one DNA letter to another in three sites across the genome, with an editing rate similar to current base editors.
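Operationally, base editing can be pictured as a targeted single-letter conversion rather than a double-strand cut. The string-level sketch below is a toy model, assuming a cytosine base editor that converts C to T within a small window of the guide's target site; the window bounds and sequences are illustrative, not taken from the study:

```python
def base_edit(dna, guide, convert=("C", "T"), window=(4, 8)):
    """Convert one letter type within a window of the guide's target site."""
    i = dna.find(guide)  # locate the guide's target in the DNA
    if i == -1:
        return dna       # no target found: nothing is edited
    lo, hi = window
    segment = dna[i + lo:i + hi].replace(convert[0], convert[1])
    return dna[:i + lo] + segment + dna[i + hi:]

dna = "TTACGGACCTGATCAGGTA"
guide = "CGGACCTGAT"
print(base_edit(dna, guide))  # TTACGGATTTGATCAGGTA
```

Note how only letters inside the editing window change; the rest of the sequence, and both DNA strands in the real system, are left uncut.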

To be clear, the AI-generated CRISPR tools have only been tested in cells in a dish. For treatments to reach the clinic, they’d need to undergo careful testing for safety and efficacy in living creatures, which can take a long time.

Profluent is openly sharing OpenCRISPR-1 with researchers and commercial groups but keeping the AI that created the tool in-house. “We release OpenCRISPR-1 publicly to facilitate broad, ethical usage across research and commercial applications,” they wrote.

As a preprint, the paper describing their work has yet to be analyzed by expert peer reviewers. Scientists will also have to show OpenCRISPR-1 or variants work in multiple organisms, including plants, mice, and humans. But tantalizingly, the results open a new avenue for generative AI—one that could fundamentally change our genetic blueprint.

Image Credit: Profluent

Category: Transhumanism

The Crucial Building Blocks of Life on Earth Form More Easily in Outer Space

23 April, 2024 - 16:00

The origin of life on Earth is still enigmatic, but we are slowly unraveling the steps involved and the necessary ingredients. Scientists believe life arose in a primordial soup of organic chemicals and biomolecules on the early Earth, eventually leading to actual organisms.

It’s long been suspected that some of these ingredients may have been delivered from space. Now a new study, published in Science Advances, shows that a special group of molecules, known as peptides, can form more easily under the conditions of space than those found on Earth. That means they could have been delivered to the early Earth by meteorites or comets—and that life may be able to form elsewhere, too.

The functions of life are upheld in our cells (and those of all living beings) by large, complex carbon-based (organic) molecules called proteins. How to make the large variety of proteins we need to stay alive is encoded in our DNA, which is itself a large and complex organic molecule.

However, these complex molecules are assembled from a variety of small and simple molecules such as amino acids—the so-called building blocks of life.
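The encoding step described above can be illustrated directly: DNA is read three letters (one codon) at a time, and each codon names one amino acid. This sketch includes only a handful of the 64 real codon assignments:

```python
# A small subset of the standard genetic code, for illustration.
CODON_TABLE = {
    "ATG": "M",  # methionine (start)
    "TTT": "F",  # phenylalanine
    "AAA": "K",  # lysine
    "GGC": "G",  # glycine
    "TGG": "W",  # tryptophan
    "TAA": "*",  # stop
}

def translate(dna):
    """Read DNA three letters at a time, mapping each codon to an amino acid."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3], "?")
        if aa == "*":  # a stop codon ends the protein chain
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTGGCTGGTAA"))  # MFGW
```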

To explain the origin of life, we need to understand how and where these building blocks form and under what conditions they spontaneously assemble themselves into more complex structures. Finally, we need to understand the step that enables them to become a confined, self-replicating system—a living organism.

This latest study sheds light on how some of these building blocks might have formed and assembled and how they ended up on Earth.

Steps to Life

Proteins are built from about 20 different amino acids. Like letters of the alphabet, these are arranged in different combinations, and the recipe for each combination is spelled out by the genetic code carried in DNA’s double helix structure.

Peptides are also assemblages of amino acids in a chain-like structure. Peptides can be made up of as few as two amino acids but can also run to hundreds.

The assemblage of amino acids into peptides is an important step because peptides provide functions such as catalyzing, or enhancing, reactions that are important to maintaining life. They are also candidate molecules that could have been further assembled into early versions of membranes, confining functional molecules in cell-like structures.

However, despite their potentially important role in the origin of life, it was not so straightforward for peptides to form spontaneously under the environmental conditions on the early Earth. In fact, the scientists behind the current study had previously shown that the cold conditions of space are actually more favorable to the formation of peptides.

The interstellar medium. Image Credit: Charles Carter/Keck Institute for Space Studies

In the very low density clouds of molecules and dust particles in a part of space called the interstellar medium (see above), single atoms of carbon can stick to the surfaces of dust grains together with carbon monoxide and ammonia molecules. They then react to form amino acid-like molecules. When such a cloud becomes denser and dust particles also start to stick together, these molecules can assemble into peptides.

In their new study, the scientists look at the dense environment of dusty disks, from which a new solar system with a star and planets eventually emerges. Such disks form when clouds suddenly collapse under the force of gravity. In this environment, water molecules are much more prevalent, forming ice on the surfaces of any growing agglomerates of particles, which could inhibit the reactions that form peptides.

By emulating the reactions likely to occur in the interstellar medium in the laboratory, the study shows that, although the formation of peptides is slightly diminished, it is not prevented. Instead, as rocks and dust combine to form larger bodies such as asteroids and comets, these bodies heat up and allow for liquids to form. This boosts peptide formation in these liquids, and there’s a natural selection of further reactions resulting in even more complex organic molecules. These processes would have occurred during the formation of our own solar system.

Many of the building blocks of life such as amino acids, lipids, and sugars can form in the space environment. Many have been detected in meteorites.

Because peptide formation is more efficient in space than on Earth, and because they can accumulate in comets, their impacts on the early Earth might have delivered loads that boosted the steps towards the origin of life on Earth.

So, what does all this mean for our chances of finding alien life? Well, the building blocks for life are available throughout the universe. How specific the conditions need to be to enable them to self-assemble into living organisms is still an open question. Once we know that, we’ll have a good idea of how widespread, or not, life might be.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Aldebaran S / Unsplash

Category: Transhumanism

A Universal Vaccine Against Any Viral Variant? A New Study Suggests It’s Possible

22 April, 2024 - 22:28

From Covid boosters to annual flu shots, most of us are left wondering: Why so many, so often?

There’s a reason to update vaccines. Viruses rapidly mutate, which can help them escape the body’s immune system, putting previously vaccinated people at risk of infection. Using AI modeling, scientists have increasingly been able to predict how viruses will evolve. But they mutate fast, and we’re still playing catch up.

An alternative strategy is to break the cycle with a universal vaccine that can train the body to recognize a virus despite mutation. Such a vaccine could eradicate new flu strains, even if the virus has transformed into nearly unrecognizable forms. The strategy could also finally bring a vaccine for the likes of HIV, which has so far notoriously evaded decades of efforts.

This month, a team from the University of California, Riverside, led by Dr. Shou-Wei Ding, designed a vaccine that unleashed a surprising component of the body’s immune system against invading viruses.

In baby mice without functional immune cells to ward off infections, the vaccine defended against lethal doses of a deadly virus. The protection lasted at least 90 days after the initial shot.

The strategy relies on a controversial theory. Most plants and fungi have an innate defense against viruses, called RNA interference (RNAi), that chops up viral genetic material. Scientists have long debated whether the same mechanism exists in mammals—including humans.

“It’s an incredible system because it can be adapted to any virus,” Dr. Olivier Voinnet at the Swiss Federal Institute of Technology, who championed the theory with Ding, told Nature in late 2013.

A Hidden RNA Universe

RNA molecules are usually associated with the translation of genes into proteins.

But they’re not just biological messengers. A wide array of small RNA molecules roam our cells. Some shuttle protein components through the cell during the translation of RNA into protein. Others change how DNA is expressed and may even act as a method of inheritance.

But fundamental to immunity are small interfering RNA molecules, or siRNAs. In plants and invertebrates, these molecules are vicious defenders against viral attacks. To replicate, viruses need to hijack the host cell’s machinery to copy their genetic material—often, it’s RNA. The invaded cells recognize the foreign genetic material and automatically launch an attack.

During this attack, called RNA interference, the cell chops the invading virus’s RNA genome into tiny chunks: siRNAs. The cell then spews these viral siRNA molecules into the body to alert the immune system. The molecules also directly grab onto the invading virus’s genome, blocking it from replicating.

Here’s the kicker: Vaccines based on antibodies usually target one or two locations on a virus, making them vulnerable to mutation should those locations change their makeup. RNA interference generates thousands of siRNA molecules that cover the entire genome—even if one part of a virus mutates, the rest is still vulnerable to the attack.
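The coverage argument can be made concrete with toy numbers. Dicing a genome into overlapping 21-letter windows yields many siRNAs, and a single point mutation only invalidates the windows that overlap it:

```python
def sirna_fragments(genome, k=21):
    # Every overlapping k-letter window of the viral genome.
    return [genome[i:i + k] for i in range(len(genome) - k + 1)]

genome = "AUGGCUACGUUAGCCGAUUACGGAUCCAUGCUUAAGGCUAUCCGAUACGU"  # 50-letter toy RNA
frags = sirna_fragments(genome)
print(len(frags))  # 30 overlapping siRNA-sized windows

# Mutate a single position; only the windows overlapping it stop matching.
pos = 25
mutated = genome[:pos] + "G" + genome[pos + 1:]
unaffected = [f for i, f in enumerate(frags) if mutated[i:i + 21] == f]
print(len(unaffected))  # 9 windows are untouched by the mutation
```

In this 50-letter toy genome, 21 of the 30 windows overlap the mutated position, but 9 still match exactly; an antibody aimed at that single site would have lost its target entirely.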

This powerful defense system could launch a new generation of vaccines. There’s just one problem. While it’s been observed in plants and flies, whether it exists in mammals has been highly controversial.

“We believe that RNAi has been antiviral for hundreds of millions of years,” Ding told Nature in 2013. “Why would we mammals dump such an effective defense?”

Natural Born Viral Killers

In the 2013 study in Science, Ding and colleagues suggested mammals also have an antiviral siRNA mechanism—it’s just being repressed by a gene carried by most viruses. Dubbed B2, the gene acts like a “brake,” smothering any RNA interference response from host cells by destroying their ability to make siRNA snippets.

Getting rid of B2 should kick RNA interference back into gear. To prove the theory, the team genetically engineered a virus without a functioning B2 gene and tried to infect hamster cells and immunocompromised baby mice. Called Nodamura virus, it’s transmitted by mosquitoes in the wild and is often deadly.

But without B2, even a lethal dose of the virus lost its infectious power. The baby mice rapidly generated a hefty dose of siRNA molecules to clear out the invaders. As a result, the infection never took hold, and the critters—even when already immunocompromised—survived.

“I truly believe that the RNAi response is relevant to at least some viruses that infect mammals,” said Ding at the time.

New-Age Vaccines

Many vaccines contain either a dead or a living but modified version of a virus to train the immune system. When faced with the virus again, the body produces T cells to kill off the target, B cells that pump out antibodies, and other immune “memory” cells to alert against future attacks. But their effects don’t always last, especially if a virus mutates.

Rather than rallying T and B cells, triggering the body’s siRNA response offers another type of immune defense. This can be done by deleting the B2 gene in live viruses. These viruses can be formulated into a new type of vaccine, which the team has been working to develop, relying on RNA interference to ward off invaders. The resulting flood of siRNA molecules triggered by the vaccine would, in theory, also provide some protection against future infection.

“If we make a mutant virus that cannot produce the protein to suppress our RNAi [RNA interference], we can weaken the virus. It can replicate to some level, but then loses the battle to the host RNAi response,” Ding said in a press release about the most recent study. “A virus weakened in this way can be used as a vaccine for boosting our RNAi immune system.”

In the study, his team tried the strategy against Nodamura virus by removing its B2 gene.

The team vaccinated baby and adult mice, both of which were genetically immunocompromised in that they couldn’t mount T cell or B cell defenses. In just two days, the single shot fully protected the mice against a deadly dose of virus, and the effect lasted over three months.

Viruses are most harmful to vulnerable populations—infants, the elderly, and immunocompromised individuals. Because of their weakened immune systems, current vaccines aren’t always as effective. Triggering siRNA could be a life-saving alternative strategy.

Although it works in mice, whether humans respond similarly remains to be seen. But there’s much to look forward to. The B2 “brake” protein has also been found in lots of other common viruses, including dengue, flu, and a family of viruses that causes fever, rash, and blisters.

The team is already working on a new flu vaccine, using live viruses without the B2 protein. If successful, the vaccine could potentially be made as a nasal spray—forget the needle jab. And if their siRNA theory holds up, such a vaccine might fend off the virus even as it mutates into new strains. The playbook could also be adapted to tackle new Covid variants, RSV, or whatever nature next throws at us.

This vaccine strategy is “broadly applicable to any number of viruses, broadly effective against any variant of a virus, and safe for a broad spectrum of people,” study author Dr. Rong Hai said in the press release. “This could be the universal vaccine that we have been looking for.”

Image Credit: Diana Polekhina / Unsplash

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through April 20)

20 April, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

15 Graphs That Explain the State of AI in 2024
Eliza Strickland | IEEE Spectrum
“Each year, the AI Index lands on virtual desks with a louder virtual thud—this year, its 393 pages are a testament to the fact that AI is coming off a really big year in 2023. For the past three years, IEEE Spectrum has read the whole damn thing and pulled out a selection of charts that sum up the current state of AI.”

NEUROSCIENCE

The Next Frontier for Brain Implants Is Artificial Vision
Emily Mullin | Wired
“Elon Musk’s Neuralink and others are developing devices that could provide blind people with a crude sense of sight. …’This is not about getting biological vision back,’ says Philip Troyk, a professor of biomedical engineering at Illinois Tech, who’s leading the study Bussard is in. ‘This is about exploring what artificial vision could be.'”

DIGITAL MEDIA

Microsoft’s VASA-1 Can Deepfake a Person With One Photo and One Audio Track
Benj Edwards | Ars Technica
“On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track. In the future, it could power virtual avatars that render locally and don’t require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.”

TECH

Meta Is Already Training a More Powerful Successor to Llama 3
Will Knight | Wired
“On Thursday morning, Meta released its latest artificial intelligence model, Llama 3, touting it as the most powerful to be made open source so that anyone can use it. The same afternoon, Yann LeCun, Meta’s chief AI scientist, said an even more powerful successor to Llama is in the works. He suggested it could potentially outshine the world’s best closed AI models, including OpenAI’s GPT-4 and Google’s Gemini.”

COMPUTING

Intel Reveals World’s Biggest ‘Brain-Inspired’ Neuromorphic Computer
Matthew Sparkes | New Scientist
“Hala Point contains 1.15 billion artificial neurons across 1,152 Loihi 2 chips, and is capable of 380 trillion synaptic operations per second. Mike Davies at Intel says that despite this power it occupies just six racks in a standard server case—a space similar to that of a microwave oven. Larger machines will be possible, says Davies. ‘We built this scale of system because, honestly, a billion neurons was a nice round number,’ he says. ‘I mean, there wasn’t any particular technical engineering challenge that made us stop at this level.'”

AUTOMATION

US Air Force Confirms First Successful AI Dogfight
Emma Roth | The Verge
“Human pilots were on board the X-62A with controls to disable the AI system, but DARPA says the pilots didn’t need to use the safety switch ‘at any point.’ The X-62A went against an F-16 controlled solely by a human pilot, where both aircraft demonstrated ‘high-aspect nose-to-nose engagements’ and got as close as 2,000 feet at 1,200 miles per hour. DARPA doesn’t say which aircraft won the dogfight, however.”

CULTURE

What If Your AI Girlfriend Hated You?
Kate Knibbs | Wired
“It seems as though we’ve arrived at the moment in the AI hype cycle where no idea is too bonkers to launch. This week’s eyebrow-raising AI project is a new twist on the romantic chatbot—a mobile app called AngryGF, which offers its users the uniquely unpleasant experience of getting yelled at via messages from a fake person.”

NEUROSCIENCE

Insects and Other Animals Have Consciousness, Experts Declare
Dan Falk | Quanta
“For decades, there’s been a broad agreement among scientists that animals similar to us—the great apes, for example—have conscious experience, even if their consciousness differs from our own. In recent years, however, researchers have begun to acknowledge that consciousness may also be widespread among animals that are very different from us, including invertebrates with completely different and far simpler nervous systems.”

SCIENCE

Two Lifeforms Merge in Once-in-a-Billion-Years Evolutionary Event
Michael Irving | New Atlas
“Scientists have caught a once-in-a-billion-years evolutionary event in progress, as two lifeforms have merged into one organism that boasts abilities its peers would envy. Last time this happened, Earth got plants. …A species of algae called Braarudosphaera bigelowii was found to have engulfed a cyanobacterium that lets them do something that algae, and plants in general, can’t normally do—’fixing’ nitrogen straight from the air, and combining it with other elements to create more useful compounds.”

Image Credit: Shubham Dhage / Unsplash


Cell Therapies Now Beat Back Once Untreatable Blood Cancers. Scientists Are Making Them Even Deadlier.

19 April, 2024 - 19:03

Dubbed “living drugs,” CAR T cells are bioengineered from a patient’s own immune cells to make them better able to hunt and destroy cancer.

The treatment is successfully tackling previously untreatable blood cancers. Six therapies are already approved by the FDA. Over a thousand clinical trials are underway. These aren’t limited to cancer—they cover a range of difficult medical problems such as autoimmune diseases, heart conditions, and viral infections including HIV. They may even slow down the biological processes that contribute to aging.

But CAR T has an Achilles heel.

Once injected into the body, the cells often slowly dwindle. Called “exhaustion,” this process erodes therapeutic effect over time and has dire medical consequences. According to Dr. Evan Weber at the University of Pennsylvania, more than 50 percent of people who respond to CAR T therapies eventually relapse. This may also be why CAR T cells have struggled to fight off solid tumors in breast, pancreatic, or deadly brain cancers.

This month, two teams found a potential solution—make CAR T cells more like stem cells. Known for their regenerative abilities, stem cells easily repopulate the body. Both teams identified the same protein “master switch” to make engineered cells resemble stem cells.

One study, led by Weber, found that adding the protein, called FOXO1, revved up metabolism and health in CAR T cells in mice. Another study from a team at the Peter MacCallum Cancer Center in Australia found FOXO1-boosted cells appeared genetically similar to immune stem cells and were better able to fend off solid tumors.

While still early, “these findings may help improve the design of CAR T cell therapies and potentially benefit a wider range of patients,” said Weber in a press release.

I Remember

Here’s how CAR T cell therapy usually works.

The approach focuses on T cells, a particular type of immune cell that naturally hunts down and eliminates infections and cancers inside the body. Enemy cells are dotted with a specific set of proteins, a kind of cellular fingerprint, that T cells recognize and latch onto.

Tumors also have a unique signature. But they can be sneaky, with some eventually developing ways to evade immune surveillance. In solid cancers, for example, they can pump out chemicals that fight off immune cell defenders, allowing the cancer to grow and spread.

CAR T cells are designed to override these barriers.

To make them, medical practitioners remove T cells from the body and genetically engineer them to produce tailor-made protein hooks targeting a particular protein on tumor cells. The supercharged T cells are then grown in petri dishes and transfused back into the body.

In the beginning, CAR T was a last-resort blood cancer treatment, but now it’s a first-line therapy. Keeping the engineered cells around inside the body, however, has been a struggle. With time, the cells stop dividing and become dysfunctional, potentially allowing the cancer to relapse.

The Translator

To tackle cell exhaustion, Weber’s team found inspiration in the body itself.

Our immune system has a cellular ledger tracking previous infections. The cells making up this ledger are called memory T cells. They’re a formidable military reserve, a portion of which resemble stem cells. When the immune system detects an invader it’s seen before—a virus, bacteria, or cancer cell—these reserve cells rapidly proliferate to fend off the attack.

CAR T cells don’t usually have this ability. Inside multiple cancers, they eventually die off—allowing cancers to return. Why?

In 2012, Dr. Crystal Mackall at Stanford University found several changes in gene expression that lead to CAR T cell exhaustion. In the new study, together with Weber, the team discovered a protein, FOXO1, that could lengthen CAR T’s effects.

In one test, a drug that inhibited FOXO1 caused CAR T cells to rapidly fail and eventually die in petri dishes. Erasing genes encoding FOXO1 also hindered the cells and increased signs of CAR T exhaustion. When infused into mice with leukemia, CAR T cells without FOXO1 couldn’t treat the cancer. By contrast, increasing levels of FOXO1 helped the cells readily fight it off.

Analyzing genes related to FOXO1, the team found they were mostly connected to immune cell memory. It’s likely that adding the gene encoding FOXO1 to CAR T cells promotes a stable memory for the cells, so they can easily recognize potential harm—be it cancer or pathogen—long after the initial infection.

When treating mice with leukemia, a single dose of the FOXO1-enhanced cells decreased cancer growth and increased survival up to five-fold compared to standard CAR T therapy. The enhanced treatment also tackled a type of bone cancer in mice, which is often hard to treat without surgery and chemotherapy.

An Immune Link

Meanwhile, the Australian team also zeroed in on FOXO1. Led by Drs. Junyun Lai, Paul Beavis, and Phillip Darcy, the team was looking for protein candidates to enhance CAR T longevity.

The idea was, like their natural counterparts, engineered CAR T cells also need a healthy metabolism to thrive and divide.

They started by analyzing a protein previously shown to enhance CAR T metabolism, potentially lowering the chances of exhaustion. Mapping the epigenome and transcriptome in CAR T cells—both of which tell us how genes are expressed—they also discovered that FOXO1 regulates CAR T cell longevity.

As a proof of concept, the team induced exhaustion in the engineered cells by increasingly restricting their ability to divide.

In mice with cancer, cells supercharged with FOXO1 lasted longer by months than those that hadn’t been boosted. The critters’ liver and kidney functions remained normal, and they didn’t lose weight during the treatment, a marker of overall health. The FOXO1 boost also changed how genes were expressed in the cells—they looked younger, as if in a stem cell-like state.

The new recipe also worked in T cells donated by six people with cancer who had undergone standard CAR T therapy. Adding a dose of FOXO1 to these cells increased their metabolism.

Multiple CAR T clinical trials are ongoing. But “the effects of such cells are transient and do not provide long-term protection against exhaustion,” wrote Darcy and team. In other words, durability is key for CAR T cells to live up to their full potential.

A FOXO1 boost offers a way—although it may not be the only way.

“By studying factors that drive memory in T cells, like FOXO1, we can enhance our understanding of why CAR T cells persist and work more effectively in some patients compared to others,” said Weber.

Image Credit: Gerardo Sotillo, Stanford Medicine


Scientists Create Atomically Thin Gold With Century-Old Japanese Knife Making Technique

18 April, 2024 - 19:36

Graphene has been hailed as a wonder material, but it also set off a rush to find other promising atomically thin materials. Now researchers have managed to create a 2D version of gold they call “goldene,” which could have a host of applications in chemistry.

Scientists had speculated about the possibility of creating layers of carbon just a single atom thick for many decades. But it wasn’t until 2004 that a team from the University of Manchester in the UK first produced graphene sheets using the remarkably simple technique of peeling them off a lump of graphite with common sticky tape.

The resulting material’s high strength, high conductivity, and unusual optical properties set off a stampede to find applications for it. But it also spurred researchers to investigate what kinds of exotic capabilities other ultra-thin materials could have.

Gold is one material scientists have long been eager to make as thin as graphene, but so far, efforts have been in vain. Now though, researchers from Linköping University in Sweden have borrowed from an old Japanese forging technique to create ultra-thin flakes of what they’re calling “goldene.”

“If you make a material extremely thin, something extraordinary happens,” Shun Kashiwaya, who led the research, said in a press release. “The same thing happens with gold.”

Making goldene has proven tough in the past because its atoms tend to clump together. So, even if you can create a 2D sheet of gold atoms, it quickly rolls up into nanoparticles instead.

The researchers got around this by taking a ceramic called titanium silicon carbide, which features ultra-thin layers of silicon between layers of titanium carbide, and coating it with gold. They then heated it in a furnace, which caused the gold to diffuse into the material and replace the silicon layers in a process known as intercalation.

This created atomically thin layers of gold embedded in the ceramic. To get it out, they had to borrow a century-old technique developed by Japanese knife makers. They used a chemical formulation known as Murakami’s reagent, which etches away carbon residue, to slowly reveal the gold sheets.

The researchers had to experiment with different concentrations of the reagent and various etching times. They also had to add a detergent-like chemical called a surfactant that protected the gold sheets from the etching liquid and prevented them from curling up. The gold flakes could then be sieved out of the solution to be examined more closely.

In a paper in Nature Synthesis, the researchers describe how they used an electron microscope to confirm that the gold layers were indeed just one atom thick. They also showed that the goldene flakes were semiconductors.

It’s not the first time someone has claimed to have created goldene, notes Nature. But previous attempts have involved creating the ultra-thin sheets sandwiched between other materials, and the Linköping team say their effort is the first to create a “free-standing 2D metal.”

The material could have a range of use cases, the researchers say. Gold nanoparticles already show promise as catalysts that can turn plastic waste and biomass into valuable materials, they note in their paper, and they have properties that could prove useful for energy harvesting, creating photonic devices, or even splitting water to create hydrogen fuel.

It will take work to tweak the synthesis method so it can produce commercially useful amounts of the material, a challenge that has delayed the full arrival of graphene as a widely used product too. But the team is also investigating whether similar approaches can be applied to other useful catalytic metals. Graphene might not be the only wonder material in town for long.

Image Credit: Nature Synthesis (CC BY 4.0)


Boston Dynamics Says Farewell to Its Humanoid Atlas Robot—Then Brings It Back Fully Electric

18 April, 2024 - 00:29

Yesterday, Boston Dynamics announced it was retiring its hydraulic Atlas robot. Atlas has long been the standard bearer of advanced humanoid robots. Over the years, the company was known as much for its research robots as it was for slick viral videos of them working out in military fatigues, forming dance mobs, and doing parkour. Fittingly, the company put together a send-off video of Atlas’s greatest hits and blunders.

But there were clues this wasn’t really the end, not least of which was the specific inclusion of the word “hydraulic” and the last line of the video, “‘Til we meet again, Atlas.” It wasn’t a long hiatus. Today, the company released hydraulic Atlas’s successor—electric Atlas.

The new Atlas is notable for several reasons. First, and most obviously, Boston Dynamics has finally done away with hydraulic actuators in favor of electric motors. To be clear, Atlas has long had an onboard battery pack—but now it’s fully electric. The advantages of going electric include less cost, noise, weight, and complexity. It also allows for a more polished design. From the company’s own Spot robot to a host of other humanoid robots, fully electric models are the norm these days. So, it’s about time Atlas made the switch.

Without a mess of hydraulic hoses to contend with, the new Atlas can now also contort itself in new ways. As you’ll note in the release video, the robot rises to its feet—a crucial skill for a walking robot—in a very, let’s say, special way. It folds its legs up along its torso and, impossibly for a human at least, pivots up through its waist (no hands). Once standing, Atlas swivels its head 180 degrees, then does the same thing at each hip joint and the waist. It takes a few watches to really appreciate all the weirdness there.

The takeaway is that while Atlas looks like us, it’s capable of movements we aren’t and therefore has more flexibility in how it completes future tasks.

This theme of same-but-different is evident in its head too. Instead of opting for a human-like head that risks slipping into the uncanny valley, the team chose a featureless (for now) lighted circle. In an interview with IEEE Spectrum, Boston Dynamics CEO, Robert Playter, said the human-like designs they tried seemed “a little bit threatening or dystopian.”

“We’re trying to project something else: a friendly place to look to gain some understanding about the intent of the robot,” he said. “The design borrows from some friendly shapes that we’d seen in the past. For example, there’s the old Pixar lamp that everybody fell in love with decades ago, and that informed some of the design for us.”

While most of these upgrades are improvements, there is one area where it’s not totally clear how well the new form will fare: strength and power.

Hydraulics are known to provide both, and Atlas pushed its hydraulics to their limits carrying heavy objects, executing backflips, and doing 180-degree, in-air twists. According to the press release and Playter’s interviews, little has been lost in this category. In fact, they say, electric Atlas is stronger than hydraulic Atlas. Still, as with all things robotics, the ultimate proof of how capable it is will likely be in video form, which we’ll eagerly await.

Despite big design updates, the company’s messaging is perhaps more notable. Atlas used to be a research robot. Now, the company intends to sell them commercially.

This isn’t terribly surprising. There are now a number of companies competing in the humanoid robots space, including Agility, 1X, Tesla, Apptronik, and Figure—which just raised $675 million at a $2.6 billion valuation. Several are making rapid progress, with a heavy focus on AI, and have kicked off real-world pilots.

Where does Boston Dynamics fit in? With Atlas, the company has been the clear leader for years. So, it’s not starting from the ground floor. Also, thanks to its Spot and Stretch robots, the company already has experience commercializing and selling advanced robots, from identifying product-market fit to dealing with logistics and servicing. But AI was, until recently, less of a focus. Now, they’re folding reinforcement learning into Spot, have begun experimenting with generative AI too, and promise more is coming.

Hyundai acquired Boston Dynamics for $1.1 billion in 2021. This may prove advantageous, as they have access to a world-class manufacturing company along with its resources and expertise producing and selling machines at scale. It’s also an opportunity to pilot Atlas in real-world situations and perfect it for future customers. Plans are already in motion to put Atlas to work at Hyundai next year.

Still, it’s worth noting that, although humanoid robots are attracting attention, getting big time investment, and being tried out in commercial contexts, there’s likely a ways to go before they reach the kind of generality some companies are touting. Playter says Boston Dynamics is going for multi-purpose, but still niche, robots in the near term.

“It definitely needs to be a multi-use case robot. I believe that because I don’t think there’s very many examples where a single repetitive task is going to warrant these complex robots,” he said. “I also think, though, that the practical matter is that you’re going to have to focus on a class of use cases, and really making them useful for the end customer.”

Humanoid robots that tidy your house and do the dishes may not be imminent, but the field is hot, and AI is bringing a degree of generality not possible a year ago. Now that Boston Dynamics has thrown its hat in the ring, things will only get more interesting from here. We’ll be keeping a close eye on YouTube to see what new tricks Atlas has up its sleeve.

Image Credit: Boston Dynamics


Exploding Stars Are Rare—but if One Was Close Enough, It Could Threaten Life on Earth

16 April, 2024 - 19:37

Stars like the sun are remarkably constant. They vary in brightness by only 0.1 percent over years and decades, thanks to the fusion of hydrogen into helium that powers them. This process will keep the sun shining steadily for about 5 billion more years, but when stars exhaust their nuclear fuel, their deaths can lead to pyrotechnics.

The sun will eventually die by growing large and then condensing into a type of star called a white dwarf. But stars over eight times more massive than the sun die violently in an explosion called a supernova.

Supernovae happen across the Milky Way only a few times a century, and these violent explosions are usually remote enough that people here on Earth don’t notice. For a dying star to have any effect on life on our planet, it would have to go supernova within 100 light years of Earth.

I’m an astronomer who studies cosmology and black holes. In my writing about cosmic endings, I’ve described the threat posed by stellar cataclysms such as supernovae and related phenomena such as gamma-ray bursts. Most of these cataclysms are remote, but when they occur closer to home, they can pose a threat to life on Earth.

The Death of a Massive Star

Very few stars are massive enough to die in a supernova. But when one does, it briefly rivals the brightness of billions of stars. At one supernova per 50 years, and with 100 billion galaxies in the universe, somewhere in the universe a supernova explodes every hundredth of a second.
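That rate is easy to check with back-of-the-envelope arithmetic; here is a quick sketch using the article’s own round figures (one supernova per galaxy per 50 years, 100 billion galaxies):

```python
SECONDS_PER_YEAR = 3.156e7   # approximate length of a year in seconds
GALAXIES = 100e9             # rough count of galaxies in the universe
RATE_PER_GALAXY = 1 / 50     # one supernova per galaxy every 50 years

supernovae_per_second = GALAXIES * RATE_PER_GALAXY / SECONDS_PER_YEAR
interval = 1 / supernovae_per_second
print(f"one supernova every {interval:.3f} seconds")
```

The interval comes out around 0.016 seconds, consistent with the “every hundredth of a second” estimate above.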

The dying star emits high-energy radiation as gamma rays. Gamma rays are a form of electromagnetic radiation with wavelengths much shorter than light waves, meaning they’re invisible to the human eye. The dying star also releases a torrent of high-energy particles in the form of cosmic rays: subatomic particles moving at close to the speed of light.

Supernovae in the Milky Way are rare, but a few have been close enough to Earth that historical records discuss them. In 185 AD, a star appeared in a place where no star had previously been seen. It was probably a supernova.

Observers around the world saw a bright star suddenly appear in 1006 AD. Astronomers later matched it to a supernova 7,200 light years away. Then, in 1054 AD, Chinese astronomers recorded a star visible in the daytime sky that astronomers subsequently identified as a supernova 6,500 light years away.

Johannes Kepler, the astronomer who observed what was likely a supernova in 1604. Image Credit: Kepler-Museum in Weil der Stadt

Johannes Kepler observed the last supernova in the Milky Way in 1604, so in a statistical sense, the next one is overdue.

At 600 light years away, the red supergiant Betelgeuse in the constellation of Orion is the nearest massive star getting close to the end of its life. When it goes supernova, it will shine as bright as the full moon for those watching from Earth, without causing any damage to life on our planet.

Radiation Damage

If a star goes supernova close enough to Earth, the gamma-ray radiation could damage some of the planetary protection that allows life to thrive on Earth. There’s a time delay due to the finite speed of light. If a supernova goes off 100 light years away, it takes 100 years for us to see it.

Astronomers have found evidence of a supernova 300 light years away that exploded 2.5 million years ago. Radioactive atoms trapped in seafloor sediments are the telltale signs of this event. Radiation from gamma rays eroded the ozone layer, which protects life on Earth from the sun’s harmful radiation. This event would have cooled the climate, leading to the extinction of some ancient species.

Safety from a supernova comes with greater distance. Gamma rays and cosmic rays spread out in all directions once emitted from a supernova, so the fraction that reach the Earth decreases with greater distance. For example, imagine two identical supernovae, with one 10 times closer to Earth than the other. Earth would receive radiation that’s about a hundred times stronger from the closer event.
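The comparison above is just the inverse-square law; a minimal sketch:

```python
def dose_ratio(d_near, d_far):
    """Received flux falls off as 1/d^2, so two identical sources at
    distances d_near and d_far differ by a factor of (d_far/d_near)^2."""
    return (d_far / d_near) ** 2

# One supernova 10 times closer than its identical twin:
print(dose_ratio(1, 10))  # -> 100.0, i.e. ~100x stronger radiation on Earth
```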

A supernova within 30 light years would be catastrophic, severely depleting the ozone layer, disrupting the marine food chain and likely causing mass extinction. Some astronomers guess that nearby supernovae triggered a series of mass extinctions 360 to 375 million years ago. Luckily, these events happen within 30 light years only every few hundred million years.

When Neutron Stars Collide

But supernovae aren’t the only events that emit gamma rays. Neutron star collisions cause high-energy phenomena ranging from gamma rays to gravitational waves.

Left behind after a supernova explosion, neutron stars are city-size balls of matter with the density of an atomic nucleus, making them some 300 trillion times denser than the sun. These collisions created many of the gold and precious metals on Earth. The intense pressure caused by two ultradense objects colliding forces neutrons into atomic nuclei, which creates heavier elements such as gold and platinum.

A neutron star collision generates an intense burst of gamma rays. These gamma rays are concentrated into a narrow jet of radiation that packs a big punch.

If the Earth were in the line of fire of a gamma-ray burst within 10,000 light years, or 10 percent of the diameter of the galaxy, the burst would severely damage the ozone layer. It would also damage the DNA inside organisms’ cells, at a level that would kill many simple life forms like bacteria.

That sounds ominous, but neutron stars do not typically form in pairs, so there is only one collision in the Milky Way about every 10,000 years. They are 100 times rarer than supernova explosions. Across the entire universe, there is a neutron star collision every few minutes.

Gamma-ray bursts may not hold an imminent threat to life on Earth, but over very long time scales, bursts will inevitably hit the Earth. The odds of a gamma-ray burst triggering a mass extinction are 50 percent in the past 500 million years and 90 percent in the 4 billion years since there has been life on Earth.

By that math, it’s quite likely that a gamma-ray burst caused one of the five mass extinctions in the past 500 million years. Astronomers have argued that a gamma-ray burst caused the first mass extinction 440 million years ago, when 60 percent of all marine creatures disappeared.

A Recent Reminder

The most extreme astrophysical events have a long reach. Astronomers were reminded of this in October 2022, when a pulse of radiation swept through the solar system and overloaded all of the gamma-ray telescopes in space.

It was the brightest gamma-ray burst to occur since human civilization began. The radiation caused a sudden disturbance to the Earth’s ionosphere, even though the source was an explosion nearly two billion light years away. Life on Earth was unaffected, but the fact that it altered the ionosphere is sobering—a similar burst in the Milky Way would be a million times brighter.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA, ESA, Joel Kastner (RIT)


A New Photonic Computer Chip Uses Light to Slash AI Energy Costs

15 April, 2024 - 22:45

AI models are power hogs.

As the algorithms grow and become more complex, they’re increasingly taxing current computer chips. Multiple companies have designed chips tailored to AI to reduce power draw. But they’re all based on one fundamental rule—they use electricity.

This month, a team from Tsinghua University in China switched up the recipe. They built a neural network chip that uses light rather than electricity to run AI tasks at a fraction of the energy cost of NVIDIA’s H100, a state-of-the-art chip used to train and run AI models.

Called Taichi, the chip combines two types of light-based processing into its internal structure. Compared to previous optical chips, Taichi is far more accurate for relatively simple tasks such as recognizing hand-written numbers or other images. Unlike its predecessors, the chip can generate content too. It can make basic images in a style based on the Dutch artist Vincent van Gogh, for example, or classical musical numbers inspired by Johann Sebastian Bach.

Part of Taichi’s efficiency is due to its structure. The chip is made of multiple components called chiplets. Similar to the brain’s organization, each chiplet performs its own calculations in parallel, the results of which are then integrated with the others to reach a solution.

Faced with the challenging problem of sorting images into over 1,000 categories, Taichi was successful nearly 92 percent of the time, matching current chip performance but slashing energy consumption more than a thousand-fold.

For AI, “the trend of dealing with more advanced tasks [is] irreversible,” wrote the authors. “Taichi paves the way for large-scale photonic [light-based] computing,” leading to more flexible AI with lower energy costs.

Chip on the Shoulder

Today’s computer chips don’t mesh well with AI.

Part of the problem is structural. Processing and memory on traditional chips are physically separated. Shuttling data between them takes up enormous amounts of energy and time.

While efficient for solving relatively simple problems, the setup is incredibly power hungry when it comes to complex AI, like the large language models powering ChatGPT.

The main problem is how computer chips are built. Each calculation relies on transistors, which switch on or off to represent the 0s and 1s used in calculations. Engineers have dramatically shrunk transistors over the decades so they can cram ever more onto chips. But current chip technology is cruising towards a breaking point where we can’t go smaller.

Scientists have long sought to revamp current chips. One strategy inspired by the brain relies on “synapses”—the biological “dock” connecting neurons—that compute and store information at the same location. These brain-inspired, or neuromorphic, chips slash energy consumption and speed up calculations. But like current chips, they rely on electricity.

Another idea is to use a different computing mechanism altogether: light. “Photonic computing” is “attracting ever-growing attention,” wrote the authors. Rather than using electricity, it may be possible to hijack light particles to power AI at the speed of light.

Let There Be Light

Compared to electricity-based chips, light uses far less power and can simultaneously tackle multiple calculations. Tapping into these properties, scientists have built optical neural networks that use photons—particles of light—for AI chips, instead of electricity.

These chips can work two ways. In one, chips scatter light signals into engineered channels that eventually combine the rays to solve a problem, a process called diffraction. These diffractive optical neural networks pack artificial neurons closely together and minimize energy costs. But they can’t be easily changed, meaning they can only work on a single, simple problem.

A different setup depends on another property of light called interference. Like ocean waves, light waves combine and cancel each other out. When inside micro-tunnels on a chip, they can collide to boost or inhibit each other—these interference patterns can be used for calculations. Chips based on interference can be easily reconfigured using a device called an interferometer. Problem is, they’re physically bulky and consume tons of energy.
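To make the interference idea concrete, here is a toy numerical sketch (not Taichi’s actual design) of the textbook 2x2 Mach-Zehnder interferometer unitary, the building block such reconfigurable chips tile into meshes. The two phase settings, `theta` and `phi`, are the “knobs” that reprogram the computation:

```python
import numpy as np

def mzi(theta, phi):
    """Ideal Mach-Zehnder interferometer: an external phase shifter (phi),
    a 50:50 beam splitter, an internal phase shifter (theta), and a second
    50:50 beam splitter. The result is a 2x2 unitary acting on the two arms."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
    return bs @ np.diag([np.exp(1j * theta), 1]) @ bs @ np.diag([np.exp(1j * phi), 1])

# Send all the light into one arm; interference redistributes the power.
amps_in = np.array([1.0, 0.0])
amps_out = mzi(np.pi / 2, 0.0) @ amps_in
print(np.abs(amps_out) ** 2)  # output powers, which sum to 1 (energy conserved)
```

Sweeping `theta` moves the light smoothly between the two outputs, which is what lets a mesh of these devices be reconfigured to implement different matrix operations.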

Then there’s the problem of accuracy. Even in the sculpted channels often used for interference experiments, light bounces and scatters, making calculations unreliable. For a single optical neural network, the errors are tolerable. But with larger optical networks and more sophisticated problems, noise rises exponentially and becomes untenable.

This is why light-based neural networks can’t be easily scaled up. So far, they’ve only been able to solve basic tasks, such as recognizing numbers or vowels.

“Magnifying the scale of existing architectures would not proportionally improve the performances,” wrote the team.

Double Trouble

The new AI, Taichi, combined the two approaches to push optical neural networks toward real-world use.

Rather than configuring a single neural network, the team used a chiplet method, which delegated different parts of a task to multiple functional blocks. Each block had its own strengths: One was set up to analyze diffraction, which could compress large amounts of data in a short period of time. Another block was embedded with interferometers to provide interference, allowing the chip to be easily reconfigured between tasks.

Compared to deep learning, Taichi took a “shallow” approach whereby the task is spread across multiple chiplets.

With standard deep learning structures, errors tend to accumulate over layers and time. This setup nips problems that come from sequential processing in the bud. When faced with a problem, Taichi distributes the workload across multiple independent clusters, making it easier to tackle larger problems with minimal errors.
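Why spreading work across independent blocks limits error can be illustrated with a toy simulation. The noise model and numbers below are assumptions for illustration, not measurements from Taichi:

```python
import numpy as np

rng = np.random.default_rng(0)
NOISE, TRIALS = 0.02, 10_000

def run(depth):
    """Send a unit signal through `depth` noisy stages, each applying
    a small random multiplicative gain error."""
    gains = 1 + rng.normal(0, NOISE, size=(TRIALS, depth))
    return gains.prod(axis=1)

# Deep, sequential processing: 16 stages in a row, so errors compound.
deep = run(16)

# Shallow, distributed processing: four independent 4-stage clusters
# whose outputs are averaged, so independent errors partly cancel.
shallow = np.mean([run(4) for _ in range(4)], axis=0)

deep_err = np.abs(deep - 1).mean()
shallow_err = np.abs(shallow - 1).mean()
print(f"deep: {deep_err:.4f}  shallow: {shallow_err:.4f}")
```

With these assumed numbers, the shallow, averaged layout lands several times closer to the true signal on average, which is the intuition behind the chiplet design.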

The strategy paid off.

Taichi has the computational capacity of 4,256 total artificial neurons, with nearly 14 million parameters mimicking the brain connections that encode learning and memory. When sorting images into 1,000 categories, the photonic chip was nearly 92 percent accurate, comparable to “currently popular electronic neural networks,” wrote the team.

The chip also excelled in other standard AI image-recognition tests, such as identifying hand-written characters from different alphabets.

As a final test, the team challenged the photonic AI to grasp and recreate content in the style of different artists and musicians. When trained on Bach's repertoire, the AI eventually learned the composer's pitch and overall style. Similarly, when fed images by Vincent van Gogh or Edvard Munch, the artist behind the famous painting The Scream, the AI generated images in a similar style, although many looked like a toddler's recreation.

Optical neural networks still have much further to go. But if used broadly, they could be a more energy-efficient alternative to current AI systems. Taichi is over 100 times more energy efficient than previous iterations. But the chip still requires lasers for power and data transfer units, which are hard to condense.

Next, the team is hoping to integrate readily available mini lasers and other components into a single, cohesive photonic chip. Meanwhile, they hope Taichi will “accelerate the development of more powerful optical solutions” that could eventually lead to “a new era” of powerful and energy-efficient AI.

Image Credit: spainter_vfx / Shutterstock.com

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through April 13)

13 April, 2024 - 16:00
ROBOTICS

Is Robotics About to Have Its Own ChatGPT Moment?
Melissa Heikkilä | MIT Technology Review
“For decades, roboticists have more or less focused on controlling robots’ ‘bodies’—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes.”

ARTIFICIAL INTELLIGENCE

Humans Forget. AI Assistants Will Remember Everything
Boone Ashworth | Wired
“Human brains, Gruber says, are really good at story retrieval, but not great at remembering details, like specific dates, names, or faces. He has been arguing for digital AI assistants that can analyze everything you do on your devices and index all those details for later reference.”

BIOTECH

The Effort to Make a Breakthrough Cancer Therapy Cheaper
Cassandra Willyard | MIT Technology Review
“CAR-T therapies are already showing promise beyond blood cancers. Earlier this year, researchers reported stunning results in 15 patients with lupus and other autoimmune diseases. CAR-T is also being tested as a treatment for solid tumors, heart disease, aging, HIV infection, and more. As the number of people eligible for CAR-T therapies increases, so will the pressure to reduce the cost.”

ETHICS

Students Are Likely Writing Millions of Papers With AI
Amanda Hoover | Wired
“A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in 20 percent of its content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing.”

SCIENCE

Physicists Capture First-Ever Image of an Electron Crystal
Isaac Schultz | Gizmodo
“Electrons are typically seen flitting around their atoms, but a team of physicists has now imaged the particles in a very different state: nestled together in a quantum phase called a Wigner crystal, without a nucleus at their core. The phase is named after Eugene Wigner, who predicted in 1934 that electrons would crystallize in a lattice when certain interactions between them are strong enough. The recent team used high-resolution scanning tunneling microscopy to directly image the predicted crystal.”

GADGETS

Review: Humane Ai Pin
Julian Chokkattu | Wired
“Humane has potential with the Ai Pin. I like being able to access an assistant so quickly, but right now, there’s nothing here that makes me want to use it over my smartphone. Humane says this is just version 1.0 and that many of the missing features I’ve mentioned will arrive later. I’ll be happy to give it another go then.”

SPACE

The Moon Likely Turned Itself Inside Out 4.2 Billion Years Ago
Passant Rabie | Gizmodo
“A team of researchers from the University of Arizona found new evidence that supports one of the wildest formation theories for the moon, which suggests that Earth’s natural satellite may have turned itself inside out a few million years after it came to be. In a new study published Monday in Nature Geoscience, the researchers looked at subtle variations in the moon’s gravitational field to provide the first physical evidence of a sinking mineral-rich layer.”

TECH

How Tech Giants Cut Corners to Harvest Data for AI
Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson, and Nico Grant | The New York Times
“The race to lead AI has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times.”

ENERGY

Artificial Intelligence’s ‘Insatiable’ Energy Needs Not Sustainable, Arm CEO Says
Peter Landers | The Wall Street Journal
“In a January report, the International Energy Agency said a request to ChatGPT requires 2.9 watt-hours of electricity on average—equivalent to turning on a 60-watt lightbulb for just under three minutes. That is nearly 10 times as much as the average Google search. The agency said power demand by the AI industry is expected to grow by at least 10 times between 2023 and 2026.”
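The quoted figures are easy to check with basic arithmetic. Note that the per-search figure for Google below is inferred from the "nearly 10 times" comparison rather than stated directly:

```python
chatgpt_wh = 2.9   # watt-hours per ChatGPT request (IEA figure)
bulb_watts = 60

# Runtime of a 60 W bulb on 2.9 Wh: energy / power, in minutes.
minutes = chatgpt_wh / bulb_watts * 60
print(f"{minutes:.1f} minutes")  # 2.9, i.e. "just under three minutes"

# "Nearly 10 times the average Google search" implies roughly:
google_wh = chatgpt_wh / 10
print(f"~{google_wh:.2f} Wh per search")
```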

FUTURE

Someday, Earth Will Have a Final Total Solar Eclipse
Katherine Kornei | The New York Times
“The total solar eclipse visible on Monday over parts of Mexico, the United States and Canada was a perfect confluence of the sun and the moon in the sky. But it’s also the kind of event that comes with an expiration date: At some point in the distant future, Earth will experience its last total solar eclipse. That’s because the moon is drifting away from Earth, so our nearest celestial neighbor will one day, millions or even billions of years in the future, appear too small in the sky to completely obscure the sun.”

Image Credit: Tim Foster / Unsplash


Elon Musk Doubles Down on Mars Dreams and Details What’s Next for SpaceX’s Starship

12 April, 2024 - 21:13

Elon Musk has long been open about his dreams of using SpaceX to spread humanity’s presence further into the solar system. And last weekend, he gave an updated outline of his vision for how the company’s rockets could enable the colonization of Mars.

The serial entrepreneur has been clear for a number of years that the main motivation for founding SpaceX was to make humans a multiplanetary species. For a long time, that seemed like the kind of aspirational goal one might set to inspire and motivate engineers rather than one with a realistic chance of coming to fruition.

But following the successful launch of the company’s mammoth Starship vehicle last month, the idea is beginning to look less far-fetched. And in a speech at the company’s facilities in South Texas, Musk explained how he envisions using Starship to deliver millions of tons of cargo to Mars over the next couple of decades to create a self-sustaining civilization.

“Starship is the first design of a rocket that is actually capable of making life multiplanetary,” Musk said. “No rocket before this has had the potential to extend life to another planet.”

In a slightly rambling opening to the speech, Musk explained that making humans multiplanetary could be an essential insurance policy in case anything catastrophic happens to Earth. The red planet is the most obvious choice, he said, as it’s neither too close nor too far from Earth and has many of the raw ingredients required to support a functioning settlement.

But he estimates it will require us to deliver several million tons of cargo to the surface to get that civilization up and running. Starship is central to those plans, and Musk outlined the company’s roadmap for the massive rocket over the coming years.

Key to the vision is making the vehicle entirely reusable. That means the first hurdle is proving SpaceX can land and reuse both the Super Heavy first stage rocket and the Starship spacecraft itself. The second of those challenges will be tougher, as the vehicle must survive reentry into the atmosphere—in the most recent test, it broke up on its way back to Earth.

Musk says they plan to demonstrate the ability to land and reuse the Super Heavy booster this year, which he thinks has an 80 to 90 percent chance of success. Assuming they can get Starship to survive the extreme heat of reentry, they are also going to attempt landing the vehicle on a mock launch pad out at sea in 2024, with the aim of being able to land and reuse it by next year.

Proving the rocket works and is reusable is just the very first step in Musk’s Mars ambitions though. To achieve his goal of delivering a million people to the red planet in the next 20 years, SpaceX will have to massively ramp up its production and launch capabilities.

The company is currently building a second launch tower at its base in South Texas and is also planning to build two more at Cape Canaveral in Florida. Musk said the Texas sites would be mostly used for test launches and development work, with the Florida ones being the main hub for launches once Starship begins commercial operations.

SpaceX plans to build six Starships this year, according to Musk, but it is also building what he called a “giant factory” that will enable it to massively ramp up production of the spacecraft. The long-term goal is to produce multiple Starships a day. That’s crucial, according to Musk, because Starships initially won’t return from Mars and will instead be used as raw materials to construct structures on the surface.

The company also plans to continue development of Starship, boosting its carrying capacity from around 100 tons today to 200 tons in the future and enabling it to complete multiple launches in a day. SpaceX also hopes to demonstrate ship-to-ship refueling in orbit next year. It will be necessary to replenish the fuel used up by Starship on launch so it has a full tank as it sets off for Mars.
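A rough, back-of-envelope calculation suggests the scale these plans imply. The cargo total below is an assumed midpoint for "several million tons," so treat the result as illustrative:

```python
# Rough scale of the plan: "several million tons" of cargo delivered
# by Starships carrying a future payload of 200 tons each.
cargo_tons = 5_000_000   # assumed midpoint, for illustration only
tons_per_ship = 200      # future capacity cited by Musk

ships_needed = cargo_tons / tons_per_ship
per_year = ships_needed / 20  # averaged over the 20-year goal

print(f"{ships_needed:,.0f} Mars landings, ~{per_year:,.0f} per year")
```

That works out to tens of thousands of landings, which is why producing multiple Starships a day matters so much to the plan.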

Those missions will depart when the orbits of Earth and Mars bring them close together, an alignment that only happens every 26 months. As such, Musk envisions entire armadas of Starships setting off together whenever these windows arrive.
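The 26-month figure follows directly from the two planets' orbital periods: launch windows recur at the synodic period, the time it takes Earth to "lap" Mars. A few lines of Python verify it:

```python
# Earth-Mars launch windows recur at the synodic period:
# 1 / (1/T_earth - 1/T_mars), the time for Earth to "lap" Mars.
T_EARTH = 365.25  # days per Earth orbit
T_MARS = 686.98   # days per Mars orbit

synodic_days = 1 / (1 / T_EARTH - 1 / T_MARS)
synodic_months = synodic_days / 30.44  # average days per month

print(f"{synodic_days:.0f} days, about {synodic_months:.0f} months")
```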

SpaceX has done some early work on what needs to happen once Starships arrive at the red planet. They’ve identified promising landing sites and the infrastructure that will need setting up, including power generation, ice-mining facilities, propellant factories, and communication networks. But Musk admits they’ve yet to start development of any of these.

One glaring omission in the talk was any detail on who’s going to be paying for all of this. While the goal of making humankind multiplanetary is a noble one, it’s far from clear how the endeavor would make money for those who put up the funds to make it possible.

Musk estimates that the cost of each launch could eventually fall to just $2 to $3 million. And he noted that profits from the company’s Starlink satellites and Falcon 9 launch vehicle are currently paying for Starship’s development. But those revenue streams are unlikely to cover the thousands of launches a year required to make his Mars dreams a reality.

Still, the very fact that the questions these days are more about economics than technical feasibility is testament to the rapid progress SpaceX has made. The dream of becoming a multiplanetary species may not be science fiction for much longer.

Image Credit: SpaceX


This Company Is Growing Mini Livers Inside People to Fight Liver Disease

11 April, 2024 - 23:10

Growing a substitute liver inside a human body sounds like science fiction.

Yet a patient with severe liver damage just received an injection that could grow an additional “mini liver” directly inside their body. If all goes well, it’ll take up the failing liver’s job of filtering toxins from the blood.

For people with end-stage liver disease, a transplant is the only solution. But matching donor organs are hard to come by. Across the globe, two million people die from liver failure each year.

The new treatment, helmed by biotechnology company LyGenesis, offers an unusual solution. Rather than transplanting a whole new liver, the team is injecting healthy donor liver cells into lymph nodes in the patient’s upper abdomen. In a few months, it’s hoped the cells will gradually replicate and grow into a functional miniature liver.

The patient is part of a Phase 2a clinical trial, a stage that begins to gauge whether a therapy is effective. In up to 12 people with end-stage liver disease, the trial will test multiple doses to find the “Goldilocks” zone of treatment—effective with minimal side effects.

If successful, the therapy could sidestep the transplant organ shortage problem, not just for liver disease, but potentially also for kidney failure or diabetes. The math also works in favor of patients. Instead of one donor organ per recipient, healthy cells from one person could help multiple people in need of new organs.

A Living Bioreactor

Most of us don’t think about lymph nodes until we catch a cold, and they swell up painfully under the chin. These structures are dotted throughout the body. Like tiny cellular nurseries, they help immune cells proliferate to fend off invading viruses and bacteria.

They also have a dark side. Lymph nodes aid the spread of breast and other types of cancers. Because they’re highly connected to a highway of lymphatic vessels, cancer cells tunnel into them and take advantage of nutrients in the blood to grow and spread across the body.

What seems like a biological liability may benefit regenerative medicine. If lymph nodes can support both immune cells and cancer growth, they may also be able to incubate other cell types and grow them into tissues—or even replacement organs.

The idea diverges from usual regenerative therapies, such as stem cell treatments, which aim to revive damaged tissues at the spot of injury. This is a hard ask: When organs fail, they often scar and spew out toxic chemicals that prevent engrafted cells from growing.

Lymph nodes offer a way to skip these cellular cesspools entirely.

Growing organs inside lymph nodes may sound far-fetched, but over a decade ago, LyGenesis’ chief scientific officer and co-founder, Dr. Eric Lagasse, showed it was possible in mice. In one test, his team injected liver cells directly into a lymph node inside a mouse’s belly. They found the grafted cells stayed in the “nursery,” rather than roaming the body and causing unexpected side effects.

In a mouse model of lethal liver failure, an infusion of healthy liver cells into the lymph node grew into a mini liver in just twelve weeks. The transplanted cells took over their host, developing into cube-like cells characteristic of normal liver cells and leaving behind just a sliver of normal lymph node cells.

The graft could support immune system growth and grew cells to shuttle bile and other digestive chemicals. It also extended the mice's survival. Without treatment, most mice died within 10 weeks of the start of the study. Most mice injected with liver cells survived past 30 weeks.

A similar strategy worked in dogs and pigs with damaged livers. Injecting donor cells into lymph nodes formed mini livers in less than two months in pigs. Under the microscope, the newborn structures resembled the liver’s intricate architecture, including “highways” for bile to easily flow along instead of accumulating, which causes even more damage and scarring.

The body has over 500 lymph nodes. Injections into lymph nodes elsewhere in the body also grew mini livers, but they weren't as effective.

“It’s all about location, location, location,” said Lagasse at the time.

A Daring Trial

With prior experience guiding their clinical trial, LyGenesis dosed a first patient in late March.

The team used a technique called endoscopic ultrasound to direct the cells into the designated lymph node. In the procedure, a thin, flexible tube with a small ultrasound device is inserted through the mouth into the digestive tract. The ultrasound generates an image of the surrounding tissue and helps guide the tube to the target lymph node for injection.

The procedure may sound difficult, but compared to a liver transplant, it's minimally invasive. In an interview with Nature, Dr. Michael Hufford, CEO of LyGenesis, said the patient is recovering well and has already been discharged from the clinic.

The company aims to enroll all 12 patients by mid-2025 to test the therapy's safety and efficacy.

Many questions remain. The transplanted cells could grow into mini livers of different sizes, based on chemical signals from the body. Although not a problem in mice and pigs, could they potentially overgrow in humans? Meanwhile, patients receiving the treatment will need to take a hefty dose of medications to suppress their immune systems. How these will interact with the transplants is also unknown.

Another question is dosage. Lymph nodes are plentiful. The trial will inject liver cells into up to five lymph nodes to see if multiple mini livers can grow and function without side effects.

If successful, the therapy could have a far wider reach.

In diabetic mice, seeding lymph nodes with pancreatic cellular clusters restored their blood sugar levels. A similar strategy could combat Type 1 diabetes in humans. The company is also looking into whether the technology can revive kidney function or even combat aging.

But for now, Hufford is focused on helping millions of people with liver damage. “This therapy will potentially be a remarkable regenerative medicine milestone by helping patients with ESLD [end-stage liver disease] grow new functional ectopic livers in their own body,” he said.

Image Credit: A solution with liver cells in suspension / LyGenesis


Harvard’s New Programmable Liquid Shifts Its Properties on Demand

11 April, 2024 - 00:37

We’re surrounded by ingenious substances: a menu of metal alloys that can wrap up leftovers or skin rockets, paints in any color imaginable, and ever-morphing digital displays. Virtually all of these exploit the natural properties of the underlying materials.

But an emerging class of materials is more versatile, even programmable.

Known as metamaterials, these substances are meticulously engineered such that their structural makeup—as opposed to their composition—determines their properties. Some metamaterials might make long-distance wireless power transfer practical, others could bring “invisibility cloaks” or futuristic materials that respond to brainwaves.

But most examples are solid metamaterials—a Harvard team wondered if they could make a metafluid. As it turns out, yes, absolutely. The team recently described their results in Nature.

“Unlike solid metamaterials, metafluids have the unique ability to flow and adapt to the shape of their container,” Katia Bertoldi, a professor in applied mechanics at Harvard and senior author of the paper, said in a press release. “Our goal was to create a metafluid that not only possesses these remarkable attributes but also provides a platform for programmable viscosity, compressibility, and optical properties.”

The team’s metafluid is made up of hundreds of thousands of tiny, stretchy spheres—each between 50 and 500 microns across—suspended in oil. The spheres change shape depending on the pressure of the surrounding oil. At higher pressures, they deform, one hemisphere collapsing inward into a kind of half moon shape. They then resume their original spherical shape when the pressure is relieved.

The metafluid’s properties—such as viscosity or opacity—change depending on which of these shapes its constituent spheres assume. The fluid’s properties can be fine-tuned based on how many spheres are in the liquid and how big or thick they are.

Greater pressure causes the spheres to collapse. When the pressure is relieved, they resume their spherical shape. Credit: Adel Djellouli/Harvard SEAS

As a proof of concept, the team filled a hydraulic robotic gripper with their metafluid. Robots usually have to be programmed to sense objects and adjust grip strength. The team showed the gripper could automatically adapt to a blueberry, a glass, and an egg without additional sensing or programming required. The pressure of each object “programmed” the liquid to adjust, allowing the gripper to pick up all three, undamaged, with ease.

The team also showed the metafluid could switch from opaque, when its constituents were spherical, to more transparent, when they collapsed. The latter shape, the researchers said, functions like a lens focusing light, while the former scatters light.

The metafluid obscures the Harvard logo then becomes more transparent as the capsules collapse. Credit: Adel Djellouli/Harvard SEAS

Also of note, the metafluid behaves like a Newtonian fluid when its components are spherical, meaning its viscosity only changes with shifts in temperature. When they collapse, however, it becomes a non-Newtonian fluid, where its viscosity changes depending on the shear forces present. The greater the shear force—that is, parallel forces pushing in opposite directions—the more liquid the metafluid becomes.
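Shear-dependent viscosity is often described with a power-law model. The sketch below is a generic textbook model for illustration, not the equation from the Nature paper:

```python
import numpy as np

def apparent_viscosity(shear_rate, K=1.0, n=1.0):
    """Power-law fluid model: eta = K * shear_rate**(n - 1).
    n == 1 gives a Newtonian fluid (constant viscosity);
    n < 1 gives a shear-thinning, non-Newtonian fluid."""
    return K * shear_rate ** (n - 1)

shear = np.array([0.1, 1.0, 10.0, 100.0])  # shear rates, 1/s

newtonian = apparent_viscosity(shear, n=1.0)       # spheres intact
shear_thinning = apparent_viscosity(shear, n=0.5)  # spheres collapsed

print(newtonian)       # constant viscosity at every shear rate
print(shear_thinning)  # viscosity falls as shear increases
```

The exponent n here plays the role of the pressure-triggered switch: changing the spheres' shape effectively moves the fluid between the two regimes.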

Next, the team will investigate additional properties—such as how their creation’s acoustics and thermodynamics change with pressure—and look into commercialization. Making the elastic spheres themselves is fairly straightforward, and they think metafluids like theirs might be useful in robots, as “intelligent” shock absorbers, or in color-changing e-inks.

“The application space for these scalable, easy-to-produce metafluids is huge,” said Bertoldi.

Of course, the team’s creation is still in the research phase. There are plenty of hoops yet to navigate before it shows up in products we all might enjoy. Still, the work adds to a growing list of metamaterials—and shows the promise of going from solid to liquid.

Image Credit: Adel Djellouli/Harvard SEAS


3 Body Problem: Is the Universe Really a ‘Dark Forest’ Full of Hostile Aliens in Hiding?

9 April, 2024 - 20:17

We have no good reason to believe that aliens have ever contacted Earth. Sure, there are conspiracy theories, and some rather strange reports about harm to cattle, but nothing credible. Physicist Enrico Fermi found this odd. His formulation of the puzzle, proposed in the 1950s and now known as the Fermi Paradox, is still key to the search for extraterrestrial intelligence (SETI) and to messaging extraterrestrial intelligence (METI) by sending signals into space.

The Earth is about 4.5 billion years old, and life is at least 3.5 billion years old. The paradox states that, given the scale of the universe, favorable conditions for life are likely to have occurred many, many times. So where is everyone? We have good reasons to believe that there must be life out there, but nobody has come to call.

This is an issue that the character Ye Wenjie wrestles with in the first episode of Netflix’s 3 Body Problem. Working at a radio observatory, she does finally receive a message from a member of an alien civilization—telling her they are a pacifist and urging her not to respond to the message or Earth will be attacked.

The series will ultimately offer a detailed, elegant solution to the Fermi Paradox, but we will have to wait until the second season.

Or you can read the second book in Cixin Liu’s series, The Dark Forest. Without spoilers, the explanation set out in the books runs as follows: “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound.”

Ultimately, everybody is hiding from everyone else. Differential rates of technological progress make an ongoing balance of power impossible, leaving the most rapidly progressing civilizations in a position to wipe out anyone else.

In this ever-threatening environment, those who play the survival game best are the ones who survive longest. We have joined a game which has been going on before our arrival, and the strategy that everyone has learned is to hide. Nobody who knows the game will be foolish enough to contact anyone—or to respond to a message.

Liu has depicted what he calls “the worst of all possible universes,” continuing a trend within Chinese science fiction. He is not saying that our universe is an actual dark forest, with one survival strategy of silence and predation prevailing everywhere, but that such a universe is possible and interesting.

Liu’s dark forest theory is also sufficiently plausible to have reinforced a trend in the scientific discussion in the west—away from worries about mutual incomprehensibility, and towards concerns about direct threat.

We can see its potential influence in the protocol for what to do on first contact that was proposed in 2020 by the prominent astrobiologists Kelly Smith and John Traphagan. “First, do nothing,” they conclude, because doing something could lead to disaster.

In the case of alien contact, Earth should be notified using pre-established signaling rather than anything improvised, they argue. And we should avoid doing anything that might disclose information about who we are. Defensive behavior would show our familiarity with conflict, so that would not be a good idea. Returning messages would give away the location of Earth—also a bad idea.

Again, the Smith and Traphagan thought is not that the dark forest theory is correct. Benevolent aliens really could be out there. The thought is simply that first contact would involve a high civilization-level risk.

This differs from the assumptions of much Soviet-era Russian literature about space, which suggested that advanced civilizations would necessarily have progressed beyond conflict and would therefore share a comradely attitude. This no longer seems to be regarded as a plausible guide to protocols for contact.

Misinterpreting Darwin

The interesting thing is that the dark forest theory is almost certainly wrong. Or at least, it is wrong in our universe. It sets up a scenario in which there is a Darwinian process of natural selection, a competition for survival.

Charles Darwin’s account of competition for survival is evidence-based. By contrast, we have absolutely no evidence about alien behavior, or about competition within or between other civilizations. This makes for entertaining guesswork rather than good science, even if we accept the idea that natural selection could operate at group level, at the level of civilizations.

Even if you were to assume the universe did operate in accordance with Darwinian evolution, the argument is questionable. No actual forest is like the dark one. They are noisy places where co-evolution occurs.

Creatures evolve together, in mutual interdependence, and not alone. Parasites depend upon hosts, flowers depend upon birds for pollination. Every creature in a forest depends upon insects. Mutual connection does lead to encounters which are nasty, brutish and short, but it also takes other forms. That is how forests in our world work.

Interestingly, Liu acknowledges this interdependence as a counterpoint to the dark forest theory. The viewer, and the reader, are told repeatedly that “in nature, nothing exists alone”—a quote from Rachel Carson’s Silent Spring (1962). This is a text which tells us that bugs can be our friends and not our enemies.

There are many galaxies out there, and potentially plenty of life. Image Credit: X-ray: NASA/CXC/SAO

In Liu’s story, this is used to explain why some humans immediately go over to the side of the aliens, and why the urge to make contact is so strong, in spite of all the risks. Ye Wenjie ultimately replies to the alien warning.

The Carson allusions do not reinstate the old Russian idea that aliens will be advanced and therefore comradely. But they do help to paint a more varied and realistic picture than the dark forest theory.

For this reason, the dark forest solution to the Fermi Paradox is unconvincing. The fact that we do not hear anyone is just as likely to indicate that they are too far off, or we are listening in all the wrong ways, or else that there is no forest and nothing else to be heard.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESO/A. Ghizzi Panizza (www.albertoghizzipanizza.com)


Your Brain Breaks Its Own DNA to Form Memories That Can Last a Lifetime

8 April, 2024 - 21:55

Some memories last a lifetime. The awe of seeing a full solar eclipse. The first smile you shared with your partner. The glimpse of a beloved pet who just passed away in their sleep.

Other memories, not so much. Few of us remember what we had for lunch a week ago. Why do some memories last, while others fade away?

Surprisingly, the answer may be broken DNA and inflammation in the brain. On the surface, these processes sound utterly detrimental to brain function. Broken DNA strands are usually associated with cancer, and inflammation is linked to aging.

But a new study in mice suggests that breaking and repairing DNA in neurons paves the way for long-lasting memories.

We form memories when electrical signals zap through neurons in the hippocampus, a seahorse-shaped region deep inside the brain. The electrical pulses wire groups of neurons together into networks that encode memories. The signals only capture brief snippets of a treasured experience, yet some can be replayed over and over for decades (although they do gradually decay like a broken record).

Scientists have long thought that the brain rewires its connections quickly and is prone to change, much like the artificial neural networks that power most of today's AI. But the new study found a subset of neurons that alter their connections to encode long-lasting memories.

To do this, strangely, the neurons recruit proteins that normally fend off bacteria and cause inflammation.

“Inflammation of brain neurons is usually considered to be a bad thing, since it can lead to neurological problems such as Alzheimer’s and Parkinson’s disease,” said study author Dr. Jelena Radulovic at Albert Einstein College of Medicine in a press release. “But our findings suggest that inflammation in certain neurons in the brain’s hippocampal region is essential for making long-lasting memories.”

Should I Stay or Should I Go?

We all have a mental scrapbook for our lives. When playing a memory—the whens, wheres, whos, and whats—our minds transport us through time to relive the experience.

The hippocampus is at the heart of this ability. In the 1950s, a man known as H.M. had his hippocampus removed to treat epilepsy. After the surgery, he retained old memories, but could no longer form new ones, suggesting that the brain region is a hotspot for encoding memories.

But what does DNA have to do with the hippocampus or memory?

It comes down to how brain cells are wired. Neurons connect with each other through little bumps called synapses. Like docks between two opposing shores, synapses pump out chemicals to transmit messages from one neuron to another. Depending on the signals, synapses can form a strong connection to their neighboring neurons, or they can dial down communications.

This ability to rewire the brain is called synaptic plasticity. Scientists have long thought it’s the basis of memory. When learning something new, electrical signals flow through neurons triggering a cascade of molecules. These stimulate genes that restructure the synapse to either bump up or decrease their connection with neighbors. In the hippocampus, this “dial” can rapidly change overall neural network wiring to record new memories.

Synaptic plasticity comes at a cost. Synapses are made up of a collection of proteins produced from DNA inside cells. With new learning, electrical signals trigger temporary snips in the DNA inside neurons.

DNA damage isn’t always detrimental. It’s been associated with memory formation since 2021, when one study found that breaks in our genetic material are widespread in the brain and, surprisingly, linked to better memory in mice. After learning a task, mice had more DNA breaks in multiple types of brain cells, hinting that the temporary damage may be part of the brain’s learning and memory process.

But the results were only for brief memories. Do similar mechanisms also drive long-term ones?

“What enables brief experiences, encoded over just seconds, to be replayed again and again during a lifetime remains a mystery,” Drs. Benjamin Kelvington and Ted Abel at the Iowa Neuroscience Institute, who were not involved in the work, wrote in Nature.

The Memory Omelet

To find an answer, the team used a standard method for assessing memory. They housed mice in different chambers: some were comfortable; others gave the critters a tiny electrical zap to the paws, just enough that they disliked the habitat. The mice rapidly learned to prefer the comfortable room.

The team then compared gene expression from mice with a recent memory—roughly four days after the test—to those nearly a month after the stay.

Surprisingly, genes involved in inflammation flared up in addition to those normally associated with synaptic plasticity. Digging deeper, the team found a protein called TLR9. Usually known as part of the body’s first line of defense against dangerous bacteria, TLR9 boosts the body’s immune response against DNA fragments from invading bacteria. Here, however, the gene became highly active in neurons inside the hippocampus—especially those with persistent DNA breaks that last for days.

What does it do? In one test, the team deleted the gene encoding TLR9 in the hippocampus. When challenged with the chamber test, these mice struggled to remember the “dangerous” chamber in a long-term memory test compared to peers with the gene intact.

Interestingly, the team found that TLR9 could sense DNA breakage. Deleting the gene prevented mouse cells from recognizing DNA breaks, causing not just loss of long-term memory, but also overall genomic instability in their neurons.

“One of the most important contributions of this study is the insight into the connection between DNA damage…and the persistent cellular changes associated with long-term memory,” wrote Kelvington and Abel.

Memory Mystery

How long-term memories persist remains a mystery. Immune responses are likely just one aspect.

In 2021, the same team found that net-like structures around neurons are crucial for long-term memory. The new study pinpointed TLR9 as a protein that helps form these structures, providing a molecular mechanism linking different brain components that support lasting memories.

The results suggest “we are using our own DNA as a signaling system,” Radulovic told Nature, so that we can “retain information over a long time.”

Lots of questions remain. Does DNA damage predispose certain neurons to the formation of memory-encoding networks? And perhaps more pressing, inflammation is often associated with neurodegenerative disorders, such as Alzheimer’s disease. TLR9, which helped the mice remember dangerous chambers in this study, was previously implicated in triggering dementia when expressed in microglia, the brain’s immune cells.

“How is it that, in neurons, activation of TLR9 is crucial for memory formation, whereas, in microglia, it produces neurodegeneration—the antithesis of memory?” asked Kelvington and Abel. “What separates detrimental DNA damage and inflammation from that which is essential for memory?”

Image Credit: geralt / Pixabay

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through April 6)

6 April, 2024 - 16:00
COMPUTING

To Build a Better AI Supercomputer, Let There Be Light
Will Knight | Wired
“Lightmatter wants to directly connect hundreds of thousands or even millions of GPUs—those silicon chips that are crucial to AI training—using optical links. Reducing the conversion bottleneck should allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers of extraordinary scale.”

ROBOTICS

Apple Has Been Secretly Building Home Robots That Could End up as a New Product Line, Report Says
Aaron Mok | Business Insider
“Apple is in the early stages of looking into making home robots, a move that appears to be an effort to create its ‘next big thing’ after it killed its self-driving car project earlier this year, sources familiar with the matter told Bloomberg. Engineers are looking into developing a robot that could follow users around their houses, Bloomberg reported. They’re also exploring a tabletop at-home device that uses robotics to rotate the display, a more advanced project than the mobile robot.”

SPACE

A Tantalizing ‘Hint’ That Astronomers Got Dark Energy All Wrong
Dennis Overbye | The New York Times
“On Thursday, astronomers who are conducting what they describe as the biggest and most precise survey yet of the history of the universe announced that they might have discovered a major flaw in their understanding of dark energy, the mysterious force that is speeding up the expansion of the cosmos. Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away.”

COMPUTING

How ASML Took Over the Chipmaking Chessboard
Mat Honan and James O’Donnell | MIT Technology Review
“When asked what he thought might eventually cause Moore’s Law to finally stall out, van den Brink rejected the premise entirely. ‘There’s no reason to believe this will stop. You won’t get the answer from me where it will end,’ he said. ‘It will end when we’re running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.'”

TRANSPORTATION

The Very First Jet Suit Grand Prix Takes Off in Dubai
Mike Hanlon | New Atlas
“A new sport kicked away this month when the first ever jet-suit race was held in Dubai. Each racer wore an array of seven 130-hp jet engines (two on each arm and three in the backpack for a total 1,050 hp) that are controlled by hand-throttles. After that, the pilots use the three thrust vectors to gain lift, move forward and try to stay above ground level while negotiating the course…faster than anyone else.”

ROBOTICS

Toyota’s Bubble-ized Humanoid Grasps With Its Whole Body
Evan Ackerman | IEEE Spectrum
“Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as is pointed out in the video above, using just our hands outstretched in front of us to lift things is not how humans do it, because using other parts of our bodies to provide extra support makes lifting easier.”

FUTURE

‘A Brief History of the Future’ Offers a Hopeful Antidote to Cynical Tech Takes
Devin Coldewey | TechCrunch
“The future, he said, isn’t just what a Silicon Valley publicist tells you, or what ‘Big Dystopia’ warns you of, or even what a TechCrunch writer predicts. In the six-episode series, he talks with dozens of individuals, companies and communities about how they’re working to improve and secure a future they may never see. From mushroom leather to ocean cleanup to death doulas, Wallach finds people who see the same scary future we do but are choosing to do something about it, even if that thing seems hopelessly small or naïve.”

TECH

This AI Startup Wants You to Talk to Houses, Cars, and Factories
Steven Levy | Wired
“We’ve all been astonished at how chatbots seem to understand the world. But what if they were truly connected to the real world? What if the dataset behind the chat interface was physical reality itself, captured in real time by interpreting the input of billions of sensors sprinkled around the globe? That’s the idea behind Archetype AI, an ambitious startup launching today. As cofounder and CEO Ivan Poupyrev puts it, ‘Think of ChatGPT, but for physical reality.'”

FUTURE

How One Tech Skeptic Decided AI Might Benefit the Middle Class
Steve Lohr | The New York Times
“David Autor seems an unlikely AI optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years. But Mr. Autor is now making the case that the new wave of technology—generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans’ voices and writing—could reverse that trend.”

Image Credit: Harole Ethan / Unsplash

Category: Transhumanism

Life’s Origins: How Fissures in Hot Rocks May Have Kickstarted Biochemistry

5 April, 2024 - 21:17

How did the building blocks of life originate?

The question has long vexed scientists. Early Earth was dotted with pools of water rich in chemicals—a primordial soup. Somehow, biomolecules supporting life emerged from these mixtures, setting the stage for the appearance of the first cells.

Life was kickstarted when two components formed. One was a molecular carrier, such as DNA, to pass along and remix genetic blueprints. The other component was made up of proteins, the workhorses and structural elements of the body.

Both biomolecules are highly complex. In humans, DNA has four different chemical “letters,” called nucleotides, whereas proteins are made of 20 types of amino acids. The components have distinct structures, and their creation requires slightly different chemistries. The final products need to be in large enough amounts to string them together into DNA or proteins.

Scientists can purify the components in the lab using additives. But that raises the question: How did it happen on early Earth?

The answer, suggests Dr. Christof Mast, a researcher at Ludwig Maximilians University of Munich, may be cracks in rocks like those occurring in the volcanoes or geothermal systems that were abundant on early Earth. It’s possible that temperature differences along the cracks naturally separate and concentrate biomolecule components, providing a passive system to purify biomolecules.

Inspired by geology, the team developed heat flow chambers roughly the size of a bank card, each containing minuscule fractures with a temperature gradient. When given a mixture of amino acids or nucleotides—a “prebiotic mix”—the components readily separated.

Adding more chambers further concentrated the chemicals, even those that were similar in structure. The network of fractures also enabled amino acids to bond, the first step towards creating a functional protein.

“Systems of interconnected thin fractures and cracks…are thought to be ubiquitous in volcanic and geothermal environments,” wrote the team. By enriching the prebiotic chemicals, such systems could have “provided a steady driving force for a natural origins-of-life laboratory.”

Brewing Life

Around four billion years ago, Earth was a hostile environment, pummeled by meteorites and rife with volcanic eruptions. Yet somehow among the chaos, chemistry generated the first amino acids, nucleotides, fatty lipids, and other building blocks that support life.

Which chemical processes contributed to these molecules is up for debate. When each came along is also a conundrum. Like a “chicken or egg” problem, DNA and RNA direct the creation of proteins in cells—but both genetic carriers also require proteins to replicate.

One theory suggests sulfidic anions, molecules abundant in early Earth’s lakes and rivers, could be the link. Generated in volcanic eruptions, these molecules, once dissolved into pools of water, can speed up chemical reactions that convert prebiotic molecules into RNA. Dubbed the “RNA world” hypothesis, the idea suggests that RNA was the first biomolecule to grace Earth because it can both carry genetic information and speed up some chemical reactions.

Another idea is that meteor impacts on early Earth generated nucleotides, lipids, and amino acids simultaneously, through a process that includes two abundant chemicals—one from meteors and another from Earth—and a dash of UV light.

But there’s one problem: Each set of building blocks requires a different chemical reaction. Depending on slight differences in structure or chemistry, it’s possible one geographic location might have skewed towards one type of prebiotic molecule over another.

How? The new study, published in Nature, offers an answer.

Tunnel Networks

Lab experiments mimicking early Earth usually start with well-defined ingredients that have already been purified. Scientists also clean up intermediate side-products, especially in reactions with multiple steps.

The process often results in “vanishingly small concentrations of the desired product,” or its creation can even be completely inhibited, wrote the team. The reactions also require multiple spatially separated chambers, which hardly resembles Earth’s natural environment.

The new study took inspiration from geology. Early Earth had complex networks of water-filled cracks found in a variety of rocks in volcanos and geothermal systems. The cracks, generated by overheating rocks, formed natural “straws” that could potentially filter a complex mix of molecules using a heat gradient.

Each molecule favors a preferred temperature based on its size and electrical charge. When exposed to different temperatures, it naturally moves towards its ideal pick. Called thermophoresis, the process separates a soup of ingredients into multiple distinct layers in one step.
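This balance between thermophoretic drift and ordinary diffusion has a simple steady-state description: for a positive Soret coefficient S_T, the cold-to-hot concentration ratio grows roughly as exp(S_T × ΔT), so molecules with larger coefficients pile up more strongly on the cold side. A minimal sketch, using hypothetical Soret coefficients (real values depend on the molecule and the conditions):

```python
import math

def cold_hot_ratio(soret_per_k: float, delta_t_k: float) -> float:
    """Steady-state concentration ratio (cold side / hot side) when
    thermophoretic drift is balanced by diffusion: exp(S_T * dT)."""
    return math.exp(soret_per_k * delta_t_k)

# Hypothetical Soret coefficients (1/K), illustrative only:
# a small molecule like glycine vs. a larger amino acid.
small, large = 0.005, 0.02
dT = 15.0  # ~40 C hot side vs. ~25 C cold side, as in the experiment below

print(cold_hot_ratio(small, dT))  # modest accumulation on the cold side
print(cold_hot_ratio(large, dT))  # stronger accumulation, so the mix separates
```

Because the ratios differ between molecules, an initially uniform soup settles into distinct layers along the crack.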

The team mimicked a single thin rock fracture using a heat flow chamber. Roughly the size of a bank card, the chamber had tiny cracks 170 micrometers across, about the width of a human hair. To create a temperature gradient, one side of the chamber was heated to 104 degrees Fahrenheit and the other end chilled to 77 degrees Fahrenheit.

In a first test, the team added a mix of prebiotic compounds that included amino acids and DNA nucleotides into the chamber. After 18 hours, the components separated into layers like tiramisu. For example, glycine—the smallest of amino acids—became concentrated towards the top, whereas other amino acids with higher thermophoretic strength stuck to the bottom. Similarly, DNA letters and other life-sustaining chemicals also separated in the cracks, with some enriched by up to 45 percent.

Although promising, the system didn’t resemble early Earth, which had highly interconnected cracks varying in size. To better mimic natural conditions, the team next linked three chambers, with the first branching into two others. This setup was roughly 23 times more efficient at enriching prebiotic chemicals than a single chamber.

Using a computer simulation, the team then modeled the behavior of a 20-by-20 interlinked chamber system, using a realistic flow rate of prebiotic chemicals. The chambers further enriched the brew, with glycine enriched over 2,000 times more than other amino acids.
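The compounding effect of linked chambers can be approximated by treating each chamber as multiplying a species’ concentration by a fixed single-chamber factor; small per-chamber differences between two molecules then grow geometrically with the number of stages. A simplified sketch (the factors are invented for illustration; the real network is two-dimensional and flow-dependent):

```python
def cascade(single_chamber_factor: float, n_chambers: int) -> float:
    """Enrichment after n linked chambers, assuming each stage applies
    the same single-chamber enrichment factor."""
    return single_chamber_factor ** n_chambers

# Invented per-chamber factors for two species with similar chemistry.
glycine_factor, other_factor = 1.45, 1.10

for n in (1, 3, 20):
    selectivity = cascade(glycine_factor, n) / cascade(other_factor, n)
    print(n, selectivity)  # the gap between species widens with chamber count
```

This geometric growth is why a modest per-chamber preference can yield thousand-fold separations across a realistic crack network.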

Chemical Reactions

Cleaner ingredients are a great start for the formation of complex molecules. But many chemical reactions require additional chemicals, which also need to be enriched. Here, the team zeroed in on a reaction stitching two glycine molecules together.

At the heart is trimetaphosphate (TMP), which helps guide the reaction. TMP is especially interesting for prebiotic chemistry, but it was scarce on early Earth, explained the team, which “makes its selective enrichment critical.” A single chamber increased TMP levels when mixed with other chemicals.

In a computer simulation, a TMP and glycine mix increased the final product—two glycine molecules stitched together—by five orders of magnitude.

“These results show that otherwise challenging prebiotic reactions are massively boosted” with heat flows that selectively enrich chemicals in different regions, wrote the team.

In all, they tested over 50 prebiotic molecules and found the fractures readily separated them. Because each crack can have a different mix of molecules, it could explain the rise of multiple life-sustaining building blocks.

Still, how life’s building blocks came together to form organisms remains mysterious. Heat flows and rock fissures are likely just one piece of the puzzle. The ultimate test will be to see if, and how, these purified prebiotics link up to form a cell.

Image Credit: Christof B. Mast

Category: Transhumanism

Quantum Computers Take a Major Step With Error Correction Breakthrough

4 April, 2024 - 23:26

For quantum computers to go from research curiosities to practically useful devices, researchers need to get their errors under control. New research from Microsoft and Quantinuum has now taken a major step in that direction.

Today’s quantum computers are stuck firmly in the “noisy intermediate-scale quantum” (NISQ) era. While companies have had some success stringing large numbers of qubits together, they are highly susceptible to noise, which can quickly degrade their quantum states. This makes it impossible to carry out computations with enough steps to be practically useful.

While some have claimed that these noisy devices could still be put to practical use, the consensus is that quantum error correction schemes will be vital for the full potential of the technology to be realized. But error correction is difficult in quantum computers because reading the quantum state of a qubit causes it to collapse.

Researchers have devised ways to get around this using error correction codes that spread each bit of quantum information across multiple physical qubits to create what is known as a logical qubit. This provides redundancy and makes it possible to detect and correct errors in the physical qubits without impacting the information in the logical qubit.
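The redundancy idea behind logical qubits can be illustrated with its simplest classical analog: a three-bit repetition code with majority-vote decoding. Real quantum codes are far more subtle, since they must detect errors without measuring the encoded state, but the payoff is similar: the logical error rate falls well below the physical one. A sketch, not any code used by Microsoft or Quantinuum:

```python
import random

def encode(bit: int) -> list[int]:
    """Spread one logical bit across three physical bits."""
    return [bit] * 3

def add_noise(bits: list[int], p: float) -> list[int]:
    """Flip each physical bit independently with probability p."""
    return [b ^ (random.random() < p) for b in bits]

def decode(bits: list[int]) -> int:
    """Majority vote corrects any single bit flip."""
    return int(sum(bits) >= 2)

random.seed(0)
p, trials = 0.05, 100_000
errors = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials))
print(errors / trials)  # ~3*p**2 = 0.0075, well below the physical rate p
```

A logical error now needs two simultaneous physical flips, which is why the logical rate scales with p squared rather than p.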

The challenge is that, until recently, it was assumed it would take roughly 1,000 physical qubits to create each logical qubit. Today’s largest quantum processors only have around that many qubits, suggesting that creating enough logical qubits for meaningful computations was still a distant goal.

That changed last year when researchers from Harvard and startup QuEra showed they could generate 48 logical qubits from just 280 physical ones. And now the collaboration between Microsoft and Quantinuum has gone a step further by showing that they can not only create logical qubits but can actually use them to suppress error rates by a factor of 800 and carry out more than 14,000 experimental routines without a single error.

“What we did here gives me goosebumps,” Microsoft’s Krysta Svore told New Scientist. “We have shown that error correction is repeatable, it is working, and it is reliable.”

The researchers were working with Quantinuum’s H2 quantum processor, which relies on trapped-ion technology and is relatively small at just 32 qubits. But by applying error correction codes developed by Microsoft, they were able to generate four logical qubits that only experienced an error every 100,000 runs.

One of the biggest achievements, the Microsoft team notes in a blog post, was the fact that they were able to diagnose and correct errors without destroying the logical qubits. This is thanks to an approach known as “active syndrome extraction” which is able to read information about the nature of the noise impacting qubits, rather than their state, Svore told IEEE Spectrum.

However, the error correction scheme had a shelf life. When the researchers carried out multiple operations on a logical qubit, followed by error correction, they found that by the second round the error rates were only half of those found in the physical qubits and by the third round there was no statistically significant impact.

And impressive as the results are, the Microsoft team points out in their blog post that creating truly powerful quantum computers will require logical qubits that make errors only once every 100 million operations.

Regardless, the result marks a massive jump in capabilities for error correction, which Quantinuum claimed in a press release represents the beginning of a new era in quantum computing. While that might be jumping the gun slightly, it certainly suggests that people’s timelines for when we will achieve fault-tolerant quantum computing may need to be updated.

Image Credit: Quantinuum H2 quantum computer / Quantinuum

Category: Transhumanism