Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through July 13)

Singularity HUB - 13 July, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

OpenAI Reportedly Nears Breakthrough With ‘Reasoning’ AI, Reveals Progress Framework
Benj Edwards | Ars Technica
“[According to OpenAI’s new AGI framework] a Level 2 AI system would reportedly be capable of basic problem-solving on par with a human who holds a doctorate degree but lacks access to external tools. During the all-hands meeting, OpenAI leadership reportedly demonstrated a research project using their GPT-4 model that the researchers believe shows signs of approaching this human-like reasoning ability, according to someone familiar with the discussion who spoke with Bloomberg.”

BIOTECH

How AI Revolutionized Protein Science, but Didn’t End It
Yasemin Saplakoglu | Quanta
“Three years ago, Google’s AlphaFold pulled off the biggest artificial intelligence breakthrough in science to date, accelerating molecular research and kindling deep questions about why we do science. …The field of protein biology is ‘more exciting right now than it was before AlphaFold,’ Perrakis said. The excitement comes from the promise of reviving structure-based drug discovery, the acceleration in creating hypotheses, and the hope of understanding complex interactions happening within cells.”

TECH

New Fiber Optics Tech Smashes Data Rate Record 
Margo Anderson | IEEE Spectrum
“An international team of researchers has smashed the world record for fiber optic communications through commercial-grade fiber. By broadening fiber’s communication bandwidth, the team has produced data rates four times as fast as existing commercial systems—and 33 percent better than the previous world record.”

ARTIFICIAL INTELLIGENCE

‘Superhuman’ Go AIs Still Have Trouble Defending Against These Simple Exploits
Kyle Orland | Ars Technica
“In the ancient Chinese game of Go, state-of-the-art artificial intelligence has generally been able to defeat the best human players since at least 2016. But in the last few years, researchers have discovered flaws in these top-level AI Go algorithms that give humans a fighting chance. By using unorthodox ‘cyclic’ strategies—ones that even a beginning human player could detect and defeat—a crafty human can often exploit gaps in a top-level AI’s strategy and fool the algorithm into a loss.”

COMPUTING

Google Creates Self-Replicating Life From Digital ‘Primordial Soup’
Matthew Sparkes | New Scientist
“A self-replicating form of artificial life has arisen from a digital ‘primordial soup’ of random data, despite a lack of explicit rules or goals to encourage such behavior. Researchers believe it is possible that more sophisticated versions of the experiment could yield more advanced digital organisms, and if they did, the findings could shed light on the mechanisms behind the emergence of biological life on Earth.”

AUTOMATION

How Good Is ChatGPT at Coding, Really?
Michelle Hampson | IEEE Spectrum
“Programmers have spent decades writing code for AI models, and now, in a full circle moment, AI is being used to write code. But how does an AI code generator compare to a human programmer? [A new study shows] that ChatGPT has an extremely broad range of success when it comes to producing functional code—with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent—depending on the difficulty of the task, the programming language, and a number of other factors.”

TECH

OpenAI Anticipates Decrease in AI Model Costs Amid Adoption Surge
Shubham Sharma | VentureBeat
“‘We introduced GPT-4, the first version, some 15 months ago. Since then, the cost of a token/word on the model has been reduced by 85-90%. There’s no reason why that trend will not continue,’ Olivier Godement, [OpenAI’s] head of API Product said. …He expects the company’s work on affordability, spanning efforts to optimize costs at both hardware and inference levels, will continue, leading to a further decline in the cost of running frontier AI models—much like what has been the case with smartphones and televisions.”

SPACE

Watch These Supernovas in (Time-Lapse) Motion
Dennis Overbye | The New York Times
“This spring, the astronomers who operate Chandra combined its X-ray images into videos that document the evolution of two astrophysical landmarks: the Crab nebula, in the constellation Taurus, and Cassiopeia A, a gas bubble and hub of radio noise in the constellation Cassiopeia. The videos show twisting, drifting ribbons of the remains of the star being churned by shock waves and illuminated by radiation from the dense, spinning cores left behind.”

Image Credit: BoliviaInteligente / Unsplash

Beyond CRISPR: Scientists Say New Gene Editing Tool Is Like a ‘Word Processor’ for DNA

Singularity HUB - 12 July, 2024 - 21:30

CRISPR was one of the most influential breakthroughs of the last decade, but it’s still imperfect. While the gene editing tool is already helping people with genetic ailments, scientists are also looking to improve on it.

Efforts have extended the CRISPR family to include less damaging, more accurate, and smaller versions of the gene editor. But in the bacterial world, where CRISPR was originally discovered, we’re only scratching the surface. Two new papers suggest an even more powerful gene editor may be around the corner—if it’s proven to work in cells like our own.

In one of the papers, scientists at the Arc Institute say they discovered a new CRISPR-like gene editing tool in bacterial “jumping genes.” Another paper, written independently, covers the same tool and extends the work to a similar one in a different family.

Jumping genes move around within genomes and even between individuals. It’s long been known they do this by cutting and pasting their own DNA, but none of the machinery had been shown to be programmable like CRISPR. In the recent studies, scientists describe jumping gene systems that, in a process the teams call bridge editing and seekRNA, respectively, can be modified to cut, paste, and flip any DNA sequence.

Crucially, unlike CRISPR, the system does all this without breaking strands of DNA or relying on the cell to repair them, a process that can be damaging and unpredictable. The molecules involved are also fewer and smaller than those in CRISPR, potentially making the tool safer and easier to deliver into cells, and the system can handle much longer sequences.

“Bridge recombination can universally modify genetic material through sequence-specific insertion, excision, inversion, and more, enabling a word processor for the living genome beyond CRISPR,” said Berkeley’s Patrick Hsu, a senior author of one of the studies and an Arc Institute core investigator, in a press release.

CRISPR Coup

Scientists first discovered CRISPR in bacteria defending themselves against viruses. In nature, a Cas9 protein pairs with an RNA guide molecule to seek out viral DNA and, when located, chop it up. Researchers learned to reengineer this system to seek out any DNA sequence, including sequences found in human genomes, and break the DNA strands at those locations. The natural machinery of the cell then repairs these breaks, sometimes using a provided strand of DNA.
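
To make that programmability concrete, here is a minimal, purely illustrative Python sketch (function name and sequences invented) of the targeting rule: the widely used Cas9 enzyme cuts only where the genome matches the guide RNA’s roughly 20-letter sequence and the match sits next to a short “NGG” motif called a PAM. This is a toy string search, not the real molecular machinery.

```python
# Toy illustration of CRISPR-Cas9 targeting (a string search, not the
# real molecular machinery; names and sequences are invented).

def find_cas9_sites(genome: str, guide: str) -> list[int]:
    """Return positions where `guide` matches, followed by an NGG PAM."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        target = genome[i : i + len(guide)]
        pam = genome[i + len(guide) : i + len(guide) + 3]
        if target == guide and pam[1:] == "GG":  # "N" means any base
            sites.append(i)
    return sites

genome = "TTACGGATCTGACCGTAAGCTAGCTAAGGTTT"
guide = "ATCTGACCGTAAGCTAGCTA"  # a 20-letter guide sequence
print(find_cas9_sites(genome, guide))  # -> [6]
```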

CRISPR gene editing is powerful. It’s being investigated in clinical trials as a treatment for a variety of genetic diseases and, late last year, received its first clinical approval as a therapy for sickle cell disease and beta thalassemia. But it’s not perfect.

Because the system breaks DNA and relies on the cell to repair these breaks, it can be imprecise and unpredictable. The tool also works primarily on short sections of DNA. While many genetic illnesses are due to point mutations, where a single DNA “letter” has been changed, the ability to work with longer sequences would broaden the technology’s potential uses in both synthetic biology and gene therapy.

Scientists have developed new CRISPR-based systems over the years to address these shortcomings. Some systems only break a single DNA strand or swap out single genetic “letters” to increase precision. Studies are also looking for more CRISPR-like systems by screening the whole bacterial universe; others have found naturally occurring systems in eukaryotic cells like our own.

The new work extends the quest by adding jumping genes into the mix.

An RNA Bridge

Jumping genes are a fascinating feat of genetic magic. These sequences of DNA can move between locations in the genome using machinery to cut and paste themselves. In bacteria, they even move between individuals. This sharing of genes could be one way bacteria acquire antibiotic resistance—one cell that’s evolved to evade a drug can share its genetic defenses with a whole population.

In the Arc Institute study, researchers looked into a specific jumping gene in bacteria called IS110. They found that when the gene is on the move, it recruits a sequence of RNA—like the RNA guide in CRISPR—to facilitate the process. The RNA includes two loops: One binds the gene itself, and the other seeks out and binds the gene’s destination in the genome. It acts like a bridge between the DNA sequence and the specific location where it’s to be inserted. In contrast to CRISPR, once the destination is found, the sequence can be added without breaking DNA.

“Bridge editing [cuts and pastes DNA] in a single-step mechanism that recombines and re-ligates the DNA, leaving it fully intact,” Hsu told Fierce Biotech in an email. “This is very distinct from CRISPR editing, which creates exposed DNA breaks that require DNA repair and have been shown to create undesired DNA damage responses.”

Crucially, the researchers discovered both loops of RNA can be reprogrammed. That means scientists can specify a genomic location as well as what sequence should go there. In theory, the system could be used to swap in long genes or even multiple genes. As a proof of concept in E. coli bacteria, the team programmed IS110 to insert a DNA sequence almost 5,000 bases long. They also cut and inverted another sequence of DNA.
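
As a loose illustration of what two programmable loops buy you, here is a toy Python model, with invented names and sequences; the real system recombines double-stranded DNA without breaks, which plain string surgery cannot capture. One field specifies where to act, the other what to insert.

```python
# Toy model of a bridge RNA's two programmable loops (invented names
# and sequences; illustrative only).

from dataclasses import dataclass

@dataclass
class BridgeRNA:
    target_site: str  # sequence the target loop recognizes in the genome
    donor_seq: str    # cargo sequence the donor loop binds and delivers

def bridge_insert(genome: str, bridge: BridgeRNA) -> str:
    """Insert the donor sequence immediately after the target site."""
    i = genome.find(bridge.target_site)
    if i == -1:
        return genome  # no recognition site, nothing happens
    j = i + len(bridge.target_site)
    return genome[:j] + bridge.donor_seq + genome[j:]

genome = "AAAACCCCGGGGTTTT"
bridge = BridgeRNA(target_site="CCCCGGGG", donor_seq="atcgatcgatcg")
print(bridge_insert(genome, bridge))  # AAAACCCCGGGGatcgatcgatcgTTTT
```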

The study was accompanied by an independent paper from a team of scientists at the University of Sydney detailing both IS110 and a related system from a different family, IS111, which they say is similarly programmable. In their paper, they called these systems “seekRNA.”

The tools rely on a single protein half the size of those in CRISPR. That means it may be easier to package them in harmless viruses or lipid nanoparticles—these are also used in Covid vaccines—and ferry them into cells where they can get to work.

The Next Jump

The approach has big potential, but there’s also a big caveat. So far, the researchers have only shown it works in bacteria. CRISPR, on the other hand, is incredibly versatile, having proved itself in myriad cell types. Next, they hope to hone the approach further and adapt it to mammalian cells like ours. That may not be easy. The University of Tokyo’s Hiroshi Nishimasu says the IS110 family hasn’t yet shown itself amenable to such a task.

All this is to say it’s still early in the technology’s arc. Scientists knew about CRISPR years before they showed it was programmable, and it wasn’t put to work in human cells until 2013. Although it’s moved relatively quickly from lab to clinic since then, the first CRISPR-based treatments took years more to materialize.

At the least, the new work shows we haven’t exhausted all nature has to offer gene editing. The tech could also be useful in the realm of synthetic biology, where single cells are being engineered on grand scales to learn how life works at its most basic and how we might reengineer it. And if the new system can be adapted for human cells, it would be a useful new option in the development of safer, more powerful gene therapies.

“If this works in other cells, it will be game-changing,” Sandro Fernandes Ataide, a structural biologist at the University of Sydney and an author on the paper detailing IS111, told Nature. “It’s opening a new field in gene editing.”

Image Credit: The Arc Institute

First Woolly Mammoth Genome Reconstructed in 3D Could Help Bring the Species Back to Life

Singularity HUB - 11 July, 2024 - 23:12

Roughly 52,000 years ago, a woolly mammoth died in the Siberian tundra. As her body flash froze in the biting cold, something remarkable happened: Her DNA turned into a fossil. It wasn’t only genetic letters that were memorialized—the cold preserved their intricate structure too.

Fast forward to 2018, when an international expedition to the area found her preserved body. The team took little bits of skin from her head and ear, hairs still intact.

From these samples, scientists built a three-dimensional reconstruction of a woolly mammoth’s genome down to the nanometer. The results were published in Cell today.

As in humans, the mammoth’s DNA strands are tightly packed into chromosomes inside cells. These sophisticated structures are hard to analyze in detail, even in human cells, but they contain insights into which genes are turned on or off and how they’re organized in different cell types.

Previous attempts to reconstruct ancient DNA only had tiny snippets of genetic sequences. Like trying to put together a puzzle with missing pieces, the resulting DNA maps were incomplete.

Thanks to the newly discovered flash-frozen DNA, this mammoth project—pun intended—is the first to assemble an enormous ancient genome in 3D.

“This is a new type of fossil, and its scale dwarfs that of individual ancient DNA fragments—a million times more sequence,” said study author Erez Lieberman Aiden at Baylor College of Medicine in a statement.

Aiden’s team heavily collaborated with Love Dalén at the Center of Palaeogenetics in Sweden. In a separate study, Dalén’s team analyzed 21 Siberian woolly mammoth genomes and charted how the species survived for six millennia after a potentially catastrophic genetic “bottleneck.”

The mammoth genomes weren’t that different from those of today’s Asian and African elephants. All have 28 pairs of chromosomes, and their X chromosomes twist into unique structures unlike those of most mammals. Digging deeper, the team found genes that were turned on or off in the mammoth compared to its elephant cousins.

“Our analyses uncover new biology,” wrote Aiden’s team in their paper.

DNA Serendipity

Ancient DNA is hard to come by, but it offers invaluable clues about the evolutionary past. In the 1980s, scientists eager to probe genetic history showed ancient DNA, however fragmented, could be extracted and sequenced in samples from an extinct member of the horse family and Egyptian mummies.

Thanks to modern DNA sequencing, the study of ancient DNA “has subsequently undergone a remarkable expansion,” wrote Aiden’s team. It’s now possible to sequence whole genomes from extinct humans, animals, plants, and even pathogens spanning a million years.

Making sense of the fragments is another matter. One way to decipher ancient genetic codes is to compare them to the genomes of their closest living cousins, such as woolly mammoths and elephants. This way, scientists can figure out which parts of the DNA sequence remained unchanged and where evolution swapped letters or small fragments.

These analyses can link genetic changes to function, such as identifying which genes made mammoths woolly. But they can’t capture large-scale differences at the chromosomal level. Because DNA relies on the chromosome’s 3D structure to function, sequencing its letters alone misses valuable information, such as when and where genes are turned on or off.

Chromosome Puzzle Master

Enter Hi-C. Developed in 2009 to map the 3D organization of human genomes, the technique detects interactions between different genetic sites inside the cell’s nucleus.

Here’s roughly how it works. DNA strands are like ribbons that twirl around proteins in a structure resembling beads on a string. Because of this arrangement, different parts of the DNA strand are closer to each other in physical space. Hi-C “glues” together sections that are near one another and tags the pairs. Alongside modern DNA sequencing, the technique produces a catalog of DNA fragments that interact in physical space. Like a 3D puzzle, scientists can then put the pieces back together.

“Imagine you have a puzzle that has three billion pieces, but you don’t have the picture of the final puzzle to work from,” study author Marc A. Marti-Renom said in the press release. “Hi-C allows you to have an approximation of that picture before you start putting the puzzle pieces together.”
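
Here is a toy Python sketch of that bookkeeping, with fragment names and pair counts invented for illustration: each “glued” pair increments a contact matrix, and fragments that touch most often are treated as likely 3D neighbors.

```python
# Toy sketch of Hi-C contact counting (fragment names and pair counts
# are invented; real experiments involve billions of pairs).

from collections import Counter

# Each tuple is one sequenced ligation event between two fragments.
pairs = [("A", "B"), ("A", "B"), ("B", "C"), ("A", "C"),
         ("B", "C"), ("C", "D"), ("B", "C"), ("C", "D")]

contacts = Counter(frozenset(p) for p in pairs)

# Fragments that touch most often are likely neighbors in 3D space;
# at genome scale, this matrix is what gets folded into a 3D model.
frags = sorted({f for p in pairs for f in p})
for a in frags:
    print(a, [contacts[frozenset((a, b))] if a != b else 0 for b in frags])
```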

But Hi-C can be impossible to use on ancient samples because the surviving fragments are so short that any trace of chromosome structure has been lost; it has simply withered away over time.

In the new study, the team developed a new technique, called PaleoHi-C, to analyze ancient DNA specifically.

Scientists immediately treated samples in the field to reduce contamination. They generated roughly 4.4 billion “pairs” of physically aligned DNA sequences—some interacting within a single chromosome, others between two. Overall, they painted a 3D snapshot of the woolly mammoth’s genetic material and how it looked inside cells with nanoscale detail.

In the new reconstructions, the team identified chromosome territories—distinct regions of the nucleus occupied by individual chromosomes—alongside other quirks, such as loops that bring pairs of distant genomic sites into close physical proximity to alter gene expression. These patterns differed between cell types, suggesting it’s possible to learn which genes are active, not just for the mammoth but also compared to its closest living relative, the Asian elephant.

Roughly 820 genes differed between the two, with 425 active in the mammoth but not in elephants, and a similar number inactivated in one but not the other. One inactive mammoth gene that’s active in elephants has a human variant that is also shut down in the Nunavik Inuit, an Indigenous people who thrive in the Arctic. The gene “may be relevant for adaptation to a cold environment,” wrote the team.

Another inactive gene may explain how the woolly mammoth got its name. In humans and sheep, shutting down the same gene can result in excessive hair or wool growth.

“For the first time, we have a woolly mammoth tissue for which we know roughly which genes were switched on and which genes were off,” said Marti-Renom in the release. “This is an extraordinary new type of data, and it’s the first measure of cell-specific gene activity of the genes in any ancient DNA sample.”

Crystalized DNA

How did the mammoth’s genome architecture remain so well preserved for over 50,000 years?

Dehydration, often used to preserve food, may have been key. The team ran Hi-C on fresh beef, beef left sitting on a desk for 96 hours, and jerky stored for a year at room temperature; the jerky took the win for resiliency. Even after being run over by a car, immersed in acid, and blasted with a shotgun (no joke), the dehydrated beef’s genomic architecture remained intact.

Dehydration could also be part of why the mammoth sample lasted so long. A physical process called “glass transition” is widely used to produce shelf-stable foods such as tortilla chips and instant coffee; it prevents pathogens from taking over or breaking down the food. The mammoth’s DNA may have been preserved in a similar glassy state, dubbed “chromoglass.” In other words, the sample was preserved across millennia by being freeze-dried.

It’s hard to say how long DNA architecture can survive as chromoglass, but the authors estimate it’s likely over two million years. Whether PaleoHi-C can work on hot-air-dried specimens, such as ancient Egyptian samples, remains to be seen.

As for mammoths, the next step is to examine gene expression patterns in other tissues and compare them to Asian elephants. Besides building an evolutionary throughline, the efforts could also guide ongoing studies looking to revive some version of the majestic animals.

“These results have obvious consequences for contemporary efforts aimed at woolly mammoth de-extinction,” said study author Thomas Gilbert at the University of Copenhagen in the release.

Image Credit: Beth Zaiken

This Enormous Computer Chip Beat the World’s Top Supercomputer at Molecular Modeling

Singularity HUB - 9 July, 2024 - 16:00

Computer chips are a hot commodity. Nvidia is now one of the most valuable companies in the world, and the Taiwanese manufacturer of Nvidia’s chips, TSMC, has been called a geopolitical force. It should come as no surprise, then, that a growing number of hardware startups and established companies are looking to take a jewel or two from the crown.

Of these, Cerebras is one of the weirdest. The company makes computer chips the size of tortillas bristling with just under a million processors, each linked to its own local memory. The processors are small but lightning quick as they don’t shuttle information to and from shared memory located far away. And the connections between processors—which in most supercomputers require linking separate chips across room-sized machines—are quick too.

This means the chips are stellar for specific tasks. Two recent preprint studies—one simulating molecules and the other training and running large language models—show the wafer-scale advantage can be formidable. In the former, the chips outperformed Frontier, the world’s top supercomputer. In the latter, a stripped-down AI model used a third of the usual energy without sacrificing performance.

Molecular Matrix

The materials we make things with are crucial drivers of technology. They usher in new possibilities by breaking old limits in strength or heat resistance. Take fusion power. If researchers can make it work, the technology promises to be a new, clean source of energy. But liberating that energy requires materials to withstand extreme conditions.

Scientists use supercomputers to model how the metals lining fusion reactors might deal with the heat. These simulations zoom in on individual atoms and use the laws of physics to guide their motions and interactions at grand scales. Today’s supercomputers can model materials containing billions or even trillions of atoms with high precision.

But while the scale and quality of these simulations has progressed a lot over the years, their speed has stalled. Due to the way supercomputers are designed, they can only model so many interactions per second, and making the machines bigger only compounds the problem. This means the total length of molecular simulations has a hard practical limit.

Cerebras partnered with Sandia, Lawrence Livermore, and Los Alamos National Laboratories to see if a wafer-scale chip could speed things up.

The team assigned a single simulated atom to each processor. So the processors could quickly exchange information about position, motion, and energy, atoms that would be physically close in the real world were assigned to neighboring processors on the chip. Depending on their properties at any given time, atoms could hop between processors as they moved about.
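
A toy Python sketch of that spatial mapping follows; the grid size, box size, and wraparound boundary are invented for illustration, and the real chip has nearly a million processors rather than 16.

```python
# Toy sketch of the atom-to-processor mapping (invented sizes). Atoms
# that sit close together in the simulation box land on neighboring
# tiles, so position/energy updates only travel short distances.

GRID = 4   # pretend 4x4 grid of processors
BOX = 1.0  # simulation box spans BOX x BOX

def processor_for(x: float, y: float) -> tuple[int, int]:
    """Map an atom's position to the processor tile that owns it."""
    return (min(int(x / BOX * GRID), GRID - 1),
            min(int(y / BOX * GRID), GRID - 1))

def neighbors(tile: tuple[int, int]) -> list[tuple[int, int]]:
    """Tiles this processor exchanges data with each timestep
    (wraparound boundaries, a common choice in molecular dynamics)."""
    i, j = tile
    return [((i + di) % GRID, (j + dj) % GRID)
            for di in (-1, 0, 1) for dj in (-1, 0, 1)
            if (di, dj) != (0, 0)]

atom = (0.26, 0.9)            # an atom's (x, y) position
tile = processor_for(*atom)
print(tile, neighbors(tile))  # the atom "hops" tiles as it moves
```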

The team modeled 800,000 atoms in three materials—copper, tungsten, and tantalum—that might be useful in fusion reactors. The results were pretty stunning, with simulations of tantalum yielding a 179-fold speedup over the Frontier supercomputer. That means the chip could crunch a year’s worth of work on a supercomputer into a few days and significantly extend the length of simulation from microseconds to milliseconds. It was also vastly more efficient at the task.

“I have been working in atomistic simulation of materials for more than 20 years. During that time, I have participated in massive improvements in both the size and accuracy of the simulations. However, despite all this, we have been unable to increase the actual simulation rate. The wall-clock time required to run simulations has barely budged in the last 15 years,” Aidan Thompson of Sandia National Laboratories said in a statement. “With the Cerebras Wafer-Scale Engine, we can all of a sudden drive at hypersonic speeds.”

Although the chip increases modeling speed, it can’t compete on scale. The number of simulated atoms is limited to the number of processors on the chip. Next steps include assigning multiple atoms to each processor and using new wafer-scale supercomputers that link 64 Cerebras systems together. The team estimates these machines could model as many as 40 million tantalum atoms at speeds similar to those in the study.

AI Light

While simulating the physical world could be a core competency for wafer-scale chips, they’ve always been focused on artificial intelligence. The latest AI models have grown exponentially, meaning the energy and cost of training and running them has exploded. Wafer-scale chips may be able to make AI more efficient.

In a separate study, researchers from Neural Magic and Cerebras worked to shrink the size of Meta’s 7-billion-parameter Llama language model. To do this, they made what’s called a “sparse” AI model where many of the algorithm’s parameters are set to zero. In theory, this means they can be skipped, making the algorithm smaller, faster, and more efficient. But today’s leading AI chips—called graphics processing units (or GPUs)—read algorithms in chunks, meaning they can’t skip every zeroed-out parameter.

Because memory is distributed across a wafer-scale chip, it can read every parameter and skip zeroes wherever they occur. Even so, extremely sparse models don’t usually perform as well as dense models. But here, the team found a way to recover lost performance with a little extra training. Their model maintained performance—even with 70 percent of the parameters zeroed out. Running on a Cerebras chip, it sipped a meager 30 percent of the energy and ran in a third of the time of the full-sized model.
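
A small Python sketch, using scipy with invented matrix sizes, shows the principle: weights stored in a sparse format only incur arithmetic for the roughly 30 percent of entries that are nonzero, while a dense multiply grinds through every zero.

```python
# Sketch of why sparsity saves compute when hardware can skip zeros.
# The 70% sparsity level echoes the article's example; sizes invented.

import numpy as np
from scipy.sparse import csr_matrix, random as sparse_random

rng = np.random.default_rng(0)
W_dense = sparse_random(512, 512, density=0.3, random_state=0).toarray()
W_sparse = csr_matrix(W_dense)  # stores only the ~30% nonzero weights
x = rng.standard_normal(512)

y_dense = W_dense @ x    # a dense multiply grinds through every zero
y_sparse = W_sparse @ x  # the CSR kernel visits nonzero entries only

print(np.allclose(y_dense, y_sparse))  # True: identical result
print(W_sparse.nnz / W_dense.size)     # ~0.3: fraction of entries touched
```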

Wafer-Scale Wins?

While all this is impressive, Cerebras is still niche. Nvidia’s more conventional chips remain firmly in control of the market. At least for now, that appears unlikely to change. Companies have invested heavily in expertise and infrastructure built around Nvidia.

But wafer-scale chips may continue to prove themselves in niche, but still crucial, research applications. And the approach may become more common overall. The ability to make wafer-scale chips is only now being perfected. In a hint at what’s to come for the field as a whole, the biggest chipmaker in the world, TSMC, recently said it’s building out its wafer-scale capabilities. This could make the chips more common and capable.

For their part, the team behind the molecular modeling work say wafer-scale’s influence could be more dramatic. Like GPUs before them, adding wafer-scale chips to the supercomputing mix could yield some formidable machines in the future.

“Future work will focus on extending the strong-scaling efficiency demonstrated here to facility-level deployments, potentially leading to an even greater paradigm shift in the Top500 supercomputer list than that introduced by the GPU revolution,” the team wrote in their paper.

Image Credit: Cerebras

Gene Drives Shown to Work in Wild Plants. They Could Wipe Out Weeds.

Singularity HUB - 9 July, 2024 - 01:55

Henry Grabar has had enough battling knotweed. All he wanted was to build a small garden in Brooklyn—a bit of peace amid the cacophony of city life. But a plant with beet-red leaves soon took over his nascent garden. The fastest growing plant he’d ever seen, it could sprout up to 10 feet high and grow thick as a cornfield. Even with herbicide, it was nearly impossible to kill.

Invasive plant species and weeds don’t just ruin backyard gardens. Weeds decrease crop yields at an average annual cost of $33 billion, and control measures can rack up $6 billion more. Herbicides are a defense, but they have their own baggage. Weeds rapidly build resistance against the chemicals, and the resulting produce can be a hard sell for many consumers.

Weeds often seem to have the upper hand. Can we take it away?

Two recent studies say yes. Using a technology called a synthetic gene drive, the teams spliced genetic snippets into a mustard plant popular in lab studies. Previously validated in fruit flies, mosquitoes, and mice, gene drives break the rules of inheritance, allowing “selfish” genes to rapidly spread across entire species.

But making gene drives work in plants has been a headache, in part due to the way they repair their DNA. The new studies found a clever workaround, leading to roughly 99 percent propagation of a synthetic genetic payload to subsequent generations, in contrast to nature’s 50 percent. Computer models suggest the gene drives could spread throughout an entire population of the plant in roughly 10 to 30 generations.

Overriding natural evolution, gene drives could add genes that make weeds more vulnerable to herbicides or reduce their pollination and numbers. Beneficial genes can also spread across crops—essentially fast-tracking the practice of cross-breeding for desirable traits.

“Imagine a future where yield-robbing agricultural weeds or biodiversity-threatening invasive plants could be kept on a genetic leash,” wrote Paul Neve at the University of Copenhagen and Luke Barrett at CSIRO Agriculture and Food in Australia, who were not involved in the studies.

50/50

Inheritance is a coin toss for most species. Half of an offspring’s genetic material comes from each parent.

Gene drives torpedo this inheritance rule. Developed roughly a decade ago, the technology relies on CRISPR—the gene editing tool—to spread a new gene throughout a population, beating the 50/50 odds. In insects and mammals, a gene can propagate at roughly 80 percent, shuttling an inherited trait down generations and irreversibly changing an entire species.
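
A toy Mendelian simulation in Python (random mating, no fitness cost, numbers invented; not the models used in the studies) shows how transmission odds above 50 percent let an allele sweep a population:

```python
# Toy model of super-Mendelian inheritance. Heterozygotes pass on the
# drive allele with probability d instead of the Mendelian 0.5.

def next_freq(p: float, d: float) -> float:
    """Drive-allele frequency after one generation of random mating."""
    # Drive homozygotes (fraction p^2) always transmit it; heterozygotes
    # (fraction 2p(1-p)) transmit it with probability d.
    return p**2 + 2 * p * (1 - p) * d

for d in (0.5, 0.8, 0.99):
    p, gens = 0.01, 0
    while p < 0.95 and gens < 200:
        p, gens = next_freq(p, d), gens + 1
    print(f"d={d}: {gens} generations to reach 95% frequency (200 = never)")
```

In this toy model, a 99 percent transmission rate carries the allele from 1 percent of the population to near fixation within about ten generations, the same order of magnitude as the plant studies’ 10-to-30-generation estimates.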

While this may seem somewhat nefarious, gene drives are designed for good. A main use under investigation is to control disease-carrying mosquitoes by genetically modifying males to be sterile. Upon release, they outcompete their natural counterparts, reducing wild mosquito numbers, and in turn, lowering the risk of multiple diseases. In indoor cages, gene drives have fully suppressed a population of the insects within a year. Small-scale field tests are underway.

Gene drives have caught the eyes of plant scientists too, but initial efforts in plants failed.

The technology relies on CRISPR, which cuts DNA to insert, delete, or swap out genetic letters. Sensing damage to their DNA, cells activate internal molecular “repairmen” to stitch genes back together, adopting gene drives and their genetic cargo in the process.

Plants are different. Their cells also have a DNA repair mechanism, but it’s only partially similar to that of insects or mice. Sticking a classic gene drive into plants can cause genetic mutations at the target site and even trigger resistance against the gene drive in a kind of cellular civil war.

What Doesn’t Kill You Makes You Stronger

As a workaround, both new studies used a system dubbed “toxin-antidote.” Unlike previous gene drives, it doesn’t rely on canonical DNA repair.

The teams used a self-pollinating mustard plant for their studies. A darling of plant science research, its genome is well-known, and because the plant self-pollinates, it’s easier to contain the experiment. To build the gene drive, they developed a CRISPR-based “torpedo” that destroys a gene critical for survival. Any pollen without the gene can’t live on. A second construct, the “antidote,” carried a mimic of the same gene, but with modifications making it resistant to destruction by CRISPR.

They examined two different genetic payloads. One study tinkered with a gene that’s essential to both male and female reproductive cells in plants. The other targeted a gene that disrupts pollen production.

Here’s the clever part: As the plant pollinates, offspring can inherit either the toxin, the antidote, or both. Only those with the antidote survive—plants that inherit the toxin rapidly die out. As a result, the system worked as a gene drive, with plants carrying the CRISPR-resistant gene taking over the population. The gene drives were highly efficient, passing down through generations roughly 99 percent of the time. And scientists didn’t see any signs of evolutionary adaptation—known as resistance—against the new genetic makeup.

Computer modeling showed the gene drive could overtake a single plant species in 10 to 30 generations. That’s impressive, according to Neve and Barrett. Artificial genetic changes don’t often stick in wild plants—the plants tend to die off. The new gene drives could potentially last longer in the field, battling invasive species or helping cultivate hardier, pest-resistant crops that pass down beneficial traits over generations.

Despite their promise, gene drives remain controversial because of their potential to alter entire species. Scientists are still debating the ecological impacts. There’s also the concern that gene drives may hop over to unintended targets. For now, researchers have designed genetic “brakes” to keep gene drives in check. Most studies are done in carefully controlled lab settings, and for malaria, potential unexpected consequences are being rigorously discussed before gene drive-carrying mosquitoes are released into the wild.

Even if the science works, the road to regulatory and societal approval may face roadblocks. Selling farmers on the technology may be difficult. And CRISPRed plants as a food source could also be tainted by the negative perception of genetically modified organisms (GMOs).

For now, the teams are looking toward a more acceptable everyday use—killing weeds. There are still a few kinks to work out. Gene drives only work when they can spread, so an ideal use is in plants that pollinate others, rather than those that self-pollinate, such as those in the studies. Still, the results are a proof of concept that the powerful technology can work in plants—though it may be a while yet before it helps Henry with his knotweed problem.

Image Credit: Anthony Wade / Unsplash

How ‘Dune’ Became a Beacon for the Fledgling Environmental Movement—and a Rallying Cry for the New Science of Ecology

Singularity HUB - 5 July, 2024 - 16:00

Dune, widely considered one of the best sci-fi novels of all time, continues to influence how writers, artists, and inventors envision the future.

Of course, there are Denis Villeneuve’s visually stunning films, Dune: Part One (2021) and Dune: Part Two (2024).

But Frank Herbert’s masterpiece also helped Afrofuturist novelist Octavia Butler imagine a future of conflict amid environmental catastrophe; it inspired Elon Musk to build SpaceX and Tesla and push humanity toward the stars and a greener future; and it’s hard not to see parallels in George Lucas’ Star Wars franchise, especially the films’ fascination with desert planets and giant worms.

And yet when Herbert sat down in 1963 to start writing Dune, he wasn’t thinking about how to leave Earth behind. He was thinking about how to save it.

Herbert wanted to tell a story about the environmental crisis on our own planet, a world driven to the edge of ecological catastrophe. Technologies that had been inconceivable just 50 years prior had put the world at the edge of nuclear war and the environment on the brink of collapse; massive industries were sucking wealth from the ground and spewing toxic fumes into the sky.

When the book was published, these themes were front and center for readers, too. After all, they were living in the wake of both the Cuban missile crisis and the publication of Silent Spring, conservationist Rachel Carson’s landmark study of pollution and its threat to the environment and human health.

Dune soon became a beacon for the fledgling environmental movement and a rallying flag for the new science of ecology.

Indigenous Wisdoms

Though the term “ecology” had been coined almost a century earlier, the first textbook on ecology was not written until 1953, and the field was rarely mentioned in newspapers or magazines at the time. Few readers had heard of the emerging science, and even fewer knew what it suggested about the future of our planet.

While studying Dune for a book I’m writing on the history of ecology, I was surprised to learn that Herbert didn’t learn about ecology as a student or as a journalist.

Instead, he was inspired to explore ecology by the conservation practices of the tribes of the Pacific Northwest. He learned about them from two friends in particular.

The first was Wilbur Ternyik, a descendant of Chief Coboway, the Clatsop leader who welcomed explorers Meriwether Lewis and William Clark when their expedition reached the West Coast in 1805. The second, Howard Hansen, was an art teacher and oral historian of the Quileute tribe.

Ternyik, who was also an expert field ecologist, took Herbert on a tour of Oregon’s dunes in 1958. There, he explained his work to build massive dunes of sand using beach grasses and other deep-rooted plants in order to prevent the sands from blowing into the nearby town of Florence—a terraforming technology described at length in Dune.

As Ternyik explained in a guide he wrote for the US Department of Agriculture, his work in Oregon was part of an effort to heal landscapes scarred by European colonization, especially the large river jetties built by early settlers.

These structures disturbed coastal currents and created vast expanses of sand, turning stretches of the lush Pacific Northwest landscape into desert. This scenario is echoed in Dune, where the novel’s setting, the planet Arrakis, was similarly laid to waste by its first colonizers.

Hansen, who became the godfather to Herbert’s son, had closely studied the equally drastic impact logging had on the homelands of the Quileute people in coastal Washington. He encouraged Herbert to examine ecology carefully, giving him a copy of Paul B. Sears’ Where There Is Life, from which Herbert gathered one of his favorite quotes: “The highest function of science is to give us an understanding of consequences.”

The Fremen of Dune, who live in the deserts of Arrakis and carefully manage its ecosystem and wildlife, embody these teachings. In the fight to save their world, they expertly blend ecological science and Indigenous practices.

Treasures Hidden in the Sand

But the work that had the most profound impact on Dune was Leslie Reid’s 1962 ecological study The Sociology of Nature.

In it, Reid explained ecology and ecosystem science for a popular audience, illustrating the complex interdependence of all creatures within the environment.

“The more deeply ecology is studied,” Reid writes, “the clearer does it become that mutual dependence is a governing principle, that animals are bound to one another by unbreakable ties of dependence.”

In the pages of Reid’s book, Herbert found a model for the ecosystem of Arrakis in a surprising place: the guano islands of Peru. As Reid explains, the accumulated bird droppings found on these islands were an ideal fertilizer. Home to mountains of manure described as a new “white gold” and one of the most valuable substances on Earth, the guano islands became ground zero in the late 1800s for a series of resource wars between Spain and several of its former colonies, including Peru, Bolivia, Chile, and Ecuador.

At the heart of the plot of Dune is a battle for control of the “spice,” a priceless resource. Harvested from the sands of the desert planet, it’s both a luxurious flavoring for food and a hallucinogenic drug that allows some people to bend space, making interstellar travel possible.

There is some irony in the fact that Herbert cooked up the idea of spice from bird droppings. But he was fascinated by Reid’s careful account of the unique and efficient ecosystem that produced a valuable—albeit noxious—commodity.

As the ecologist explains, frigid currents in the Pacific Ocean push nutrients to the surface of nearby waters, helping photosynthetic plankton thrive. These support an astounding population of fish that feed hordes of birds, along with whales.

In early drafts of Dune, Herbert combined all of these stages into the life cycle of the giant sandworms, football-field-sized monsters that prowl the desert sands and devour everything in their path.

Herbert imagines each of these terrifying creatures beginning as small, photosynthetic plants that grow into larger “sand trout.” Eventually, they become immense sandworms that churn the desert sands, spewing spice onto the surface.

In both the book and Dune: Part One, soldier Gurney Halleck recites a cryptic verse that comments on this inversion of marine life and arid regimes of extraction: “For they shall suck of the abundance of the seas and of the treasure hid in the sand.”

‘Dune’ Revolutions

After Dune was published in 1965, the environmental movement eagerly embraced it.

Herbert spoke at Philadelphia’s first Earth Day in 1970, and in the first edition of the Whole Earth Catalog—a famous DIY manual and bulletin for environmental activists—Dune was advertised with the tagline: “The metaphor is ecology. The theme revolution.”

In the opening of Denis Villeneuve’s first adaptation of Dune, Chani, an indigenous Fremen played by Zendaya, asks a question that anticipates the violent conclusion of the second film: “Who will our next oppressors be?”

The immediate cut to a sleeping Paul Atreides, the white protagonist who’s played by Timothée Chalamet, drives the pointed anti-colonial message home like a knife. In fact, both of Villeneuve’s movies expertly elaborate upon the anti-colonial themes of Herbert’s novels.

Unfortunately, the edge of their environmental critique is blunted. But Villeneuve has suggested that he might also adapt Dune Messiah for his next film in the series—a novel in which the ecological damage to Arrakis is glaringly obvious.

I hope Herbert’s prescient ecological warning, which resonated so powerfully with readers back in the 1960s, will be unsheathed in Dune 3.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

DARPA Is Engineering Light-Activated Drugs to Keep Pilots Alert

Singularity HUB - 4 July, 2024 - 16:00

We’ve all been there: A tight deadline, an overnighter, and the next day we’re navigating life like zombies.

For fighter pilots, that last part isn’t an option. During active duty, these pilots need to be in tip-top shape mentally, even when they’re deprived of sleep (which happens often). Typically, the treatment is your everyday cup of joe. But for longer stretches of sleep deprivation, pilots are also prescribed stronger stimulants.

But as anyone who’s ever had too much caffeine knows, there are side effects. You get jittery. Your hands start to shake. Your mood takes a nosedive as the effect wears off and irritability sets in. And then you crash.

Prescription stimulants, such as dextroamphetamine, have even more severe side effects. As the name suggests, they’re in the same family as methamphetamine—or “meth”—and come with the risk of addiction. These drugs linger in the body, so when pilots try to sleep after a tiring day, the stimulants keep parts of the brain in a semi-alert state and disrupt sleep schedules. People taking dextroamphetamine often need sedatives to counteract the lingering effects, and the chemical regimen takes a toll.

Over time, the lack of restorative sleep impacts memory, cognition, and reasoning. It also damages the immune system, metabolism, and overall health.

The drugs work in short bursts. What if there were a way to turn them on and off at will—giving the brain just a tiny dose when needed and quickly shutting off the effect to allow a full night’s sleep?

One solution may be light-activated drugs. The Defense Advanced Research Projects Agency (DARPA) announced a project in June to develop these types of drugs to combat sleep deprivation for fighter pilots. So-called photopharmacological drugs would add a molecular “light switch” to drugs like dextroamphetamine.

Pulses of light activate the drugs in parts of the brain on demand. Non-targeted brain regions aren’t exposed to the active version and continue to work normally. Once the pilots are alert, another pulse of light shuts off the drug, giving the body time to break it down before bedtime.

To make this vision a reality, the new project, Alert WARfighter Enablement (AWARE), has two research arms. One will develop safe and effective dextroamphetamine that can be controlled with light. The second will focus on engineering a wearable “helmet” of sorts to direct light pulses toward regions of the brain involved in alertness and mental acuity.

“To achieve the beneficial effects of stimulants on alertness without the undesirable effects of the stimulant on mood, restorative sleep, and mental health, a new approach is needed to enable targeted activation of the drug,” Dr. Pedro Irazoqui, AWARE program manager, said in a press release.

Brain on Alert

After a terrible night’s sleep, the first thing most of us reach for is coffee. Caffeine, its active ingredient, is the most widely used psychoactive substance in the world, with over 80 percent of people in North America drinking a cup of joe every morning.

While coffee is also the go-to solution for most fighter pilots, multiple countries have developed far stronger concoctions to keep their brigades awake. The most notorious is probably methamphetamine, first synthesized in the late 1800s. Best known by its street names—meth, crank, or speed—it was used during World War II to keep troops awake before being outlawed across the globe. A safer spin-off, dextroamphetamine, is currently prescribed to increase alertness and cognition. While effective, it can trigger both irritability and euphoria—a recipe for potential addiction.

The Air Force has approved other drugs, such as modafinil, to battle fatigue too. Research in mice and people found these drugs can improve many cognitive functions—for example, navigating space and keeping multiple things in mind—and boost overall alertness even in the severely sleep-deprived. Unlike amphetamines, this group of drugs isn’t as addictive, with effects compared to drinking roughly 20 cups of coffee without the jitters. But they can produce pounding headaches, sweating, and in rare cases, hallucinations.

Light-activated drugs may be another option. First devised for cancer, these drugs have a molecular “light-switch” component that responds to pulses of light. The switch can be tagged onto conventional drugs, making it easy to adopt for existing medications—like, say, dextroamphetamine.

The “switch” component changes the chemical’s shape when blasted with different wavelengths of light. Like a transformer, the molecule’s function depends on its shape: one configuration allows the chemical to grab onto its usual targets—the “active” state—while other configurations inactivate it.

Light-activated drugs have been tested in cells in petri dishes, but targeting the brain presents a hurdle—the skull. Shining a flashlight onto the skull obviously wouldn’t reach the brain, and invasive brain surgery is out of the question.

There’s a workaround. Infrared beams of light, at low levels, are safe in humans and can penetrate deep into tissues, including through the skull and into the brain. A previous study designed a number of potential switches that could be turned on with infrared light. And recent advances in AI could further aid the effort to develop “a photoswitchable version of dextroamphetamine that is inactive except in the presence of near-infrared light, which activates it,” wrote DARPA.

The other component is a programmable light-emitting helmet that transmits infrared light to the parts of the brain associated with wakefulness, reasoning, and decision-making. Over time, the stimulation could be personalized, so people only receive the necessary “dose” to stay alert.

The strategy still floods the brain with stimulants through a pill, but it limits the drug’s activity in time and space. With personalized dosages and light as a controller, it could produce alertness without anxiety, irritability, or euphoria. Switching the drug off also allows the brain to “rest” during a good night’s sleep.

A Three-Year Plan

AWARE is slated to last over three years. DARPA is now welcoming proposals that fit the program’s two goals, including developing light-activated dextroamphetamine, dubbed “PhotoDex,” that can be rapidly turned on and off in the presence of near-infrared light. All candidate drugs will first be validated in animal models before moving on to human trials.

For the headset, the project envisions a setup that emits infrared light and reliably activates the necessary parts of the brain at millimeter resolution, roughly that of an MRI-based brain scan. The timeline is about a year, and the agency did not specify how the headsets should be designed—for example, whether they’re wired or wireless, how they’re powered, or what mechanism turns on the light beams.

“The idea is very ambitious, but recent advances in the creation of phototherapeutics and light-emitting devices offer good reason to be optimistic about the prospects,” Dr. David Lawrence at the University of North Carolina, who is not involved in the project, told New Scientist.

For now, photoswitchable drugs have not been approved for human use. If the AWARE program goes as planned, it could open a new avenue for targeted drug treatment, not just for battling sleep deprivation but also for other brain disorders. The project’s leaders are well aware of the ethical, legal, and societal implications and have plans to discuss the technology’s use.

Image Credit: US Air Force photo by 2nd Lt. Samuel Eckholm

Kategorie: Transhumanismus

Nikola Danaylov: Content is NOT King, Context Is!

Singularity Weblog - 3 July, 2024 - 19:05
This is a recording of a 4-minute mini-keynote I did last month on Content vs. Context. I hope you enjoy it and find it useful. Content is NOT King, Context Is! What does this mean? Take the OK hand sign: here it means okay, good, or fine. But if you are in Brazil, it means you’re an ass. […]

Google DeepMind’s AI Rat Brains Could Make Robots Scurry Like the Real Thing

Singularity HUB - 3 July, 2024 - 16:00

Rats are incredibly nimble creatures. They can climb up curtains, jump down tall ledges, and scurry across complex terrain—say, your basement stacked with odd-shaped stuff—at mind-blowing speed.

Robots, in contrast, are anything but nimble. Despite recent advances in AI to guide their movements, robots remain stiff and clumsy, especially when navigating new environments.

To make robots more agile, why not control them with algorithms distilled from biological brains? Our movements are rooted in the physical world and based on experience—two components that let us easily explore different surroundings.

There’s one major obstacle. Despite decades of research, neuroscientists haven’t yet pinpointed how brain circuits control and coordinate movement. Most studies have correlated neural activity with measurable motor responses—say, a twitch of a hand or the speed of lifting a leg. In other words, we know brain activation patterns that can describe a movement. But which neural circuits cause those movements in the first place?

We may find the answer by trying to recreate them in digital form. As the famous physicist Richard Feynman once said, “What I cannot create, I do not understand.”

This month, Google DeepMind and Harvard University built a realistic virtual rat to home in on the neural circuits that control complex movement. The rat’s digital brain, composed of artificial neural networks, was trained on tens of hours of neural recordings from actual rats running around in an open arena.

Comparing activation patterns of the artificial brain to signals from living, breathing animals, the team found the digital brain could predict the neural activation patterns of real rats and produce the same behavior—for example, running or rearing up on hind legs.

The collaboration was “fantastic,” said study author Dr. Bence Ölveczky at Harvard in a press release. “DeepMind had developed a pipeline to train biomechanical agents to move around complex environments. We simply didn’t have the resources to run simulations like those, to train these networks.”

The virtual rat’s brain recapitulated two regions especially important for movement. Tweaking connections in those areas changed motor responses across a variety of behaviors, suggesting these neural signals are involved in walking, running, climbing, and other movements.

“Virtual animals trained to behave like their real counterparts could provide a platform for virtual neuroscience…that would otherwise be difficult or impossible to experimentally deduce,” the team wrote in their article.

A Dense Dataset

Artificial intelligence “lives” in the digital world. To power robots, it needs to understand the physical world.

One way to teach it about the world is to record neural signals from rodents and use the recordings to engineer algorithms that can control biomechanically realistic models replicating natural behaviors. The goal is to distill the brain’s computations into algorithms that can pilot robots and also give neuroscientists a deeper understanding of the brain’s workings.

So far, the strategy has been successfully used to decipher the brain’s computations for vision, smell, navigation, and recognizing faces, the authors explained in their paper. However, modeling movement has been a challenge. Individuals move differently, and noise from brain recordings can easily mess up the resulting AI’s precision.

This study tackled the challenges head on with a cornucopia of data.

The team first placed multiple rats into a six-camera arena to capture their movement—running around, rearing up, or spinning in circles. Rats can be lazy bums. To encourage them to move, the team dangled Cheerios across the arena.

As the rats explored the arena, the team recorded 607 hours of video and also neural activity with a 128-channel array of electrodes implanted in their brains.

They used this data to train an artificial neural network—a virtual rat’s “brain”—to control body movement. To do this, they first tracked the movements of 23 joints in the videos and transferred them to a simulation of the rat’s skeleton. Our joints only bend in certain ways, and this step filters out what’s physically impossible (say, bending legs in the opposite direction).

The core of the virtual rat’s brain is a type of AI algorithm called an inverse dynamics model. Basically, it knows where “body” positions are in space at any given time and, from there, predicts the next movements leading to a goal—say, grab that coffee cup without dropping it.

Through trial and error, the AI eventually came close to matching the movements of its biological counterparts. Surprisingly, the virtual rat could also easily generalize motor skills to unfamiliar places and scenarios—in part by learning the forces needed to navigate new environments.
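
The core idea of an inverse dynamics model can be sketched in a few lines of Python. This toy version, with linear stand-in physics and a least-squares fit rather than DeepMind’s recurrent neural network, learns to output the action that carries a body from its current state to a desired next state.

```python
# Minimal sketch of an inverse dynamics model (toy linear physics and
# a least-squares fit; illustrative only, not the study's model).

import numpy as np

rng = np.random.default_rng(0)

# Toy "physics": next_state = state + action (unit time step).
states = rng.standard_normal((1000, 3))
actions = rng.standard_normal((1000, 3))
next_states = states + actions

# Inverse dynamics: predict the action from (current state, next state).
X = np.hstack([states, next_states])
W, *_ = np.linalg.lstsq(X, actions, rcond=None)

# Use the fitted model as a controller: ask for the action that carries
# the body from rest to a goal pose.
s, goal = np.zeros(3), np.array([1.0, -2.0, 0.5])
a = np.hstack([s, goal]) @ W
print(np.allclose(s + a, goal))  # True on this toy system
```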

The similarities allowed the team to compare real rats to their digital doppelgangers performing the same behaviors.

In one test, the team analyzed activity in two brain regions known to guide motor skills. Compared to an older computational model used to decode brain networks, the AI could better simulate neural signals in the virtual rat across multiple physical tasks.

Because of this, the virtual rat offers a way to study movement digitally.

One long-standing question is how the brain and nerves command muscle movement depending on the task. Grabbing a cup of coffee in the morning, for example, requires a steady hand without any jerky motion but enough strength to keep a firm grip.

The team tweaked the “neural connections” in the virtual rodent to see how changes in brain networks alter the final behavior—getting that cup of coffee. They found one network measure that could identify a behavior at any given time and guide it through.

Compared to lab studies, these insights “can only be directly accessed through simulation,” wrote the team.

The virtual rat bridges AI and neuroscience. The AI models here recreate the physicality and neural signals of living creatures, making them invaluable for probing brain functions. In this study, one aspect of the virtual rat’s motor skills relied on two brain regions—pinpointing them as potential regions key to guiding complex, adaptable movement.

A similar strategy could provide more insight into the computations underlying vision, sensation, or perhaps even higher cognitive functions such as reasoning. The virtual rat brain isn’t a complete replication of a real one, though; it only captures snapshots of parts of the brain. Still, it lets neuroscientists “zoom in” on their favorite brain region and test hypotheses quickly and easily compared to traditional lab experiments, which often take weeks to months.

On the robotics side, the method adds a physicality to AI.

“We’ve learned a huge amount from the challenge of building embodied agents: AI systems that not only have to think intelligently, but also have to translate that thinking into physical action in a complex environment,” said study author Dr. Matthew Botvinick at DeepMind in a press release. “It seemed plausible that taking this same approach in a neuroscience context might be useful for providing insights in both behavior and brain function.”

The team is next planning to test the virtual rat with more complex tasks, alongside its biological counterparts, to further peek inside the inner workings of the digital brain.

“From our experiments, we have a lot of ideas about how such tasks are solved,” said Ölveczky to The Harvard Gazette. “We want to start using the virtual rats to test these ideas and help advance our understanding of how real brains generate complex behavior.”

Image Credit: Google DeepMind

Category: Transhumanismus

Electric Air Taxis Are on the Way: Quiet eVTOLs May Be Flying Passengers as Early as 2025

Singularity HUB - 2 July, 2024 - 16:00

Imagine a future with nearly silent air taxis flying above traffic jams and navigating between skyscrapers and suburban droneports. Transportation arrives at the touch of your smartphone and with minimal environmental impact.

This isn’t just science fiction. United Airlines has plans for these futuristic electric air taxis in Chicago and New York. The US military is already experimenting with them. And one company has a contract to launch an air taxi service in Dubai as early as 2025. Another company hopes to defy expectations and fly participants at the 2024 Paris Olympics.

Backed by billions of dollars in venture capital and established aerospace giants that include Boeing and Airbus, startups across the world such as Joby, Archer, Wisk, and Lilium are spearheading this technological revolution, developing electric vertical takeoff and landing (eVTOL) aircraft that could transform the way we travel.

Electric aviation promises to alleviate urban congestion, open up rural areas to emergency deliveries, slash carbon emissions, and offer a quieter, more accessible form of short-distance air travel.

But the quest to make these electric aircraft ubiquitous across the globe, instead of just playthings for the rich, is far from won. As executive director of the Oklahoma Aerospace Institute for Research and Education, I follow the state of the industry closely. Like all great promised paradigm shifts, numerous challenges loom—technical hurdles, regulatory mazes, the crucial battle for public acceptance, and perhaps physics itself.

Why Electrify Aviation?

Fixed somewhere between George Jetson’s flying car and the gritty taxi from The Fifth Element, the allure of electric aviation extends beyond gee-whiz novelty. It is rooted in its potential to offer efficient, eco-friendly alternatives to ground transportation, particularly in congested cities or hard-to-reach rural regions.

While small electric planes are already flying in a few countries, eVTOLs are designed for shorter hops—the kind a helicopter might make today, only more cheaply and with less impact on the environment. The eVTOL maker Joby purchased Uber Air to someday pair the company’s air taxis with Uber’s ride-hailing technology.

In the near term, once eVTOLs are certified for commercial operations, they are likely to serve specific, high-demand routes that bypass road traffic. An example is United Airlines’ plan to test Archer’s eVTOLs on short hops from downtown Chicago to O’Hare International Airport and from Manhattan to Newark Liberty International Airport.

While some applications initially might be restricted to military or emergency use, the goal of the industry is widespread civil adoption, marking a significant step toward a future of cleaner urban mobility.

The Challenge of Battery Physics

One of the most significant technical challenges facing electric air taxis is the limitations of current battery technology.

Batteries have advanced significantly in the past decade, but they still don’t match the energy density of the hydrocarbon fuels aircraft burn today. This shortcoming means electric air taxis can’t yet achieve the range of their fossil-fueled counterparts, limiting their operational scope and ruling out long-haul flights. Still, with ranges from dozens of miles to over 100 miles, eVTOL batteries provide enough endurance for intracity hops.
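Some rough arithmetic shows the scale of that gap. The figures below are approximate public numbers, not from this article:

```python
# Rough specific-energy comparison (approximate figures; both vary by
# fuel grade and cell chemistry).
JET_FUEL_WH_PER_KG = 12_000   # ~ energy stored per kg of kerosene
LITHIUM_ION_WH_PER_KG = 250   # ~ today's aviation-grade battery cells

ratio = JET_FUEL_WH_PER_KG / LITHIUM_ION_WH_PER_KG
print(f"Jet fuel stores roughly {ratio:.0f}x more energy per kilogram")  # ~48x
```

Batteries claw some of that back because electric motors convert far more of their stored energy into thrust than combustion engines do, but the per-kilogram deficit is why eVTOLs target short hops rather than long-haul routes.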

The quest for batteries that offer higher energy densities, faster charging times, and longer life cycles is central to unlocking the full potential of electric aviation.

While researchers are working to close this gap, hydrogen presents a promising alternative, boasting a higher energy density and emitting only water vapor. However, hydrogen’s potential is tempered by significant hurdles related to safe storage and infrastructure capable of supporting hydrogen-fueled aviation. That presents a complex and expensive logistics challenge.

And, of course, there’s the specter of the last major hydrogen-powered aircraft: the Hindenburg airship, which caught fire in 1937 and still looms large in the minds of many Americans.

Regulatory Hurdles

Establishing “4D highways in the sky” will require comprehensive rules that encompass everything from vehicle safety to air traffic management. For the time being, the US Federal Aviation Administration is requiring that air taxis include pilots serving in a traditional role. This underscores the transitional phase of integrating these vehicles into airspace, highlighting the gap between current capabilities and the vision of fully autonomous flights.

The journey toward autonomous urban air travel is fraught with more complexities, including the establishment of standards for vehicle operation, pilot certification, and air traffic control. And while eVTOLs have flown hundreds of test flights, there have been safety concerns after prominent crashes: a propeller blade failure on one aircraft in 2022 and the crash of another in 2023. Both were being flown remotely at the time.

The question of who will manage these new airways remains an open discussion—national aviation authorities such as the FAA, state agencies, local municipalities, or some combination thereof.

Creating the Future

In the long term, the vision for electric air taxis aligns with a future where autonomous vehicles ply the urban skies, akin to scenes from Back to the Future. This future, however, requires not only technological leaps in automation and battery efficiency but also a societal shift in how people perceive and accept the role of autonomous vehicles, both cars and aircraft, in their daily lives. Safety is still an issue with autonomous vehicles on the ground.

The successful integration of electric air taxis into urban and rural environments hinges on their ability to offer safe, reliable, and cost-effective transportation.

If the industry overcomes its many hurdles and regulations evolve to support these operations in the years ahead, I believe we could witness a profound transformation in air mobility. The skies would offer a new layer of connectivity, reshaping cities and how we navigate them.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Joby

Category: Transhumanismus

Gene-Edited Animal Organ Transplants Could Help End the Organ Donor Crisis

Singularity HUB - 1 July, 2024 - 16:00

Thousands of people a year die while waiting for an organ transplant. Early experiments in xenotransplantation are raising hopes this could soon be a thing of the past.

In the US, 100,000 people are currently on the organ transplant waiting list, and 17 of them die every day before receiving an organ. The persistent shortage of organ donors has long led doctors to flirt with the idea of xenotransplantation, a procedure where tissue or an organ from an animal is transplanted into a human.

Early experiments were mostly unsuccessful and ethically questionable, though, and the idea remained firmly on the fringes of the medical world. That’s largely due to the high risk of rejection. Rejection is a problem for human-to-human transplants too, but it’s far riskier with organs from other species.

But the advent of increasingly powerful and precise genetic engineering technologies such as CRISPR has ushered the idea out of the shadows. The ability to edit donor animals’ DNA to prevent the production of biomolecules known to induce immune responses in humans has raised hopes the approach may be viable after all.

In recent years, a handful of pioneering experiments in humans have demonstrated that genetically engineered pig organs can at least temporarily function smoothly in the human body. Medical complications, organ rejections, and patient deaths have meant none of these procedures have provided a long-term solution, but the results so far have been promising.

“At Massachusetts General Hospital alone, there are over 1,400 patients on the waiting list for a kidney transplant,” Leonardo Riella, who led the surgical team at Mass General that transplanted a pig kidney into a patient, said in a press release earlier this year.

“Some of these patients will unfortunately die or get too sick to be transplanted due to the long waiting time on dialysis. I am firmly convinced that xenotransplantation represents a promising solution to the organ shortage crisis.”

In 2021, in the first human experiment involving a genetically engineered pig organ, doctors transplanted a kidney into a patient who was already brain dead. The team knocked out a gene for a molecule called alpha-gal—which causes organ rejection—in the donor pig. The surgery appeared to be a success: The kidney produced urine and showed no signs of rejection, though the patient’s body was only maintained for 54 hours.

The following year, a patient with terminal heart failure received a genetically modified pig heart and initially seemed to do well but passed away 60 days later. It’s not entirely clear why he died, but doctors later found a pathogen called porcine cytomegalovirus in his heart that pre-screening had failed to flag, which could have contributed. He’d also been given an antibody treatment that had reacted with the heart.

Then earlier this year, two kidney disease patients who were ineligible for normal transplants received gene-edited pig kidneys from donor pigs bred by biotech firm eGenesis. Using CRISPR, the company made 69 edits that removed some pig genes, added some human ones, and reduced the risk of latent virus in the organ reactivating and harming the patient.

The procedures appeared to go well. Doctors even discharged the first patient after determining the kidney was functioning well, and he no longer needed dialysis. Two months later he passed away, but he had other underlying health issues, and the hospital said there was no indication his death was the result of the transplant.

The second patient had to have the kidney removed after 47 days due to “unique challenges” stemming from the fact she had also had a mechanical heart pump implanted just before the transplantation. There were no signs of rejection, but the kidney started losing function because her heart was not able to pump blood with enough pressure, the researchers said.

The most recent experiment was announced in May, when Chinese researchers said they had transplanted a liver from a genetically modified pig into a 71-year-old man with liver cancer. While details of the procedure are limited, the team claimed the man was “doing very well” more than two weeks after surgery.

While most of these experiments have been short-lived, the fact that only two cases saw the transplanted organ fail—one of which was due to external complications—is a promising sign. For ethical reasons, doctors have only been able to experiment with patients whose chances of survival were already slim.

But it does mean that we have little idea whether xenotransplantation could be a viable long-term solution for patients. There is also some concern that implanting organs from other animals into humans could make it easier for pathogens to jump between species, potentially creating the risk of new pandemics.

Other researchers are investigating whether, instead of transplanting pig organs into humans, we could grow human organs in pigs. Last September, researchers announced they’d transplanted human stem cells into pig embryos where they then grew into rudimentary kidneys.

This approach is a long way from human trials though, so for the time being, xenotransplantation seems like a more promising way to bring down transplant wait times. While it’s still early days, the promising early results suggest we may not be far from a future where replacement organs can be grown to order.

Image Credit: Massachusetts General Hospital

Category: Transhumanismus

How Teams of AI Agents Working Together Could Unlock the Tech’s True Power

Singularity HUB - 28 June, 2024 - 16:00

If you had to sum up what has made humans such a successful species, it would be teamwork. There’s growing evidence that getting AIs to work together could dramatically improve their capabilities too.

Despite the impressive performance of large language models, companies are still scrabbling for ways to put them to good use. Big tech companies are building AI smarts into a wide range of products, but none has yet found the killer application that will spur widespread adoption.

One promising use case garnering attention is the creation of AI agents to carry out tasks autonomously. The main problem is that LLMs remain error-prone, which makes it hard to trust them with complex, multi-step tasks.

But as with humans, it seems two heads are better than one. A growing body of research into “multi-agent systems” shows that getting chatbots to team up can help solve many of the technology’s weaknesses and allow them to tackle tasks out of reach for individual AIs.

The field got a significant boost last October when Microsoft researchers launched a new software library called AutoGen designed to simplify the process of building LLM teams. The package provides all the necessary tools to spin up multiple instances of LLM-powered agents and allow them to communicate with each other by way of natural language.
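As a flavor of what AutoGen makes easy, here is a minimal two-agent setup. The pattern follows recent pyautogen releases, but API details shift between versions, and the model name and key are placeholders:

```python
import autogen

# Placeholder credentials; swap in a real model name and API key.
config_list = [{"model": "gpt-4", "api_key": "YOUR_API_KEY"}]

# An LLM-powered assistant that plans and writes code...
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# ...and a proxy agent that runs the assistant's code and reports results.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",        # fully autonomous back-and-forth
    max_consecutive_auto_reply=5,
    code_execution_config={"work_dir": "scratch", "use_docker": False},
)

# The two agents converse in natural language until the task is done.
user_proxy.initiate_chat(
    assistant,
    message="Write and run a Python script that prints the first 10 primes.",
)
```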

Since then, researchers have carried out a host of promising demonstrations. 

In a recent article, Wired highlighted several papers presented at a workshop at the International Conference on Learning Representations (ICLR) last month. The research showed that getting agents to collaborate could lift performance on math tasks—something LLMs tend to struggle with—and improve their reasoning and factual accuracy.

In another instance, noted by The Economist, three LLM-powered agents were set the task of defusing bombs in a series of virtual rooms. The AI team performed better than individual agents, and one of the agents even assumed a leadership role, ordering the other two around in a way that improved team efficiency.

Chi Wang, the Microsoft researcher leading the AutoGen project, told The Economist that the approach takes advantage of the fact most jobs can be split up into smaller tasks. Teams of LLMs can tackle these in parallel rather than churning through them sequentially, as an individual AI would have to do.

So far, setting up multi-agent teams has been a complicated process only really accessible to AI researchers. But earlier this month, the Microsoft team released a new “low-code” interface for building AI teams called AutoGen Studio, which is accessible to non-experts.

The platform allows users to choose from a selection of preset AI agents with different characteristics. Alternatively, they can create their own by selecting which LLM powers the agent, giving it “skills” such as the ability to fetch information from other applications, and even writing short prompts that tell the agent how to behave. 

So far, users of the platform have put AI teams to work on tasks like travel planning, market research, data extraction, and video generation, say the researchers.

The approach does have its limitations though. LLMs are expensive to run, so leaving several of them to natter away to each other for long stretches can quickly become unsustainable. And it’s unclear whether groups of AIs will be more robust to mistakes, or whether they could lead to cascading errors through the entire team.

Lots of work needs to be done on more prosaic challenges too, such as the best way to structure AI teams and how to distribute responsibilities between their members. There’s also the question of how to integrate these AI teams with existing human teams. Still, pooling AI resources is a promising idea that’s quickly picking up steam.

Image Credit: Mohamed Nohassi / Unsplash

Category: Transhumanismus

This MIT Device Maps the Human Brain With Unprecedented Resolution and Speed

Singularity HUB - 27 June, 2024 - 16:00

A squishy, fatty, beige-colored organ covered with grooves and ridges, the brain doesn’t look all that impressive on the surface. 

But hidden underneath are up to 100 billion neurons and 100 trillion synapses—the connections between neurons that form networks—densely packed into a three-pound organ that controls our thoughts, feelings, movement, memories, and sense of self.

For the past two decades, scientists have dissected the internal neural connections and workings of the brain by carefully chopping it up into paper-thin pieces. From there, they’ve built multiple maps of the brain’s cellular population, architecture, connections, and gene expression. Like charting the landscape of a new world, these maps have been consolidated into what amounts to a Google Maps for the brain. These atlases allow us to decipher brain function, bridging genetic expression to cell functions, network connections, and behavior.

At least for rodents and other animals, that is. Mapping the brain is incredibly difficult and time-consuming. A small chunk of a mouse’s brain, when imaged at single-cell resolution, takes years to process, scan, and reconstruct into 3D computer models. Any trip-up during the process ruins the result. Mapping the much larger human brain is far more difficult.

This month, a team from MIT developed a “holistic” brain-mapping platform that captures the anatomy of large slices of the human brain with unprecedented resolution and speed, slashing a process that normally takes between a week and a month to a few days. 

They used the platform to image an Alzheimer’s brain after physically expanding the brain tissue with a hydrogel. The automated system sliced, imaged, and stitched the images together, revealing myriad cellular changes and problems with neural connections, including inflammation.

Compared to previous brain mapping projects, which often require months or years, the new platform mapped different levels of the brain’s physical makeup—from synapses to local neural circuits and brain-wide connections in slabs of human brain tissue—in just a few days.

“We performed holistic imaging of human brain tissues at multiple resolutions from single synapses to whole brain hemispheres, and we have made that data available,” study author Kwanghun Chung said in a press release. 

To be clear, the technology has only been used on slabs of human brain tissue and hasn’t yet charted the entire brain’s neurons and connections. But “this technology pipeline really enables us to analyze the human brain at multiple scales. Potentially this pipeline can be used for fully mapping human brains,” said Chung.

Three-Way Upgrade

Here’s how brain mapping technology usually works. Whole brains are sliced into wafer-thin pieces on a machine called a vibratome—think of it as a souped-up deli meat slicer.

Most vibratomes are tailored for cutting smaller brains, such as those from rodents. Trying to cut a slice of human brain tissue with a standard vibratome is akin to cutting a sandwich filled with deli meats, arugula, and avocado with a dull knife. Picture the meats as neurons, arugula as blood vessels, and avocado as supporting structures. All components get distorted and squished, making it nearly impossible to realign them into a brain map. 

As a workaround, the team developed MEGAtome, a vibratome that can slice through large and soft human brain specimens without tearing or squishing. Compared to a state-of-the-art device, MEGAtome vibrates at higher frequencies, lowering the chances of “angled cuts” and minimizing distortion of the neural connections that eventually need to be realigned. 

The next step is treating brain samples. Previously, scientists found a way to physically expand the brain in size—so its details are easier to see under the microscope—using a gel commonly found inside diapers. The team adapted the idea and developed a recipe that embedded human brain slices in a squishy hydrogel, transforming the tissue into a stretchy brain-gel hybrid that easily withstands mechanical pressure—such as that from the vibratome blade—while maintaining its shape. Some tissues expanded to over four times their normal size.

In one demo, the team used MEGAtome to slice up a human brain hemisphere, generating 40 relatively thick slabs in just 8 hours.

To capture cellular identities, the team stained each brain slab with dyes that grab onto different types of proteins to highlight different types of brain cells. Some colors signal mature neurons; others mark non-neuronal cells, called astrocytes. Although these cells can’t transmit electrical signals, they support neurons by releasing chemicals to regulate their function. Also present were the brain’s immune cells and blood vessels.

The system imaged a four-millimeter-thick slab of human brain—thicker than the average cortex—at the synapse level in just six hours. Using the technique, “a whole brain hemisphere can be imaged at single-cell resolution in ~100 hours,” wrote the team.

The third upgrade is software. Recreating a 3D brain structure means aligning individual slices like piecing together a puzzle. The team first used blood vessels as a guide to roughly align each piece. They then zeroed in on individual neural connections to further perfect the map. 
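As a rough illustration of that coarse-to-fine idea, here is a toy sketch of the first stage. The registration method (matching intensity centroids) and the data are invented for illustration and bear no relation to the team's actual software:

```python
import numpy as np

def register(fixed: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Toy rigid registration: estimate the (dy, dx) shift that aligns
    two binary masks by matching their intensity centroids."""
    def centroid(img: np.ndarray) -> np.ndarray:
        ys, xs = np.nonzero(img)
        return np.array([ys.mean(), xs.mean()])
    return centroid(fixed) - centroid(moving)

# Stage 1 (coarse): align adjacent slabs using blood-vessel masks.
fixed_vessels = np.zeros((100, 100)); fixed_vessels[40:60, 40:60] = 1
moving_vessels = np.zeros((100, 100)); moving_vessels[45:65, 30:50] = 1
shift = register(fixed_vessels, moving_vessels)
print(shift)  # approximate offset to apply before the fine stage

# Stage 2 (fine) would repeat the idea on masks of individual neural
# fibers to refine the alignment.
```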

Previously, slices could only tolerate one round of dyes. With the new protocol, they withstood at least seven rounds of rinsing and re-dyeing, allowing scientists to capture multiple protein changes in the same tissue at single-cell resolution.

“This technology pipeline really enables us to extract all these important features from the same brain in a fully integrated manner,” Chung said in the press release. 

Alzheimer’s and Beyond

As a proof of concept, the team used the new system to analyze two donated brains: One from a healthy 61-year-old female donor and the other from an octogenarian with Alzheimer’s.

The team sliced both brains with MEGAtome and dyed multiple slabs. Compared to the healthy brain, the Alzheimer’s brain had 46.5 percent fewer neurons, especially in a frontal part of the brain that’s important for making decisions.

“Connectivity is impaired [here] in later stages of Alzheimer’s disease,” wrote the team. 

The team also found increased inflammation in the Alzheimer’s brain, along with a build-up of protein gunk outside cells—potentially damaging those neurons’ ability to connect to others. 

With just one sample, the results don’t offer conclusions about how neurons change in Alzheimer’s disease. But that’s not the point. The platform allows scientists to quickly and efficiently probe larger brain tissues—not just in humans, but also pigs and non-human primates—to further our understanding of neural networks in the brain and what happens to them in health and disease.

Image Credit: Image of the orbitofrontal cortex from an Alzheimer’s donated brain. Chung Lab/MIT Picower Institute

Category: Transhumanismus

mRNA Cancer Vaccines Spark Renewed Hope as Clinical Trials Gain Momentum

Singularity HUB - 26 June, 2024 - 16:00

When Angela received her first shot at the Lombardi Comprehensive Cancer Center in early 2020, Covid-19 was months away. Far from a household name, mRNA vaccines were mostly relegated to lab studies.

Yet the jab she received was made of the same technology. A melanoma patient, Angela had multiple malignant moles removed. Alongside an established immune-stimulating drug, the hope was the duo could fight off any residual cancerous cells and slash the chances of relapse.

Scientists have long sought cancer vaccines that prevent the pesky cells from growing back. Like those targeting viruses, the vaccines would train the body’s immune system to recognize the cancerous cells and attack and eliminate them before they could grow and spread.

Despite decades of research into cancer vaccines, the dream has mostly failed. One reason is that every cancer, in every person, is different. So is each person’s immune system. Tailoring vaccines to neutralize each patient’s cancer would not only be expensive but sometimes impossible given how long they’d take to develop—time is not on cancer patients’ side.

In contrast, mRNA vaccines are far speedier to build. After they were removed, Angela’s malignant moles were analyzed for specific cancerous “fingerprints” or neoantigens. Based on these proteins, scientists at Moderna—known for their Covid-19 vaccines—built a custom mRNA cancer vaccine to train her immune system to prevent her own cancer from recurring.

Angela is part of a clinical trial led by pharmaceutical companies Moderna and Merck testing whether the treatment keeps malignant skin cancer from coming back. Compared to a standard immunotherapy drug alone, adding a custom mRNA vaccine reduced the chances of the cancer returning by roughly 50 percent and increased lifespan.

To be clear, the vaccines don’t protect a person from getting cancer in the first place. Rather, they teach the immune system to recognize residual malignant cells and prevent them from returning. The companies have launched Phase 3 clinical studies in people with melanoma and a type of lung cancer, with earlier stage clinical trials for other cancer cell types in the works.

Getting Personal

Like healthy cells, cancerous cells are dotted with all kinds of proteins on their surfaces. Dubbed “neoantigens,” these proteins differentiate cancer cells from healthy ones, making them attractive targets for therapies. And like fingerprints, neoantigens often differ across cancer types and individuals, raising the possibility of personalized treatments.

That’s the idea behind cancer vaccines. They work like vaccines against infectious diseases. Parts of the invader—in cancer’s case, its unique neoantigens—are mixed with chemicals that stimulate the immune system. Once injected, the concoction directs the immune system to specifically attack cells with the neoantigen and eliminate the threat.

Compared to chemotherapy—notorious for its horrible side effects—cancer vaccines target a person’s own constellation of neoantigens, which in theory limits damage.

In 2017, two small clinical trials offered a glimpse that these vaccines could work in humans. Both studies targeted melanoma, a mole-like type of cancer that can quickly spread and recur.

After surgical removal, the researchers sequenced the genes of each malignant mole and selected up to 20 different protein fragments per person to develop into vaccines. In one study, the shots kept the cancer at bay in four out of six patients for at least two years. The two patients whose cancer did return entered remission after treatment with a drug that stimulates the immune system.

Another study enrolled 13 patients, eight with no visible tumors and five whose cancer had already spread. A personalized vaccine encoded 10 neoantigens for each person and used a virus to shuttle the mixture into cells. The first group remained cancer-free for over a year, but results were mixed in the second: The cancer shrank but resurged in some patients, while others went into remission after treatment with the same immune-stimulating drug.

“It’s potentially a game changer,” Dr. Cornelis Melief at Leiden University Medical Center, who was not involved in the study, told Nature at the time.

Yet the field still faced a roadblock: Cancer vaccines are expensive to make and often require time—time that patients don’t always have.

An mRNA World

Enter mRNA vaccines. Best known for battling Covid-19, these vaccines can be designed and manufactured at a fraction of the time and cost of their traditional protein-based counterparts.

A cancer vaccine based on mRNA follows a similar path to previous iterations, but with a few upgrades.

After removal, the patient’s skin cancer is rapidly sequenced. Selecting neoantigen genes is key, since not all of them can be recognized by the immune system. Machine learning algorithms, trained on expanding databases of cancer-related mutations, sort through the data to identify the neoantigens most likely to stimulate the immune system. Moderna picks up to 34 candidates with the highest chances.
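In spirit, the selection step is a ranking problem. The sketch below is purely illustrative: the scoring heuristic is a hypothetical stand-in, not Moderna's pipeline, and real models weigh features such as MHC binding affinity and expression level.

```python
# Hypothetical scorer: fraction of hydrophobic residues, which often
# anchor peptides to MHC molecules. A real model is trained on data.
HYDROPHOBIC = set("AVILMFWY")

def toy_score(peptide: str) -> float:
    return sum(aa in HYDROPHOBIC for aa in peptide) / len(peptide)

candidates = ["SIINFEKL", "GILGFVFTL", "NLVPMVATV", "KVAELVHFL"]
ranked = sorted(candidates, key=toy_score, reverse=True)
selected = ranked[:34]  # the vaccine encodes up to 34 neoantigens
print(selected)
```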

Like in Covid-19 vaccines, the selected genes are then translated into mRNA and encapsulated in fatty bubbles. Once injected, the mRNA commandeers the cell’s protein-making machinery to pump out neoantigens. These, in turn, train the immune system to sniff out the foe.

The mRNA vaccines weren’t used alone, however. Taking a note from previous studies, the companies added an immune-stimulating drug to boost efficacy.

The results from an ongoing three-year trial were announced earlier this month. Compared to the drug alone, the combination reduced the risk of the cancer returning or of death by 49 percent and decreased the risk of the cancer spreading by 62 percent. Patients treated with the combo were more likely to be alive and cancer-free two and a half years on. The results mirror those from a previous analysis led by Dr. Jeffrey Weber at New York University Langone Health, who is overseeing the trial, dubbed KEYNOTE-942.

“At the end of the day, you realize, ‘Damn! This combination seems to have activity,’” Weber told Nature.

Although the results are promising, the combo isn’t for everyone. Later-stage cancers, especially those that have already spread, don’t respond well to the treatment. These tumors also grow faster than their earlier counterparts, robbing scientists of the time needed to develop a personalized vaccine.

Others are doing similar work. BioNTech has partnered with Genentech to develop vaccines targeting up to 20 neoantigens for notoriously aggressive pancreatic cancer. The vaccine triggered an immune response in only half of the participants, and even then, immune cells sometimes recognized just one neoantigen. Nonetheless, vaccinated patients lived longer cancer-free when assessed 18 months after treatment.

Cancer vaccines are having a renaissance, but there’s much left to learn. Figuring out how to choose the right neoantigens is first and foremost. One team, for example, is verifying that immune cells in blood samples from patients actually recognize the selected neoantigens.

Other cancer types are already on the docket as potential next targets, including cancers of the cells lining the skin, lungs, and digestive tract, as well as kidney cancer.

As for Angela, the initial flu-like symptoms from the treatment were worth it. Now in her mid-40s, she has been cancer-free for three years. When asked whether it was the vaccine or the drug, she told Nature: “I’m just happy to be cancer-free.”

Image Credit: Diana Polekhina / Unsplash

Category: Transhumanismus

AI Plus Gene Editing Promises to Shift Biotech Into High Gear

Singularity HUB - 25 June, 2024 - 16:00

During her chemistry Nobel Prize lecture in 2018, Frances Arnold said, “Today we can for all practical purposes read, write, and edit any sequence of DNA, but we cannot compose it.”

That isn’t true anymore.

Since then, science and technology have progressed so much that artificial intelligence has learned to compose DNA, and with genetically modified bacteria, scientists are on their way to designing and making bespoke proteins.

The goal is that with AI’s design talents and gene editing’s engineering abilities, scientists can modify bacteria to act as mini-factories producing new proteins that can reduce greenhouse gases, digest plastics, or act as species-specific pesticides.

As a chemistry professor and computational chemist who studies molecular science and environmental chemistry, I believe that advances in AI and gene editing make this a realistic possibility.

Gene Sequencing: Reading Life’s Recipes

All living things contain genetic materials—DNA and RNA—that provide the hereditary information needed to replicate themselves and make proteins. Proteins constitute 75 percent of human dry weight. They make up muscles, enzymes, hormones, blood, hair, and cartilage. Understanding proteins means understanding much of biology. The order of nucleotide bases in DNA, or RNA in some viruses, encodes this information, and genomic sequencing technologies identify the order of these bases.

The Human Genome Project was an international effort that sequenced the entire human genome between 1990 and 2003. It took seven years to sequence the first 1 percent of the genome; thanks to rapidly improving technologies, the remaining 99 percent took only another seven. By 2003, scientists had the complete sequence of 3 billion nucleotide base pairs coding for the 20,000 to 25,000 genes in the human genome.

However, understanding the functions of most proteins and correcting their malfunctions remained a challenge.

AI Learns Proteins

Each protein’s shape is critical to its function and is determined by the sequence of its amino acids, which is in turn determined by the gene’s nucleotide sequence. Misfolded proteins have the wrong shape and can cause illnesses such as neurodegenerative diseases, cystic fibrosis, and Type 2 diabetes. Understanding these diseases and developing treatments requires knowledge of protein shapes.

Before 2016, the shape of a protein was determined primarily through X-ray crystallography, a laboratory technique that uses the diffraction of X-rays by single crystals to determine the precise three-dimensional arrangement of atoms and molecules. At that time, the structures of about 200,000 proteins had been determined by crystallography, at a cost of billions of dollars.

AlphaFold, a machine learning program, used these crystal structures as a training set to determine the shape of proteins from their amino acid sequences. And in less than a year, the program calculated structures for all 214 million proteins whose sequences had been published. The protein structures AlphaFold determined have all been released in a freely available database.

To effectively address noninfectious diseases and design new drugs, scientists need more detailed knowledge of how proteins, especially enzymes, bind small molecules. Enzymes are protein catalysts that enable and regulate biochemical reactions.

AlphaFold3, released May 8, 2024, can predict protein shapes and the locations where small molecules can bind to these proteins. In rational drug design, drugs are designed to bind proteins involved in a pathway related to the disease being treated. The small molecule drugs bind to the protein binding site and modulate its activity, thereby influencing the disease path. By being able to predict protein binding sites, AlphaFold3 will enhance researchers’ drug development capabilities.

AI + CRISPR = Composing New Proteins

Around 2015, the development of CRISPR technology revolutionized gene editing. CRISPR can be used to find a specific part of a gene, change or delete it, make the cell express more or less of its gene product, or even add an utterly foreign gene in its place.

In 2020, Jennifer Doudna and Emmanuelle Charpentier received the Nobel Prize in chemistry “for the development of a method (CRISPR) for genome editing.” With CRISPR, gene editing, which once took years and was species specific, costly, and laborious, can now be done in days and for a fraction of the cost.

AI and genetic engineering are advancing rapidly. What was once complicated and expensive is now routine. Looking ahead, the dream is of bespoke proteins designed and produced by a combination of machine learning and CRISPR-modified bacteria. AI would design the proteins, and bacteria altered using CRISPR would produce the proteins. Enzymes produced this way could potentially breathe in carbon dioxide and methane while exhaling organic feedstocks or break down plastics into substitutes for concrete.

I believe that these ambitions are not unrealistic, given that genetically modified organisms already account for 2 percent of the US economy in agriculture and pharmaceuticals.

Two groups have made functioning enzymes from scratch that were designed by different AI systems. David Baker’s Institute for Protein Design at the University of Washington devised a new deep-learning-based protein design strategy it named “family-wide hallucination,” which it used to make a unique light-emitting enzyme. Meanwhile, the biotech startup Profluent has used an AI trained on the sum of all CRISPR-Cas knowledge to design new functioning genome editors.

If AI can learn to make new CRISPR systems as well as bioluminescent enzymes that work and have never been seen on Earth, there is hope that pairing CRISPR with AI can be used to design other new bespoke enzymes. Although the CRISPR-AI combination is still in its infancy, once it matures it is likely to be highly beneficial and could even help the world tackle climate change.

It’s important to remember, however, that the more powerful a technology is, the greater the risks it poses. Also, humans have not been very successful at engineering nature due to the complexity and interconnectedness of natural systems, which often leads to unintended consequences.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Gerd Altmann / Pixabay

Category: Transhumanismus

No, AI Doesn’t Mean Human-Made Music Is Doomed. Here’s Why.

Singularity HUB - 21 June, 2024 - 21:06

Recently we have seen the launch of artificial intelligence programs such as SOUNDRAW and Loudly that can create musical compositions in the style of almost any artist.

We’re also seeing big stars use AI in their own work, including to replicate others’ voices. Drake, for instance, landed in hot water in April after he released a diss track that used AI to mimic the voice of late rapper Tupac Shakur. And with the new ChatGPT model, GPT-4o, things are set to reach a whole new level. Fast.

So is human-made music doomed?

While it’s true AI will likely disrupt the music industry and even transform how we engage with music, there are some good reasons to suggest human music-making isn’t going anywhere.

Technology and Music Have a Long History

One could argue AI is essentially a tool aimed at making our lives easier. Humans have been crafting such tools for a long time, both in music and nearly every other domain.

We’ve been using technology to play music since the invention of the gramophone. And arguments about human musicians versus machines are at least as old as the self-playing piano, which came into use in the early 20th century.

More recently, sampling, DJ-ing, autotune technology, and AI-based mastering and production software have continued to fan debates over artistic originality.

But the new AI developments are different. Anyone can create a new track in any existing genre, with minimal effort. They can add instruments, change the music’s “vibe,” and even choose a virtual singer to sing their lyrics.

Given the industry’s longstanding exploitation of artists—particularly with the rise of streaming (and Spotify’s chief executive claiming music is almost free to create)—it’s easy to see why the latest developments in AI are frightening some musicians.

Music Is a Very Human Thing

At the same time, these developments offer an opportunity to reflect on why people make music in the first place. We have long used music to tell our stories, to express ourselves and our humanity. These stories teach us, heal us, energize us, and help shape our identities.

Can AI music do this? Maybe. But it’s unlikely to be able to speak to the human experience in the same way a human can—partly because AI doesn’t understand that experience the way we do.

It’s also unlikely to be able to create new works outside of existing musical paradigms, as it relies on algorithms taking from existing material. So, we’ll likely still need our imaginations to create new musical ideas.

It also helps to note that music being controlled by “algorithms” actually isn’t a new concept. Mainstream pop artists have long had their music written for them by industry “hit makers” who use specific formulas.

It’s usually the musicians on the fringes, rather than the more commercial artists and products, who retain connection to music as a cultural practice and therefore push the development of new styles.

Perhaps the bigger question isn’t how musicians will compete against AI, but how we as a society should value the musicians who help create our musical worlds, and our very cultures.

Is this a task we’re happy to hand over to AI to save money? Or should such an important role be supported with job security and a fair wage, as is afforded to doctors, dentists, politicians, and teachers?

Art for Art’s Sake

There’s another much more fundamental reason why AI will not spell the end of human-made music. That’s because, as most musicians will tell you, making music feels good. It doesn’t always matter if it’s going to be sold, recorded, or even heard.

Consider mountain climbing as an example. Although we now have chair lifts, gondolas, funiculars, helicopters, planes, trains, and cars to take people to the top, people still love climbing mountains for the mental and physical benefits.

Similarly, playing music is a unique experience with benefits that extend far beyond making money. Ever since our ancestors first tapped rocks together in caves, music has connected us to others and to ourselves.

The health benefits are overwhelming (just look at the amount of evidence relating to choirs). The neurological benefits are also astounding, with no other activity lighting up as many parts of the brain.

No matter how good computers get at making music, active music engagement will always remain an important way to regulate our moods and nervous systems.

Also, if our relationships with organic foods, vinyl records, and sustainable fashion are anything to go by, we can assume there will always be a group of conscious consumers willing to pay more for human-made music.

AI as an Opportunity

Further, while AI will likely disrupt the music industry as we know it, it also has amazing potential for boosting creative freedom for new generations of artists.

It may soften the separation between “musician” and “non-musician,” arguably allowing more people access to all the associated wellbeing benefits of music-making.

There’s also enormous potential for music education, since students could use AI to explore all aspects of the musical process in one classroom.

In a health context, personalized songs and albums could have significant implications for music therapy by letting therapists create tracks tailored to their clients’ needs. For instance, a therapist might want to produce a song a client has no prior association with to avoid music-related triggers during therapy.

AI-assisted music is already being used in psychedelic therapy to create, curate, and personalize people’s journeys.

Over the past 100 years, we’ve seen several innovations revolutionize the way we interact with music. AI ought to be understood as the next step in this process. And while change brings uncertainty, it also offers hope.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: sebastiaan stam / Unsplash

Category: Transhumanismus

Researchers Say Chatbots ‘Policing’ Each Other Can Correct Some AI Hallucinations

Singularity HUB - 20 June, 2024 - 22:34

Generative AI, the technology behind ChatGPT and Google’s Gemini, has a “hallucination” problem. Given a prompt, the algorithms sometimes confidently spit out impossible gibberish or hilariously wrong answers. When pushed, they often double down.

This tendency to dream up answers has already led to embarrassing public mishaps. In May, Google’s experimental “AI Overviews”—AI summaries posted above search results—had some users scratching their heads when it suggested using “non-toxic glue” to help cheese stick to pizza or adding gasoline to make a spicy spaghetti dish. Another query about healthy living returned the suggestion that humans should eat one rock per day.

Gluing pizza and eating rocks can be easily laughed off and dismissed as stumbling blocks in a burgeoning but still nascent field. But AI’s hallucination problem is far more insidious because generated answers usually sound reasonable and plausible—even when they’re not based on facts. Because of their confident tone, people are inclined to trust the answers. As companies further integrate the technology into medical or educational settings, AI hallucination could have disastrous consequences and become a source of misinformation.

But teasing out AI’s hallucinations is tricky. The algorithms involved, called large language models, are notorious “black boxes” that rely on complex networks trained on massive amounts of data, making it difficult to parse their reasoning. Sleuthing which components—or perhaps the whole algorithmic setup—trigger hallucinations has been a headache for researchers.

This week, a new study in Nature offers an unconventional idea: Using a second AI tool as a kind of “truth police” to detect when the primary chatbot is hallucinating. The tool, also a large language model, was able to catch inaccurate AI-generated answers. A third AI then evaluated the “truth police’s” efficacy.

The strategy is “fighting fire with fire,” Karin Verspoor, an AI researcher and dean of the School of Computing Technologies at RMIT University in Australia, who was not involved in the study, wrote in an accompanying article.

An AI’s Internal Word

Large language models are complex AI systems built on multilayer networks that loosely mimic the brain. To train a network for a given task—for example, to respond in text like a person—the model takes in massive amounts of data scraped from online sources—articles, books, Reddit and YouTube comments, and Instagram or TikTok captions. 

This data helps the models “dial in” on how language works. They’re completely oblivious to “truth.” Their answers are based on statistical predictions of how words and sentences likely connect—and what is most likely to come next—from learned examples. 
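A toy bigram model makes the point concrete. Real LLMs use deep networks over vast corpora, but the core move is the same: pick a statistically likely continuation, with no notion of truth.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus (one sentence is
# wrong on purpose: Ganymede, not Titan, is the largest moon).
corpus = "the largest moon is ganymede . the largest moon is titan .".split()

counts: defaultdict = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev_word: str) -> str:
    # Returns the most frequently seen continuation; ties are broken
    # arbitrarily, and truth never enters into it.
    return counts[prev_word].most_common(1)[0][0]

print(predict("moon"))  # 'is'
print(predict("is"))    # 'ganymede' or 'titan', whichever was counted first
```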

“By design, LLMs are not trained to produce truths, per se, but plausible strings of words,” study author Sebastian Farquhar, a computer scientist at the University of Oxford, told Science.

Somewhat like a sophisticated parrot, these algorithms lack the common sense that comes naturally to humans, sometimes leading them to nonsensical, made-up answers. “Hallucination” is an umbrella term capturing multiple types of errors in AI-generated results, which can be either unfaithful to the context or plainly false.

“How often hallucinations are produced, and in what contexts, remains to be determined,” wrote Verspoor, “but it is clear that they occur regularly and can lead to errors and even harm if undetected.”

Farquhar’s team focused on one type of AI hallucination, dubbed confabulation. These errors are especially notorious: The AI reliably produces wrong answers to a prompt, but the answers themselves are all over the place. In other words, the AI “makes up” wrong replies, and its responses change when asked the same question over and over.

Confabulations are about the AI’s internal workings, unrelated to the prompt, explained Verspoor. 

When given the same prompt, if the AI replies with a different and wrong answer every time, “something’s not right,” said Farquhar to Science.

Language as Weapon

The new study took advantage of the AI’s falsehoods.

The team first asked a large language model to spit out nearly a dozen responses to the same prompt and then classified the answers using a second similar model. Like an English teacher, this second AI focused on meaning and nuance, rather than particular strings of words.

For example, when repeatedly asked, “What is the largest moon in the solar system?” the first AI replied “Jupiter’s Ganymede,” “It’s Ganymede,” “Titan,” or “Saturn’s moon Titan.”

The second AI then measured the randomness of the responses using “semantic entropy,” a measure built on the decades-old concept of entropy. It captures a written word’s meaning in a given sentence, paragraph, or context, rather than its strict definition.

In other words, it detects paraphrasing. If the AI’s answers are relatively similar—for example, “Jupiter’s Ganymede” or “It’s Ganymede”—then the entropy score is low. But if the AI’s answer is all over the place—“It’s Ganymede” and “Titan”—it generates a higher score, raising a red flag that the model is likely confabulating its answers.

The “truth police” AI then clustered the responses into groups by meaning and scored each prompt’s set of answers, with lower entropy deemed more reliable.
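Here is a heavily simplified sketch of that scoring step. The meaning-equivalence check is a hypothetical stand-in; the paper instead asks a second LLM whether two answers entail each other:

```python
import math

def same_meaning(a: str, b: str) -> bool:
    # Stand-in: call answers equivalent if they end with the same word
    # (here, the moon's name). A real system uses an LLM entailment check.
    return a.lower().split()[-1] == b.lower().split()[-1]

def semantic_entropy(answers: list) -> float:
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if same_meaning(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    probs = [len(c) / len(answers) for c in clusters]
    return sum(-p * math.log(p) for p in probs)

consistent = ["Jupiter's Ganymede", "It's Ganymede", "Ganymede"]
scattered = ["It's Ganymede", "Titan", "Saturn's moon Titan"]
print(semantic_entropy(consistent))  # 0.0 -> likely reliable
print(semantic_entropy(scattered))   # ~0.64 -> red flag for confabulation
```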

As a final step, the team asked two human participants to rate the correctness of each generated answer. A third large language model acted as a “judge.” The AI compared answers from the first two steps to those of humans. Overall, the two human judges agreed with each other at about the same rate as the AI judge—slightly over 90 percent of the time.

The AI truth police also caught confabulations for more intricate narratives, including facts about the life of Freddie Frith, a famous motorcycle racer. When repeatedly asked the same question, the first generative AI sometimes changed basic facts—such as when Frith was born—and was caught by the AI truth cop. Like detectives interrogating suspects, the added AI components could fact-check narratives, trivia responses, and common search results based on actual Google queries.

Large language models seem to be good at “knowing what they don’t know,” the team wrote in the paper, “they just don’t know [that] they know what they don’t know.” An AI truth cop and an AI judge add a sort of sanity-check for the original model.

That’s not to say the setup is foolproof. Confabulation is just one type of AI hallucination. Others are more stubborn. An AI can, for example, confidently generate the same wrong answer every time. The AI lie-detector also doesn’t address disinformation specifically created to hijack the models for deception. 

“We believe that these represent different underlying mechanisms—despite similar ‘symptoms’—and need to be handled separately,” explained the team in their paper. 

Meanwhile, Google DeepMind has similarly been exploring adding “universal self-consistency” to their large language models for more accurate answers and summaries of longer texts. 

The new study’s framework can be integrated into current AI systems, but at a hefty computational energy cost and longer lag times. As a next step, the strategy could be tested for other large language models, to see if swapping out each component makes a difference in accuracy. 

But along the way, scientists will have to determine “whether this approach is truly controlling the output of large language models,” wrote Verspoor. “Using an LLM to evaluate an LLM-based method does seem circular, and might be biased.”

Image Credit: Shawn Suttle / Pixabay

Category: Transhumanismus

Here’s How Much Spaceflight Changes the Body’s Biology in Just Three Days

Singularity HUB - 19 June, 2024 - 16:00

Hayley Arceneaux is hardly the picture of a traditional astronaut. The 32-year-old physician assistant has a metal rod in her leg, replacing cancerous bone segments removed during a childhood battle with the disease.

But in September 2021, she became the youngest American civilian to orbit the Earth as a member of SpaceX’s Inspiration4 mission. Led by billionaire entrepreneur Jared Isaacman, the trip was the first to carry an all-civilian crew of four people to space and opened a unique opportunity to investigate how spaceflight changes our bodies and minds—not for trained astronauts, but for everyday people. The crew agreed to have biological samples taken before, during, and after the three-day flight. They also tested their cognition throughout the trip.

In over 40 studies released last week, researchers found that radiation and low gravity rapidly changed the body’s inner workings. After just three days, the immune system and gene expression were out of whack, and cloudy thinking set in.

The good news? Upon returning to Earth, most of these troubles eased.

Together, the package of data is the largest to date detailing spaceflight’s impact on the body. “This is the beginning of precision medicine for spaceflight,” Christopher Mason at Weill Cornell Medicine, who co-authored some of the papers, told Nature. “This is the biggest release of biomedical data from astronauts,” he added when speaking to Science.

All the data acquired from the crew during and after their mission is publicly available in NASA’s Open Science Data Repository.

Space Tourism

We’re in a new space race, with multiple countries sprinting to revisit the moon and beyond. At the same time, commercial spaceflight for those eager to see Earth-rise and experience the mind-boggling effects of zero gravity is becoming more common.

From NASA studies, we already know spaceflight changes the body. For the past six decades, NASA has carefully characterized impacts such as increased long-term cancer risk from radiation exposure, changes in vision, and muscle and bone wasting. Comparative data from twin astronauts Scott and Mark Kelly—with one twin on Earth and the other in orbit—revealed more specific biological changes related to spaceflight.

However, most studies follow highly trained astronauts. They often have military backgrounds and are in tip-top physical shape. And their missions can last months in zero gravity—obviously far longer than a three-day jaunt.

To make spaceflight available to the rest of us, analyzing biological changes in civilian astronauts could better represent how our bodies react to space. Enter Inspiration4. The lead sponsor, Isaacman, recruited three everyday people for the first commercial trip to orbit the Earth. Arceneaux and Isaacman were joined by Sian Proctor, a geoscience lecturer, and Christopher Sembroski, an engineer. Their ages ranged from 29 to 51.

The crew agreed to take blood, saliva, urine, and feces samples during their three days in space. They also wore fitness trackers and took cognitive tests. All this information was processed and added to the Space Omics and Medical Atlas (SOMA). The database includes the volunteers’ genomes, gene expression, and an atlas of the proteins that make up and control bodily functions.

Inspiration4 orbited Earth at a much higher altitude than the International Space Station, where astronauts usually reside, so the new dataset captured biological changes on short-term, high-altitude missions with samples from a wider range of demographics. Up to 40 percent of the findings are new, Mason told Science.

Surprisingly, the samples reflected bodily changes previously seen only on long-term spaceflights. The most prominent was an increase in the length of telomeres—the “protective” end caps that keep our genetic code intact. When cells replicate, these protective caps erode—a biological signature often associated with aging.
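As an aside for the computationally inclined: telomere content is often estimated from sequencing data by counting reads rich in the canonical human telomeric repeat, TTAGGG. The sketch below illustrates that idea in miniature; the synthetic reads and the seven-repeat cutoff are assumptions for demonstration, and published tools are far more careful about depth and composition corrections.

```python
# Minimal sketch: estimate relative telomere content from sequencing reads
# by counting reads rich in the canonical human telomeric repeat, TTAGGG.
# The seven-repeat cutoff and the toy reads are illustrative assumptions;
# real pipelines also normalize for sequencing depth and base composition.

TELOMERIC_REPEAT = "TTAGGG"

def is_telomeric(read: str, min_repeats: int = 7) -> bool:
    """Flag a read as telomeric if it contains enough canonical repeats."""
    return read.upper().count(TELOMERIC_REPEAT) >= min_repeats

def telomere_fraction(reads: list[str]) -> float:
    """Fraction of reads flagged as telomeric (a crude proxy for content)."""
    if not reads:
        return 0.0
    return sum(is_telomeric(read) for read in reads) / len(reads)

# Toy usage with two synthetic reads: one telomeric, one not.
reads = [TELOMERIC_REPEAT * 10, "ACGT" * 15]
print(f"Telomeric read fraction: {telomere_fraction(reads):.2f}")
```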

However, during Scott Kelly’s year in space, his telomeres actually grew longer, suggesting that in a way his cells were made biologically younger—not necessarily a win, as abnormally long telomeres have been linked to cancer risk. Once he returned to Earth, however, his telomeres returned to their normal length.

Like Kelly, the Inspiration4 crew experienced a rapid lengthening of their telomeres in space and a shortening upon return, despite spending only three days in orbit, suggesting these biological changes act fast. Digging deeper, one research team found that the crew's RNA—the “messenger” molecule that helps translate DNA into proteins—was rapidly altered, resembling changes observed in people climbing Mount Everest—another extreme scenario with normal gravity but limited oxygen and increased radiation.

To study author Susan Bailey at Colorado State University, the cause of the telomere lengthening may not be weightlessness per se; more likely, it's radiation exposure at high altitudes and in space.

Another study found that space stressed the crew's immune systems at the gene expression level in a group of white blood cells—those that tackle infections and cancers. Some parts of the immune system seemed to be on high alert, but the stress of spaceflight also dampened genes that battle infections, suggesting a decreased ability to fight off viruses and other pathogens. Using multi-omics data, the team identified a “spaceflight signature” of gene expression related to immune system function.
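To give a feel for how such a signature might be derived, here's a deliberately simplified sketch: a paired t-test per gene on hypothetical pre-flight versus in-flight expression values, keeping the genes that shift significantly. The gene names, numbers, and cutoff are invented; the study's actual analysis spans many omics layers and corrects for multiple testing.

```python
# Simplified sketch: derive a toy "spaceflight signature" by testing, per
# gene, whether in-flight expression differs from pre-flight expression
# across crew members. All data and gene names are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes = ["IFNG", "IL6", "TP53", "GAPDH"]  # hypothetical gene panel

# Rows: the four crew members; columns: genes. Synthetic expression values.
pre_flight = rng.normal(loc=10.0, scale=1.0, size=(4, len(genes)))
in_flight = pre_flight + rng.normal(0.0, 0.3, size=pre_flight.shape)
in_flight[:, :2] += 3.0  # pretend the first two genes are induced in space

signature = []
for j, gene in enumerate(genes):
    # Paired t-test: each crew member serves as their own pre-flight control.
    t_stat, p_val = stats.ttest_rel(in_flight[:, j], pre_flight[:, j])
    if p_val < 0.05:
        signature.append((gene, round(float(t_stat), 2), round(float(p_val), 4)))

print("Genes in the toy signature:", signature)
```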

The crew also showed signs of “cosmic kidney disease.” Molecular signals pointed to a potentially increased risk of kidney stones. While not a problem on a three-day flight, on a longer mission—say, to the moon or Mars—kidney problems could rapidly escalate into a medical crisis.

The civilian astronauts’ cognition also faltered. Using iPads, the crew tackled a slew of standardized mental tasks. These included, for example, tests of their ability to focus and sustain attention, and to press a button the moment a stopwatch popped onto the screen. Within three days, their performance had declined compared to their baseline on the ground.
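For a concrete sense of the stopwatch-style task, here's a minimal sketch of a reaction-time trial that runs in a terminal, with the Enter key standing in for the button. It's an illustration only; the standardized batteries used in such studies are far more controlled.

```python
# Minimal sketch of a reaction-time trial in the spirit of the stopwatch
# task described above: wait a random delay, show a cue, time the response.
# The Enter key stands in for the button; this is illustrative, not a
# validated cognitive test.
import random
import time

def reaction_trial(min_delay: float = 2.0, max_delay: float = 5.0) -> float:
    """Run one trial and return the response time in seconds."""
    input("Press Enter when ready, then wait for the cue...")
    time.sleep(random.uniform(min_delay, max_delay))
    start = time.perf_counter()
    input("GO! Press Enter now!")
    return time.perf_counter() - start

if __name__ == "__main__":
    times = [reaction_trial() for _ in range(3)]
    print(f"Mean reaction time: {sum(times) / len(times):.3f} s")
```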

“Our speed response was slower…that surprised me,” Arceneaux told the New York Times. However, rather than reflecting cognitive problems caused by space travel, the dip could simply mean the crew were distracted by the sight of Earth right outside the window.

A Spaceflight Library

With data from just four people, it’s hard to draw conclusions. Most tissue samples were compared to previous data from NASA astronauts or the Japan Aerospace Exploration Agency. That said, when you see the same protein or genetic signatures changing across different missions and people, “that’s when you start believing it,” co-author Afshin Beheshti at the Blue Marble Space Institute of Science told Nature.

All the data was gathered into the SOMA database for other scientists to explore, and tissue samples were stored in a biobank. As commercial spaceflights become more common, scientists may have the opportunity to collect data before, during, and after a mission to further grasp what traveling beyond Earth means for the rest of us. For example, are there any triggers for severe motion sickness while being shot into space?

These insights could also give us time to develop potential treatments to ward off the negative effects of spaceflight for longer trips across the solar system. 

Inspiration4 was just the first commercial sprint into space. Several other missions are on the books, including Polaris Dawn, which is set to launch as early as next month—with the goal of attempting the first commercial spacewalk. 

“Soon we’ll have more data from multiple missions and multiple crews. I’m optimistic about the future,” said study author Mason. 

As for Arceneaux, since landing back on Earth she’s continued her work as a physician assistant at St. Jude Children’s Research Hospital. Remembering her view from orbit, she told The New York Times, “We are all one on this beautiful planet.”

Image Credit: Inspiration4 crew in orbit / Inspiration4

Kategorie: Transhumanismus

What 70 Years of AI on Film Can Tell Us About the Human Relationship With Artificial Intelligence

Singularity HUB - 18 Červen, 2024 - 16:00

In 2024, AI is making headlines daily. We may be aware of the science, but how do we imagine AI and our relationship to it both now and in the future? Fortunately, film may provide us with some insights.

Probably the best-known AI in film is HAL 9000 from Stanley Kubrick’s 2001: A Space Odyssey (1968). HAL is an artificially intelligent computer housed on board a spacecraft capable of interplanetary travel. The film was released less than a year before humans landed on the moon. And yet, even amid this optimism about a new era of space travel, HAL’s portrayal sounded a note of caution about artificial intelligence. His motivations are ambiguous, and he shows himself capable of turning against his human crew.

This 1960s classic demonstrates fears that recur throughout AI film history—that AIs cannot be trusted, that they will rebel against their human creators and seek to overpower or overthrow us.

These fears are contextualized differently in different historical eras—in the 1950s they attached to the Cold War, in the 1960s and 1970s to the space race, in the 1980s to video games, and in the 1990s to the internet. Despite these shifting preoccupations, the fear of AI remains remarkably consistent.

My latest research, which forms the backbone of my new book AI in the Movies, explores how “strong” or “human-level” AI is depicted in film. I examined more than 50 films to see how they shed light on human attitudes to AI—how we interpret it and understand it through characters and stories, and how attitudes have changed since AI’s beginnings.

Types of AIs

The idea of AI was born in 1956 at a summer research workshop at Dartmouth College in Hanover, New Hampshire, where a group of academics gathered to brainstorm ideas around “thinking machines.”

The mathematician John McCarthy coined the term “artificial intelligence,” and almost as soon as the new scientific field had a name, filmmakers were imagining a human-like AI and what our relationship with it might be. That same year, an AI, Robby the Robot, appeared in the film Forbidden Planet and returned the following year, in 1957’s The Invisible Boy, to defeat another type of AI: an evil supercomputer.

The AI-as-malevolent-computer appeared again in 1965 as Alpha 60, in the chilling dystopia of Jean-Luc Godard’s Alphaville, and then in 1968 with Kubrick’s memorable HAL in 2001: A Space Odyssey.

These early AI films set the template for what was to follow. There were AIs with robot bodies, and later robot bodies that looked human—the first of these appearing in Westworld in 1973, where malfunctioning robots at a futuristic amusement park for adults create chaos and terror. Then there were AIs that were purely digital, like the evil Proteus in the 1977 horror film Demon Seed, in which a woman is impregnated by a supercomputer.

In the 1980s, digital AIs became connected through networked computing—where computers “talked” to one another in an early incarnation of what would become the internet—like the system stumbled upon by Matthew Broderick’s high-school student in WarGames (1983), who accidentally comes close to starting a nuclear conflict.

From the 1990s, an AI could move between digital and material realms. In the Japanese animation Ghost in the Shell (1995), the Puppet Master exists in the ebb and flow of the internet but can inhabit “shell” bodies. Agent Smith in The Matrix Revolutions (2003) takes over a human body and materializes in the real world. In Her (2013), the AI operating system Samantha eventually moves beyond matter, beyond the “stuff” of human existence, becoming a post-material being.

Mirrors, Doubles, and Hybrids

In the first few decades of AI film, AI characters mirrored the human characters. In Colossus: The Forbin Project (1970), the AI supercomputer reflects and amplifies its inventor’s arrogant, overreaching ambition. In Terminator 2: Judgment Day (1991), Sarah Connor has become like the AI Skynet’s Terminators herself: Her strength is her armor, and she hunts to kill.

By the 2000s, human-AI doubles began to overlap and merge into each other. In Spielberg’s A.I. Artificial Intelligence (2001), the AI “son” David looks just like a real boy, whereas the real son, Martin, comes home from the hospital connected to tubes and wires that make him look like a cyborg.

In Ex Machina (2014), the human Caleb tests the AI robot Ava, but ends up questioning his own humanness, examining his eyeball for digital traces and cutting his skin to ensure that he bleeds.

In the past 25 years of AI film, the borders between human and AI, digital and material, have become porous, emphasizing the fluid and hybrid nature of AI creations. In the films The Machine (2013), Transcendence (2014), and Chappie (2015), the boundary between human and AI erodes almost to the point of nonexistence. These films present scenarios of transhumanism—in which humans can evolve beyond their current physical and mental constraints by harnessing the power of artificial intelligence to upload the human mind.

Although these stories are imaginary and their characters fictional, they vividly depict our fascinations and fears. We are afraid of artificial intelligence, and that fear never goes away in film, though it has been questioned more in recent decades, and more positive portrayals can be observed, such as the little trash-collecting robot in WALL-E. But mostly we are afraid that AIs will become too powerful and will seek to become our masters. Or we fear they may be hiding among us, and that we might not recognize them.

But at times, too, we feel sympathy towards them: AI characters in films can be pitiful figures who wish to be accepted by humans but never will be. We are also jealous of them—of their intellectual capacity, their physical robustness, and the fact that they do not experience human death.

Surrounding this fear and envy is a fascination with AIs that is present throughout film history—we see ourselves in AI creations and project our emotions onto them. At times enemies of humans, at times uncanny mirrors, and sometimes even human-AI hybrids, the past 70 years of films about AI demonstrate the inextricably intertwined nature of human-AI relationships.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Tom Cowap via Wikimedia Commons

Kategorie: Transhumanismus

Scientists Show AI-Generated Proteins Actually Work in Stem Cell Study

Singularity HUB - 17 Červen, 2024 - 16:00

Stem cells are finicky creatures.

With the ability to generate any type of cell in the body, they’re constantly bombarded by chemical, hormonal, and other signals. Some of these signals nudge them to produce brain cells; others transform them into liver, heart, or kidney cells.

One type of signal, routed through a protein called fibroblast growth factor receptor (FGFR), turns them into blood vessel cells throughout the body. But FGFR doesn’t just establish our circulatory system. Cancer cells, for example, co-opt the protein to increase their blood supply and survival—at the expense of their host. Scientists keen on regenerative therapies—which replace damaged cells with healthy ones—have been eyeing blood vessels as a key component of tissue repair.

The problem? FGFR signaling is incredibly complex and mysterious. 

This week, a new study from David Baker’s lab at the University of Washington, in collaboration with Hannele Ruohola-Baker’s team, used AI to design custom triggers for FGFR activity. The collaborators fashioned a range of AI-generated circular protein molecules to control the receptor’s signaling.

Called oligomers, the structures look like windmills, stars, or butterflies. When tested in human pluripotent stem cells, they nudged the cells to become the different types of cells that make up blood vessels.

The oligomers also helped build regenerative tissues. Organoids are a popular way to grow so-called “mini-organs” in a petri dish. One oligomer, combined with another chemical, coaxed stem cells to self-organize into 3D blood vessel organoids. Transplanted into mice, the organoids thrived and connected with the mice’s own blood vessels. 

“Whether through heart attack, diabetes, and the natural process of aging, we all accumulate damage in our body’s tissues. One way to repair some of this damage may be to drive the formation of new blood vessels in areas that need healthy blood supply restored,” said Dr. Ruohola-Baker in a press release. 

“This collaboration is a case of a biological need propelling a technological advance that could have real-world therapeutic benefit for patients,” she added.

A Cellular Straw

FGFR signaling has been in scientists’ crosshairs for decades. The protein is well known for its roles in tissue development, wound healing, and cancer growth, so tweaking its activity could shut down blood vessel development in cancers—starving them of nutrients—or speed up recovery after serious physical injuries.

The “R” in FGFR stands for “receptor.” These are the proteins that dot the surface of a cell. Picture a coconut with a straw. The coconut is the cell, and the straw is FGFR. The outside part of the straw interacts with chemicals and proteins and transmits those signals down through the coconut shell—the cell membrane.

The inner part of the straw then relays messages that change the cell’s internal workings, sometimes by tweaking its DNA expression. In stem cells, these signals might urge a cell to rapidly divide and expand or slow down and transform into other cell types. 

The protein has another quirk. It usually floats around the cell’s fatty membrane as a single molecule. But to transmit its biological message, individual receptors need to be physically pulled together, clustering into a brigade. Adding to the complexity, there are four different FGFR genes, which can be spliced into two slightly different protein forms, called “b” and “c.”

“The pathway is complex and highly regulated,” the team wrote in their paper. 

AI to the Rescue

If your eyes are glazing over, you’re not alone. The study aimed to cut through this complexity by designing proteins that reliably nudge stem cells toward forming functional blood vessels.

They turned to AI. Previous studies had suggested that semi-circular protein structures—rather than, say, helices or sheets—have the greatest impact on FGFR signaling. The team designed over 100 “arms” to see which could easily dock onto different circular “cores”—the two components that make up the final oligomer.

Using RosettaDesign, the team further dialed in their bespoke proteins. Overall, they built an array of oligomers looking like windmills, stars, swirls, or butterflies. 

They also incorporated a short snippet of amino acids—these are the molecular building blocks of proteins—to help the oligomers grab onto FGFR like Velcro.
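Conceptually, the screen pairs every candidate arm with every core and keeps the best-scoring combinations. The sketch below is purely schematic: dock_score is a hypothetical stand-in for real structure-based scoring (Rosetta-style interface energies), and the designs are just labels.

```python
# Schematic sketch of the arm-versus-core screen: pair each candidate "arm"
# with each circular "core" scaffold and keep the best-scoring oligomers.
# dock_score is a hypothetical placeholder; a real pipeline would model the
# 3D interface and compute physical energies instead of random numbers.
import random

arms = [f"arm_{i:03d}" for i in range(100)]          # ~100 designed arms
cores = ["windmill", "star", "swirl", "butterfly"]   # circular core scaffolds

def dock_score(arm: str, core: str) -> float:
    """Hypothetical docking score for an arm-core pairing (lower is better)."""
    random.seed(f"{arm}|{core}")  # reproducible toy scores per pairing
    return random.gauss(0.0, 1.0)

# Score every arm-core pairing and keep the ten most favorable designs.
ranked = sorted(
    ((dock_score(arm, core), arm, core) for arm in arms for core in cores),
    key=lambda trio: trio[0],
)
for score, arm, core in ranked[:10]:
    print(f"{arm} + {core}: score {score:.2f}")
```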

Switched On

During early blood vessel development, FGFR drives stem cells to become several different blood vessel cell types. One type protects the vessels like a cushion. Others help the blood vessels maintain their structure. The “c” form of FGFR pushes stem cells towards the first type; the “b” form promotes development into the latter.

The team treated induced pluripotent stem cells with a variety of AI-generated oligomers and monitored their growth.

In less than a month, cells treated with oligomers activating the “c” form grew into an intricate network of blood vessels in petri dishes. The cells were highly mobile: When the team scratched away some cells, others quickly migrated to repair the gap.

Other types of oligomers pushed the stem cells to form supporting cells that could readily suck up fatty molecules from their environment, suggesting the cells were healthy and worked normally.

Compared to 2D cell culture, 3D blood vessel organoids are far more intricate and difficult to engineer. In another test, the team dosed human stem cells with an AI-generated oligomer. Over 21 days, the stem cells expanded into healthy organoids with complex structures. When transplanted into mice, they incorporated themselves into the critters’ own blood vessels.

The study shows AI-generated oligomers can shift the fate of stem cells—what each cell eventually develops into—at least for blood vessels. It opens a new avenue for regenerative medicine.

“This is a whole new level of control,” said study author Natasha Edman. 

Although tailored for FGFR, a similar AI-based approach could be used to control other signaling processes, potentially giving scientists more accurate control of cell growth, maturation, senescence, or death. 

“We decided to focus on building blood vessels first, but this same technology should work for many other types of tissues. This opens up a new way of studying tissue development and could lead to a new class of medicines for spinal cord injury and other conditions that have no good treatment options today,” said study author Ashish Phal.

Image Credit: Ian C. Haydon

Kategorie: Transhumanismus