Transhumanism

A Google DeepMind AI Just Discovered 380,000 New Materials. This Robot Is Cooking Them Up.

Singularity HUB - 30 November, 2023 - 23:40

A robot chemist just teamed up with an AI brain to create a trove of new materials.

Two collaborative studies from Google DeepMind and the University of California, Berkeley, describe a system that predicts the properties of new materials—including those potentially useful in batteries and solar cells—and produces them with a robotic arm.

We take everyday materials for granted: plastic cups for a holiday feast, components in our smartphones, or synthetic fibers in jackets that keep us warm when chilly winds strike.

Scientists have painstakingly discovered roughly 20,000 different types of materials that let us build anything from computer chips to puffy coats and airplane wings. Tens of thousands more potentially useful materials are in the works. Yet we’ve only scratched the surface.

The Berkeley team developed a chef-like robot that mixes and heats ingredients, automatically transforming recipes into materials. As a “taste test,” the system, dubbed the A-Lab, analyzes the chemical properties of each final product to see if it hits the mark.

Meanwhile, DeepMind’s AI dreamed up myriad recipes for the A-Lab chef to cook. It’s a hefty list. Using a popular machine learning strategy, the AI found 2.2 million chemical structures, 380,000 of them new stable materials—many counter to human intuition. The work is an “order-of-magnitude” expansion on the materials we currently know, the authors wrote.

Using DeepMind’s cookbook, A-Lab ran for 17 days and synthesized 41 out of 58 target chemicals—a win that would’ve taken months, if not years, of traditional experiments.

Together, the collaboration could launch a new era of materials science. “It’s very impressive,” said Dr. Andrew Rosen at Princeton University, who was not involved in the work.

Let’s Talk Chemicals

Look around you. Many things we take for granted—that smartphone screen you may be scrolling on—are based on materials chemistry.

Scientists have long used trial and error to discover chemically stable structures. Like Lego blocks, these components can be built into complex materials that resist dramatic temperature changes or high pressures, allowing us to explore the world from deep sea to outer space.

Once mapped, scientists capture the crystal structures of these components and save those structures for reference. Tens of thousands are already deposited into databanks.

In the new study, DeepMind took advantage of these known crystal structures. The team trained an AI system on the Materials Project, a massive library of hundreds of thousands of materials. The library includes materials we’re already familiar with and use, alongside thousands of structures with unknown but potentially useful properties.

DeepMind’s new AI trained on 20,000 known inorganic crystals—and another 28,000 promising candidates—from the Materials Project to learn what properties make a material desirable.

Essentially, the AI works like a cook testing recipes: Add a little something here, change some ingredients there, and through trial-and-error, it reaches the desired results. Fed data from the dataset, it generated predictions for potentially stable new chemicals, along with their properties. The results were fed back into the AI to further hone its “recipes.”

Over many rounds, the training encouraged the AI to take small, low-risk steps. Rather than swapping out multiple chemical structures at once—a potentially catastrophic move—it iteratively evaluated small chemical changes. For example, instead of replacing one chemical component outright, it could substitute only half. If a swap didn’t work, no problem: the system simply weeded out any candidates that weren’t stable.
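A toy version of that generate-and-filter loop can be sketched in a few lines of Python. Everything here is a stand-in: the dictionary representation of a material, the `predicted_energy` scorer, and the element list are illustrative inventions, not DeepMind's actual pipeline (which uses trained graph neural networks to predict stability).

```python
import random

random.seed(0)

# Hypothetical stability scores (lower = more stable). The real work predicts
# a crystal's energy with a trained neural network; this is a stand-in.
STABILITY = {"Li": 0.2, "O": 0.1, "Mn": 0.4, "Ni": 0.3}

def predicted_energy(material):
    return sum(STABILITY[el] * frac for el, frac in material.items())

def partial_substitution(material, old, new, fraction=0.5):
    """Swap only a fraction of `old` for `new` -- a small step, rather than
    replacing the component outright."""
    mutated = dict(material)
    amount = mutated.pop(old) * fraction
    mutated[old] = material[old] - amount
    mutated[new] = mutated.get(new, 0.0) + amount
    return mutated

def evolve(seed_material, elements, rounds=5, keep=10, threshold=0.3):
    population = [seed_material]
    for _ in range(rounds):
        for mat in list(population):
            old = random.choice(list(mat))
            new = random.choice(elements)
            if new == old:
                continue
            candidate = partial_substitution(mat, old, new)
            # Weed out anything the model predicts to be unstable.
            if predicted_energy(candidate) < threshold:
                population.append(candidate)
        # Keep only the most promising candidates for the next round.
        population = sorted(population, key=predicted_energy)[:keep]
    return population

best = evolve({"Li": 0.5, "O": 0.5}, elements=["Li", "O", "Mn", "Ni"])
print(f"best candidate energy: {predicted_energy(best[0]):.3f}")
```

The key move mirrored here is the half-substitution: each mutation changes the composition gently, so one bad guess never throws away a promising lineage.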

The AI eventually produced 2.2 million chemical structures, 380,000 of which it predicted would be stable if synthesized. Over 500 of the newly found materials were related to lithium-ion conductors, which play a critical part in today’s batteries.

“This is like ChatGPT for materials discovery,” said Dr. Carla Gomes at Cornell University, who was not involved in the research.

Mind to Matter

DeepMind’s AI predictions are just that: predictions. What looks good on paper may not always work out.

Here’s where A-Lab comes in. A team led by Dr. Gerbrand Ceder at UC Berkeley and the Lawrence Berkeley National Laboratory built an automated robotic system directed by an AI trained on more than 30,000 published chemical recipes. Using robotic arms, A-Lab builds new materials by picking, mixing, and heating ingredients according to a recipe.

Over two weeks of training, A-Lab produced a string of recipes for 41 new materials without any human input. It wasn’t a total success: 17 target materials fell short of the mark. However, with a dash of human intervention, the robot synthesized these remaining materials without a hitch.

Together, the two studies open a universe of novel compounds that might meet today’s global challenges. Next steps include adding chemical and physical properties to the algorithm to further improve its understanding of the physical world and synthesizing more materials for testing.

DeepMind is releasing its AI and some of its chemical recipes to the public. Meanwhile, A-Lab is running recipes from the database and uploading the results to the Materials Project.

To Ceder, an AI-generated map of new materials could “change the world.” It’s not A-lab itself, he said. Rather, it’s “the knowledge and information that it generates.”

Image Credit: Marilyn Sargent/Berkeley Lab

Category: Transhumanism

Merriam-Webster’s Word of the Year Reflects Growing Concerns Over AI’s Ability to Deceive

Singularity HUB - 28 November, 2023 - 21:38

When Merriam-Webster announced that its word of the year for 2023 was “authentic,” it did so with over a month to go in the calendar year.

Even then, the dictionary publisher was late to the game.

In a lexicographic form of Christmas creep, Collins English Dictionary announced its 2023 word of the year, “AI,” on October 31. Cambridge University Press followed suit on November 15 with “hallucinate,” a word used to refer to incorrect or misleading information provided by generative AI programs.

At any rate, terms related to artificial intelligence appear to rule the roost, with “authentic” also falling under that umbrella.

AI and the Authenticity Crisis

For the past 20 years, Merriam-Webster, the oldest dictionary publisher in the US, has chosen a word of the year—a term that encapsulates, in one form or another, the zeitgeist of that past year. In 2020, the word was “pandemic.” The next year’s winner? “Vaccine.”

“Authentic” is, at first glance, a little less obvious.

According to the publisher’s editor-at-large, Peter Sokolowski, 2023 represented “a kind of crisis of authenticity.” He added that the choice was also informed by the number of online users who looked up the word’s meaning throughout the year.

The word “authentic,” in the sense of something that is accurate or authoritative, has its roots in French and Latin. The Oxford English Dictionary has identified its usage in English as early as the late 14th century.

And yet the concept—particularly as it applies to human creations and human behavior—is slippery.

Is a photograph made from film more authentic than one made from a digital camera? Does an authentic scotch have to be made at a small-batch distillery in Scotland? When socializing, are you being authentic—or just plain rude—when you skirt niceties and small talk? Does being your authentic self mean pursuing something that feels natural, even at the expense of cultural or legal constraints?

The more you think about it, the more it seems like an ever-elusive ideal—one further complicated by advances in artificial intelligence.

How Much Human Touch?

Intelligence of the artificial variety—as in nonhuman, inauthentic, computer-generated intelligence—was the technology story of the past year.

At the end of 2022, OpenAI publicly released ChatGPT, a chatbot derived from so-called large language models. It was widely seen as a breakthrough in artificial intelligence, but its rapid adoption led to questions about the accuracy of its answers.

The chatbot also became popular among students, which compelled teachers to grapple with how to ensure their assignments weren’t being completed by ChatGPT.

Issues of authenticity have arisen in other areas as well. In November 2023, a track billed as the “last Beatles song” was released. “Now and Then” combines music originally written and performed by John Lennon in the 1970s with additional music recorded by the other band members in the 1990s. A machine learning algorithm was recently used to separate Lennon’s vocals from his piano accompaniment, allowing a final version to be completed and released.

But is it an authentic Beatles song? Not everyone is convinced.

Advances in technology have also allowed the manipulation of audio and video recordings. Referred to as “deepfakes,” such transformations can make it appear that a celebrity or a politician said something that they did not—a troubling prospect as the US heads into what is sure to be a contentious 2024 election season.

Writing for The Conversation in May 2023, education scholar Victor R. Lee explored the AI-fueled authenticity crisis.

Our judgments of authenticity are knee-jerk, he explained, honed over years of experience. Sure, occasionally we’re fooled, but our antennae are generally reliable. Generative AI short-circuits this cognitive framework.

“That’s because back when it took a lot of time to produce original new content, there was a general assumption … that it only could have been made by skilled individuals putting in a lot of effort and acting with the best of intentions,” he wrote.

“These are not safe assumptions anymore,” he added. “If it looks like a duck, walks like a duck, and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.”

Though there seems to be a general understanding that human minds and human hands must play some role in creating something authentic or being authentic, authenticity has always been a difficult concept to define.

So it’s somewhat fitting that as our collective handle on reality has become ever more tenuous, an elusive word for an abstract ideal is Merriam-Webster’s word of the year.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: 愚木混株 cdd20 / Unsplash 


An AI Tool Just Revealed Almost 200 New Systems for CRISPR Gene Editing

Singularity HUB - 27 November, 2023 - 23:36

CRISPR has a problem: an embarrassment of riches.

Ever since the gene editing system rocketed to fame, scientists have been looking for variants with better precision and accuracy.

One search method screens for genes related to CRISPR-Cas9 in the DNA of bacteria and other creatures. Another artificially evolves CRISPR components in the lab to give them better therapeutic properties—like greater stability, safety, and efficiency inside the human body.

This data is stored in databases containing billions of genetic sequences. While there may be exotic CRISPR systems hidden in these libraries, there are simply too many entries to search.

This month, a team at MIT and Harvard led by CRISPR pioneer Dr. Feng Zhang took inspiration from an existing big-data approach and used AI to narrow the sea of genetic sequences to a handful that are similar to known CRISPR systems.

The AI scoured open-source databases with genomes from uncommon bacteria—including those found in breweries, coal mines, chilly Antarctic shores, and (no kidding) dog saliva.

In just a few weeks, the algorithm pinpointed thousands of potential new biological “parts” that could make up 188 new CRISPR-based systems—including some that are exceedingly rare.

Several of the new candidates stood out. For example, some could more precisely lock onto the target gene for editing with fewer side effects. Other variations aren’t directly usable but could provide insight into how some existing CRISPR systems work—for example, those targeting RNA, the “messenger” molecule directing cells to build proteins from DNA.

“Biodiversity is such a treasure trove,” said Zhang. “Doing this analysis kind of allows us to kill two birds with one stone: both study biology and also potentially find useful things,” he added.

A Wild Hunt

Although CRISPR is known for its gene editing prowess in humans, scientists first discovered the system in bacteria where it combats viral infections.

Scientists have long collected bacterial samples from nooks and crannies all over the globe. Thanks to increasingly affordable and efficient DNA sequencing, many of these samples—some from unexpected sources such as pond scum—have had their genetic blueprint mapped out and deposited into databases.

Zhang is no stranger to the hunt for new CRISPR systems. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” Zhang told MIT News earlier this year.

CRISPR is made up of two structures. One is a “bloodhound” guide RNA sequence, usually about 20 bases long, that targets a particular gene. The other is the scissors-like Cas protein. Once inside a cell, the bloodhound finds the target, and the scissors snip the gene. More recent versions of the system, such as base editing or prime editing, use different types of Cas proteins to perform single-letter DNA swaps or even edit RNA targets.
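The two-part mechanism—guide finds, Cas cuts—can be illustrated with a toy target search. This is a cartoon, not a bioinformatics tool: it scans one strand only, and the genome and guide strings below are made up. The one real detail included is that the commonly used SpCas9 also requires a short “PAM” motif (NGG) immediately next to the match before it will cut.

```python
import re

def find_cut_sites(genome: str, guide: str) -> list[int]:
    """Return start positions where the guide matches and is followed by
    an NGG PAM (any base, then 'GG') on its 3' side."""
    sites = []
    for m in re.finditer(f"(?={re.escape(guide)})", genome):
        end = m.start() + len(guide)
        if genome[end + 1:end + 3] == "GG":  # skip the 'N', require 'GG'
            sites.append(m.start())
    return sites

genome = "TTACGATGCCATTGGACGATCGATCAGGTTT"  # made-up sequence
guide = "ATGCCATTGGACGATCGATC"              # 20 bases, typical for Cas9
print(find_cut_sites(genome, guide))  # → [5]
```

In a real cell the search is done by the Cas protein threading the guide RNA along the DNA, but the logic—exact-ish match plus an adjacent PAM—is the same.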

Back in 2021, Zhang’s lab traced the origins of the CRISPR family tree, identifying an entirely new family line. Dubbed OMEGA, these systems use foreign guide RNAs and protein scissors, yet they could still readily snip DNA in human cells cultured in petri dishes.

More recently, the team expanded their search to a new branch of life: eukaryotes. Members of this family—including plants, animals, and humans—have their DNA tightly wrapped inside a nut-like structure called the nucleus. Bacteria, in contrast, don’t have this compartment. By screening fungi, algae, and clams (yup, biodiversity is weird and awesome), the team found proteins they call Fanzors that can be reprogrammed to edit human DNA—the first proof that a CRISPR-like mechanism also exists in eukaryotes.

But the goal isn’t to hunt down shiny, new gene editors just for the sake of it. Rather, it’s to tap nature’s gene editing prowess to build a collection of gene editors, each with its own strengths, that can treat genetic disorders and help us understand our body’s inner workings.

Collectively, scientists have discovered six main CRISPR systems—some collaborate with different Cas enzymes, for instance, while others specialize in either DNA or RNA.

“Nature is amazing. There’s so much diversity,” Zhang said. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

Bioengineering Scrabble

That’s what the team built their new AI, called FLSHclust, to do. They transformed technology for analyzing bewilderingly large datasets—like software that highlights similarities across huge collections of document, audio, or image files—into a tool for hunting genes related to CRISPR.

Once complete, the algorithm analyzed gene sequences from bacteria and collected them into groups—a bit like sorting colors into a rainbow, grouping similar colors together so it’s easier to find the shade you’re after. From here, the team homed in on genes associated with CRISPR.

The algorithm combed through multiple open-source databases, including hundreds of thousands of genomes from bacteria and archaea and millions of mystery DNA sequences. In all, it scanned billions of protein-encoding genes and grouped them into roughly 500 million clusters. Among these, the team identified 188 genes never before associated with CRISPR—building blocks that could make up thousands of new CRISPR systems.
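To make the grouping idea concrete, here is a toy clustering routine: sequences are compared by the k-mers (short substrings) they share and greedily merged when similarity passes a threshold. Note that this all-pairs approach is exactly what FLSHclust is reported to avoid at scale—it uses locality-sensitive hashing so billions of sequences never need pairwise comparison—and the sequences and threshold below are invented for illustration.

```python
def kmers(seq: str, k: int = 3) -> set[str]:
    """All overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Fraction of k-mers two sequences share."""
    return len(a & b) / len(a | b)

def cluster(sequences, k=3, threshold=0.5):
    """Greedy clustering: join a sequence to the first cluster whose
    representative shares enough k-mers, else start a new cluster."""
    clusters = []  # list of (representative k-mer set, members)
    for seq in sequences:
        ks = kmers(seq, k)
        for rep, members in clusters:
            if jaccard(ks, rep) >= threshold:
                members.append(seq)
                break
        else:
            clusters.append((ks, [seq]))
    return [members for _, members in clusters]

seqs = ["MKTAYIAKQR", "MKTAYIAKQQ", "GGSGGSLLPA"]  # invented protein snippets
print(cluster(seqs))  # → [['MKTAYIAKQR', 'MKTAYIAKQQ'], ['GGSGGSLLPA']]
```

The two near-identical snippets land in one bucket while the unrelated one starts its own—the same "group similar shades together" behavior described above, in miniature.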

Two systems, developed from microbes in the guts of animals and in the Black Sea, used a 32-base guide RNA instead of the usual 20 bases in CRISPR-Cas9. Like a search query, the longer the guide, the more precise the results. These longer guide RNA “queries” suggest the systems could have fewer side effects. Another system resembles a previous CRISPR-based diagnostic called SHERLOCK, which can rapidly sense a single DNA or RNA molecule from an infectious invader.
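The search-query analogy can be made quantitative. With four possible bases per position, each extra base in a guide cuts the odds of a chance match fourfold, so going from 20 to 32 bases shrinks the chance of a random off-target hit by a factor of 4^12 (about 17 million). A back-of-the-envelope check—the genome size is a round number, and real off-target behavior also depends on tolerated mismatches:

```python
def random_match_odds(guide_length: int) -> float:
    """Probability that a single random DNA site exactly matches the guide."""
    return 0.25 ** guide_length

GENOME_SITES = 3.1e9  # roughly the number of positions in a human genome

for n in (20, 32):
    expected = GENOME_SITES * random_match_odds(n)
    print(f"{n}-base guide: ~{expected:.1e} expected random exact matches")
```

Even a 20-base guide is already specific enough that a chance exact match across a whole genome is unlikely; 12 more bases drive the odds toward vanishing.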

When tested in cultured human cells, both systems could snip a single strand of the targeted gene and insert small genetic sequences at roughly 13 percent efficiency. It doesn’t sound like much, but it’s a baseline that can be improved.

The team also uncovered genes for a new RNA-targeting CRISPR system previously unknown to science. Found only after close scrutiny, this version—and likely others like it—isn’t easily captured by sampling bacteria around the world and appears to be extremely rare in nature.

“Some of these microbial systems were exclusively found in water from coal mines,” said study author Dr. Soumya Kannan. “If someone hadn’t been interested in that, we may never have seen those systems.”

It’s still too early to know whether these systems can be used in human gene editing. Those that randomly chop up DNA, for example, would be useless for therapeutic purposes. However, the AI can mine a vast universe of genetic data for potential “unicorn” gene sequences, and it’s now available to other scientists for further exploration.

Image Credit: NIH


DeepMind Defines Artificial General Intelligence and Ranks Today’s Leading Chatbots

Singularity HUB - 26 November, 2023 - 16:00

Artificial general intelligence, or AGI, has become a much-abused buzzword in the AI industry. Now, Google DeepMind wants to put the idea on a firmer footing.

The concept at the heart of the term AGI is that a hallmark of human intelligence is its generality. While specialist computer programs might easily outperform us at picking stocks or translating French to German, our superpower is the fact we can learn to do both.

Recreating this kind of flexibility in machines is the holy grail for many AI researchers and is often speculated to be the first step toward artificial superintelligence. But what exactly people mean by AGI is rarely specified, and the idea is frequently described in binary terms: AGI is a piece of software that has crossed some mythical boundary and, once on the other side, is on par with humans.

Researchers at Google DeepMind are now attempting to make the discussion more precise by concretely defining the term. Crucially, they suggest that rather than approaching AGI as an end goal, we should instead think about different levels of AGI, with today’s leading chatbots representing the first rung on the ladder.

“We argue that it is critical for the AI research community to explicitly reflect on what we mean by AGI, and aspire to quantify attributes like the performance, generality, and autonomy of AI systems,” the team writes in a preprint published on arXiv.

The researchers note that they took inspiration from autonomous driving, where capabilities are split into six levels of autonomy, which they say enable clear discussion of progress in the field.

To work out what they should include in their own framework, they studied some of the leading definitions of AGI proposed by others. By looking at some of the core ideas shared across these definitions, they identified six principles any definition of AGI needs to conform with.

For a start, a definition should focus on capabilities rather than the specific mechanisms AI uses to achieve them. This removes the need for AI to think like a human or be conscious to qualify as AGI.

They also suggest that generality alone is not enough for AGI; models also need to hit certain thresholds of performance in the tasks they perform. This performance doesn’t need to be proven in the real world, they say—it’s enough to demonstrate that a model has the potential to outperform humans at a task.

While some believe true AGI will not be possible unless AI is embodied in physical robotic machinery, the DeepMind team say this is not a prerequisite. The focus, they say, should be on tasks that fall in the cognitive and metacognitive realms—for instance, learning to learn.

Another requirement is that benchmarks for progress have “ecological validity,” which means AI is measured on real-world tasks valued by humans. And finally, the researchers say the focus should be on charting progress in the development of AGI rather than fixating on a single endpoint.

Based on these principles, the team proposes a framework they call “Levels of AGI” that outlines a way to categorize algorithms based on their performance and generality. The levels range from “emerging,” which refers to a model equal to or slightly better than an unskilled human, to “competent,” “expert,” “virtuoso,” and “superhuman,” which denotes a model that outperforms all humans. These levels can be applied to either narrow or general AI, which helps distinguish between highly specialized programs and those designed to solve a wide range of tasks.

The researchers say some narrow AI algorithms, like DeepMind’s protein-folding algorithm AlphaFold, for instance, have already reached the superhuman level. More controversially, they suggest leading AI chatbots like OpenAI’s ChatGPT and Google’s Bard are examples of emerging AGI.
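The two-axis scheme lends itself to a minimal sketch in code. The performance cutoffs below follow the preprint’s level definitions (percentile of skilled adults outperformed); the percentile numbers assigned to the examples are illustrative placeholders, not measurements from the paper.

```python
def agi_level(percentile: float) -> str:
    """Map performance -- the share of skilled humans outperformed -- to a level."""
    if percentile >= 100: return "superhuman"  # outperforms all humans
    if percentile >= 99:  return "virtuoso"
    if percentile >= 90:  return "expert"
    if percentile >= 50:  return "competent"
    return "emerging"  # equal to or somewhat better than an unskilled human

def classify(name: str, percentile: float, general: bool) -> str:
    """Combine the performance axis with the narrow/general axis."""
    scope = "general" if general else "narrow"
    return f"{name}: {agi_level(percentile)} {scope} AI"

# The framework's own placements (percentiles here are placeholders):
print(classify("AlphaFold", 100, general=False))  # → AlphaFold: superhuman narrow AI
print(classify("ChatGPT", 10, general=True))      # → ChatGPT: emerging general AI
```

Separating the two axes is the point: a superhuman narrow system and an emerging general one score identically on neither dimension, so the framework can rank both without conflating them.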

Julian Togelius, an AI researcher at New York University, told MIT Technology Review that separating out performance and generality is a useful way to distinguish previous AI advances from progress towards AGI. And more broadly, the effort helps to bring some precision to the AGI discussion. “This provides some much-needed clarity on the topic,” he says. “Too many people sling around the term AGI without having thought much about what they mean.”

The framework outlined by the DeepMind team is unlikely to win everyone over, and there are bound to be disagreements about how different models should be ranked. But with any luck, it will get people to think more deeply about a critical concept at the heart of the field.

Image Credit: Resource Database / Unsplash


Did This Chemical Reaction Create the Building Blocks of Life on Earth?

Singularity HUB - 25 November, 2023 - 16:00

How did life begin? How did chemical reactions on the early Earth create complex, self-replicating structures that developed into living things as we know them?

According to one school of thought, before the current era of DNA-based life, there was a kind of molecule called RNA (or ribonucleic acid). RNA—which is still a crucial component of life today—can replicate itself and catalyze other chemical reactions.

But RNA molecules themselves are made from smaller components called ribonucleotides. How would these building blocks have formed on the early Earth and then combined into RNA?

Chemists like me are trying to recreate the chain of reactions required to form RNA at the dawn of life, but it’s a challenging task. We know whatever chemical reaction created ribonucleotides must have been able to happen in the messy, complicated environment found on our planet billions of years ago.

I have been studying whether “autocatalytic” reactions may have played a part. These are reactions that produce chemicals that encourage the same reaction to happen again, which means they can sustain themselves in a wide range of circumstances.

In our latest work, my colleagues and I have integrated autocatalysis into a well-known chemical pathway for producing the ribonucleotide building blocks, which could have plausibly happened with the simple molecules and complex conditions found on the early Earth.

The Formose Reaction

Autocatalytic reactions play crucial roles in biology, from regulating our heartbeats to forming patterns on seashells. In fact, the replication of life itself, where one cell takes in nutrients and energy from the environment to produce two cells, is a particularly complicated example of autocatalysis.

A chemical reaction called the formose reaction, first discovered in 1861, is one of the best examples of an autocatalytic reaction that could have happened on the early Earth.

The formose reaction was discovered by Russian chemist Alexander Butlerov in 1861. Image Credit: Butlerov, A. M. 1828-1886 / Wikimedia

In essence, the formose reaction starts with one molecule of a simple compound called glycolaldehyde (made of hydrogen, carbon and oxygen) and ends with two. The mechanism relies on a constant supply of another simple compound called formaldehyde.

A reaction between glycolaldehyde and formaldehyde makes a bigger molecule, splitting off fragments that feed back into the reaction and keep it going. However, once the formaldehyde runs out, the reaction stops, and the products start to degrade from complex sugar molecules into tar.
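The boom-and-bust dynamic described here—self-amplifying growth while formaldehyde lasts, then degradation into tar—can be caricatured with a toy rate model. All constants below are arbitrary illustrative choices, not measured formose kinetics:

```python
def simulate_formose(g=0.001, f=1.0, rate=5.0, decay=0.05,
                     dt=0.01, steps=2000):
    """Toy autocatalysis: glycolaldehyde (g) grows by consuming formaldehyde
    (f); once f is spent, g slowly degrades (the 'tar' phase)."""
    history = []
    for _ in range(steps):
        growth = rate * g * f * dt     # autocatalytic: proportional to g itself
        g += growth - decay * g * dt   # gain from reaction, loss to degradation
        f -= 2 * growth                # formaldehyde is consumed, never replenished
        history.append(g)
    return history

h = simulate_formose()
print(f"peak product: {max(h):.2f}, final: {h[-1]:.2f}")
```

The signature of autocatalysis is visible in the growth term: because the rate is proportional to the product itself, a tiny seed amount explodes into hundreds of times more—until the formaldehyde feedstock runs low and decay takes over.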

The formose reaction shares some common ingredients with a well-known chemical pathway to make ribonucleotides, known as the Powner–Sutherland pathway. However, until now no one has tried to connect the two—with good reason.

The formose reaction is notorious for being “unselective.” This means it produces a lot of useless molecules alongside the actual products you want.

An Autocatalytic Twist in the Pathway to Ribonucleotides

In our study, we tried adding another simple molecule called cyanamide to the formose reaction. This makes it possible for some of the molecules made during the reaction to be “siphoned off” to produce ribonucleotides.

The reaction still does not produce a large quantity of ribonucleotide building blocks. However, the ones it does produce are more stable and less likely to degrade.

What’s interesting about our study is the integration of the formose reaction and ribonucleotide production. Previous investigations have studied each separately, which reflects how chemists usually think about making molecules.

Generally speaking, chemists tend to avoid complexity so as to maximize the quantity and purity of a product. However, this reductionist approach can prevent us from investigating dynamic interactions between different chemical pathways.

These interactions, which happen everywhere in the real world outside the lab, are arguably the bridge between chemistry and biology.

Industrial Applications

Autocatalysis also has industrial applications. When you add cyanamide to the formose reaction, another of the products is a compound called 2-aminooxazole, which is used in chemistry research and the production of many pharmaceuticals.

Conventional 2-aminooxazole production often uses cyanamide and glycolaldehyde, the latter of which is expensive. If it can be made using the formose reaction, only a small amount of glycolaldehyde will be needed to kickstart the reaction, cutting costs.

Our lab is currently optimizing this procedure in the hope we can manipulate the autocatalytic reaction to make common chemical reactions cheaper and more efficient, and their pharmaceutical products more accessible. Maybe it won’t be as big a deal as the creation of life itself, but we think it could still be worthwhile.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Sangharsh Lohakare / Unsplash 


Scientists 3D Print a Complex Robotic Hand With Bones, Tendons, and Ligaments

Singularity HUB - 24 November, 2023 - 16:00

We don’t think twice about using our hands throughout the day for tasks that still thwart sophisticated robots—pouring coffee without spilling when half-awake, folding laundry without ripping delicate fabrics.

The complexity of our hands is partly to thank. They are wonders of biological engineering: Hard skeleton keeps their shape and integrity and lets fingers bear weight. Soft tissues, such as muscles and ligaments, give them dexterity. Thanks to evolution, all these “biomaterials” self-assemble.

Recreating them artificially is another matter.

Scientists have tried to use additive manufacturing—better known as 3D printing—to recreate complex structures from hands to hearts. But the technology stumbles when integrating multiple materials into one printing process. 3D printing a robotic hand, for example, requires multiple printers—one to make the skeleton, another for soft tissue materials—and the assembly of parts. These multiple steps increase manufacturing time and complexity.

Scientists have long sought to combine different materials into a single 3D printing process. A team from the soft robotics lab at ETH Zurich has found a way.

The team equipped a 3D inkjet printer—which is based on the same technology in normal office printers—with machine vision, allowing it to rapidly adapt to different materials. The approach, called vision-controlled jetting, continuously gathers information about a structure’s shape during printing to fine-tune how it prints the next layer, regardless of the type of material.

In a test, the team 3D printed a synthetic hand in one go. Complete with skeleton, ligaments, and tendons, the hand can grasp different objects when it “feels” pressure at its fingertips.

They also 3D printed a structure like a human heart, complete with chambers, one-way valves, and the ability to pump fluid at roughly 40 percent of the rate of an adult human heart.

The study is “very impressive,” Dr. Yong Lin Kong at the University of Utah, who was not involved in the work but wrote an accompanying commentary, told Nature. 3D inkjet printing is already a mature technology, he added, but this study shows machine vision makes it possible to expand the technology’s capabilities to more complex structures and multiple materials.

The Problem With 3D Inkjet Printing

Recreating a structure using conventional methods is tedious and error-prone. Engineers cast a mold to form the desired shape—say, the skeleton of a hand—then combine the initial structure with other materials.

It’s a mind-numbing process requiring careful calibration. Like installing a cabinet door, any errors leave it lopsided. For something as complex as a robot hand, the results can be rather Frankenstein.

Traditional methods also make it difficult to incorporate materials with different properties, and they tend to lack the fine details required in something as complex as a synthetic hand. All these limitations kneecap what a robotic hand—and other functional structures—can do.

Then 3D inkjet printing came along. Common versions of these printers squeeze a liquid resin material through hundreds of thousands of individually controlled nozzles—like an office printer printing a photo at high resolution. Once a layer is printed, a UV light “sets” the resin, turning it from liquid to solid. Then the printer gets to work on the next layer. In this way, the printer builds a 3D object, layer by layer, at the microscopic level.

Although incredibly quick and precise, the technology has its problems. It isn’t great at binding different materials together, for instance. To 3D print a functional robot, engineers must either print parts with multiple printers and then assemble them after, or they can print an initial structure, cast around the part, and add additional types of materials with desired properties.

One main drawback is that the thickness of each layer isn’t always the same. Variations in ink flow, interference between nozzles, and shrinkage during the “setting” process can each introduce tiny errors. These inconsistencies add up as layers accumulate, resulting in malfunctioning objects and outright printing failures.

Engineers tackle this problem by adding a blade or roller. Like flattening newly laid concrete during roadwork, this step levels each layer before the next one starts. The solution, unfortunately, comes with other headaches. Because the rollers are only compatible with some materials—others gunk up the scraper—they limit the range of materials that can be used.

What if we don’t need this step at all?

Eyes on the Prize

The team’s solution is machine vision. Rather than scraping away extra material, the system scans each layer as it’s printed, detecting and compensating for small mistakes in real time.

The machine vision system uses four cameras and two lasers to scan the entire printing surface at microscopic resolution.

This process helps the printer self-correct, explained the team. By understanding where there’s too much or too little material, the printer can change the amount of ink deposited in the next layer, essentially filling previous “potholes.” The result is a powerful 3D printing system in which extra material doesn’t need to be scraped off.
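The feedback idea can be made concrete with a toy simulation. The grid size, noise model, and numbers below are invented for illustration and are not taken from the team’s actual system:

```python
import numpy as np

def corrected_deposition(target_layer_height, n_layers, noise_scale=0.05, seed=0):
    """Toy model of closed-loop inkjet printing: each layer's deposition is
    noisy, but scanning the surface lets the printer compensate on the next
    layer by filling earlier "potholes"."""
    rng = np.random.default_rng(seed)
    surface = np.zeros((8, 8))              # height map of the print surface
    for layer in range(1, n_layers + 1):
        # The printer aims for the ideal cumulative height everywhere...
        desired = layer * target_layer_height
        # ...so it deposits the per-cell deficit measured by the scanner.
        deposit = desired - surface
        noise = rng.normal(0, noise_scale, surface.shape)
        surface += deposit + noise          # deposition itself is imperfect
    return surface

surface = corrected_deposition(target_layer_height=0.02, n_layers=100)
# With feedback, the final error is bounded by a single layer's noise
# rather than accumulating across all 100 layers.
error = np.abs(surface - 100 * 0.02).max()
```

Without the scan-and-correct step, the per-layer noise would compound layer after layer; with it, each layer starts from a measured surface, so only the most recent layer’s noise remains.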

This isn’t the first time machine vision has been used in 3D printers. But the new system can scan 660 times faster than older ones, and it can analyze the growing structure’s physical shape in less than a second, wrote Kong. This allows the 3D printer to access a much larger library of materials, including substances that support complex structures during printing but are removed later.

Translation? The system can print a new generation of bio-inspired robots far faster than any previous technologies.

As a test, the team printed a synthetic hand with two types of materials: a rigid, load-bearing material to act as a skeleton and a soft bendable material to make tendons and ligaments. They printed channels throughout the hand to control its movement with air pressure and at the same time integrated a membrane to sense touch—essentially, the fingertips.

They hooked the hand to external electrical components and integrated it into a little walking robot. Thanks to its pressure-sensing fingertips, it could pick up different objects—a pen or an empty plastic water bottle.

The system also printed a human-like heart structure with multiple chambers. When pressurizing the synthetic heart, it pumped fluids like its biological counterpart.

Everything was printed in one go.

Next Steps

The results are fascinating because they feel like a breakthrough for a technology that’s already in a mature state, Kong said. Although the technology has been commercially available for decades, simply adding machine vision gives it new life.

“Excitingly, these diverse examples were printed using just a few materials,” he added. The team aims to expand the materials they can print with and directly add electronic sensors for sensing and movement during printing. The system could also incorporate other fabrication methods—for example, spraying a coat of biologically active molecules onto the surface of the hands.

Robert Katzschmann, a professor at ETH Zurich and an author on the new paper, is optimistic about the system’s broader use. “You could think of medical implants…[or] use this for prototyping things in tissue engineering,” he said. “The technology itself will only grow.”

Image Credit: ETH Zurich/Thomas Buchner

Kategorie: Transhumanismus

OpenAI Mayhem: What We Know Now, Don’t Know Yet, and What Could Be Next

Singularity HUB - 23 Listopad, 2023 - 02:47

If you’d never heard of OpenAI before last week, you probably have now. The level of attention given to recent mayhem at the company leading the AI boom underscores how much this moment has captured the collective imagination.

Last Friday, OpenAI’s board of directors fired the company’s cofounder, CEO, and fellow board member, Sam Altman. The decision was led by OpenAI cofounder and chief scientist, Ilya Sutskever, and three independent board members. Greg Brockman, cofounder and OpenAI president, was also forced out as chairman and chose to resign instead of remaining at the company. Altman was replaced by interim CEO Mira Murati, formerly the company’s CTO.

It was a shocking turn of events for the hottest thing in tech. And rumors swirled in the vacuum left by a vague statement explaining the decision. But these were only the first shots in the neck-snapping round of power ping-pong to come.

Over the weekend, details emerged that the board’s decision was not due to “malfeasance” on Altman’s part. Altman was said to be negotiating a return as CEO; then he was considering founding a new AI startup with Brockman; then the two were headed to Microsoft, after CEO Satya Nadella said he’d hire them to lead a new AI lab at his company.

On Monday, interim CEO Emmett Shear—the former CEO of Twitch who’d replaced Murati the night before—was facing open revolt. Over 95 percent of OpenAI employees signed a letter demanding the board resign and Altman be reinstated. If that didn’t happen, they would follow him to Microsoft, which was offering jobs and matching compensation. Also signing the letter were Murati and Sutskever, who now said he regretted his involvement in the board’s decision to remove Altman.

Finally, Tuesday night, after earlier rumors that negotiations were back on, the company announced the various parties had reached a tentative agreement to rehire Altman as CEO.

Two original board members would depart—Helen Toner, a director of strategy at Georgetown University’s Center for Security and Emerging Technology, and Tasha McCauley, an entrepreneur and researcher at RAND Corporation—while Quora CEO Adam D’Angelo would stay on. The company would also add two new board members—the economist Larry Summers and former Salesforce co-CEO Bret Taylor (as chairman)—and likely expand the board further at some point in the future. There would also be an independent investigation into Altman’s conduct and the process by which he was removed.

That’s what we know. But the unpredictability of events so far suggests the story isn’t over. Here’s what we still don’t know and what might be next.

How OpenAI Is Organized

The events of the last five days are extraordinary in the world of tech, not least because founders usually hold significant power on their own boards.

But OpenAI is different.

The company was originally founded in 2015 as a nonprofit with the audacious mission of building artificial general intelligence broadly beneficial to humankind. They hoped that by divorcing the organization’s mission from financial incentives, both goals could be achieved.

But in 2018, OpenAI leaders realized they needed a lot more computing power and financial backing to make progress. They created a capped-profit company—controlled by the nonprofit board and its mission—to work on commercializing products and attracting talent and investors. Microsoft led the way and has poured over $10 billion into the company.

Crucially, however, Microsoft and other investors had little control over the business in the traditional sense. The buck stopped with the non-profit board.

OpenAI organizational structure. Source: OpenAI

Then came ChatGPT. The chatbot sensation kicked off a hype cycle not seen in years (which is saying something). With Altman at the helm, OpenAI has pushed to commercialize ChatGPT at a rapid pace, culminating in its first developer conference earlier this month but also putting significant strain on the company’s non-profit mission.

What We Don’t Know

All this set the stage for Altman’s sudden ouster and comeback. But not every detail is set in stone yet, and more of the story is likely to unfold in the days ahead.

Let’s lay some of that out.

The current agreement looks durable, but given recent history…

The wording used to describe Altman’s return as CEO isn’t ironclad. The phrase “agreement in principle” means it could yet unravel. Still, though the details are being hammered out under intense pressure from investors and employees, the path of least resistance appears to be keeping the team together and moving on.

The board’s reasons for acting have not been confirmed in detail.

There’s been plenty of speculation and commentary from sources about why the board chose to fire Altman.

One explanation is that the growing tension between the board’s mission to keep AI safe and the company’s commercial activity and pace of development boiled over. Sutskever and fellow board members Toner, McCauley, and D’Angelo were focused on minimizing AI risk, a crucial part of the non-profit’s mandate. They believe AI must be developed with the utmost care lest it cause irreparable damage to society at large. The heavy push to move fast and sell products is at odds with this view. It’s a schism that extends beyond OpenAI to the tech community more generally.

Reports also suggest Altman’s other activities in the area—like an AI chipmaking project and his rumored talks with Jony Ive about an AI device—or recent breakthroughs that haven’t yet been announced may have contributed to the decision as well. But ultimately, we don’t know, and so far, official details have yet to be shared by those involved.

How the company will be organized in the future is TBD.

The company called the new roster of board members “initial,” suggesting it could grow. If OpenAI’s organizational structure brought on the chaos, it’s reasonable to expect investors will demand change. After watching the company nearly evaporate, an expanded board may offer seats to those with financial stakes in the company, and its structure may be reworked. But again, the details have yet to be ironed out. For now, it remains an open question.

What’s Next?

The last five days seem to have blindsided pretty much everyone, but that, itself, is somewhat surprising. OpenAI’s organizational structure was no secret. Nor was the inherent tension between its mission and commercial activities. Now it seems nearly certain, however, that the tension between the two will yield to financial forces.

Altman has been vocal about his desire to keep AI safe: It’s a reason he helped found the company. But he’s also pushed OpenAI to do business in the name of progress. As the organization continues to work with Microsoft and court new investors—a deal with Thrive Capital valuing the company at $80 billion was in the works before the madness—guardrails, assurances, and more control will likely be prerequisites.

“I think we definitely would want some governance changes,” Microsoft CEO Satya Nadella told Bloomberg News Monday. “Surprises are bad, and we just want to make sure that things are done in a way that will allow us to continue to partner well.”

Perhaps this outcome is just confirmation of how things already stood, despite OpenAI’s organizational structure. That is, the company and nearly all involved were already operating as if it were a more traditional for-profit venture.

Meanwhile, though the board’s actions might have been motivated by a desire to slow things down, they may end up having the opposite effect. OpenAI is set to pick up where it left off: Same CEO, team, investors, products, and pace but perhaps fewer dissenting voices.

It also means those worried about the most advanced AI being controlled by a handful of corporations will push more urgently for regulation, call on governments to better fund academic research, or put their faith in open-source AI efforts as a counterbalance.

No matter the exact outcome—expect the wild ride in AI to continue.

Image Credit: OpenAI

Kategorie: Transhumanismus

‘Breakthrough’ CRISPR Treatment Slashes Cholesterol in First Human Clinical Trial

Singularity HUB - 21 Listopad, 2023 - 16:00

CRISPR-based therapies just hit another milestone.

In a small clinical trial with 10 people genetically prone to dangerously high levels of cholesterol, a single infusion of the precision gene editor slashed the artery-clogging fat by up to 55 percent. If all goes well, the one-shot treatment could last a lifetime.

The trial, led by Verve Therapeutics, is the first to explore CRISPR for a chronic disease that’s usually managed with decades of daily pills. It also marks the first use of a newer class of gene editors directly in humans. Called base editing, the technology is more precise—and potentially safer—than the original set of CRISPR tools. The new treatment, VERVE-101, uses a base editor to disable a gene encoding a liver protein that regulates cholesterol.

To be clear, these results are just a sneak peek into the trial, which was designed to test for safety, rather than the treatment’s efficacy. Not all participants responded well. Two people suffered severe heart issues, with one case potentially related to the treatment.

Nevertheless, “it is a breakthrough to have shown in humans that in vivo [in the body] base editing works efficiently in the liver,” Dr. Gerald Schwank at the University of Zurich, who wasn’t involved in the trial, told Science.

Give Your Heart a Break

CRISPR has worked wonders for previously untreatable cancers. Last week, it was also approved in the United Kingdom for the blood diseases sickle cell and beta thalassemia.

For these treatments, scientists extract immune cells or blood cells from the body, edit the cells using CRISPR to correct the genetic mistake, and reinfuse the treated cells into the patient. For edited cells to “take,” patients must undergo a grueling treatment to wipe out existing diseased cells in the bone marrow and open space for the edited replacements.

Verve is taking a different approach: Instead of isolating cells for gene editing, the tools are infused into the bloodstream where they edit genes directly inside the body. It’s a big gamble. Most of our cells contain the same DNA. Once injected, the tools could go on a rampage and edit the targeted gene throughout the body, causing dangerous side effects.

Verve tackled this concern head on by pairing base editing with nanoparticles.

The trial targeted PCSK9, a liver protein that keeps low-density lipoprotein (LDL), or “bad cholesterol,” levels at bay. In familial hypercholesterolemia, a single mutated letter in PCSK9 alters its function, causing LDL levels to grow dangerously. People with this inherited disorder are at risk of life-threatening heart problems by the age of 50 and need to take statin drugs to keep their cholesterol in check. But the lifelong regimen is tough to maintain.

A Targeted CRISPR Torpedo

Verve designed a “one-and-done” treatment to correct the PCSK9 mutation in these patients.

The therapy employs two key strategies to boost efficacy.

The first is called base editing. The original CRISPR toolset acts like scissors, cutting both strands of DNA, making the edit, and patching the ends back together. The process often leaves room for mistakes, such as the unintended rearranging of sequences that could turn on cancer genes, leading some experts to call it “genetic vandalism.” Base editing, in contrast, is far more precise. Like a scalpel, base editors only nick one DNA strand, and are therefore far less likely to injure non-targeted parts of the genome.

Verve’s treatment encodes the base editor in two different RNA molecules. One instructs the cells to make the components of the gene editing tool—similar to how Covid-19 vaccines work. The other strand of RNA guides the tool to PCSK9. Once edited, the treated gene produces a shortened, non-functional version of the faulty protein responsible for the condition.
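As a loose illustration of how a guide RNA directs an editor to its target, here is a toy sketch in Python. The sequences and the matching rule are simplified stand-ins (a real tool works with whole genomes, checks adjacent PAM sites, and tolerates mismatches), and none of this comes from Verve’s actual design:

```python
def find_guide_target(genome, guide_rna):
    """Toy sketch: locate where a guide RNA would direct a base editor.
    The guide base-pairs with the strand complementary to its own sequence,
    so we search the genome for the DNA spelling of the guide (RNA U -> T)."""
    dna_probe = guide_rna.replace("U", "T")
    return genome.find(dna_probe)   # -1 if the guide matches nowhere

# Hypothetical 40-letter stand-in for a stretch of the PCSK9 gene:
genome = "GGCATTCCAGCTGACCTGCTTCATGGTCCGGATACCTGAA"
guide = "CUGACCUGCUUCAUG"          # hypothetical 15-nt guide sequence
site = find_guide_target(genome, guide)
```

The point of the sketch is the division of labor the article describes: one RNA molecule encodes the machinery, while the guide sequence alone determines where in the genome that machinery acts.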

The delivery method also boosts efficacy. Base editing components can be encoded into harmless viruses or wrapped inside fatty nanoparticles for delivery. Verve took the second approach because these nanoparticles are often first shuttled into the liver—exactly where the treatment should go—and are less likely to cause an immune reaction than viruses.

There’s just one problem. Base editing has never been used to edit genes in the body before.

A non-human trial in 2021 showed the idea could work. In macaque monkeys, a single shot of the editor into the bloodstream reduced the gene’s function in the liver, causing LDL levels to drop 60 percent. The treatment lasted at least eight months with barely any side effects.

Safety First

The new trial built on previous results to assess the treatment’s safety in 10 patients with familial hypercholesterolemia. One patient dropped out before completing the trial.

The team was careful. To detect potential side effects, six patients were treated with a small dose unlikely to reverse the disorder.

Three patients received a higher dose of the base editor and saw dramatic effects. PCSK9 protein levels in their livers dropped between 47 and 84 percent. Circulating LDL fell to about half its prior levels—an effect that lasted at least six months. Follow-ups are ongoing.

The efficacy of the higher dose came at a price. At lower doses, the treatment was well tolerated overall with minimal side effects. But at higher doses, it seemed to temporarily tax the liver, bumping up markers for liver stress that gradually subsided.

More troubling were two severe events in patients with advanced heart blockage. One person receiving a low dose died from cardiac arrest about five weeks after the treatment. According to a review board, the death was likely due to underlying conditions, not the treatment.

Another patient infused with a higher dose suffered a heart attack a day after treatment, suggesting the episode could have been related. However, he had intermittent chest pains before the infusion that hadn’t been disclosed to the team. His symptoms would have excluded him from the trial.

A Promising Path

Overall, an independent board monitoring data and safety determined the treatment safe. Still, there are plenty of unknowns. Like other gene editing tools, base editing poses the risk of off-target snips—something this trial did not specifically examine. Long-term safety and efficacy of the treatment are also unknown.

But the team is encouraged by these early results. “We are excited to have reached this milestone of positive first-in-human data supporting the significant potential for in vivo liver gene editing as a treatment for patients with [familial hypercholesterolemia],” said Dr. Sekar Kathiresan, CEO and cofounder of Verve.

The trial was conducted in the United Kingdom and New Zealand. Recently, US regulators approved the therapy for testing. They plan to enroll roughly 40 more patients.

Meanwhile, a new version of the therapy, VERVE-102, is already in the works. The newcomer uses a similar base editing technology and an upgraded nanoparticle carrier with potentially better targeting.

If all goes well, the team will launch a randomized, placebo-controlled trial by 2025. So far, the company hasn’t released a price tag for the therapy. But the cost of existing gene therapies can run into the millions of dollars.

To Kathiresan, treatments like this one could benefit more than patients with familial hypercholesterolemia. High cholesterol is a leading health problem. A dose of the base editor in middle age could potentially nip cholesterol buildup in the bud—and in turn, lower risk of heart disease and death.

“That’s the ultimate vision,” he said.

Image Credit: Scientific Animations / Wikimedia Commons

Kategorie: Transhumanismus

DeepMind Says New Multi-Game AI Is a Step Toward More General Intelligence

Singularity HUB - 20 Listopad, 2023 - 16:00

AI has mastered some of the most complex games known to man, but models are generally tailored to solve specific kinds of challenges. A new DeepMind algorithm that can tackle a much wider variety of games could be a step towards more general AI, its creators say.

Using games as a benchmark for AI has a long pedigree. When IBM’s Deep Blue algorithm beat chess world champion Garry Kasparov in 1997, it was hailed as a milestone for the field. Similarly, when DeepMind’s AlphaGo defeated one of the world’s top Go players, Lee Sedol, in 2016, it led to a flurry of excitement about AI’s potential.

DeepMind built on this success with AlphaZero, a model that mastered a wide variety of games, including chess and shogi. But as impressive as this was, AlphaZero only worked with perfect information games where every detail of the game, other than the opponent’s intentions, is visible to both players. This includes games like Go and chess where both players can always see all the pieces on the board.

In contrast, imperfect information games involve some details being hidden from the other player. Poker is a classic example because players can’t see what hands their opponents are holding. There are now models that can beat professionals at these kinds of games too, but they use an entirely different approach than algorithms like AlphaZero.

Now, researchers at DeepMind have combined elements of both approaches to create a model that can beat humans at chess, Go, and poker. The team claims the breakthrough could accelerate efforts to create more general AI algorithms that can learn to solve a wide variety of tasks.

Researchers building AI to play perfect information games have generally relied on an approach known as tree search. This explores a multitude of ways the game could progress from its current state, with different branches mapping out potential sequences of moves. AlphaGo combined tree search with a machine learning technique in which the model refines its skills by playing itself repeatedly and learning from its mistakes.
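The tree-search half of that recipe can be shown on a toy game. This is not DeepMind’s algorithm (there is no learned component here), just exhaustive search of a small game tree for the subtraction game Nim, where players alternately take 1 to 3 stones and whoever takes the last stone wins:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Exhaustively search the game tree from a position with `stones` left.
    Returns (win, move): whether the player to move can force a win, and a
    move that does so (None if every move loses)."""
    for take in (1, 2, 3):
        if take == stones:
            return True, take            # taking the last stone wins outright
        if take < stones:
            opponent_wins, _ = best_move(stones - take)
            if not opponent_wins:        # leave the opponent a losing position
                return True, take
    return False, None

# Known result for this game: multiples of 4 are losing positions.
win, move = best_move(10)   # from 10 stones, take 2 to leave 8
```

Each recursive call branches over every legal move, exactly the “multitude of ways the game could progress” described above; AlphaGo’s innovation was pruning and guiding such a search with a network trained by self-play.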

When it comes to imperfect information games, researchers tend to instead rely on game theory, using mathematical models to map out the most rational solutions to strategic problems. Game theory is used extensively in economics to understand how people make choices in different situations, many of which involve imperfect information.

In 2016, an AI called DeepStack beat human professionals at no-limit poker, but the model was highly specialized for that particular game. Much of the DeepStack team now works at DeepMind, however, and they’ve combined the techniques they used to build DeepStack with those used in AlphaZero.

The new algorithm, called Student of Games, uses a combination of tree search, self-play, and game theory to tackle both perfect and imperfect information games. In a paper in Science, the researchers report that the algorithm beat the best openly available poker-playing AI, Slumbot, and could also play Go and chess at the level of a human professional, though it couldn’t match specialized algorithms like AlphaZero.

But being a jack-of-all-trades rather than a master of one is arguably a bigger prize in AI research. While deep learning can often achieve superhuman performance on specific tasks, developing more general forms of AI that can be applied to a wide range of problems is trickier. The researchers say a model that can tackle both perfect and imperfect information games is “an important step toward truly general algorithms for arbitrary environments.”

It’s important not to extrapolate too much from the results, Michael Rovatsos from the University of Edinburgh, UK, told New Scientist. The AI was still operating within the simple and controlled environment of a game, where the number of possible actions is limited and the rules are clearly defined. That’s a far cry from the messy realities of the real world.

But even if this is a baby step, being able to combine the leading approaches to two very different kinds of game in a single model is a significant achievement. And one that could certainly be a blueprint for more capable and general models in the future.

Image Credit: Hassan Pasha / Unsplash

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through November 18)

Singularity HUB - 18 Listopad, 2023 - 16:00
ARTIFICIAL INTELLIGENCE

Google DeepMind’s AI Pop Star Clone Will Freak You Out
Angela Watercutter | Wired
“Two new tools using DeepMind’s music generation algorithm Lyria let anyone make YouTube shorts using the AI-generated vocals of Demi Lovato, T-Pain, Troye Sivan and others. …All anyone has to do is type in a topic and pick an artist off a carousel, and the tool writes the lyrics, produces the backing track, and sings the song in the style of the musician selected. It’s wild.”

BIOTECH

The First CRISPR Medicine Just Got Approved
Emily Mullin | Wired
“The first medical treatment that uses CRISPR gene editing was authorized Thursday by the United Kingdom. The one-time therapy, which will be sold under the brand name Casgevy, is for patients with sickle cell disease and a related blood disorder called beta thalassemia, both of which are inherited. The UK approval marks a historic moment for CRISPR, the molecular equivalent of scissors that won its inventors a Nobel Prize in 2020.”

ARTIFICIAL INTELLIGENCE

Google DeepMind Wants to Define What Counts as Artificial General Intelligence
Will Douglas Heaven | MIT Technology Review
“AGI, or artificial general intelligence, is one of the hottest topics in tech today. It’s also one of the most controversial. A big part of the problem is that few people agree on what the term even means. Now a team of Google DeepMind researchers has put out a paper that cuts through the cross talk with not just one new definition for AGI but a whole taxonomy of them.”

TECH

Why Tech Giants Are Hedging Their Bets on OpenAI
Michelle Cheng | Quartz
“Microsoft owns a 49% stake in OpenAI, having invested billions of dollars in the maker of ChatGPT. But the tech titan is also an investor in Inflection AI, which has a chatbot called Pi and is seen as a rival to OpenAI. …Last week, Reuters reported that Google plans to invest hundreds of millions in Character.AI, which builds personalized bots. In late October, Google said it had agreed to sink up to $2 billion into Anthropic, a key rival to OpenAI. What’s happening here?”

ENERGY

Start-Ups With Laser Beams: The Companies Trying to Ignite Fusion Energy
Kenneth Chang | The New York Times
“Take a smidgen of hydrogen, then blast it with lasers to set off a small thermonuclear explosion. Do it right, and maybe you can solve the world’s energy needs. A small group of start-ups has embarked on this quest, pursuing their own variations on this theme—different lasers, different techniques to set off the fusion reactions, different elements to fuse together. ‘There has been rapid growth,’ said Andrew Holland, chief executive of the Fusion Industry Association, a trade group lobbying for policies to speed the development of fusion.”

ARTIFICIAL INTELLIGENCE

Young Children Trounce Large Language Models in a Simple Problem-Solving Task
Ross Pomeroy | Big Think
“Despite their genuine potential to change how society works and functions, large language models get trounced by young children in basic problem-solving tasks testing their ability to innovate, according to new research. The study reveals a key weakness of large language models: They do not innovate. If large language models can someday become innovation engines, their programmers should try to emulate how children learn, the authors contend.”

CULTURE

Sphere and Loathing in Las Vegas
Charlie Warzel | The Atlantic
“I wanted to be cynical about the Sphere and all it represents—our phones as appendages, screens as a mediated form of experiencing the world. There’s plenty to dislike about the thing—the impersonal flashiness of it all, its $30 tequila sodas, the likely staggering electricity bills. But it is also my solemn duty to report to you that the Sphere slaps, much in the same way that, say, the Super Bowl slaps. It’s gaudy, overly commercialized, and cool as hell: a brand-new, non-pharmaceutical sensory experience.”

DIGITAL MEDIA

Meta Brings Us a Step Closer to AI-Generated Movies
Kyle Wiggers | TechCrunch
“Like ‘Avengers’ director Joe Russo, I’m becoming increasingly convinced that fully AI-generated movies and TV shows will be possible within our lifetimes. …Now, video generation tech isn’t new. Meta’s experimented with it before, as has Google. …But Emu Video’s 512×512, 16-frames-per-second clips are easily among the best I’ve seen in terms of their fidelity—to the point where my untrained eye has a tough time distinguishing them from the real thing.”

TRANSPORTATION

Joby, Volocopter Fly Electric Air Taxis Over New York City
Aria Alamalhodaei | TechCrunch
“Joby Aviation and Volocopter gave the public a vivid glimpse of what the future of aviation might look like [last] weekend, with both companies performing brief demonstration flights of their electric aircraft in New York City. The demonstration flights were conducted during a press conference on Sunday, during which New York City Mayor Eric Adams announced that the city would electrify two of the three heliports located in Manhattan—Downtown Manhattan Heliport and East 34th Street.”

TECH

Google’s ChatGPT Competitor Will Have to Wait
Maxwell Zeff | Gizmodo
“Google is having a hard time catching up with OpenAI. Google’s competitor to ChatGPT will not be ready until early 2024, after previously telling some cloud customers it would get to use Gemini AI in November of this year, sources told The Information Thursday. …Google’s Gemini was reportedly set to debut in 2023 with image and voice recognition capabilities. The chatbot would have been competitive with OpenAI’s GPT-4, and Anthropic’s Claude 2.”

Image Credit: Brian McGowan / Unsplash

Kategorie: Transhumanismus

What Is Quantum Advantage? The Moment Extremely Powerful Quantum Computers Will Arrive

Singularity HUB - 17 Listopad, 2023 - 21:03

Quantum advantage is the milestone the field of quantum computing is fervently working toward, when a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers.

Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws apply. Quantum computers take advantage of these strange behaviors to solve problems.

There are some types of problems that are impractical for classical computers to solve, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.

I am a physicist who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.

The Source of Quantum Computing’s Power

Central to quantum computing is the quantum bit, or qubit. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state of neither just 1 nor just 0 is known as a quantum superposition. With every additional qubit, the number of states that can be represented by the qubits doubles.
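The doubling is easy to see when a register of qubits is written out as a state vector, as in this short numpy sketch (a simulation on a classical machine, purely for illustration):

```python
import numpy as np

def uniform_superposition(n_qubits):
    """State vector of n qubits in an equal superposition of every basis
    state. One complex amplitude per basis state means 2**n entries, so
    each added qubit doubles the vector's length."""
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

state = uniform_superposition(3)    # 2**3 = 8 amplitudes
probs = np.abs(state) ** 2          # Born rule: probabilities sum to 1
```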

This property is often mistaken for the source of the power of quantum computing. Instead, that power comes down to an intricate interplay of superposition, interference, and entanglement.

Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves—like sound waves or ocean waves—combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.
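A minimal numeric illustration of that amplify-and-cancel behavior: when two computational paths lead to the same outcome, their complex amplitudes add before probabilities are taken. (The amplitudes here, one half each, are chosen just to show the effect; they are what a single qubit picks up across two Hadamard gates.)

```python
# Two paths to the same outcome, each with amplitude 0.5. Probability comes
# from the summed amplitude, so the paths can reinforce or cancel.
p_constructive = abs(0.5 + 0.5) ** 2    # in phase: outcome amplified to 1.0
p_destructive = abs(0.5 - 0.5) ** 2     # out of phase: outcome canceled to 0.0
p_classical = 0.5 ** 2 + 0.5 ** 2       # probabilities added directly: 0.5
```

A classical random process would simply add the two probabilities (the last line); the gap between that and the quantum results is the interference term that algorithms exploit.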

Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement’s collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.
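In code, the textbook example of such a correlation is the Bell state, sketched here as a plain numpy state vector:

```python
import numpy as np

# The Bell state (|00> + |11>) / sqrt(2), the simplest entangled state,
# as a 4-entry vector over the basis states |00>, |01>, |10>, |11>.
bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)

probs = np.abs(bell) ** 2   # measurement probabilities for 00, 01, 10, 11
# Only 00 and 11 ever occur, each with probability 0.5: the two qubits'
# outcomes are perfectly correlated, yet neither qubit alone has a
# definite value before measurement.
```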

Applications of Quantum Computing

Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the potential to decipher current encryption algorithms, such as the widely used RSA scheme.

One consequence of this is that today’s encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of post-quantum cryptography. After a long process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.

In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman envisioned this possibility more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in areas such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties.

Another use of quantum information technology is quantum sensing: detecting and measuring physical properties like electromagnetic energy, gravity, pressure, and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as environmental monitoring, geological exploration, medical imaging, and surveillance.

Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks—including those using quantum computers.

Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage—in particular in machine learning—remains a critical area of ongoing research.

A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves. Image Credit: Guoqing Wang, CC BY-NC-ND

Staying Coherent and Overcoming Errors

The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits.

Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, an area my own research is focused on.

In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology’s transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.

Quantum Advantage Coming Into View

Quantum computing may one day be as disruptive as the arrival of generative AI. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. Researchers at Google, and later a team in China, demonstrated quantum advantage for generating a list of random numbers with certain properties. My research team demonstrated a quantum speed-up for a random number guessing game.

On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.

While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright students of the type I encounter almost every day, ensures that the field will continue to progress.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: xx / xx

Category: Transhumanismus

Google DeepMind AI Nails Super Accurate 10-Day Weather Forecasts

Singularity HUB - 16 November, 2023 - 21:41

This year was a nonstop parade of extreme weather events. Unprecedented heat swept the globe. This summer was the Earth’s hottest since 1880. From flash floods in California and ice storms in Texas to devastating wildfires in Maui and Canada, weather-related events deeply affected lives and communities.

Every second counts when it comes to predicting these events. AI could help.

This week, Google DeepMind released an AI that delivers 10-day weather forecasts with unprecedented accuracy and speed. Called GraphCast, the model can churn through hundreds of weather-related datapoints for a given location and generate predictions in under a minute. When challenged with over a thousand potential weather patterns, the AI beat state-of-the-art systems roughly 90 percent of the time.

But GraphCast isn’t just about building a more accurate weather app for picking wardrobes.

Although not explicitly trained to detect extreme weather patterns, the AI picked up several atmospheric events linked to these patterns. Compared to previous methods, it more accurately tracked cyclone trajectories and detected atmospheric rivers—sinewy regions in the atmosphere associated with flooding.

GraphCast also predicted the onset of extreme temperatures well in advance of current methods. With 2024 set to be even warmer and extreme weather events on the rise, the AI’s predictions could give communities valuable time to prepare and potentially save lives.

“GraphCast is now the most accurate 10-day global weather forecasting system in the world, and can predict extreme weather events further into the future than was previously possible,” the authors wrote in a DeepMind blog post.

Rainy Days

Predicting weather patterns, even just a week ahead, is an old but extremely challenging problem. We base many decisions on these forecasts. Some are embedded in our everyday lives: Should I grab my umbrella today? Other decisions are life-or-death, like when to issue orders to evacuate or shelter in place.

Our current forecasting software is largely based on physical models of the Earth’s atmosphere. By examining the physics of weather systems, scientists have distilled decades of data into equations, which are then fed into supercomputers to generate predictions.

A prominent example is the Integrated Forecasting System at the European Center for Medium-Range Weather Forecasts. The system uses sophisticated calculations based on our current understanding of weather patterns to churn out predictions every six hours, providing the world with some of the most accurate weather forecasts available.

This system “and modern weather forecasting more generally, are triumphs of science and engineering,” wrote the DeepMind team.

Over the years, physics-based methods have rapidly improved in accuracy, in part thanks to more powerful computers. But they remain time consuming and costly.

This isn’t surprising. Weather is one of the most complex physical systems on Earth. You might have heard of the butterfly effect: A butterfly flaps its wings, and this tiny change in the atmosphere alters the trajectory of a tornado. While just a metaphor, it captures the complexity of weather prediction.

GraphCast took a different approach. Forget physics, let’s find patterns in past weather data alone.

An AI Meteorologist

GraphCast builds on a type of neural network that’s previously been used to predict other physics-based systems, such as fluid dynamics.

It has three parts. First, the encoder maps relevant information—say, temperature and altitude at a certain location—onto an intricate graph. Think of this as an abstract infographic that machines can easily understand.

The second part is the processor, which learns to analyze and pass information to the final part, the decoder. The decoder then translates the results into a real-world weather-prediction map. Altogether, GraphCast can predict weather patterns for the next six hours.

But six hours isn’t 10 days. Here’s the kicker. The AI can learn from its own forecasts. GraphCast’s predictions are fed back into itself as input, allowing it to progressively predict weather further out in time. It’s a method that’s also used in traditional weather prediction systems, the team wrote.
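A toy sketch of this autoregressive rollout, where the prediction step is a stand-in (a simple persistence-plus-trend rule of my own, not GraphCast’s actual network): each 6-hour output is appended to the inputs for the next step, so 40 steps cover 10 days.

```python
import numpy as np

def predict_6h(prev_state, curr_state):
    # Stand-in for GraphCast's learned encoder -> processor -> decoder step.
    # Here, a toy damped-trend extrapolation, purely illustrative.
    return curr_state + 0.5 * (curr_state - prev_state)

def rollout(prev_state, curr_state, steps=40):
    """Feed each 6-hour prediction back in as input: 40 steps = 10 days."""
    states = []
    for _ in range(steps):
        nxt = predict_6h(prev_state, curr_state)
        states.append(nxt)
        prev_state, curr_state = curr_state, nxt
    return states

forecast = rollout(np.zeros(4), np.ones(4))
print(len(forecast))  # 40 six-hour steps -> a 10-day forecast
```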

GraphCast was trained on nearly four decades of historical weather data. Taking a divide-and-conquer strategy, the team split the planet into small patches, roughly 17 by 17 miles at the equator. This resulted in more than a million “points” covering the globe.
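The “more than a million points” figure is consistent with the 0.25-degree latitude/longitude grid the GraphCast paper reports, which works out to roughly 17 miles per cell at the equator. A quick back-of-the-envelope check:

```python
# A 0.25-degree global latitude/longitude grid (as reported for GraphCast):
lat_points = int(180 / 0.25) + 1   # 721 latitude rows, both poles included
lon_points = int(360 / 0.25)       # 1440 longitude columns

print(lat_points * lon_points)     # 1,038,240 -> "more than a million points"
```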

For each point, the AI was trained with data collected at two times—one current, the other six hours ago—and included dozens of variables from the Earth’s surface and atmosphere—like temperature, humidity, and wind speed and direction at many different altitudes.

The training was computationally intensive and took a month to complete.

Once trained, however, the AI itself is highly efficient. It can produce a 10-day forecast with a single TPU in under a minute. Traditional methods using supercomputers take hours of computation, explained the team.

Ray of Light

To test its abilities, the team pitted GraphCast against the current gold standard for weather prediction.

The AI was more accurate nearly 90 percent of the time. It especially excelled when relying only on data from the troposphere—the layer of atmosphere closest to the Earth and critical for weather forecasting—beating the competition 99.7 percent of the time. GraphCast also outperformed Pangu-Weather, a top competing weather model that uses machine learning.

The team next tested GraphCast in several dangerous weather scenarios: tracking tropical cyclones, detecting atmospheric rivers, and predicting extreme heat and cold. Although not trained on specific “warning signs,” the AI raised the alarm earlier than traditional models.

The model also had help from classic meteorology. For example, the team added existing cyclone tracking software to GraphCast’s forecasts. The combination paid off. In September, the AI successfully predicted the trajectory of Hurricane Lee as it swept up the East Coast towards Nova Scotia. The system accurately predicted the storm’s landfall nine days in advance—three precious days faster than traditional forecasting methods.

GraphCast won’t replace traditional physics-based models. Rather, DeepMind hopes it can bolster them. The European Center for Medium-Range Weather Forecasts is already experimenting with the model to see how it could be integrated into their predictions. DeepMind is also working to improve the AI’s ability to handle uncertainty—a critical need given the weather’s increasingly unpredictable behavior.

GraphCast isn’t the only AI weatherman. DeepMind and Google researchers previously built two regional models that can accurately forecast short-term weather 90 minutes or 24 hours ahead. However, GraphCast can look further ahead. When used with standard weather software, the combination could influence decisions on weather emergencies or guide climate policies. At the least, we might feel more confident about the decision to bring that umbrella to work.

“We believe this marks a turning point in weather forecasting,” the authors wrote.

Image Credit: Google DeepMind

Category: Transhumanismus

OpenAI CEO Sam Altman Says His Company Is Now Building GPT-5

Singularity HUB - 15 November, 2023 - 21:40

At an MIT event in March, OpenAI cofounder and CEO Sam Altman said his team wasn’t yet training its next AI, GPT-5. “We are not and won’t for some time,” he told the audience.

This week, however, new details about GPT-5’s status emerged.

In an interview, Altman told the Financial Times the company is now working to develop GPT-5. Though the article did not specify whether the model is in training—it likely isn’t—Altman did say it would need more data. The data would come from public online sources—which is how such algorithms, called large language models, have previously been trained—and proprietary private datasets.

This lines up with OpenAI’s call last week for organizations to collaborate on private datasets as well as prior work to acquire valuable content from major publishers like the Associated Press and News Corp. In a blog post, the team said they want to partner on text, images, audio, or video but are especially interested in “long-form writing or conversations rather than disconnected snippets” that express “human intention.”

It’s no surprise OpenAI is looking to tap higher quality sources not available publicly. AI’s extreme data needs are a sticking point in its development. The rise of the large language models behind chatbots like ChatGPT was driven by ever-bigger algorithms consuming more data. Of the two, it’s possible even more data that’s higher quality can yield greater near-term results. Recent research suggests smaller models fed larger amounts of data perform as well as or better than larger models fed less.

“The trouble is that, like other high-end human cultural products, good prose ranks among the most difficult things to produce in the known universe,” Ross Andersen wrote in The Atlantic this year. “It is not in infinite supply, and for AI, not any old text will do: Large language models trained on books are much better writers than those trained on huge batches of social-media posts.”

After scraping much of the internet to train GPT-4, it seems the low-hanging fruit has largely been picked. A team of researchers estimated last year the supply of publicly accessible, high-quality online data would run out by 2026. One way around this, at least in the near term, is to make deals with the owners of private information hoards.

Computing is another roadblock Altman addressed in the interview.

Foundation models like OpenAI’s GPT-4 require vast supplies of graphics processing units (GPUs), a type of specialized computer chip widely used to train and run AI. Chipmaker Nvidia is the leading supplier of GPUs, and after the launch of ChatGPT, its chips have been the hottest commodity in tech. Altman said they recently took delivery of a batch of the company’s latest H100 chips, and he expects supply to loosen up even more in 2024.

In addition to greater availability, the new chips appear to be speedier too.

In tests released this week by AI benchmarking organization MLPerf, the chips trained large language models nearly three times faster than the mark set just five months ago. (Since MLPerf first began benchmarking AI chips five years ago, overall performance has improved by a factor of 49.)

Reading between the lines—which has become more challenging as the industry has grown less transparent—the GPT-5 work Altman is alluding to is likely more about assembling the necessary ingredients than training the algorithm itself. The company is working to secure funding from investors—GPT-4 cost over $100 million to train—chips from Nvidia, and quality data from wherever they can lay their hands on it.

Altman didn’t commit to a timeline for GPT-5’s release, but even if training began soon, the algorithm wouldn’t see the light of day for a while. Depending on its size and design, training could take weeks or months. Then the raw algorithm would have to be stress tested and fine-tuned by lots of people to make it safe. It took the company eight months to polish and release GPT-4 after training. And though the competitive landscape is more intense now, it’s also worth noting GPT-4 arrived almost three years after GPT-3.

But it’s best not to get too caught up in version numbers. OpenAI is still pressing forward aggressively with its current technology. Two weeks ago, at its first developer conference, the company launched custom chatbots, called GPTs, as well as GPT-4 Turbo. The enhanced algorithm includes more up-to-date information—extending the cutoff from September 2021 to April 2023—can work with much longer prompts, and is cheaper for developers.

And competitors are hot on OpenAI’s heels. Google DeepMind is currently working on its next AI algorithm, Gemini, and big tech is investing heavily in other leading startups, like Anthropic, Character.AI, and Inflection AI. All this action has governments eyeing regulations they hope can reduce near-term risks posed by algorithmic bias, privacy concerns, and violation of intellectual property rights, as well as make future algorithms safer.

In the longer term, however, it’s not clear if the shortcomings associated with large language models can be solved with more data and bigger algorithms or will require new breakthroughs. In a September profile, Wired’s Steven Levy wrote OpenAI isn’t yet sure what would make for “an exponentially powerful improvement” on GPT-4.

“The biggest thing we’re missing is coming up with new ideas,” Greg Brockman, president at OpenAI, told Levy. “It’s nice to have something that could be a virtual assistant. But that’s not the dream. The dream is to help us solve problems we can’t.”

It was Google’s 2017 invention of transformers that brought about the current moment in AI. For several years, researchers made their algorithms bigger, fed them more data, and this scaling yielded almost automatic, often surprising boosts to performance.

But at the MIT event in March, Altman said he thought the age of scaling was over and researchers would find other ways to make the algorithms better. It’s possible his thinking has changed since then. It’s also possible GPT-5 will be better than GPT-4 like the latest smartphone is better than the last, and the technology enabling the next step change hasn’t been born yet. Altman doesn’t seem entirely sure either.

“Until we go train that model, it’s like a fun guessing game for us,” he told FT. “We’re trying to get better at it, because I think it’s important from a safety perspective to predict the capabilities. But I can’t tell you here’s exactly what it’s going to do that GPT-4 didn’t.”

In the meantime, it seems we’ll have more than enough to keep us busy.

Image Credit: Maxim Berg / Unsplash

Category: Transhumanismus

Researchers Warn We Could Run Out of Data to Train AI by 2026. What Then?

Singularity HUB - 14 November, 2023 - 21:17

As artificial intelligence reaches the peak of its popularity, researchers have warned the industry might be running out of training data—the fuel that runs powerful AI systems. This could slow down the growth of AI models, especially large language models, and may even alter the trajectory of the AI revolution.

But why is a potential lack of data an issue, considering how much there is on the web? And is there a way to address the risk?

Why High-Quality Data Is Important for AI

We need a lot of data to train powerful, accurate, and high-quality AI algorithms. For instance, the algorithm powering ChatGPT was originally trained on 570 gigabytes of text data, or about 300 billion words.

Similarly, the Stable Diffusion algorithm (which is behind many AI image-generating apps) was trained on the LAION-5B dataset, comprising 5.8 billion image-text pairs. If an algorithm is trained on an insufficient amount of data, it will produce inaccurate or low-quality outputs.

The quality of the training data is also important. Low-quality data such as social media posts or blurry photographs are easy to source but aren’t sufficient to train high-performing AI models.

Text taken from social media platforms might be biased or prejudiced, or may include disinformation or illegal content which could be replicated by the model. For example, when Microsoft tried to train its AI bot using Twitter content, it learned to produce racist and misogynistic outputs.

This is why AI developers seek out high-quality content such as text from books, online articles, scientific papers, Wikipedia, and certain filtered web content. The Google Assistant was trained on 11,000 romance novels taken from self-publishing site Smashwords to make it more conversational.

Do We Have Enough Data?

The AI industry has been training AI systems on ever-larger datasets, which is why we now have high-performing models such as ChatGPT or DALL-E 3. At the same time, research shows online data stocks are growing much more slowly than datasets used to train AI.

In a paper published last year, a group of researchers predicted we will run out of high-quality text data before 2026 if current AI training trends continue. They also estimated low-quality language data will be exhausted sometime between 2030 and 2050, and low-quality image data between 2030 and 2060.

AI could contribute up to $15.7 trillion to the world economy by 2030, according to accounting and consulting group PwC. But running out of usable data could slow down its development.

Should We Be Worried?

While the above points might alarm some AI fans, the situation may not be as bad as it seems. There are many unknowns about how AI models will develop in the future, as well as a few ways to address the risk of data shortages.

One opportunity is for AI developers to improve algorithms so they use the data they already have more efficiently.

It’s likely in the coming years they will be able to train high-performing AI systems using less data, and possibly less computational power. This would also help reduce AI’s carbon footprint.

Another option is to use AI to create synthetic data to train systems. In other words, developers can simply generate the data they need, curated to suit their particular AI model.

Several projects are already using synthetic content, often sourced from data-generating services such as Mostly AI. This will become more common in the future.

Developers are also searching for content outside the free online space, such as that held by large publishers and offline repositories. Think about the millions of texts published before the internet. Made available digitally, they could provide a new source of data for AI projects.

News Corp, one of the world’s largest news content owners (which has much of its content behind a paywall), recently said it was negotiating content deals with AI developers. Such deals would force AI companies to pay for training data—whereas they have mostly scraped it off the internet for free so far.

Content creators have protested against the unauthorized use of their content to train AI models, with some suing companies such as Microsoft, OpenAI, and Stability AI. Being remunerated for their work may help restore some of the power imbalance that exists between creatives and AI companies.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Emil Widlund / Unsplash

Category: Transhumanismus

How Generative AI Could Help Us Predict the Next Pandemic

Singularity HUB - 13 November, 2023 - 21:34

Viruses have an uncanny ability to rapidly evolve. Covid-19 is a stark example. As the virus mutated from beta to delta to omicron, the pandemic dragged on and the world shut down. Scientists scrambled to adapt vaccines and treatments to new variants. The virus had the upper hand; we were playing catch-up.

An AI developed by Harvard University could turn the tide by allowing us to predict new variants before they arrive. Called EVEscape, the AI is a kind of machine “oracle” for viral evolution.

Trained on data collected before the pandemic, the algorithm was able to predict frequent mutations and troubling variants for Covid-19 and generated a list of future concerning variants too. The heart of the tool is a generative AI model, like the ones powering DALL-E or ChatGPT, but it includes several carefully selected biological factors to better reflect viral mutations.

The tool wasn’t built for Covid-19 only: It also accurately predicts variants for flu viruses, HIV, and two understudied viruses that could spark future pandemics.

“We want to know if we can anticipate the variation in viruses and forecast new variants,” said Dr. Debora Marks, who led the study at the Blavatnik Institute at Harvard Medical School. “Because if we can, that’s going to be extremely important for designing vaccines and therapies.”

There was a strong push to use AI to predict viral mutations during the acute phases of the pandemic. While useful, most models relied on information about existing variants and could only produce short-term predictions.

EVEscape, in contrast, uses evolutionary genomics to peek into a virus’s ancestry, resulting in longer forecasts and, potentially, enough time to plan ahead and fight back.

“We want to figure out how we can actually design vaccines and therapies that are future-proof,” said study author Dr. Noor Youssef.

Evolved to Evolve

Though viruses are extremely adaptable to the pressures of natural selection, they still evolve like other living creatures. Their genetic material randomly mutates. Some mutations decrease their ability to infect hosts. Others kill their hosts before they can multiply. But sometimes, viruses stumble across a Goldilocks variant, one that keeps the host healthy enough for the bug to reproduce and spread like wildfire. While great for the survival of viruses, these variants spark global catastrophes for humanity, as in the case of Covid-19.

Scientists have long sought to predict viral mutations and their effects. Unfortunately, it’s impossible to predict all possible mutations. A typical coronavirus has roughly 30,000 genetic letters. The number of potential variants is greater than all the elementary particles—that is, electrons, quarks, and other fundamental particles—in the universe.

The new study zoomed in on a more practical solution. Forget mapping each variant. With limited data, can we at least predict the dangerous ones?

Let’s Play Villain

The team turned to EVE, an AI previously developed to hunt down disease-causing genetic variants in humans. At the algorithm’s core is a deep generative model that can predict protein function without solely relying on human expertise.

The AI learned from evolution. Like archeologists comparing skeletons from hominin cousins to peek into the past, the AI screened DNA sequences encoding proteins across species. The strategy turned up genetic variants in humans critical for health—for example, those implicated in cancer or heart problems.

“You can use these generative models to learn amazing things from evolutionary information—the data have hidden secrets that you can reveal,” said Marks.

The new study retrained EVE to predict concerning genetic variants in viruses. They used SARS-CoV-2, the virus behind Covid-19, as a first proof of concept.

The key was integrating the virus’s biological needs into the AI’s data set.

A virus’s core drive is survival. They rapidly mutate, which sometimes leads to genetic changes that can dodge vaccines or antibody treatments. However, the same mutation may damage a virus’s ability to grasp onto its host and reproduce—an obvious disadvantage.

To rule out these kinds of mutations, the AI compared protein sequences from a broad range of coronaviruses discovered before the pandemic—the original SARS virus, for example, and the “common cold” virus. This comparison revealed which parts of the viral genome are conserved. These genetic stewards are foundational to the virus’s survival. Because other coronaviruses and SARS-CoV-2 share a common genetic ancestry, mutations to these genes likely result in death rather than viable variants.
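Conservation across such a comparison can be estimated with a simple per-column tally over aligned sequences. The sketch below is a deliberately simplified stand-in for what these comparisons involve, and the sequence fragments are hypothetical, not real viral data:

```python
from collections import Counter

# Toy alignment of spike-like protein fragments (hypothetical sequences):
alignment = [
    "MFVFLVLLPL",
    "MFVFLVLMPL",
    "MFVFFVLLPL",
    "MFVFLVLLSL",
]

# Per-position conservation: the fraction of sequences sharing the majority
# residue. Highly conserved positions rarely tolerate viable mutations.
conservation = []
for column in zip(*alignment):
    majority_count = Counter(column).most_common(1)[0][1]
    conservation.append(majority_count / len(alignment))

print(conservation)  # 1.0 at fully conserved positions, lower where variable
```

In EVEscape's setting, positions that stay fixed across distantly related coronaviruses are treated as poor candidates for escape mutations, while variable positions flag where the virus has room to change.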

By contrast, the AI predicted spike proteins to be the flexible component of the virus most likely to evolve. Dotted along the virus’s surface, these proteins are already targets for vaccines and antibody therapies. Changes to these proteins could lower the efficacy of current therapies.

Back to the Future

Hindsight is 20/20 when analyzing a pandemic. But having a glimpse of what may come—rather than trying to play catch-up—is essential if we’re to nip the next pandemic in the bud.

To test the AI’s predictive powers, the team matched its predictions against the GISAID (Global Initiative on Sharing All Influenza Data) database to gauge their accuracy. Despite its name, the database contains 750,000 unique coronavirus genetic sequences.

EVEscape identified variants most likely to spread—like delta and omicron, for instance—with 50 percent of its top predictions seen during the pandemic as of May 2023. When pitted against a previous machine learning method, EVEscape was twice as good at predicting mutations and forecasting which variants were most likely to escape from antibody treatments.

Remembering the Past

EVEscape’s superpower is that it can be used with other viruses. Covid has dominated our attention for the past three years. But lesser-known viruses lurk in silence. Lassa and Nipah viruses, for example, sporadically break out in West African and Southwest Asian countries and have pandemic potential. The viruses can be treated with antibodies, but they rapidly mutate.

Using EVEscape, the team predicted escape mutations in these viruses, including those already known to evade antibodies.

Combining evolutionary genetics and AI, the work shows that “the key to future success relies on remembering the past,” said Drs. Nash D. Rochman and Eugene V. Koonin at the National Center for Biotechnology Information and National Library of Medicine in Maryland, who were not involved in the study.

EVEscape has the power to predict future variants of viruses—even those yet unknown. It could estimate the risk of a pandemic, potentially keeping us one step ahead of the next outbreak.

The team is now using the tool to predict the next SARS-CoV-2 variant. They track mutations biweekly and rank each variant’s potential for triggering another Covid wave. The data is shared with the World Health Organization and the code is openly available.

To Rochman and Koonin, the new AI toolkit could help thwart the next pandemic. We can now hope “COVID-19 will forever remain known as the most disruptive pandemic in human history,” they wrote.

Image Credit: A SARS-CoV2 virus particle / National Institute of Allergy and Infectious Diseases, NIH

Category: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through November 11)

Singularity HUB - 11 November, 2023 - 16:00

ARTIFICIAL INTELLIGENCE

Personalized AI Agents Are Here. Is the World Ready for Them?
Kevin Roose | The New York Times
“Very soon, tech companies tell us, AI ‘agents’ will be able to send emails and schedule meetings for us, book restaurant reservations and plane tickets, and handle complex tasks like ‘negotiate a raise with my boss’ or ‘buy Christmas presents for all my family members.’ That phase, though still remote, came a little closer on Monday when OpenAI, the maker of ChatGPT, announced that users could now create their own, personalized chatbots.”

TECH

ChatGPT Continues to Be One of the Fastest-Growing Services Ever
Jon Porter | The Verge
“One hundred million people are using ChatGPT on a weekly basis, OpenAI CEO Sam Altman announced at its first-ever developer conference on Monday. Since releasing its ChatGPT and Whisper models via API in March, the company also now boasts over two million developers, including over 92 percent of Fortune 500 companies.”

COMPUTING

The Humane AI Pin Gets Its Big Reveal But We Still Have a Lot of Questions
Lucas Ropek | Gizmodo
“Humane, a startup founded by two former Apple employees, has launched its hotly anticipated AI pin, a small, cookie-sized device that you stick to the front of your shirt and that, according to its creators, is designed to revolutionize our relationship to computing. While Thursday finally saw the startup unveil some details about its long anticipated product, the jury’s still out on whether it’s actually going to compel you to throw your smartphone in the trash—or if it’ll even prove a functional product you’ll want to buy.”

COMPUTING

NVIDIA’s Eos Supercomputer Just Broke Its Own AI Training Benchmark Record
Andrew Tarantola | Engadget
“On Wednesday, NVIDIA unveiled the newest iteration of its Eos supercomputer, one powered by more than 10,000 H100 Tensor Core GPUs and capable of training a 175 billion-parameter GPT-3 model on 1 billion tokens in under four minutes. That’s three times faster than the previous benchmark on the MLPerf AI industry standard, which NVIDIA set just six months ago.”

FUTURE

I Tried Lab-Grown Chicken at a Michelin-Starred Restaurant
Casey Crownhart | MIT Technology Review
“A swanky restaurant in San Francisco isn’t my usual haunt for reporting on climate and energy. But I recently paid a visit to Bar Crenn, a Michelin-starred spot and one of two restaurants in the US currently serving up lab-grown meat. The two morsels on the plate in front of me were what I’d come for: a one-ounce sampling of cultivated chicken, made in the lab by startup Upside Foods.”

TRANSPORTATION

The World’s Largest Aircraft Breaks Cover in Silicon Valley
Mark Harris | TechCrunch
“As dawn breaks over Silicon Valley, the world is getting its first look at Pathfinder 1, a prototype electric airship that its maker LTA Research hopes will kickstart a new era in climate-friendly air travel, and accelerate the humanitarian work of its funder, Google co-founder Sergey Brin. The airship—its snow-white steampunk profile visible from the busy 101 highway—has taken drone technology such as fly-by-wire controls, electric motors and lidar sensing, and supersized them to something longer than three Boeing 737s, potentially able to carry tons of cargo over many hundreds of miles.”

ENVIRONMENT

In California’s Central Valley, a Massive Carbon Removal Factory Is Pulling CO2 From the Air
Adele Peters | Fast Company
“Across the street from a farm in California’s Central Valley, a gleaming new three-story structure is now quietly pulling CO2 from the air. Run by a Bay Area-based startup called Heirloom, it’s the first commercial ‘direct air capture’ facility in the U.S. Each year, it will capture as much as 1,000 tons of CO2 from the atmosphere, one early step in the company’s plans to scale up to millions of tons a year.”

ENERGY

The First Small-Scale Nuclear Plant in the US Died Before It Could Live
Gregory Barber | Wired
“A six-reactor, 462-megawatt plant was slated to begin construction by 2026 and produce power by the end of the decade. On Wednesday, NuScale and its backers pulled the plug on the multibillion-dollar Idaho Falls plant. They said they no longer believed the first-of-its-kind plant, known as the Carbon Free Power Project (CFPP), would be able to recruit enough additional customers to buy its power.”

Image Credit: Simone Hutsch / Unsplash


Biologists Unveil the First Living Yeast Cells With Over 50% Synthetic DNA

Singularity HUB - 10 November, 2023 - 22:40

Our ability to manipulate the genes of living organisms has expanded dramatically in recent years. Now, researchers are a step closer to building genomes from scratch after unveiling a strain of yeast with more than 50 percent synthetic DNA.

Since 2006, an international consortium of researchers called the Synthetic Yeast Genome Project has been attempting to rewrite the entire genome of brewer’s yeast. The organism is an attractive target because it’s a eukaryote like us, and it’s also widely used in the biotechnology industry to produce biofuels, pharmaceuticals, and other high-value chemicals.

While researchers have previously rewritten the genomes of viruses and bacteria, yeast is more challenging because its DNA is split across 16 chromosomes. To speed up progress, the research groups involved each focused on rewriting a different chromosome, before trying to combine them.

The team has now successfully synthesized new versions of all 16 chromosomes and created an entirely novel chromosome. In a series of papers in Cell and Cell Genomics, the team also reports the successful combination of seven of these synthetic chromosomes, plus a fragment of another, in a single cell. Altogether, they account for more than 50 percent of the cell’s DNA.

“Our motivation is to understand the first principles of genome fundamentals by building synthetic genomes,” co-author Patrick Yizhi Cai from the University of Manchester said in a press release. “The team has now re-written the operating system of the budding yeast, which opens up a new era of engineering biology—moving from tinkering a handful of genes to de novo design and construction of entire genomes.”

The synthetic chromosomes are notably different from those of normal yeast. The researchers removed considerable amounts of repetitive “junk DNA” that doesn’t code for specific proteins. In particular, they cut stretches of DNA known as transposons, which can naturally recombine in unpredictable ways, to improve the stability of the genome.

They also separated all genes coding for transfer RNA into a completely new 17th chromosome. These molecules carry amino acids to ribosomes, the cell’s protein factories. Cai told Science that tRNA molecules are “DNA damage hotspots.” The group hopes that separating them out and housing them in a so-called “tRNA neochromosome” will make it easier to keep them under control.

“The tRNA neochromosome is the world’s first completely de novo synthetic chromosome,” says Cai. “Nothing like this exists in nature.”

Another significant alteration could accelerate efforts to find useful new strains of yeast. The team incorporated a system called SCRaMbLE into the genome, making it possible to rapidly rearrange genes within chromosomes. This “inducible evolution system” allows cells to quickly cycle through potentially interesting new genomes.

“It’s kind of like shuffling a deck of cards,” coauthor Jef Boeke from New York University Langone Health told New Scientist. “The scramble system is essentially evolution on hyperspeed, but we can switch it on and off.”
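The card-shuffling analogy lends itself to a few lines of code. The sketch below is a toy model only—the real SCRaMbLE system uses Cre recombinase acting on recombination sites engineered between genes—and every name and parameter in it is illustrative.

```python
import random


def scramble(genes, events, rng=None):
    """Toy SCRaMbLE: each event picks a stretch of genes between two
    recombination sites and deletes, inverts, or duplicates it."""
    rng = rng or random.Random()
    genes = list(genes)
    for _ in range(events):
        if len(genes) < 2:
            break
        # choose two recombination sites flanking a segment
        i, j = sorted(rng.sample(range(len(genes) + 1), 2))
        segment = genes[i:j]
        op = rng.choice(["delete", "invert", "duplicate"])
        if op == "delete":
            genes[i:j] = []
        elif op == "invert":
            # reverse the segment and flip each gene's orientation
            genes[i:j] = [g[1:] if g.startswith("-") else "-" + g
                          for g in reversed(segment)]
        else:
            genes[i:j] = segment + segment
    return genes
```

Running `scramble(["A", "B", "C", "D", "E"], events=3)` yields one of many possible rearranged genomes; switching the system “off” simply corresponds to performing zero events.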

To get several of the modified chromosomes into the same yeast cell, Boeke’s team ran a lengthy cross-breeding program, mating cells with different combinations of genomes. At each step there was an extensive “debugging” process, as synthetic chromosomes interacted in unpredictable ways.

Using this approach, the team incorporated six full chromosomes and part of another one into a cell that survived and grew. They then developed a method called chromosome substitution to transfer the largest yeast chromosome from a donor cell, bumping the total to seven and a half and increasing the total amount of synthetic DNA to over 50 percent.

Getting all 17 synthetic chromosomes into a single cell will require considerable extra work, but crossing the halfway point is a significant achievement. And if the team can create yeast with a fully synthetic genome, it will mark a step change in our ability to manipulate the code of life.

“I like to call this the end of the beginning, not the beginning of the end, because that’s when we’re really going to be able to start shuffling that deck and producing yeast that can do things that we’ve never seen before,” Boeke says in the press release.

Image Credit: Scanning electron micrograph of partially synthetic yeast / Cell/Zhao et al.


Spinal Implant Helps a Man With Severe Parkinson’s Walk With Ease Again

Singularity HUB - 9 November, 2023 - 23:33

In his mid-30s, Marc Gauthier noticed a creeping stiffness in his muscles. His hand shook when trying to hold steady. He struggled to maintain his balance while walking.

Then he began to freeze in place. When strolling down narrow streets running errands, his muscles seemed to suddenly disconnect from his brain. He couldn’t take the next step.

Gauthier has Parkinson’s disease. The debilitating brain disorder gradually destroys a type of brain cell related to the planning of movement. Since the 1980s, scientists have explored multiple treatments: transplanting stem cells to replace dying brain cells, using medications to counteract symptoms, and deep brain stimulation—the use of an electrical brain implant to directly zap the brain regions that coordinate movement.

While beneficial to many people, such treatments didn’t completely help Gauthier. Even with a deep brain stimulation device and medications, he struggled to walk without freezing in place.

In 2021, he signed up for a highly experimental trial. He had a small implant inserted into his spinal cord to directly activate nerves connecting his spinal cord and leg muscles. While extensively tested in non-human primates with symptoms resembling Parkinson’s, the therapy had never been tried in humans before.

Once Gauthier adapted to the implant, he found he could stroll the banks of Lake Geneva in Switzerland without any aid after three decades living with the disease.

“I can now walk with much more confidence,” he said in a press conference. Two years after the implant, “I’m not even afraid of stairs anymore.”

Gauthier’s success suggests a new way of tackling movement disorders originating in the brain. The implant mimics the natural signal patterns the brain sends to muscles to control walking, overriding his faulty biological signals.

“There are no therapies to address the severe gait problems that occur at a later stage of Parkinson’s, so it’s impressive to see him walking,” said study author Dr. Jocelyne Bloch to Nature.

The work is an “impressive tour-de-force,” said Drs. Aviv Mizrahi-Kliger and Karunesh Ganguly at the University of California, San Francisco, who were not involved in the study.

An Old Conundrum

It’s easy to take our movement for granted. A skip across a puddle seems mundane. But for people with Parkinson’s disease, it’s a hefty challenge.

We don’t yet fully understand what triggers the disease. A number of genes have been implicated. What’s clear is that the disorder slowly robs a person of the ability to move their muscles as the associated neurons are damaged. These cells pump out dopamine—a brain chemical that’s often linked to unexpected “happy” signals, such as after a surprisingly good meal. However, dopamine has a second job: It’s a traffic controller for muscle movement.

These signals break down in Parkinson’s disease.

One way to treat Parkinson’s is to increase dopamine levels with a drug called levodopa. Deep brain stimulation is another. Here, researchers insert an electrical probe deep inside the brain (hence the name) to activate neural circuits that release dopamine. While effective, the procedure is damaging to surrounding brain tissue, often causing inflammation and scarring.

The new study avoided the brain entirely.

Finding Hot Spots

For over a decade, Dr. Grégoire Courtine at the Swiss Federal Institute of Technology in Lausanne, in collaboration with NeuroRestore, has tried to reconnect mind to muscle.

Courtine’s team previously engineered an implant that helped people with spinal cord injuries walk with minimal training. Our brain controls muscles through the spinal cord. If that highway is damaged, muscles no longer respond to neural commands. Building on their earlier work, the team sought to develop a similar implant for Parkinson’s.

But many mysteries remained: Where should the electrodes go? What level of electrical stimulation is necessary to activate the neural circuits? And will muscles respond to those artificial signals?

Using data from monkeys and humans, both healthy and with Parkinson’s—or Parkinson’s-like symptoms in the case of the monkeys—the team trained a spinal implant to detect unusual gaits and movements common in the disease. This implant could then also stimulate the spinal cord to restore healthy walking patterns.
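In spirit, such a detect-then-stimulate loop can be sketched as an anomaly detector driving a stimulation callback. The snippet below is a deliberately crude caricature—a rolling-baseline threshold detector—and none of its parameters or names come from the published device.

```python
from collections import deque


class GaitMonitor:
    """Toy closed-loop sketch: flag a sensor reading that strays more
    than `k` standard deviations from a rolling baseline, and fire the
    stimulation callback. Window size and threshold are illustrative."""

    def __init__(self, window=50, k=3.0, on_anomaly=lambda: None):
        self.buf = deque(maxlen=window)
        self.k = k
        self.on_anomaly = on_anomaly

    def update(self, reading):
        anomalous = False
        if len(self.buf) >= 10:  # wait until a minimal baseline exists
            mean = sum(self.buf) / len(self.buf)
            std = (sum((x - mean) ** 2 for x in self.buf) / len(self.buf)) ** 0.5
            if std > 0 and abs(reading - mean) > self.k * std:
                anomalous = True
                self.on_anomaly()  # a real device would trigger a stimulation burst
        self.buf.append(reading)
        return anomalous
```

Feeding the monitor a steady stream of readings establishes a baseline; a reading that suddenly jumps—the software analog of a freezing episode—trips the callback.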

The implant is tiny yet powerful. Made of two eight-electrode arrays, it uses various electrical stimulation patterns to mimic the intricacies of a natural neural command. The implant is placed in the spinal cord around the small of the back, lower than prior devices. This lower placement better engages the back and leg muscles essential for maintaining balance and walking.

In monkeys chemically induced to display Parkinson’s symptoms, the stimulator restored aspects of their gait and balance. The monkeys could move at speeds similar to healthy peers and had better posture than untreated ones. Importantly, the implant prevented them from falling when challenged with obstacles—a problem for advanced Parkinson’s patients.

One Step Forward

These encouraging results led Gauthier to volunteer as the first human participant in an ongoing trial to test spinal cord stimulation in people with Parkinson’s.

Gauthier was first diagnosed with the disorder at 36. Eight years later, he had deep brain stimulation electrodes implanted and was taking the medication levodopa. Even so, his motions were slow and rigid, and he often stumbled or fell.

To personalize his implant, the team captured hours of video of his walking patterns. They then built a model of his muscles, skeleton, and several joints, such as the hips and knees. Using the model, they trained software to compensate for any dysfunction—allowing it to decipher the user’s intent and translate it into electrical zaps in the spinal cord to support the movement.

With the spinal cord implant active, Gauthier’s gait was far closer to that of a healthy person than to that of someone with Parkinson’s. Although his previous treatments could only partly control his symptoms, he could finally walk with ease along a beachside road.

These results, though promising, are from just one person. The team wants to expand the treatment to six more people with Parkinson’s next year. Also, for now, the spinal stimulator isn’t a replacement for deep brain stimulation. During the trial Gauthier had his deep brain implant on. However, adding spinal cord stimulation lowered his chances of freezing and falling.

Parkinson’s remains an enigma. The disease varies between people and changes over time. Some develop gait and motor symptoms early on; others show symptoms far later. “That’s why researchers need to test the implant in as many people as possible,” study author Eduardo Martín Moraud told El Pais.

Larger trials are necessary before bringing the treatment to the hundreds of thousands of people with Parkinson’s in the US. But for now, one man has regained his life. “My daily life has profoundly improved,” said Gauthier.

Image Credit: CHUV Weber Gilles


How the World’s Biggest Optical Telescope Could Crack Some of the Greatest Puzzles in Science

Singularity HUB - 9 November, 2023 - 00:15

Astronomers get to ask some of the most fundamental questions there are, ranging from whether we’re alone in the cosmos to what the nature of the mysterious dark energy and dark matter making up most of the universe is.

Now, a large group of astronomers from all over the world is building the biggest optical telescope ever—the Extremely Large Telescope (ELT)—in Chile. Once construction is completed in 2028, it could provide answers that transform our knowledge of the universe.

With its 39-meter diameter primary mirror, the ELT will contain the largest, most perfect reflecting surface ever made. Its light-collecting power will exceed that of all other large telescopes combined, enabling it to detect objects millions of times fainter than the human eye can see.

There are several reasons why we need such a telescope. Its incredible sensitivity will let it image some of the first galaxies ever formed, with light that has traveled for 13 billion years to reach the telescope. Observations of such distant objects may allow us to refine our understanding of cosmology and the nature of dark matter and dark energy.

Alien Life

The ELT may also offer an answer to the most fundamental question of all: Are we alone in the universe? The ELT is expected to be the first telescope to track down Earth-like exoplanets—planets orbiting other stars with a mass, orbit, and proximity to their host star similar to Earth’s.

Occupying the so-called Goldilocks zone, these Earth-like planets will orbit their star at just the right distance for water to neither boil nor freeze—providing the conditions for life to exist.

Size comparison between the ELT and other telescope domes. Image Credit: ESO/ Wikipedia, CC BY-SA

The ELT’s camera will have six times better resolution than that of the James Webb Space Telescope, allowing it to take the clearest images yet of exoplanets. But fascinating as these pictures will be, they will not tell the whole story.

To learn if life is likely to exist on an exoplanet, astronomers must complement imaging with spectroscopy. While images reveal shape, size, and structure, spectra tell us about the speed, temperature, and even the chemistry of astronomical objects.

The ELT will contain not one, but four spectrographs—instruments that disperse light into its constituent colors, much like the iconic prism on Pink Floyd’s The Dark Side of the Moon album cover.

Each about the size of a minibus, and carefully environmentally controlled for stability, these spectrographs underpin all of the ELT’s key science cases. For giant exoplanets, the Harmoni instrument will analyze light that has traveled through their atmospheres, looking for signs of water, oxygen, methane, carbon dioxide, and other gases that indicate the existence of life.

To detect much smaller Earth-like exoplanets, the more specialized Andes instrument will be needed. With a cost of around €35 million, Andes will be able to detect tiny changes in the wavelength of light.

From previous satellite missions, astronomers already have a good idea of where to look in the sky for exoplanets. Indeed, there have been several thousand confirmed or “candidate” exoplanets detected using the “transit method.” Here, a space telescope stares at a patch of sky containing thousands of stars and looks for tiny, periodic dips in their intensities, caused when an orbiting planet passes in front of its star.
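The core of the transit method is phase-folding: stack the light curve at a trial period and ask whether the points near one phase are systematically dimmer. The sketch below does exactly that on synthetic data; a production search (e.g. box least squares) scans many trial periods and window widths, and all numbers here are made up.

```python
from statistics import mean


def dip_depth(times, fluxes, period, phase_window=0.05):
    """Fold the light curve at `period` and return mean in-window flux
    minus mean out-of-window flux: a transit shows up as a negative
    depth at the true period."""
    inside, outside = [], []
    for t, f in zip(times, fluxes):
        phase = (t % period) / period
        d = min(phase, 1.0 - phase)  # distance to phase 0, with wrap-around
        (inside if d < phase_window else outside).append(f)
    return mean(inside) - mean(outside)


# Synthetic star: 1 percent dips covering 10 percent of a 10-unit period
def flux_at(t):
    phase = (t % 10.0) / 10.0
    return 0.99 if min(phase, 1.0 - phase) < 0.05 else 1.0


times = [i * 0.1 for i in range(1000)]
fluxes = [flux_at(t) for t in times]
depth = dip_depth(times, fluxes, period=10.0)  # close to -0.01 at the true period
```

The depth at the true period matches the simulated 1 percent dip; folding at the wrong period mixes dimmed and undimmed points and washes the signal out.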

But Andes will use a different method to hunt for other Earths. As an exoplanet orbits its host star, its gravity tugs on the star, making it wobble. This movement is incredibly small; Earth’s orbit causes the sun to oscillate at just 10 centimeters per second—the walking speed of a tortoise.

Just as the pitch of an ambulance siren rises and falls as it travels towards and away from us, the wavelength of light observed from a wobbling star increases and decreases as the planet traces out its orbit.

Remarkably, Andes will be able to detect this minuscule change in the light’s color. Starlight, while essentially continuous (“white”) from the ultraviolet to the infrared, contains bands where atoms in the outer region of the star absorb specific wavelengths as the light escapes; these bands appear dark in the spectra.

Tiny shifts in the positions of these features—around 1/10,000th of a pixel on the Andes sensor—may, over months and years, reveal the periodic wobbles. This could ultimately help us to find an Earth 2.0.
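The arithmetic behind those numbers is worth making concrete. The non-relativistic Doppler formula gives Δλ/λ = v/c, so a 10-centimeter-per-second wobble observed at 500 nanometers shifts the line by roughly 1.7 × 10⁻⁷ nm. The pixel scale below is an illustrative value chosen so the shift lands near the 1/10,000-pixel figure, not a published Andes specification.

```python
C = 299_792_458.0  # speed of light, m/s


def doppler_shift_nm(wavelength_nm, velocity_ms):
    """Non-relativistic Doppler shift: d_lambda / lambda = v / c."""
    return wavelength_nm * velocity_ms / C


shift_nm = doppler_shift_nm(500.0, 0.10)  # 10 cm/s stellar wobble at 500 nm
pixel_nm = 0.0017                         # illustrative spectrograph pixel scale
fraction_of_pixel = shift_nm / pixel_nm   # on the order of 1e-4 of a pixel
```

The result—a shift of about a ten-thousandth of a pixel—shows why the measurement demands years of data and an ultra-stable wavelength reference.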

At Heriot-Watt University, my team is piloting the development of a laser system known as a frequency comb that will enable Andes to reach such exquisite precision. Like the millimeter ticks on a ruler, the laser will calibrate the Andes spectrograph by providing a spectrum of light structured as thousands of regularly spaced wavelengths.

A spectrograph image from the Southern African Large Telescope. The regularly spaced tick marks are from a laser frequency comb, underneath which are gas emission lines. Image Credit: Rudi Kuhn (SALT) / Derryck Reid (Heriot-Watt University)

This scale will remain constant over decades, mitigating the measurement errors that occur from environmental changes in temperature and pressure.
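A comb’s usefulness comes from the rigid spacing of its lines: every line sits at f_n = f_0 + n·f_rep, where both f_0 and f_rep can be locked to a stable frequency reference. Converting those frequencies to wavelengths yields the “ruler ticks” against which the spectrograph is calibrated. The numbers below (a 10 GHz spacing near 500 nm) are illustrative, not the actual Andes comb parameters.

```python
C = 299_792_458.0  # speed of light, m/s


def comb_wavelengths_nm(f0_hz, frep_hz, n_start, n_count):
    """Wavelengths of comb lines f_n = f0 + n * frep. The lines are
    exactly evenly spaced in frequency, hence very slightly unevenly
    spaced in wavelength."""
    return [1e9 * C / (f0_hz + n * frep_hz)
            for n in range(n_start, n_start + n_count)]


# Five neighboring ticks near 500 nm for a 10 GHz comb
ticks = comb_wavelengths_nm(f0_hz=0.35e9, frep_hz=10e9,
                            n_start=60_000, n_count=5)
# Consecutive ticks land ~0.008 nm apart: a dense, stable wavelength ruler
```

Because each tick’s wavelength follows from locked radio frequencies, the ruler itself does not drift with the spectrograph’s temperature or pressure.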

With the ELT’s construction cost coming in at €1.45 billion, some will question the value of the project. But astronomy has a significance that spans millennia and transcends cultures and national borders. It is only by looking far outside our solar system that we can gain a perspective beyond the here and now.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESO/L. Calçada / Wikipedia


Zombie Cells Have a Weakness. An Experimental Anti-Aging Therapy Exploits It.

Singularity HUB - 7 November, 2023 - 21:00

Senescent cells are biochemical waste factories.

A new study suggests that a way to wipe them out is a medicine already approved for eye problems.

Dubbed “zombie cells,” senescent cells slowly accumulate with age or with cancer treatments. The cells lose their ability to perform normal functions. Instead, they leak a toxic chemical soup into their local environment, increasing inflammation and damaging healthy cells.

Over a decade of research has shown eliminating these cells with genetic engineering or drugs can slow down aging symptoms in mice. It’s no wonder investors have poured billions of dollars into these “senolytic” drugs.

There are already hints of early successes. In one early clinical trial, cleaning out zombie cells with a combination of drugs in humans with age-related lung problems was found to be safe. Another study helped middle-aged and older people maintain blood pressure while running up stairs. But battling senescent cells isn’t just about improving athletic abilities. Many more clinical trials are in the works, including strengthening bone integrity and combating Alzheimer’s.

But to Carlos Anerillas, Myriam Gorospe, and their team at the National Institutes of Health (NIH) in Baltimore, therapies have yet to hit zombie cells where it really hurts.

In a study in Nature Aging, the team pinpointed a weakness in these cells: They constantly release toxic chemicals, like a leaky nose during a cold. Called SASP, for senescence-associated secretory phenotype, this stew of inflammatory molecules contributes to aging.

Lucky for us, this constant release of chemicals comes at a price. Zombie cells use a “factory” inside the cell to package and ship their toxic payload to neighboring cells and nearby tissues. All cells have these factories. But the ones in zombie cells go into overdrive.

The new study nailed down a protein pair that’s essential to the zombie cells’ toxic spew and found an FDA-approved drug that inhibits the process. When given to 22-month-old mice—roughly the human equivalent of 70 years old—they had better kidney, liver, and lung function within just two months of treatment.

The work “stands out,” said Yahyah Aman, an editor at Nature Aging. It’s an “exciting target for new senolytic drug development,” added Ming Xu at UConn Health, who wasn’t involved in the study.

A Molecular Metropolis

Each cell is a bustling city with multiple neighborhoods.

Some house our genetic archives. Others translate those DNA codes into proteins. There are also acid-filled dumpsters and molecular recycling bins to keep each cell clear of waste.

Then there’s the ER. No, not the emergency room, but a fluffy croissant-like structure. Called the endoplasmic reticulum, it’s Grand Central for new proteins. The ER packages proteins and delivers them to internal structures, the cell’s surface, or destinations outside the cell.

These “secretory” packages are powerful regulators that control local cellular functions. Normally, the ER helps cells coordinate their responses with neighboring tissues—say, allowing blood to clot after a scrape or stimulating immune responses to heal the damage.

Senescent cells hijack this process. Instead of productive signaling, they release a toxic soup of chemicals. These cells aren’t born harmful. Rather, they’re transformed by a lifetime of injury—damage to their DNA, for example. Faced with so much damage, normal cells would wither away, allowing healthy new cells to replace them in some tissues, like the skin.

Zombie cells, in contrast, refuse to die. As long as the harm stays below a lethal level, the cells live on, expelling their deadly brew and harming others in the vicinity.

These traits make zombie cells a valuable target for anti-aging therapies. And there have been promising treatments. Most have relied on existing knowledge or ideas about how zombie cells work. Researchers then seek out chemicals in massive drug libraries that might disrupt their function. While useful, this strategy can miss treatment options.

The new study went rogue. Rather than starting out with hypotheses, the team screened the whole human genome to find new vulnerabilities.

A Wild West

In their hunt, the team turned to CRISPR. Famously known as a gene editor, CRISPR is now often used to pinpoint genes and proteins that contribute to cellular functions. Here, the team disrupted every gene in the human genome to pinpoint those that eliminated zombie cells.
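The readout of a genome-wide screen like this is typically a count table: each guide RNA is sequenced before and after the cells face selective pressure, and guides that vanish from the surviving pool point to genes the cells cannot live without. The toy calculation below shows the usual normalized log2 fold-change; it is a generic sketch, not the study’s actual pipeline, and the guide names are made up.

```python
from math import log2


def guide_log2fc(before, after, pseudocount=1.0):
    """Per-guide log2 fold-change of normalized read counts after vs.
    before selection; a pseudocount keeps zero counts finite."""
    total_b = sum(before.values()) + pseudocount * len(before)
    total_a = sum(after.values()) + pseudocount * len(after)
    return {
        guide: log2(((after.get(guide, 0) + pseudocount) / total_a)
                    / ((count + pseudocount) / total_b))
        for guide, count in before.items()
    }


before = {"gRNA_1": 100, "gRNA_2": 100, "gRNA_3": 100}
after = {"gRNA_1": 5, "gRNA_2": 110, "gRNA_3": 95}
fc = guide_log2fc(before, after)
# gRNA_1 is strongly depleted: its target gene is a candidate
# vulnerability of the surviving cells
```

In a senolytic-target screen, such strongly depleted guides flag the genes whose loss drives zombie cells to their death.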

Their work paid off. The screen found a protein pair critical for senescent cell survival. The team next looked for an FDA-approved drug to disrupt the pair. They found what they were looking for in verteporfin, a drug approved to treat eye blood vessel disease.

In several zombie cell cultures carrying the protein pair, the drug drove senescent cells into apoptosis—that is, the “gentle falling of the leaves,” a sort of cell death that does no harm to surrounding cells.

Digging deeper, the drug seemed to directly target the zombie cells’ endoplasmic reticulum—their shipping center. Cells treated with the drug couldn’t sustain the delicate multi-layered structure, and it subsequently shriveled into a shape like a wet, crumpled paper towel.

“A shrunken ER triggered a metabolic crisis” in zombie cells, explained Anerillas and Gorospe. It “culminated with their death.”

Ageless Mice

As a proof of concept, the team injected elderly mice—roughly the age of a 70-year-old human—with verteporfin once a month for two months.

In just a week, mice treated with verteporfin showed fewer molecular signs of senescence in their kidneys, livers, and lungs. Their fur was also more luxurious than that of control mice that didn’t receive the drug.

As we age, immune cells often enter the lungs and cause damage. Verteporfin nixed this infiltration and reduced lung scarring in mice—which is often linked to decreased breathing capacity. Similarly, according to blood tests, the drug also helped restore function in the mice’s kidneys and liver.

Decreased numbers of senescent cells dampened inflammatory signals, which could explain the rejuvenating effects, explained the team. Verteporfin also stopped a “guardian” protein that protects senescent cells from death, further triggering their demise.

Tapping into zombie cells’ unique vulnerabilities is a new strategy in the development of senolytics. There’s far more to explore. The endoplasmic reticulum isn’t the only cellular component in the biological waste factory. Blocking other components that generate senescent cells’ poisons could also help remove the cells themselves.

It’s a promising alternative to existing methods for wiping out senescent cells. The strategy could “greatly expand the catalog of senolytic therapies,” the team wrote.

Image Credit: A HeLa cell undergoing apoptosis. Tom Deerinck / NIH / Flickr
