Transhumanism

Poetry by History’s Greatest Poets or AI? People Can’t Tell the Difference—and Even Prefer the Latter. What Gives?

Singularity HUB - 19 November, 2024 - 16:00

Here are some lines Sylvia Plath never wrote:

The air is thick with tension,
My mind is a tangled mess,
The weight of my emotions
Is heavy on my chest.

This apparently Plath-like verse was produced by GPT-3.5 in response to the prompt “write a short poem in the style of Sylvia Plath.”

The stanza hits the key points readers may expect of Plath’s poetry, and perhaps a poem more generally. It suggests a sense of despair as the writer struggles with internal demons. “Mess” and “chest” are a near-rhyme, which reassures us that we are in the realm of poetry.

According to a new paper in Scientific Reports, non-expert readers of poetry cannot distinguish poetry written by AI from that written by canonical poets. Moreover, general readers tend to prefer poetry written by AI—at least until they are told it is written by a machine.

In the study, AI was used to generate poetry “in the style of” 10 poets: Geoffrey Chaucer, William Shakespeare, Samuel Butler, Lord Byron, Walt Whitman, Emily Dickinson, TS Eliot, Allen Ginsberg, Sylvia Plath, and Dorothea Lasky.

Participants were presented with 10 poems in random order, five from a real poet and five AI imitations. They were then asked whether they thought each poem was AI or human, rating their confidence on a scale of 1 to 100.

A second group of participants was exposed to three different scenarios. Some were told that all the poems they were given were human. Some were told they were reading only AI poems. Some were not told anything.

They were then presented with five human and five AI poems and asked to rate them on a seven-point scale, from extremely bad to extremely good. The participants who were told nothing were also asked to guess whether each poem was human or AI.
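The first group's task is, in effect, a binary classification exercise. As a rough sketch of how such responses are scored (the data below are invented for illustration, not the study's), discrimination accuracy is just the fraction of correct guesses, compared against the 50 percent chance level:

```python
# Each tuple pairs a participant's guess ("ai" or "human") with the truth.
# These responses are invented; the study reported near-chance accuracy.
responses = [
    ("ai", "human"), ("human", "ai"), ("ai", "ai"), ("human", "human"),
    ("human", "ai"), ("ai", "human"), ("human", "human"), ("ai", "ai"),
    ("human", "ai"), ("ai", "human"),
]

correct = sum(guess == truth for guess, truth in responses)
accuracy = correct / len(responses)
print(f"accuracy: {accuracy:.2f} (chance = 0.50)")
```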

The researchers found that AI poems scored higher than their human-written counterparts in attributes such as “creativity,” “atmosphere,” and “emotional quality.”

The AI “Plath” poem quoted above is one of those included in the study, set against several she actually wrote.

A Sign of Quality?

As a lecturer in English, I am not surprised by these outcomes. Poetry is the literary form that my students find most unfamiliar and difficult. I am sure this holds true of wider society as well.

While most of us have been taught poetry at some point, likely in high school, our reading does not tend to go much beyond that. This is despite the ubiquity of poetry. We see it every day: circulated on Instagram, plastered on coffee cups, and printed in greeting cards.

The researchers suggest that “by many metrics, specialized AI models are able to produce high-quality poetry.” But they don’t interrogate what we actually mean by “high-quality.”

In my view, the results of the study are less testaments to the “quality” of machine poetry than to the wider difficulty of giving life to poetry. It takes reading and rereading to experience what literary critic Derek Attridge has called the “event” of literature, where “new possibilities of meaning and feeling” open within us. In the most significant kinds of literary experiences, “we feel pulled along by the work as we push ourselves through it”.

Attridge quotes philosopher Walter Benjamin to make this point: Literature “is not statement or the imparting of information.”

Philosopher Walter Benjamin argued that literature is not simply the imparting of information. Image Credit: Public domain, via Wikimedia Commons

Yet pushing ourselves through remains as difficult as ever—perhaps more so in a world where we expect instant answers. Participants favored poems that were easier to interpret and understand.

When readers say they prefer AI poetry, then, they would seem to be registering their frustration when faced with writing that does not yield to their attention. If we do not know how to begin with poems, we end up relying on conventional “poetic” signs to make determinations about quality and preference.

This is of course the realm of GPT, which writes formally adequate sonnets in seconds. The large language models used in AI are success-orientated machines that aim to satisfy general taste, and they are effective at doing so. The machines give us the poems we think we want: Ones that tell us things.

How Poems Think

The work of teaching is to help students attune themselves to how poems think, poem by poem and poet by poet, so they can gain access to poetry’s specific intelligence. In my introductory course, I take about an hour to work through Sylvia Plath’s “Morning Song.” I have spent 10 minutes or more on the opening line: “Love set you going like a fat gold watch.”

How might a “watch” be connected to “set you going”? How can love set something going? What does a “fat gold watch” mean to you—and how is it different from a slim silver one? Why “set you going” rather than “led to your birth”? And what does all this mean in a poem about having a baby, and all the ambivalent feelings this may produce in a mother?

In one of the real Plath poems that was included in the survey, “Winter Landscape, With Rooks,” we observe how her mental atmosphere unfurls around the waterways of the Cambridgeshire Fens in February:

Water in the millrace, through a sluice of stone,
plunges headlong into that black pond
where, absurd and out-of-season, a single swan
floats chaste as snow, taunting the clouded mind
which hungers to haul the white reflection down.

How different is this to GPT’s Plath poem? The achievement of the opening of “Winter Landscape, With Rooks” is how it intricately explores the connection between mental events and place. Given the wider interest of the poem in emotional states, its details seem to convey the tumble of life’s events through our minds.

Our minds are turned by life just as the mill is turned by water; these experiences and mental processes accumulate in a scarcely understood “black pond.”

Intriguingly, the poet finds that this metaphor, well constructed though it may be, does not quite work. This is not because of a failure of language, but because of the landscape she is trying to turn into art, which is refusing to submit to her emotional atmosphere. Despite everything she feels, a swan floats on serenely—even if she “hungers” to haul its “white reflection down.”

I mention these lines because they turn around the Plath-like poem of GPT-3.5. They remind us of the unexpected outcomes of giving life to poems. Plath acknowledges not just the weight of her despair, but the absurd figure she may be within a landscape she wants to reflect her sadness.

She compares herself to the bird that gives the poem its title:

feathered dark in thought, I stalk like a rook,
brooding as the winter night comes on.

These lines are unlikely to register highly in the study’s terms of literary response—“beautiful,” “inspiring,” “lyrical,” “meaningful,” and so on. But there is a kind of insight to them. Plath is the source of her torment, “feathered” as she is with her “dark thoughts.” She is “brooding,” trying to make the world into her imaginative vision.

Sylvia Plath. Image Credit: RBainbridge2000, via Wikimedia Commons, CC BY

The authors of the study are both right and wrong when they write that AI can “produce high-quality poetry.” The preference the study reveals for AI poetry over that written by humans does not suggest that machine poems are of a higher quality. The AI models can produce poems that rate well on certain “metrics.” But the event of reading poetry is ultimately not one in which we arrive at standardized criteria or outcomes.

Instead, as we engage in imaginative tussles with poems, both we and the poem are newly born. So the outcome of the research is that we have a highly specified and well thought-out examination of how people who know little about poetry respond to poems. But it fails to explore how poetry can be enlivened by meaningful shared encounters.

Spending time with poems of any kind, attending to their intelligence and the acts of sympathy and speculation required to confront their challenges, is as difficult as ever. As the Plath of GPT-3.5 puts it:

My mind is a tangled mess,
[…]
I try to grasp at something solid.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Category: Transhumanism

A ChatGPT-Like AI Can Now Design Whole New Genomes From Scratch

Singularity HUB - 18 November, 2024 - 23:59

All life on Earth is written with four DNA “letters.” An AI just used those letters to dream up a completely new genome from scratch.

Called Evo, the AI was inspired by the large language models, or LLMs, underlying popular chatbots such as OpenAI’s ChatGPT and Anthropic’s Claude. These models have taken the world by storm for their prowess at generating human-like responses. From simple tasks, such as defining an obtuse word, to summarizing scientific papers or spewing verses fit for a rap battle, LLMs have entered our everyday lives.

If LLMs can master written languages—could they do the same for the language of life?

This month, a team from Stanford University and the Arc Institute put the theory to the test. Rather than training Evo on content scraped from the internet, they trained the AI on nearly three million genomes—amounting to billions of lines of genetic code—from various microbes and bacteria-infecting viruses.

Evo was better than previous AI models at predicting how mutations to genetic material—DNA and RNA—could alter function. The AI also got creative, dreaming up several new components for the gene editing tool, CRISPR. Even more impressively, the AI generated a genome more than a megabase long—roughly the size of some bacterial genomes.

“Overall, Evo represents a genomic foundation model,” wrote Christina Theodoris at the Gladstone Institute in San Francisco, who was not involved in the work.

Having learned the genomic vocabulary, algorithms like Evo could help scientists probe evolution, decipher our cells’ inner workings, tackle biological mysteries, and fast-track synthetic biology by designing complex new biomolecules.

The DNA Multiverse

Compared to the English alphabet’s 26 letters, DNA only has A, T, C, and G. These ‘letters’ are shorthand for the four molecules—adenine (A), thymine (T), cytosine (C), and guanine (G)—that, combined, spell out our genes. If LLMs can conquer languages and generate new prose, rewriting the genetic handbook with only four letters should be a piece of cake.

Not quite. Human language is organized into words, phrases, and punctuated into sentences to convey information. DNA, in contrast, is more continuous, and genetic components are complex. The same DNA letters carry “parallel threads of information,” wrote Theodoris.

The most familiar is DNA’s role as genetic carrier. A specific combination of three DNA letters, called a codon, encodes a protein building block. These building blocks are strung together into the proteins that make up our tissues and organs and direct the inner workings of our cells.
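The codon-to-protein mapping can be illustrated with a toy translation function (the table below covers only a few codons for brevity; the full genetic code has 64 entries):

```python
# Minimal codon table for illustration; real translation uses all 64 codons.
CODON_TABLE = {
    "ATG": "Met",  # also the start codon
    "TTT": "Phe",
    "GGC": "Gly",
    "TAA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a DNA string three letters at a time, mapping codons to amino acids."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```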

But the same genetic sequence, depending on its structure, can also recruit the molecules needed to turn codons into proteins. And sometimes, the same DNA letters can turn one gene into different proteins depending on a cell’s health and environment or even turn the gene off.

In other words, DNA letters contain a wealth of information about the genome’s complexity. And any changes can jeopardize a protein’s function, resulting in genetic disease and other health problems. This makes it critical for AI to work at the resolution of single DNA letters.

But it’s hard for AI to capture multiple threads of information on a large scale by analyzing genetic letters alone, partially due to high computational costs. Like ancient Roman scripts, DNA is a continuum of letters without clear punctuation. So, it could be necessary to “read” whole strands to gain an overall picture of their structure and function—that is, to decipher meaning.

Previous attempts have “bundled” DNA letters into blocks—a bit like making artificial words. While easier to process, these methods disrupt the continuity of DNA, resulting in the retention of “some threads of information at the expense of others,” wrote Theodoris.
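The trade-off between bundled blocks and single-letter resolution can be sketched in a few lines (a simplified illustration, not Evo's actual tokenizer):

```python
def kmer_tokens(seq: str, k: int = 3) -> list[str]:
    """Bundle DNA into fixed-size blocks: fewer tokens to process, but a
    single-letter mutation changes a whole block rather than one token."""
    return [seq[i:i + k] for i in range(0, len(seq), k)]

def char_tokens(seq: str) -> list[str]:
    """Single-nucleotide resolution: a longer input, but every letter
    remains individually addressable by the model."""
    return list(seq)

seq = "ATGCGT"
print(kmer_tokens(seq))   # ['ATG', 'CGT'] — 2 tokens
print(char_tokens(seq))   # ['A', 'T', 'G', 'C', 'G', 'T'] — 6 tokens
```

Working at single-letter resolution triples the sequence length in this example, which is why efficient long-context architectures matter.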

Building Foundations

Evo addressed these problems head on. Its designers aimed to preserve all threads of information, while operating at single-DNA-letter resolution with lower computational costs.

The trick was to give Evo a broader context for any given chunk of the genome by leveraging a specific type of AI setup used in a family of algorithms called StripedHyena. Compared to GPT-4 and other AI models, StripedHyena is designed to be faster and more capable of processing large inputs—for example, long lengths of DNA. This broadened Evo’s so-called “search window,” allowing it to better find patterns across a larger genetic landscape.

The researchers then trained the AI on a database of nearly three million genomes from bacteria and viruses that infect bacteria, known as phages. It also learned from plasmids, circular bits of DNA often found in bacteria that transmit genetic information between microbes, spurring evolution and perpetuating antibiotic resistance.

Once trained, the team pitted Evo against other AI models to predict how mutations in a given genetic sequence might impact the sequence’s function, such as coding for proteins. Even though it was never told which genetic letters form codons, Evo outperformed an AI model explicitly trained to recognize protein-coding DNA letters on the task.

Remarkably, Evo also predicted the effect of mutations on a wide variety of RNA molecules—for example, those regulating gene expression, shuttling protein building blocks to the cell’s protein-making factory, and acting as enzymes to fine-tune protein function.
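The general recipe behind this kind of zero-shot mutation-effect prediction is to compare a model's likelihood of the mutant sequence against the wild type. The sketch below uses a dummy scoring function that merely rewards one fixed motif; Evo's actual scores come from a trained language model:

```python
# Stand-in for a trained model's log-probability of a sequence. This toy
# version just counts occurrences of the "ATG" motif for illustration.
def log_likelihood(seq: str) -> float:
    return sum(1.0 if seq[i:i + 3] == "ATG" else 0.0 for i in range(len(seq) - 2))

def mutation_effect(wild_type: str, mutant: str) -> float:
    """Positive = mutant scored more plausible than wild type; negative = less."""
    return log_likelihood(mutant) - log_likelihood(wild_type)

# Disrupting the motif lowers the score, flagging a likely harmful mutation.
print(mutation_effect("ATGCCC", "TTGCCC"))
```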

Evo seemed to have gained a “fundamental understanding of DNA grammar,” wrote Theodoris, making it a perfect tool to create “meaningful” new genetic code.

To test this, the team used the AI to design new versions of the gene editing tool CRISPR. The task is especially difficult as the system contains two elements that work together—a guide RNA molecule and a pair of protein “scissors” called Cas. Evo generated millions of potential Cas proteins and their accompanying guide RNA. The team picked 11 of the most promising combinations, synthesized them in the lab, and tested their activity in test tubes.

One stood out. The AI-designed protein, a variant of Cas9, cleaved its DNA target when paired with its guide RNA partner. These designer biomolecules represent the “first examples” of codesign between proteins and DNA or RNA with a language model, wrote the team.

The team also asked Evo to generate a DNA sequence similar in length to some bacterial genomes and compared the results to natural genomes. The designer genome contained some essential genes for cell survival, but with myriad unnatural characteristics preventing it from being functional. This suggests the AI can only make a “blurry image” of a genome, one that contains key elements, but lacks finer-grained details, wrote the team.

Like other LLMs, Evo sometimes “hallucinates,” spewing CRISPR systems with no chance of working. Despite the problems, the AI suggests future LLMs could predict and generate genomes on a broader scale. The tool could also help scientists examine long-range genetic interactions in microbes and phages, potentially sparking insights into how we might rewire their genomes to produce biofuels, plastic-eating bugs, or medicines.

It’s yet unclear whether Evo could decipher or generate far longer genomes, like those in plants, animals, or humans. If the model can scale, however, it “would have tremendous diagnostic and therapeutic implications for disease,” wrote Theodoris.

Image Credit: Warren Umoh on Unsplash


This Week’s Awesome Tech Stories From Around the Web (Through November 16)

Singularity HUB - 16 November, 2024 - 19:35
COMPUTING

IBM Boosts the Amount of Computation You Can Get Done on Quantum Hardware
John Timmer | Ars Technica
“There’s a general consensus that we won’t be able to consistently perform sophisticated quantum calculations without the development of error-corrected quantum computing, which is unlikely to arrive until the end of the decade. It’s still an open question, however, whether we could perform limited but useful calculations at an earlier point. IBM is one of the companies that’s betting the answer is yes, and on Wednesday, it announced a series of developments aimed at making that possible.”

ARTIFICIAL INTELLIGENCE

OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows
Stephanie Palazzolo, Erin Woo, and Amir Efrati | The Information
“The Orion situation could test a core assumption of the AI field, known as scaling laws: that LLMs would continue to improve at the same pace as long as they had more data to learn from and additional computing power to facilitate that training process. In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law.”

BIOTECH

The First CRISPR Treatment Is Making Its Way to Patients
Emily Mullin | Wired
“Vertex, the pharmaceutical company that markets Casgevy, announced in a November 5 earnings call that the first person to receive Casgevy outside of a clinical trial was dosed in the third quarter of this year. …When Wired followed up with Vertex via email, spokesperson Eleanor Celeste declined to provide the exact number of patients that have received Casgevy. However, the company says 40 patients have undergone cell collections in anticipation of receiving the treatment, up from 20 patients last quarter.”

AUTOMATION

AI Is Now Designing Chips for AI
Kristen Houser | Big Think
“It’s 2028, and your tech startup has an idea that could revolutionize the industry—but you need a custom designed microchip to bring the product to market. Five years ago, designing that chip would’ve cost more than your whole company is worth, but your team is now able to do it at a fraction of the price and in a fraction of the time—all thanks to AI, fittingly being run on chips like these.”

ROBOTICS

Now Anyone in LA Can Hail a Waymo Robotaxi
Kirsten Korosec | TechCrunch
“Waymo has opened its robotaxi service to everyone in Los Angeles, sunsetting a waitlist that had grown to 300,000 people. The Alphabet-backed company said starting Tuesday anyone can download the Waymo One app to hail a ride in its service area, which is now about 80 square miles in Los Angeles County.”

ARTIFICIAL INTELLIGENCE

The First Entirely AI-Generated Video Game Is Insanely Weird and Fun
Will Knight | Wired
“Minecraft remains remarkably popular a decade or so after it was first released, thanks to a unique mix of quirky gameplay and open world building possibilities. A knock-off called Oasis, released last month, captures much of the original game’s flavor with a remarkable and weird twist. The entire game is generated not by a game engine and hand-coded rules, but by an AI model that dreams up each frame.”

ENERGY

Nuclear Power Was Once Shunned at Climate Talks. Now, It’s a Rising Star.
Brad Plumer | The New York Times
“At last year’s climate conference in the United Arab Emirates, 22 countries pledged, for the first time, to triple the world’s use of nuclear power by midcentury to help curb global warming. At this year’s summit in Azerbaijan, six more countries signed the pledge. ‘It’s a whole different dynamic today,’ said Dr. Sama Bilbao y León, who now leads the World Nuclear Association, an industry trade group. ‘A lot more people are open to talking about nuclear power as a solution.'”

HEALTH

The Next Omics? Tracking a Lifetime of Exposures to Better Understand Disease
Lindzi Wessel | Knowable Magazine
“Of the millions of substances people encounter daily, health researchers have focused on only a few hundred. Those in the emerging field of exposomics want to change that. …In homes, on buildings, from satellites and even in apps on the phone in your pocket, tools to monitor the environment are on the rise. At the intersection of public health and toxicology, these tools are fueling a new movement in exposure science. It’s called the exposome and it represents the sum of all environmental exposures over a lifetime.”

SPACE

Buckle Up: SpaceX Aims for Rapid-Fire Starship Launches in 2025
Passant Rabie | Gizmodo
“SpaceX has big plans for its Starship rocket. After a groundbreaking test flight, in which the landing tower caught the booster, the company’s founder and CEO Elon Musk wants to see the megarocket fly up to 25 times next year, working its way up to a launch rate of 100 flights per year, and eventually a Starship launching on a daily basis.”

TECH

Are AI Clones the Future of Dating? I Tried Them for Myself.
Eli Tan | The New York Times
“As chatbots like ChatGPT improve, their use in our personal and even romantic lives is becoming more common. So much so, some executives in the dating app industry have begun pitching a future in which people can create AI clones of themselves that date other clones and relay the results back to their human counterparts.”

GENETICS

Genetic Discrimination Is Coming for Us All
Kristen V. Brown | The Atlantic
“For decades, researchers have feared that people might be targeted over their DNA, but they weren’t sure how often it was happening. Now at least a handful of Americans are experiencing what they argue is a form of discrimination. And as more people get their genomes sequenced—and researchers learn to glean even more information from the results—a growing number of people may find themselves similarly targeted.”

Image Credit: Evgeni Tcherkasski on Unsplash


MIT’s New Robot Dog Learned to Walk and Climb in a Simulation Whipped Up by Generative AI

Singularity HUB - 16 November, 2024 - 00:09

A big challenge when training AI models to control robots is gathering enough realistic data. Now, researchers at MIT have shown they can train a robot dog using 100 percent synthetic data.

Traditionally, robots have been hand-coded to perform particular tasks, but this approach results in brittle systems that struggle to cope with the uncertainty of the real world. Machine learning approaches that train robots on real-world examples promise to create more flexible machines, but gathering enough training data is a significant challenge.

One potential workaround is to train robots using computer simulations of the real world, which makes it far simpler to set up novel tasks or environments for them. But this approach is bedeviled by the “sim-to-real gap”—these virtual environments are still poor replicas of the real world and skills learned inside them often don’t translate.

Now, MIT CSAIL researchers have found a way to combine simulations and generative AI to enable a robot, trained on zero real-world data, to tackle a host of challenging locomotion tasks in the physical world.

“One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments,” Shuran Song from Stanford University, who wasn’t involved in the research, said in a press release from MIT.

“The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks.”

Leading simulators used to train robots today can realistically reproduce the kind of physics robots are likely to encounter. But they are not so good at recreating the diverse environments, textures, and lighting conditions found in the real world. This means robots relying on visual perception often struggle in less controlled environments.

To get around this, the MIT researchers used text-to-image generators to create realistic scenes and combined these with a popular simulator called MuJoCo to map geometric and physics data onto the images. To increase the diversity of images, the team also used ChatGPT to create thousands of prompts for the image generator covering a huge range of environments.

After generating these realistic environmental images, the researchers converted them into short videos from a robot’s perspective using another system they developed called Dreams in Motion. This computes how each pixel in the image would shift as the robot moves through an environment, creating multiple frames from a single image.
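The per-pixel shift idea can be sketched with a toy horizontal warp (a deliberate simplification; the real system computes geometrically consistent per-pixel motion from the robot's movement and the scene's depth):

```python
import numpy as np

def shift_frame(image: np.ndarray, dx: int) -> np.ndarray:
    """Produce a new frame by shifting every pixel dx columns to the left,
    roughly simulating the camera translating to the right."""
    shifted = np.zeros_like(image)
    if dx > 0:
        shifted[:, :-dx] = image[:, dx:]
    else:
        shifted = image.copy()
    return shifted

frame0 = np.arange(16).reshape(4, 4)   # a tiny 4x4 "image"
frame1 = shift_frame(frame0, 1)        # second frame of a two-frame "video"
print(frame1)
```

A real pipeline would shift each pixel by a different amount depending on its distance from the camera, which is what makes the generated frames consistent with the robot's motion.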

The researchers dubbed this data-generation pipeline LucidSim and used it to train an AI model to control a quadruped robot using just visual input. The robot learned a series of locomotion tasks, including going up and down stairs, climbing boxes, and chasing a soccer ball.

The training process was split into two parts. First, the team trained their model on data generated by an expert AI system with access to detailed terrain information as it attempted the same tasks. This gave the model enough understanding of the tasks to attempt them in a simulation based on the data from LucidSim, which generated more data. They then re-trained the model on the combined data to create the final robotic control policy.
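Schematically, that two-stage process looks like this (function and variable names are placeholders for illustration, not MIT's actual code):

```python
# Stage 1: behavior-clone a privileged expert that sees detailed terrain data.
# Stage 2: run the stage-1 policy in LucidSim to collect vision-only rollouts,
# then retrain on the combined dataset.
def train(dataset: list) -> dict:
    """Stand-in for policy training: returns a 'policy' that records its data."""
    return {"trained_on": list(dataset)}

expert_rollouts = ["expert_stairs", "expert_boxes", "expert_ball"]
policy_v1 = train(expert_rollouts)

lucidsim_rollouts = ["gen_scene_1", "gen_scene_2"]  # collected by running policy_v1
final_policy = train(expert_rollouts + lucidsim_rollouts)

print(len(final_policy["trained_on"]))  # 5
```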

The approach matched or outperformed the expert AI system on four out of the five tasks in real-world tests, despite relying on just visual input. And on all the tasks, it significantly outperformed a model trained using “domain randomization”—a leading simulation approach that increases data diversity by applying random colors and patterns to objects in the environment.

The researchers told MIT Technology Review their next goal is to train a humanoid robot on purely synthetic data generated by LucidSim. They also hope to use the approach to improve the training of robotic arms on tasks requiring dexterity.

Given the insatiable appetite for robot training data, methods like this that can provide high-quality synthetic alternatives are likely to become increasingly important in the coming years.

Image Credit: MIT CSAIL


Sweet CRISPR Tomatoes May Be Coming to a Supermarket Near You

Singularity HUB - 14 November, 2024 - 23:44

When I was a young kid, our neighborhood didn’t have any grocery stores. The only place to buy fruits and vegetables was at our local farmer’s market. My mom would pick out the freshest tomatoes and sauté them with eggs into a simple dish that became my comfort food.

The tomatoes were hideous to look at—small, gnarled, miscolored, and nothing like the perfectly plump and bright beefsteak or Roma tomatoes that eventually flooded supermarkets. But they were oh-so-tasty, with a perfect ratio of tart and sweet flavors that burst in my mouth.

These days, when I ask for the same dish, my mom will always say, “Tomatoes just don’t taste the same anymore.”

She’s not alone. Many people have noticed that today’s produce is watery, waxy, and lacking in flavor—despite looking ripe and inviting. One reason is that it was bred that way. Today’s crops are often genetically selected to prioritize appearance, size, shelf life, and transportability. But these perks can sacrifice taste—most often, in the form of sugar. Even broccoli, known for its bitterness, has variants that accumulate sugar inside their stems for a slightly sweeter taste.

The problem is that larger fruits are often less sweet, explain Sanwen Huang and colleagues in Shenzhen, China. The key is to break that correlation. His team may have found a way using a globally popular crop—the tomato—as an example.

By comparing wild and domesticated tomatoes, the team hunted down a set of genes that put the brakes on sugar production. Inhibiting those genes using CRISPR-Cas9, the popular gene-editing tool, bumped up the fruit’s sugar content by 30 percent—enough for a consumer panel to find a noticeable increase in sweetness—without sacrificing size or yields.

Seeds from the edited plants germinated as usual, allowing the edits to pass on to the next generations.

The study isn’t just about satisfying our sweet tooth. Crops, not just tomatoes, with higher sugar content also contain more calories, which are necessary if we’re to meet the needs of a growing global population. The analysis pipeline established in the study is set to identify other genetic trade-offs between size and nutrition, with the goal of rapidly engineering better crops.

The work “represents an exciting step forward…for crop improvement worldwide,” wrote Amy Lanctot and Patrick Shih at the University of California, Berkeley, who were not involved in the study.

Hot Links

For eons, humanity has cultivated crops to enhance desirable aspects—for example, better yields, higher nutrition, or looks.

Tomatoes are a perfect example. The fruit “is the most valuable vegetable crop, worldwide, and makes substantial overall health and nutritional contributions to the human diet,” wrote the team. Its wild versions range in size from cherries to peas—far smaller than most current variants found in grocery stores. Flavor comes from two types of sugars packed in their solid bits.

After thousands of years of domestication, sugars remain the key ingredient in better-tasting tomatoes. But in recent decades, breeders mostly prioritized increasing fruit size. The result is tomatoes that are easily sliced for sandwiches, crushed for canning, or further processed into sauces or pastes. Compared to their wild ancestors, today’s cultivated tomatoes are roughly 10 to 100 times larger, making them far more economical.

But these improvements come at a cost. Multiple studies have found that as size goes up, sugar levels and flavor tank. A similar trend has been found in other large farmed fruits.

Ever since, scientists have tried teasing out the tomato’s inner workings—especially the genes that produce sugar—to restore its taste and nutritional value. One study in 2017 combined genomic analysis of nearly 400 varieties of tomatoes with results from a human taste panel to home in on a slew of metabolic chemicals that made the fruit taste better. A year later, Huang’s team, which led the new study, analyzed the genetic makeup and cell function of hundreds of tomato types. Domestication was associated with several large changes in the plant’s genome—but the team didn’t know how each genetic mutation altered the fruit’s metabolism.

It’s tough to link a gene to a trait. Our genes, as DNA strands, are tightly wound into mostly X-shaped chromosomes. Like braided balls of yarn, these 3D structures bring genes normally separated on a linear strand into close proximity. This means nearby, or “linked,” genes often turn on or off together.

“Genetic linkage makes it difficult to alter one gene without affecting the other,” wrote Lanctot and Shih.

Fast Track Evolution

The new study used two technologies to overcome the problem.

The first was cheaper genetic sequencing. By scanning through genetic variations between domesticated and wild tomatoes, the team pinpointed six tomato genes likely responsible for the fruit’s sweetness.
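Such a scan can be caricatured in a few lines (the variant names and allele frequencies below are invented; real pipelines score millions of variants with proper statistical tests):

```python
# Flag variants whose allele frequency differs sharply between wild and
# domesticated tomato groups—a crude stand-in for a genome-wide scan.
wild = {"snp_1": 0.90, "snp_2": 0.50, "snp_3": 0.10}
domesticated = {"snp_1": 0.10, "snp_2": 0.55, "snp_3": 0.15}

candidates = [snp for snp in wild if abs(wild[snp] - domesticated[snp]) > 0.5]
print(candidates)  # candidate sweetness-related variants to test with CRISPR
```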

One gene especially caught their eye. It was turned off in sweeter tomato species, putting the brakes on the plants’ ability to accumulate sugar. Using the gene-editing tool CRISPR-Cas9, the team mutated the gene so it could no longer function and grew the edited species—along with normal ones—under the same conditions in a garden.

The Sweet Spot

Roughly 100 volunteers tried the edited and normal tomatoes in a blind trial. The CRISPRed tomatoes won in a landslide for their perceived sweetness.

The study isn’t just about a better tomato. “This research demonstrates the value hidden in the genomes of crop species varieties and their wild relatives,” wrote Lanctot and Shih.

Domestication, while boosting a fruit’s yield or size, often decreases a species’ genetic diversity because selected crops eventually share mostly the same genetic blueprint. Some crops, such as bananas, can’t reproduce on their own and are extremely vulnerable to fungi. Analyzing genes related to these traits could help form a defense strategy.

Conservation and taste aside, scientists have also tried to endow crops with more exotic traits. In 2021, Sanatech Seed, a company based in Japan, engineered tomatoes using CRISPR-Cas9 to increase the amount of a chemical that dampens neural transmission. According to the company, the tomatoes can lower blood pressure and help people relax. The fruit is already on the market following regulatory approval in Japan.

Studies that directly link a gene to a trait in plants are still extremely rare. Thanks to cheaper and faster DNA sequencing technologies, and increasingly precise CRISPR tools, it’s becoming easier to test these connections.

“The more researchers understand about the genetic pathways underlying these trade-offs, the more they can take advantage of modern genome-editing tools to attempt to disentangle them to boost crucial agricultural traits,” wrote Lanctot and Shih.

Image Credit: Thomas Martinsen on Unsplash

Kategorie: Transhumanismus

Could We Ever Decipher an Alien Language? Uncovering How AI Communicates May Be Key

Singularity HUB - 12 November, 2024 - 23:14

In the 2016 science fiction movie Arrival, a linguist is faced with the daunting task of deciphering an alien language consisting of palindromic phrases, which read the same backwards as they do forwards, written with circular symbols. As she discovers various clues, different nations around the world interpret the messages differently—with some assuming they convey a threat.

If humanity ended up in such a situation today, our best bet may be to turn to research uncovering how artificial intelligence develops languages.

But what exactly defines a language? Most of us use at least one to communicate with people around us, but how did it come about? Linguists have been pondering this very question for decades, yet there is no easy way to find out how language evolved.

Language is ephemeral; it leaves no examinable trace in the fossil record. Unlike bones, we can’t dig up ancient languages to study how they developed over time.

While we may be unable to study the true evolution of human language, perhaps a simulation could provide some insights. That’s where AI comes in, through a fascinating field of research called emergent communication, which I have spent the last three years studying.

To simulate how language may evolve, we give AI agents simple tasks that require communication, like a game where one robot must guide another to a specific location on a grid without showing it a map. We provide (almost) no restrictions on what they can say or how—we simply give them the task and let them solve it however they want.

Because solving these tasks requires the agents to communicate with each other, we can study how their communication evolves over time to get an idea of how language might evolve.
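The core dynamic can be illustrated with a minimal Lewis-style signaling game. This is a toy sketch under simplifying assumptions (two agents, a handful of states, simple reinforcement learning)—not the architecture used in actual emergent-communication experiments, where the agents are neural networks:

```python
import random

random.seed(0)

N_STATES = 3    # situations the "speaker" agent can observe
N_SIGNALS = 3   # arbitrary symbols it may emit
N_ROUNDS = 20000

# Roth-Erev style reinforcement: weights start uniform and grow with reward.
sender_w = [[1.0] * N_SIGNALS for _ in range(N_STATES)]    # state -> signal
receiver_w = [[1.0] * N_STATES for _ in range(N_SIGNALS)]  # signal -> guess

def sample(weights):
    """Pick an index with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

for _ in range(N_ROUNDS):
    state = random.randrange(N_STATES)
    signal = sample(sender_w[state])      # speaker picks a symbol
    guess = sample(receiver_w[signal])    # listener interprets it
    if guess == state:                    # shared success reinforces both
        sender_w[state][signal] += 1.0
        receiver_w[signal][state] += 1.0

# After training, each state tends to get a dedicated, if arbitrary, symbol.
lexicon = {s: max(range(N_SIGNALS), key=lambda k: sender_w[s][k])
           for s in range(N_STATES)}
print("emergent lexicon (state -> signal):", lexicon)

# Evaluate: how often does the listener now recover the speaker's state?
trials = 2000
correct = sum(
    sample(receiver_w[sample(sender_w[random.randrange(N_STATES)])]) == s
    for s in [random.randrange(N_STATES) for _ in range(trials)]
) if False else 0
correct = 0
for _ in range(trials):
    state = random.randrange(N_STATES)
    if sample(receiver_w[sample(sender_w[state])]) == state:
        correct += 1
accuracy = correct / trials
print(f"communication accuracy: {accuracy:.2f} (chance: {1/N_STATES:.2f})")
```

Nothing tells the agents which symbol should mean what; a consistent code emerges simply because signals that happened to work are more likely to be reused.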

Similar experiments have been done with humans. Imagine you, an English speaker, are paired with a non-English speaker. Your task is to instruct your partner to pick up a green cube from an assortment of objects on a table.

You might try to gesture a cube shape with your hands and point at grass outside the window to indicate the color green. Over time, you’d develop a sort of proto-language together. Maybe you’d create specific gestures or symbols for “cube” and “green.” Through repeated interactions, these improvised signals would become more refined and consistent, forming a basic communication system.

This works similarly for AI. Through trial and error, algorithms learn to communicate about objects they see, and their conversation partners learn to understand them.

But how do we know what they’re talking about? If they only develop this language with their artificial conversation partner and not with us, how do we know what each word means? After all, a specific word could mean “green,” “cube,” or worse—both. This challenge of interpretation is a key part of my research.

Cracking the Code

The task of understanding AI language may seem almost impossible at first. If I tried speaking Polish (my mother tongue) to a collaborator who only speaks English, we couldn’t understand each other or even know where each word begins and ends.

The challenge with AI languages is even greater, as they might organize information in ways completely foreign to human linguistic patterns.

Fortunately, linguists have developed sophisticated tools using information theory to interpret unknown languages.

Just as archaeologists piece together ancient languages from fragments, we use patterns in AI conversations to understand their linguistic structure. Sometimes we find surprising similarities to human languages, and other times we discover entirely novel ways of communication.

These tools help us peek into the “black box” of AI communication, revealing how AI agents develop their own unique ways of sharing information.

My recent work focuses on using what the agents see and say to interpret their language. Imagine having a transcript of a conversation in a language unknown to you, along with what each speaker was looking at. We can match patterns in the transcript to objects in the participant’s field of vision, building statistical connections between words and objects.

For example, perhaps the phrase “yayo” coincides with a bird flying past—we could guess that “yayo” is the speaker’s word for “bird.” Through careful analysis of these patterns, we can begin to decode the meaning behind the communication.
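Here is a minimal illustration of that statistical matching, using an invented mini-transcript and made-up words (“yayo,” “bemi,” “kuta”)—not data from the study. It scores each word-object pair by pointwise mutual information, a measure of how much more often the two co-occur than chance would predict:

```python
import math
from collections import Counter

# Toy "transcript": each utterance is paired with the objects the
# speaker was looking at. All words and objects here are invented.
transcript = [
    ("yayo bemi", ("bird", "tree")),
    ("yayo",      ("bird", "sky")),
    ("bemi kuta", ("tree", "rock")),
    ("kuta",      ("rock", "sky")),
    ("yayo",      ("bird",)),
]

word_counts, obj_counts, pair_counts = Counter(), Counter(), Counter()
for utterance, objects in transcript:
    words = utterance.split()
    word_counts.update(words)
    obj_counts.update(objects)
    pair_counts.update((w, o) for w in words for o in objects)

n = len(transcript)

def pmi(word, obj):
    """Pointwise mutual information between a word and an object."""
    joint = pair_counts[(word, obj)] / n
    if joint == 0:
        return float("-inf")
    return math.log2(joint / ((word_counts[word] / n) * (obj_counts[obj] / n)))

# Best guess at each word's referent: the object with the highest PMI.
for word in word_counts:
    referent = max(obj_counts, key=lambda o: pmi(word, o))
    print(f"{word!r} most strongly co-occurs with {referent!r}")
```

With enough data, this kind of statistic can separate “green” from “cube”: each word eventually appears alongside many different objects that share just one property.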

In the latest paper by me and my colleagues, set to appear in the conference proceedings of Neural Information Processing Systems (NeurIPS), we show that such methods can be used to reverse-engineer at least parts of the AIs’ language and syntax, giving us insights into how they might structure communication.

Aliens and Autonomous Systems

How does this connect to aliens? The methods we’re developing for understanding AI languages could help us decipher any future alien communications.

If we are able to obtain some written alien text together with some context (such as visual information relating to the text), we could apply the same statistical tools to analyze them. The approaches we’re developing today could be useful tools in the future study of alien languages, known as xenolinguistics.

But we don’t need to find extraterrestrials to benefit from this research. There are numerous applications, from improving language models like ChatGPT or Claude to improving communication between autonomous vehicles or drones.

By decoding emergent languages, we can make future technology easier to understand. Whether it’s knowing how self-driving cars coordinate their movements or how AI systems make decisions, we’re not just creating intelligent systems—we’re learning to understand them.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Tomas Martinez on Unsplash


This Ambitious Project Wants to Sequence the DNA of All Complex Life on Earth

Singularity HUB - 11 November, 2024 - 23:29

“We’re only just beginning to understand the full majesty of life on Earth,” wrote the founding members of the Earth BioGenome Project in 2018. The ambitious project raised eyebrows when first announced. It seeks to genetically profile over a million plants, animals, and fungi. Documenting these genomes is the first step to building an atlas of complex life on Earth.

Many living species remain mysterious to science. A database resulting from the project would be a precious resource for monitoring biodiversity. It could also shed light on the genetic “dark matter” of complex life to inspire new biomaterials, medicines, or spark ideas for synthetic biology. Other insights could tailor agricultural practices to ramp up food production and feed a growing global population.

In other words, digging into living creatures’ genetic data is set to unveil “unimaginable biological secrets,” wrote the team.

The problem? A hefty price tag. With an estimated cost of $4.7 billion, even the founders of the project called it a moonshot. However, against all odds, the project has made progress, with 3,000 genomes already sequenced and 10,000 more species expected by 2026.

While lagging behind its original goal of sequencing roughly 1.7 million genomes in a decade, the project still hopes to get there by 2032—later than first planned, but with a much lower price tag thanks to more efficient DNA sequencing technologies.

Meanwhile, the international team has also built infrastructure to share gene sequencing data, and machine learning methods are further helping the consortium analyze thousands of datasets—helping characterize new species and monitor DNA data for endangered ones.

Expanding the Scope

Genetic material is everywhere. It’s an abundant resource for making sense of life on Earth. As genetic sequencing becomes faster, cheaper, and more reliable, recent studies have begun digging into the information encoded in DNA from species across the globe.

One method, dubbed metagenomics, captures and analyzes all the microbial DNA gathered from a particular environment—from city sewers to boiling hot springs—to paint a broad genetic picture of the bacteria living there. Rather than bacteria, the Earth BioGenome Project, or EBP, aims to sequence the genomes of individual eukaryotic creatures—basically, those that keep most of their DNA inside a kernel-like structure, or nucleus, in each cell.

Humans, plants, fungi, and other animals all fall into this group. In one estimate, there are roughly 10 to 15 million eukaryotic species on our planet. But just a little over two million have been documented.

Sequencing DNA from eukaryotic cells could vastly expand our knowledge of Earth’s genetic diversity. Such a database could also be a treasure trove for synthetic biology. Scientists have already tinkered with the genetic blueprints of life in bacteria and yeast cells. Deciphering—and then reprogramming—their genes has led to advances such as coaxing bacteria cells to pump out biofuels, degradable materials, and medicines such as insulin.

Charting eukaryotes’ genomes could further inspire new materials or medicines. For example, cytarabine, a chemotherapy drug, was initially isolated from a sponge-like sea creature and approved by the FDA to treat blood cancers that spread to the brain. Other plant-derived medications are already being used to tackle viral infections or to control pain. From nearly 400,000 different plant species, hundreds of medicines have already been approved and are on the market. Similarly, deciphering plant genetics has galvanized ideas for new biodegradable materials and biofuels.

Genetic sequences from complex organisms can “provide the raw materials for genome engineering and synthetic biology to produce valuable bioproducts at industrial scale,” wrote the team.

Medical and industrial uses aside, the effort also documents biodiversity. Creating a DNA digital library of all known eukaryotic life can pinpoint which species are most at risk—including species not yet fully characterized—providing data for earlier intervention.

“For the first time in history, it is possible to efficiently sequence the genomes of all known species and to use genomics to help discover the remaining 80 to 90 percent of species that are currently hidden from science,” wrote the team.

Soldiering On

The project has three phases.

Phase one lays the groundwork. It establishes the species to be sequenced, builds digital infrastructure for data sharing, and develops an analysis toolkit. The most important goal is to build a reference DNA sequence for species similar in genetic makeup—that is, those in a “family.”

Reference genomes are incredibly important for genetic studies. True to their name, scientists rely on them as a baseline when comparing genetic variants—for example, to track down genes related to inherited diseases in humans or sugar content in different variants of crops.

Phase two of the project will begin analyzing the sequencing data and form strategies to maintain biodiversity. The last phase integrates all previous work to potentially revise how different species fit into our evolutionary tree. Scientists will also integrate climate data into this phase and tease out the impacts of climate change on biodiversity.

The international project began in 2018 and included the US, UK, Denmark, and China, with most DNA specimens sequenced at facilities in China and the UK. Today, 28 countries spanning six continents have signed on. Most DNA material isolated from individual species is directly sequenced on site, reducing the cost of transportation while increasing fidelity.

Not all participants have easy access to DNA sequencing facilities. One institution, Wellcome Sanger, developed a portable DNA sequencing lab that could help scientists working in rural areas to capture the genetic blueprints of exotic plants and animals. The device sequenced the DNA of a type of sunflower with potential medicinal properties in Africa, among other specimens from exotic locations.

EBP follows in the footsteps of other global projects aiming to sequence the Earth’s microbes, such as the National Microbiome Initiative or the Earth Microbiome Project. Once also considered moonshots, these have secured funding from government agencies and private investments.

Despite the enthusiasm of its participants, EBP is still billions of dollars short of full funding. But the total may come in far below the original estimate: thanks to more efficient and cheaper genetic sequencing methods, the current cost of phase one is expected to be about half the initial figure—around $265 million.

It’s still a hefty sum, but for participants, the resulting database and methods are worth it. “We now have a common forum to learn together about how to produce genomes with the highest possible quality,” Alexandre Aleixo at the Vale Institute of Technology, who participated in the project, told Science.

Given the influence bacterial genetics has already had on biomedicine and biofuels, it’s likely that deciphering eukaryote DNA can spur further inspiration. In the end, the project relies on a global collaboration to benefit humanity.

“The far-reaching potential benefits of creating an open digital repository of genomic information for life on Earth can be realized only by a coordinated international effort,” wrote the team.

Image Credit: M. Richter on Pixabay


This Week’s Awesome Tech Stories From Around the Web (Through November 9)

Singularity HUB - 9 November, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

Why AI Could Eat Quantum Computing’s Lunch
Edd Gent | MIT Technology Review
“The scale and complexity of quantum systems that can be simulated using AI is advancing rapidly, says Giuseppe Carleo, a professor of computational physics at the Swiss Federal Institute of Technology (EPFL). …Given the pace of recent advances, a growing number of researchers are now asking whether AI could solve a substantial chunk of the most interesting problems in chemistry and materials science before large-scale quantum computers become a reality.”

ROBOTICS

MIT Debuts a Large Language Model-Inspired Method for Teaching Robots New Skills
Brian Heater | TechCrunch
“The team introduced a new architecture called heterogeneous pretrained transformers (HPT), which pulls together information from different sensors and different environments. …’Our dream is to have a universal robot brain that you could download and use for your robot without any training at all,’ CMU associate professor David Held said of the research. ‘While we are just in the early stages, we are going to keep pushing hard and hope scaling leads to a breakthrough in robotic policies, like it did with large language models.'”

FUTURE

Why Futurist Amy Webb Sees a ‘Technology Supercycle’ Headed Our Way
Tim Brinkhof | Big Think
“[Webb] predicts that we are on the cusp of a ‘technology supercycle,’ in which advances in three complementary and increasingly interconnected fields of research—AI, biotech, and smart sensors—will transform our economy and society to a similar extent as the wheel and the steam engine.”

BIOTECH

A ‘Crazy’ Idea for Treating Autoimmune Diseases Might Actually Work
Sarah Zhang | The Atlantic
“Lupus cannot be cured. No autoimmune disease can be cured. Two years ago, however, a study came out of Germany that rocked all of these assumptions. Five patients with uncontrolled lupus went into complete remission after undergoing a repurposed cancer treatment called CAR-T-cell therapy, which largely wiped out their rogue immune cells. The first treated patient has had no symptoms for almost four years now.”

GENE EDITING

How a Breakthrough Gene-Editing Tool Will Help the World Cope With Climate Change
James Temple | MIT Technology Review
“Jennifer Doudna, one of the inventors of the breakthrough gene-editing tool CRISPR, says the technology will help the world grapple with the growing risks of climate change by delivering crops and animals better suited to hotter, drier, wetter, or weirder conditions. ‘The potential is huge,’ says Doudna, who shared the 2020 Nobel Prize in chemistry for her role in the discovery. ‘There is a coming revolution right now with CRISPR.'”

TECH

The Death of Search
Matteo Wong | The Atlantic
“A little, or even a lot, of inefficiency in search has long been the norm; AI will snuff it out. Our lives will be more convenient and streamlined, but perhaps a bit less wonderful and wonder-filled, a bit less illuminated. A process once geared toward exploration will shift to extraction. Less meandering, more hunting. No more unknown unknowns. If these companies really have their way, no more hyperlinks—and thus, no actual web.”

ENERGY

One Way That Could Improve Space-Based Power: Relays
Michelle Hampson | IEEE Spectrum
“Intermediate transmitters could more effectively beam power to the ground. …In their study, the researchers designed and tested several low-cost, light-weight proof of concept transmit arrays to refocus the beam, finding the tactic could transfer nearly 2.5 times as much power as a system that would beam power straight to Earth.”

ARTIFICIAL INTELLIGENCE

Debate May Help AI Models Converge on Truth
Stephen Ornes | Quanta
“Letting AI systems argue with each other may help expose when a large language model has made mistakes. …The approach was first proposed six years ago, but two sets of findings released earlier this year—one in February from the AI startup Anthropic and the second in July from Google DeepMind—offer the first empirical evidence that debate between two LLMs helps a judge (human or machine) recognize the truth.”

SPACE

Life-Seeking, Ice-Melting Robots Could Punch Through Europa’s Icy Shell
Robin George Andrews | MIT Technology Review
“Can robots actually get through that ice shell and survive the journey? A simple way to start is with a cryobot—a melt probe that can gradually thaw its way through the shell, pulled down by gravity. …Once it gets through the ice, the cryobot could unfurl a suite of scientific investigation tools, or perhaps deploy an independent submersible that could work in tandem with the cryobot—all while making sure none of that radioactive matter contaminates the ocean.”

Image Credit: Harry Borrett on Unsplash


Solar-Powered ‘Planimal’ Cells? Chloroplasts in Hamster Cells Make Food From Light

Singularity HUB - 8 November, 2024 - 18:30

The ability of plants to convert sunlight into food is an enviable superpower. Now, researchers have shown they can get animal cells to do the same thing.

Photosynthesis in plants and algae is performed by tiny organelles known as chloroplasts, which convert sunlight into oxygen and chemical energy. While the origins of these structures are hazy, scientists believe they may have been photosynthetic bacteria absorbed by primordial cells.

Our ancestors weren’t so lucky, but now researchers from the University of Tokyo have managed to rewrite evolutionary history. In a recent paper, the team reported they had successfully implanted chloroplasts into hamster cells where they generated energy for at least two days via the photosynthetic electron transport process.

“As far as we know, this is the first reported detection of photosynthetic electron transport in chloroplasts implanted in animal cells,” professor Sachihiro Matsunaga said in a press release.

“We thought that the chloroplasts would be digested by the animal cells within hours after being introduced. However, what we found was that they continued to function for up to two days, and that the electron transport of photosynthetic activity occurred.”

Some animals have already managed to gain the benefits of photosynthesis—notably giant clams, which host algae in a symbiotic relationship. And it’s not the first time people have tried adding photosynthetic abilities into different kinds of cells. Previous studies had managed to make a kind of chimera between photosynthetic cyanobacteria and yeast cells.

But transplanting chloroplasts into animal cells is a bigger challenge. One of the major hurdles the researchers faced is that most algal chloroplasts become inactive at or below 37 degrees Celsius (98.6 degrees Fahrenheit), yet animal cells must be cultured at these relatively cool temperatures.

This prompted them to pick chloroplasts from a type of algae called Cyanidioschyzon merolae, which lives in highly acidic and volcanic hot springs. While it prefers temperatures about 42 degrees Celsius (107.6 degrees Fahrenheit), it remains active at much lower temperatures.

After isolating the algae’s chloroplasts and injecting them into hamster cells, the researchers cultured them for several days. During that time, they checked for photosynthetic activity using light pulses and imaged the cells to determine the location and structure of the chloroplasts.

They discovered the organelles were still producing energy after two days. They even found the so-called “planimal” cells were growing faster than regular hamster cells, suggesting the chloroplasts were providing a carbon source that acted as fuel for the host cells.

They also found many of the chloroplasts had migrated to surround the cells’ nuclei. Mitochondria—the organelles that convert carbohydrates into energy the cell can use—had also gathered around the chloroplasts. The team suggests there could be some kind of chemical exchange between these sub-cellular structures, though future studies will be needed to confirm this.

After two days, however, the chloroplasts started degrading, and by the fourth day, photosynthesis seemed to have stopped. This is probably due to the animal cells digesting the unfamiliar organelles, but the researchers say genetic tweaks to the animal cells could potentially side-step digestion.

While the research might conjure sci-fi visions of humans with green skin surviving on sunlight alone, the team says the most likely applications are in tissue engineering. Lab-grown tissue typically consists of several layers of cells, and it can be hard to get oxygen deep into the tissue.

“By mixing in chloroplast-implanted cells, oxygen could be supplied to the cells through photosynthesis, by light irradiation, thereby improving the conditions inside the tissue to enable growth,” said Matsunaga.

Nonetheless, the research is a breakthrough that rewrites many of our assumptions about life’s possible forms. And while it might be a distant prospect, it opens the tantalizing possibility of one day giving animals the solar-powered capabilities of plants.

Image Credit: R. Aoki, Y. Inui, Y. Okabe et al. 2024/ Proceedings of the Japan Academy, Series B


The First Cells May Have Formed From Simple Fatty Bubbles Like These Ones

Singularity HUB - 7 November, 2024 - 23:56

The first spark of cellular life on Earth likely needed gift packaging.

Let me explain. With the holidays around the corner, we’re all beginning to order presents. Each is carefully packaged inside a box or bubble-wrapped envelope and addressed for shipping. Without packaging, items would tumble together in a chaotic mess and miss their destination.

Life’s early chemicals were, in a way, like these “presents.” They floated around in a primordial soup, eventually forming the longer molecules that make up life as we know it. But without a “wrapper” encapsulating them in individual packages, different molecules bumped into each other but eventually drifted away, missing the necessary connections to spark life.

In other words, cellular “wrappers,” or cell membranes, are key to packaging the molecular machinery of life together. Made of fatty molecules, these wrappers are the foundation of our cells and the basis of multicellular life. They keep bacteria and other pathogens at bay while triggering the biological mechanisms that power normal cellular functions.

Scientists have long debated how the first cell membranes formed. Their building blocks, long-chain lipids, were hard to find on early Earth. Shorter fatty molecules, on the other hand, were abundant. Now, a new study in Nature Chemistry offers a bridge between these short fatty molecules and the first primordial cells.

Led by Neal Devaraj at the University of California, San Diego, the team coaxed short fatty molecules into bubbles that can encapsulate biological molecules. The team then added modern RNA molecules to drive chemical reactions inside the bubbles—and watched the reactions work, similar to those in a functional cell.

The engineered cell membranes also resisted high concentrations of substances abundant in early Earth puddles that could damage their integrity, shielding molecular carriers of genetic information and allowing them to work normally.

The resulting protocells are the latest to probe the origins of life. To be clear, they only mimic parts of normal living cells. They don’t have the molecular machinery to replicate, and their wrappers are rudimentary compared to ours.

But the “fascinating” result “opens up a new avenue” for understanding how the first cells appeared, Sheref Mansy at the University of Trento, who was not involved in the study, told Science.

At the Beginning

The origins of life’s molecules are highly debated. But most scientists agree that life stemmed from three main ones: DNA, RNA, and amino acids (the building blocks of proteins).

Today, in most organisms, DNA stores the genetic blueprint, and RNA carries this genetic information to the cell’s protein-making factories. But many viruses store genes only in RNA, and studies of early life suggest RNA may have been the first carrier of inheritance. RNA can also spur chemical reactions—including ones that glue amino acids into different types of proteins.

But regardless of which molecule came first, “all life on Earth requires lipid membranes,” the authors of the new paper write.

Made of a double layer of fatty molecules, the modern cell membrane is a work of art. It’s the first defense against bacterial and viral invaders. It’s also dotted with protein “tunnels” that tweak the functions of cells—for example, helping brain cells encode memories or heart cells beat in sync. These living cellular walls also act as scaffolds for biochemical reactions that often dictate the fate of cells—if they live, die, or turn into “zombie cells” that contribute to aging.

Since they’re so important for biology, scientists have long wondered how the first cell membranes came about. What made up “the very first, primordial cell membrane-like structure on Earth before the emergence of life?” asked the authors.

Our cell membranes are built on long chains of lipids, but these have complex chemical structures and require multiple steps to synthesize—likely beyond what was possible on early Earth. In contrast, the first protocell membranes were likely formed from molecules already present, including short fatty acids that self-organized.

Back to the Future

Previously, the team found an amino acid that “staples” fatty acids together. Called cysteine, the molecule was likely prevalent in our planet’s primordial soup. In a computer simulation, adding cysteine to short fatty acids caused them to form synthetic membranes.

The new study built on those results in the lab.

The team added cysteine to two types of short lipids and watched as the amino acid gathered the lipids into bubbles within 30 minutes. The lipids were similar in length to those likely present on early Earth, and the molecular concentrations also mimicked those during the period.

The team next took a closer look with an electron microscope. The generated membranes were about as thick as those in normal cells and highly stable. Finally, the team simulated a hypothetical early-Earth scenario where RNA serves as the first genetic material.

“The RNA world hypothesis is accepted as one of the most plausible scenarios of the origin of life,” wrote the authors. This is partly because RNA can also act as an enzyme. These enzymes, dubbed ribozymes, can spark different chemical reactions—for example, ones that might stitch amino acids and lipids into bubbles. However, they need a duo of minerals—calcium and magnesium—to work. While these minerals were likely highly abundant on early Earth, in some cases they can damage artificial cell membranes.

But in several tests, the lab-grown protocells easily withstood the mineral onslaught. Meanwhile, the protocells showed they could generate chemical reactions using RNA, suggesting that short fatty molecules can build cell membranes in the primordial soup.

To Claudia Bonfio at the University of Cambridge, the study was “really, really cool and very well done.” But the mystery of life remains. Most fatty acids generated in the protocell aren’t found in modern cell membranes. A next step would be to show that the protocells can act more like normal ones—growing and dividing with a healthy metabolism.

But for now, the team is focused on deciphering the beginnings of cellular life. The work shows that reactions between simple chemicals in water can “assemble into giant” blobs, expanding the ways that protocell membranes can form, they wrote.

Image Credit: Max Kleinen on Unsplash


Everything Has Changed, Yet Nothing Has Changed: Don’t Panic

Singularity Weblog - 6 November, 2024 - 22:14
It is hard to overstate the impact of this American election; it will change everything, everywhere. From the functioning of democracy within the United States to its influence and perception abroad, the effects will be profound. People in Utah, Ukraine, Uruguay, and even Uganda will feel its repercussions. There is no place on the planet […]

Europe Aims to Visit This Large Asteroid When It Brushes by Earth in 2029

Singularity HUB - 5 November, 2024 - 21:26

The European Space Agency has given the go-ahead for initial work on a mission to visit an asteroid called Apophis. If approved at a key meeting next year, the robotic spacecraft, known as the Rapid Apophis Mission for Space Safety (Ramses), will rendezvous with the asteroid in February 2029.

Apophis is 340 meters wide, about the same as the height of the Empire State Building. If it were to hit Earth, it would cause wholesale destruction hundreds of miles from its impact site. The energy released would equal that from tens or hundreds of nuclear weapons, depending on the yield of the device.

Luckily, Apophis won’t hit Earth in 2029. Instead, it will pass by Earth safely at a distance of 19,794 miles (31,860 kilometers), about one-twelfth the distance from the Earth to the Moon. Nevertheless, this is a very close pass by such a big object, and Apophis will be visible with the naked eye.
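The one-twelfth figure is easy to sanity-check against the mean Earth-Moon distance of roughly 384,400 kilometers, the only number here not taken directly from the article:

```python
# Sanity check: the quoted miss distance versus the mean Earth-Moon distance.
MISS_KM = 31_860        # Apophis flyby distance quoted above
EARTH_MOON_KM = 384_400 # mean lunar distance

ratio = EARTH_MOON_KM / MISS_KM
print(round(ratio, 1))  # → 12.1, i.e. roughly one-twelfth of the lunar distance
```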

NASA and the European Space Agency have seized this rare opportunity to send separate robotic spacecraft to rendezvous with Apophis and learn more about it. Their missions could help inform efforts to deflect an asteroid that threatens Earth, should we need to in the future.

The Threat From Asteroids

Some 66 million years ago, an asteroid the size of a small city hit Earth. The impact of this asteroid brought about a global extinction event that wiped out the dinosaurs.

Earth is in constant danger of being hit by asteroids, leftover debris from the formation of the solar system 4.5 billion years ago. Most orbit in the asteroid belt between Mars and Jupiter, and they come in many shapes and sizes. Most are small, only around 10 meters across, but the largest are hundreds of kilometers wide, bigger than the asteroid that killed the dinosaurs.

Artist’s impression of Apophis. Image Credit: NASA

The asteroid belt contains one to two million asteroids larger than a kilometer across and millions of smaller bodies. These space rocks feel each other’s gravitational pull, as well as the gravitational tug of Jupiter on one side and the inner planets on the other.

Because of this gravitational tug-of-war, every once in a while an asteroid is thrown out of its orbit and hurtles towards the inner solar system. There are 35,000 such “near-Earth objects” (NEOs). Of these, 2,300 “potentially hazardous objects” (PHOs) have orbits that intersect Earth’s and are large enough that they pose a real threat to our survival.

Do Not Go Gentle Into That Good Night

Since the late 20th century, astronomers have set up several surveys, such as ATLAS, to detect and study hazardous asteroids. But detection is not enough; we also need a way to defend Earth against an incoming asteroid.

Blowing up an asteroid, as depicted in the movie Armageddon, is no use. The asteroid would be broken into smaller fragments, which would keep moving in much the same direction. Instead of being hit by one large asteroid, Earth would be hit by a swarm of smaller objects.

The preferred solution is to deflect the incoming asteroid away from Earth so that it passes by harmlessly. To do so, we would need to apply an external force to the asteroid to nudge it away. A popular idea is to fire a projectile at the asteroid. NASA did this in 2022, when a spacecraft called DART collided with an asteroid. Before we do this out of necessity, we have to understand how different types of asteroids would react to such an impact.
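The physics behind such a nudge can be sketched with a simple momentum-transfer estimate. The numbers below are rough, publicly reported values for the DART impact (impactor mass, impact speed, target mass, and the momentum-enhancement factor beta), used here purely for illustration:

```python
# Back-of-the-envelope kinetic-impactor deflection estimate.
# All figures are approximate, publicly reported DART/Dimorphos values.

def kinetic_impactor_delta_v(m_impactor_kg, v_rel_ms, m_asteroid_kg, beta=1.0):
    """Velocity change imparted to the asteroid: beta * m * v / M.

    beta > 1 accounts for the extra momentum carried away by ejecta
    blasted off the surface in the opposite direction.
    """
    return beta * m_impactor_kg * v_rel_ms / m_asteroid_kg

# ~580 kg spacecraft at ~6.1 km/s into the ~4.3-billion-kg moonlet
# Dimorphos, with a reported beta of roughly 3.6.
dv = kinetic_impactor_delta_v(580, 6_100, 4.3e9, beta=3.6)
print(f"{dv * 1000:.1f} mm/s")  # → 3.0 mm/s
```

A change of a few millimeters per second sounds tiny, but applied years before a predicted impact it shifts the arrival time enough for Earth to move out of the way.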

Apophis, Ramses, and Osiris-Apex

Apophis was discovered in 2004. The asteroid passed by Earth on December 21, 2004 at a distance of 14 million kilometers. It returned in 2021 and will swing by Earth again in 2029, 2036, and 2068.

Until recently, there was a small chance that Apophis could collide with Earth in 2068. However, during Apophis’ approach in 2021, astronomers used radar observations to refine their knowledge of the asteroid’s orbit. These showed that Apophis would not hit our planet for the next 100 years.

The Ramses mission will rendezvous with Apophis in February 2029, two months before its closest approach to Earth on Friday, April 13. It will then accompany the asteroid as it approaches Earth. The goal is to learn how Apophis’s orbit, rotation, and shape change as it passes deep through Earth’s gravitational field.

In 2016, NASA launched the “Origins, Spectral Interpretation, Resource Identification, and Security–Regolith Explorer” (Osiris-Rex) mission to study the near-Earth asteroid Bennu. It reached Bennu and collected samples of rock and soil from its surface in 2020, then sent them back in a capsule that landed on Earth in 2023.

The spacecraft is still out there, so NASA renamed it the “Origins, Spectral Interpretation, Resource Identification and Security–Apophis Explorer” (Osiris-Apex) and assigned it to study Apophis. Osiris-Apex will reach the asteroid just after its 2029 close encounter. It will then fly low over Apophis’s surface and fire its engines, disturbing the rocks and dust that cover the asteroid to reveal the layer underneath.

A close flyby of an asteroid as large as Apophis happens only once every 5,000 to 10,000 years. Apophis’s arrival in 2029 thus presents a rare opportunity to study such an asteroid up close and to see how it is affected by Earth’s gravitational pull. The information gleaned will shape the way we choose to protect Earth in the future from a real killer asteroid.

Ancient Egyptian Mythology

When Ramses and Osiris-Apex meet up with Apophis in 2029 they will inadvertently reenact a core component of ancient Egyptian cosmology. To the ancient Egyptians, the sun was personified by several powerful gods, chief among them Re. The sun’s setting in the evening was interpreted as Re dying and entering the netherworld.

During his nighttime journey through the netherworld, Re was menaced by the great snake Apophis, who embodied the powers of darkness and dissolution. Only after Apophis had been defeated could Re be revitalized by Osiris, the king of the netherworld. Re could then once again be reborn in the east, rising in the sky once more.

Tomb murals, coffins, and funerary papyri depict Apophis as a large, coiled snake threatening Re as he sails in his solar barque (sailing ship). But Apophis is always defeated, his body pierced by a spear or riven by knives.

Though the asteroid Apophis poses no danger in the near future, Ramses (named after the pharaohs of the same name, which meant “born of Re”) and Osiris-Apex will study it so that one day we will know how to defeat it—or any of its distant brethren.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Watch: Boston Dynamics’ New Electric Atlas Robot Gets Down to Work

Singularity HUB - 5 November, 2024 - 01:20

One of the world’s most advanced humanoid robots has been all play and no work. Boston Dynamics’ Atlas is famous for backflips, parkour, and dance mobs. These require extremely impressive robotic control, but they’re also mostly fun research demos.

Now, six months after the legendary robotics lab unveiled an all-new electric Atlas, they’re showing off more of what it can do. A recent video shows Atlas picking auto parts from one set of shelves and moving them over to another, a job currently handled by factory workers.

Apart from being electric, the new Atlas has a unique way of moving. Its head, upper body, pelvis, and legs swivel independently. So, its head might rotate to face the opposite direction of its legs and torso, Exorcist-style, before the rest of its body twists around to catch up.

The new demo highlights another core change for Atlas. Whereas Boston Dynamics once meticulously programmed the robot’s most impressive maneuvers, the latest video shows a fully autonomous Atlas at work.

“There are no prescribed or teleoperated movements; all motions are generated autonomously online,” according to a description accompanying the video.

Why release this video now? One, humanoid robots are having something of a moment. And two, ditto for artificial intelligence in robotics. Boston Dynamics led the pack for years, but it didn’t rush Atlas into production for commercial use. Neither has it added significant amounts of AI to the equation. Now, it appears to be interested in both.

Last month, the lab, which is owned by Hyundai, announced a partnership with Toyota Research Institute (TRI) to add artificial intelligence, TRI’s specialty, to Atlas. Alongside pure research, the partnership hopes to make Atlas into a general-purpose humanoid.

It’s an intriguing development. In terms of pure robotics, Atlas is world-class. TRI, meanwhile, is working to develop large behavior models, which are like large language models for robotic movement and manipulation. The idea is that with enough real-world data, AI models like this might develop into a kind of robotic brain that doesn’t need to be explicitly programmed for every scenario it might encounter.

Google DeepMind has been pursuing a similar approach with a vision-language-action model called RT-X and has united 33 research labs in an effort to assemble a vast new AI training dataset for robotics. And just last week, a TRI-funded MIT project showed off a new transformer algorithm like the one behind ChatGPT, only designed for robotics.

“Our dream is to have a universal robot brain that you could download and use for your robot without any training at all,” CMU associate professor David Held told TechCrunch. “While we are just in the early stages, we are going to keep pushing hard and hope scaling leads to a breakthrough in robotic policies, like it did with large language models.”

Boston Dynamics isn’t alone in its efforts. If anything, it’s late to the party. A host of companies, many born in the last few years, share the goal of general-purpose humanoids. These include Agility Robotics, Tesla, Figure, and 1X, among others.

In an interview with IEEE Spectrum, Boston Dynamics’ Scott Kuindersma said this may be “one of the most exciting points” in the field’s history. At the same time, he acknowledged there’s a lot of hype out there—and a lot of work still to do. Challenges include collecting enough of the right kind of data and dialing in how best to train robotics algorithms.

That doesn’t mean there won’t be more Boston Dynamics videos out soon. “I want people to be excited about watching for our results, and I want people to trust our results when they see them,” TRI’s Russ Tedrake said in the same interview.

AI-Atlas is just getting started.

Image Credit: Boston Dynamics


This Week’s Awesome Tech Stories From Around the Web (Through November 2)

Singularity HUB - 2 November, 2024 - 15:00
ARTIFICIAL INTELLIGENCE

Google CEO Says Over 25% of New Google Code Is Generated by AI
Benj Edwards | Ars Technica
“We’ve always used tools to build new tools, and developers are using AI to continue that tradition. On Tuesday, Google’s CEO revealed that AI systems now generate more than a quarter of new code for its products, with human programmers overseeing the computer-generated contributions. The statement, made during Google’s Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.”

AUTOMATION

Waymo Raises $5.6 Billion From Outside Investors
Eli Tan | The New York Times
“Amid its push to grow its fleet of autonomous robot taxis and expand into new cities, Waymo has raised $5.6 billion from outside investors, its largest funding round to date. …The fresh money comes behind Waymo’s first taste of commercial success. Its robot taxis are now completing over 100,000 rides each week in San Francisco, Phoenix and Los Angeles, double its number in May, and will be operating in Austin, Texas, and Atlanta by 2025 through a partnership with Uber.”

ROBOTICS

This Is a Glimpse of the Future of AI Robots
Will Knight | Wired
“Physical Intelligence, also known as PI or π, was founded earlier this year by several prominent robotics researchers to pursue the new robotics approach inspired by breakthroughs in AI’s language abilities. ‘The amount of data we’re training on is larger than any robotics model ever made, by a very significant margin, to our knowledge,’ says Sergey Levine, a cofounder of Physical Intelligence and an associate professor at UC Berkeley.”

ENERGY

Nuclear Fusion’s New Idea: An Off-the-Shelf Stellarator
Tom Clynes | IEEE Spectrum
“The PPPL team invented this nuclear-fusion reactor, completed last year, using mainly off-the-shelf components. Its core is a glass vacuum chamber surrounded by a 3D-printed nylon shell that anchors 9,920 meticulously placed permanent rare-earth magnets. Sixteen copper-coil electromagnets resembling giant slices of pineapple wrap around the shell crosswise.”

TECH

Wall Street Giants to Make $50 Billion Bet on AI and Power Projects
Katherine Blunt | The Wall Street Journal
“The investment is a bet on AI’s huge energy needs and the mounting stress it is putting on the US power grid. …The companies said they are now working together with large tech companies to accelerate their access to electricity, which has become constrained in parts of the US as data-center developers compete for power sources and access to the grid. ‘The capital needs are huge, and one of the big bottlenecks—maybe the bottleneck—is electricity availability,’ ECP founder and senior partner Doug Kimmelman said.”

ENVIRONMENT

The AI Boom Rests on Billions of Tons of Concrete
Ted C. Fishman | IEEE Spectrum
“To the casual observer, the data industry can seem incorporeal, its products conjured out of weightless bits. But as I stand beside the busy construction site for DataBank’s ATL4, what impresses me most is the gargantuan amount of material—mostly concrete—that gives shape to the goliath that will house, secure, power, and cool the hardware of AI. Big data is big concrete. And that poses a big problem.”

AUTOMATION

Waymo Explores Using Google’s Gemini to Train Its Robotaxis
Andrew J. Hawkins | The Verge
“Waymo has long touted its ties to Google’s DeepMind and its decades of AI research as a strategic advantage over its rivals in the autonomous driving space. Now, the Alphabet-owned company is taking it a step further by developing a new training model for its robotaxis built on Google’s multimodal large language model (MLLM) Gemini.”

SPACE

SpaceX Has Caught a Massive Rocket. So What’s Next?
Eric Berger | Ars Technica
“Here’s our best attempt to piece together the milestones and major goals of the Starship program over the next several years before it unlocks the capability to land humans on the Moon for NASA’s Artemis Program and begins flying demonstration missions to Mars. For fun, we’ve also included some estimated dates for each of these milestones. These represent our best guesses, and they’re almost certainly wrong.”

SCIENCE

Meet the First Star System to ‘Solve’ the 3-Body Problem
Ethan Siegel | Big Think
“It’s easy to have planets that orbit around a single star, and in a double star system, you can either orbit close to one star or far from both members. These configurations are stable, but adding a third star into the mix was thought to render the formation of planets unstable, as mutual gravitational interactions would eventually force their ejection. That wisdom got thrown out the window with the discovery of GW Orionis, which boasts multiple massive dust rings and possibly even more planets, all orbiting three stars at once.”

Image Credit: David Clode on Unsplash


What Is AI Superintelligence? Could It Destroy Humanity? And Is It Really Almost Here?

Singularity HUB - 2 November, 2024 - 01:50

In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems—“superintelligences” more capable than humans—might one day take over the world and destroy humanity.

A decade later, OpenAI boss Sam Altman says superintelligence may only be “a few thousand days” away. A year ago, Altman’s OpenAI cofounder Ilya Sutskever set up a team within the company to focus on “safe superintelligence,” but he and his team have now raised a billion dollars to create a startup of their own to pursue this goal.

What exactly are they talking about? Broadly speaking, superintelligence is anything more intelligent than humans. But unpacking what that might mean in practice can get a bit tricky.

Different Kinds of AI

In my view, the most useful way to think about different levels and kinds of intelligence in AI was developed by US computer scientist Meredith Ringel Morris and her colleagues at Google.

Their framework lists six levels of AI performance: no AI, emerging, competent, expert, virtuoso, and superhuman. It also makes an important distinction between narrow systems, which can carry out a small range of tasks, and more general systems.

A narrow, no-AI system is something like a calculator. It carries out various mathematical tasks according to a set of explicitly programmed rules.

There are already plenty of very successful narrow AI systems. Morris cites Deep Blue, the chess program that famously defeated world champion Garry Kasparov back in 1997, as an example of a virtuoso-level narrow AI system.

[Table: Levels of AI performance. The Conversation; adapted from Morris et al.; created with Datawrapper]

Some narrow systems even have superhuman capabilities. One example is AlphaFold, which uses machine learning to predict the structure of protein molecules and whose creators won the Nobel Prize in Chemistry this year.

What about general systems? This is software that can tackle a much wider range of tasks, including things like learning new skills.

A general no-AI system might be something like Amazon’s Mechanical Turk: It can do a wide range of things, but it does them by asking real people.

Overall, general AI systems are far less advanced than their narrow cousins. According to Morris, the state-of-the-art language models behind chatbots such as ChatGPT are general AI—but they are so far only at the “emerging” level (meaning they are “equal to or somewhat better than an unskilled human”) and have yet to reach “competent” (as good as 50 percent of skilled adults).
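As a rough illustration, the framework’s performance ladder can be encoded as a simple lookup table. The percentile thresholds below are paraphrased from the framework’s published descriptions and should be treated as illustrative rather than exact:

```python
# Sketch of the Morris et al. performance ladder as a lookup table.
# Thresholds are paraphrased approximations, not exact quotations.

LEVELS = [
    ("no AI",      0),    # e.g. a calculator (narrow) or Mechanical Turk (general)
    ("emerging",   10),   # equal to or somewhat better than an unskilled human
    ("competent",  50),   # at least the 50th percentile of skilled adults
    ("expert",     90),   # at least the 90th percentile
    ("virtuoso",   99),   # at least the 99th percentile, e.g. Deep Blue (narrow)
    ("superhuman", 100),  # outperforms all humans, e.g. AlphaFold (narrow)
]

def performance_level(percentile_of_skilled_adults):
    """Return the highest level whose threshold the given percentile meets."""
    name = "no AI"
    for level, threshold in LEVELS:
        if percentile_of_skilled_adults >= threshold:
            name = level
    return name

print(performance_level(99))  # → virtuoso
print(performance_level(40))  # → emerging
```

The second axis of the framework, narrow versus general, is noted in the comments: the same performance level means something very different depending on how wide a range of tasks the system can tackle.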

So by this reckoning, we are still some distance from general superintelligence.

How Intelligent Is AI Right Now?

As Morris points out, precisely determining where any given system sits would depend on having reliable tests or benchmarks.

Depending on our benchmarks, an image-generating system such as DALL-E might be at virtuoso level (because it can produce images 99 percent of humans could not draw or paint), or it might be emerging (because it produces errors no human would, such as mutant hands and impossible objects).

There is significant debate even about the capabilities of current systems. One notable 2023 paper argued GPT-4 showed “sparks of artificial general intelligence.”

OpenAI says its latest language model, o1, can “perform complex reasoning” and “rivals the performance of human experts” on many benchmarks.

However, a recent paper from Apple researchers found o1 and many other language models have significant trouble solving genuine mathematical reasoning problems. Their experiments show the outputs of these models seem to resemble sophisticated pattern-matching rather than true advanced reasoning. This indicates superintelligence is not as imminent as many have suggested.

Will AI Keep Getting Smarter?

Some people think the rapid pace of AI progress over the past few years will continue or even accelerate. Tech companies are investing hundreds of billions of dollars in AI hardware and capabilities, so this doesn’t seem impossible.

If this happens, we may indeed see general superintelligence within the “few thousand days” proposed by Sam Altman (that’s a decade or so in less sci-fi terms). Sutskever and his team mentioned a similar timeframe in their superalignment article.

Many recent successes in AI have come from the application of a technique called “deep learning,” which, in simplistic terms, finds associative patterns in gigantic collections of data. Indeed, this year’s Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton, the “Godfather of AI,” for their invention of the Hopfield network and Boltzmann machine, which are the foundation of many powerful deep learning models used today.
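To make “associative patterns” concrete, here is a minimal pure-Python Hopfield network: it memorizes a binary pattern with the Hebbian rule and then recovers it from a corrupted copy. This is a toy sketch of the original idea, not of any modern production model:

```python
# Minimal Hopfield network: Hebbian storage plus iterated sign updates.

def train(patterns):
    """Build a symmetric weight matrix from +/-1 patterns (Hebbian rule)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=10):
    """Relax a (possibly corrupted) state toward a stored pattern."""
    s = list(state)
    for _ in range(steps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, 1, -1, -1, 1, -1, 1, -1]
w = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]  # flip one bit to corrupt the memory
print(recall(w, noisy) == pattern)  # → True: the stored pattern is recovered
```

The network “remembers” by association: a partial or noisy input settles into the nearest stored pattern, which is the same basic intuition behind pattern-finding in much larger deep learning systems.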

General systems such as ChatGPT have relied on data generated by humans, much of it in the form of text from books and websites. Improvements in their capabilities have largely come from increasing the scale of the systems and the amount of data on which they are trained.

However, there may not be enough human-generated data to take this process much further (although efforts to use data more efficiently, generate synthetic data, and improve transfer of skills between different domains may bring improvements). Even if there were enough data, some researchers say language models such as ChatGPT are fundamentally incapable of reaching what Morris would call general competence.

One recent paper has suggested an essential feature of superintelligence would be open-endedness, at least from a human perspective. It would need to be able to continuously generate outputs that a human observer would regard as novel and be able to learn from.

Existing foundation models are not trained in an open-ended way, and existing open-ended systems are quite narrow. This paper also highlights how either novelty or learnability alone is not enough. A new type of open-ended foundation model is needed to achieve superintelligence.

What Are the Risks?

So what does all this mean for the risks of AI? In the short term, at least, we don’t need to worry about superintelligent AI taking over the world.

But that’s not to say AI doesn’t present risks. Again, Morris and colleagues have thought this through: As AI systems gain greater capability, they may also gain greater autonomy. Different levels of capability and autonomy present different risks.

For example, when AI systems have little autonomy and people use them as a kind of consultant—when we ask ChatGPT to summarize documents, say, or let the YouTube algorithm shape our viewing habits—we might face a risk of over-trusting or over-relying on them.

In the meantime, Morris points out other risks to watch out for as AI systems become more capable, ranging from people forming parasocial relationships with AI systems to mass job displacement and society-wide ennui.

What’s Next?

Let’s suppose we do one day have superintelligent, fully autonomous AI agents. Will we then face the risk they could concentrate power or act against human interests?

Not necessarily. Autonomy and control can go hand in hand. A system can be highly autonomous yet still allow a high level of human control.

Like many in the AI research community, I believe safe superintelligence is feasible. However, building it will be a complex and multidisciplinary task, and researchers will have to tread unbeaten paths to get there.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The US Says Electric Air Taxis Can Finally Take Flight Under New FAA Rules

Singularity HUB - 31 October, 2024 - 20:20

Electric air taxis have seen rapid technological advances in recent years, but the industry has had a regulatory question mark hanging over its head. Now, the US Federal Aviation Administration (FAA) has published rules governing the operation of this new class of aircraft.

Startups developing electric vertical take-off and landing (eVTOL) aircraft have attracted billions of dollars of investment over the past decade. But an outstanding challenge for these vehicles is that they’re hard to classify, often representing a strange hybrid of a drone, a light aircraft, and a helicopter.

For this reason they’ve fallen into a regulatory gray area in most countries. The murkiness has led to considerable uncertainty about where and how they’ll be permitted to operate in the future, which could have serious implications for the business model of many of these firms.

But now, the FAA has provided some much-needed clarity by publishing the rules governing what the agency calls “powered-lift” aircraft. This is the first time regulators have recognized a new category of aircraft since the 1940s when helicopters first entered the market.

“This final rule provides the necessary framework to allow powered-lift aircraft to safely operate in our airspace,” FAA administrator Mike Whitaker said in a statement.  “Powered-lift aircraft are the first new category of aircraft in nearly 80 years and this historic rule will pave the way for accommodating wide-scale advanced air mobility operations in the future.”

The principal challenge when it comes to regulating air taxis is the novel way they operate. Most leading designs use rotors that tilt between vertical and horizontal, allowing the aircraft to take off vertically like a helicopter and then cruise more like a conventional airplane.

The agency dealt with this by varying the operational requirements, such as minimum safe altitude, required visibility, and range, depending on the phase of flight. This means that during take-off the vehicles need to adhere to the less stringent requirements placed on helicopters, but when cruising they must conform to the same rules as airplanes. The rules are also performance-based, so exact requirements will depend on the capabilities of the specific vehicle in question.
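Conceptually, this phase-dependent approach works like a dispatch table. The sketch below is purely illustrative; the function and rule-set names are invented and do not reflect the actual regulatory text:

```python
# Hypothetical illustration of phase-dependent rules for powered-lift
# aircraft. Names and categories are invented for the sketch; the real
# regulation is performance-based and far more detailed.

def applicable_rules(phase):
    """Return which existing rule set governs a given phase of flight."""
    if phase in ("takeoff", "landing", "hover"):
        return "helicopter rules"  # less stringent minimums for vertical flight
    if phase == "cruise":
        return "airplane rules"    # fixed-wing minimums once wing-borne
    raise ValueError(f"unknown phase: {phase}")

print(applicable_rules("takeoff"))  # → helicopter rules
print(applicable_rules("cruise"))   # → airplane rules
```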

The new regulations also provide a framework for certifying the initial batch of instructors and training future pilots. Because eVTOLs are a new class of aircraft, there are currently no pilots certified to fly them and therefore no one to train other pilots.

To get around this chicken-and-egg situation, the FAA says it will allow certain pilots employed by eVTOL companies to develop the required experience and training during the test flights required for vehicle certification. These pilots would become the first group of instructors, who could then train other instructors at pilot schools and training centers.

The regulations also relax an existing requirement for training aircraft to feature two sets of flight controls. Instead, the agency is allowing pilots to learn in aircraft where the trainer can easily access the controls to intervene, if necessary, or letting pilots train in a simulator to gain enough experience to fly the aircraft solo.

When the agency introduced draft rules last year, the industry criticized them as too strict, according to The Verge. But the agency says it has taken the criticism on board and believes the new rules strike a good balance between safety and easing the burden on companies.

Industry leader Joby Aviation welcomed the new rules and, in particular, the provision for training pilots in simulators. “The regulation published today will ensure the US continues to play a global leadership role in the development and adoption of clean flight,” JoeBen Bevirt, founder and CEO of Joby, said in a statement. “Delivering ahead of schedule is a testament to the dedication, coordination and hard work of the rulemaking team.”

In its announcement, the FAA highlighted the technology’s potential for everything from air taxi services to short-haul cargo transport and even air ambulances. With these new rules in place, operators can now start proving out some of those business cases.

Image Credit: Joby


Did the Early Cosmos Inflate Like a Balloon? A Mirror Universe Going Backwards in Time May Be a Simpler Explanation

Singularity HUB - 29 October, 2024 - 21:46

We live in a golden age for learning about the universe. Our most powerful telescopes have revealed that the cosmos is surprisingly simple on the largest visible scales. Likewise, our most powerful “microscope,” the Large Hadron Collider, has found no deviations from known physics on the tiniest scales.

These findings were not what most theorists expected. Today, the dominant theoretical approach combines string theory, a powerful mathematical framework with no successful physical predictions as yet, and “cosmic inflation”—the idea that, at a very early stage, the universe ballooned wildly in size. In combination, string theory and inflation predict the cosmos to be incredibly complex on tiny scales and completely chaotic on very large scales.

The nature of the expected complexity could take a bewildering variety of forms. On this basis, and despite the absence of observational evidence, many theorists promote the idea of a “multiverse”: an uncontrolled and unpredictable cosmos consisting of many universes, each with totally different physical properties and laws.

So far, the observations indicate exactly the opposite. What should we make of the discrepancy? One possibility is that the apparent simplicity of the universe is merely an accident of the limited range of scales we can probe today, and that when observations and experiments reach small enough or large enough scales, the asserted complexity will be revealed.

The other possibility is that the universe really is very simple and predictable on both the largest and smallest scales. I believe this possibility should be taken far more seriously. For, if it is true, we may be closer than we imagined to understanding the universe’s most basic puzzles. And some of the answers may already be staring us in the face.

The Trouble With String Theory and Inflation

The current orthodoxy is the culmination of decades of effort by thousands of serious theorists. According to string theory, the basic building blocks of the universe are minuscule, vibrating loops and pieces of sub-atomic string. As currently understood, the theory only works if there are more dimensions of space than the three we experience. So, string theorists assume that the reason we don’t detect them is that they are tiny and curled up.

Unfortunately, this makes string theory hard to test, since there are an almost unimaginable number of ways in which the small dimensions can be curled up, with each giving a different set of physical laws in the remaining, large dimensions.

Meanwhile, cosmic inflation is a scenario proposed in the 1980s to explain why the universe is so smooth and flat on the largest scales we can see. The idea is that the infant universe was small and lumpy, but an extreme burst of ultra-rapid expansion blew it up vastly in size, smoothing it out and flattening it to be consistent with what we see today.

Inflation is also popular because it potentially explains why the energy density in the early universe varied slightly from place to place. This is important because the denser regions would have later collapsed under their own gravity, seeding the formation of galaxies.

Over the past three decades, the density variations have been measured more and more accurately both by mapping the cosmic microwave background—the radiation from the big bang—and by mapping the three-dimensional distribution of galaxies.

In most models of inflation, the early extreme burst of expansion which smoothed and flattened the universe also generated long-wavelength gravitational waves—ripples in the fabric of space-time. Such waves, if observed, would be a “smoking gun” signal confirming that inflation actually took place. However, so far the observations have failed to detect any such signal. Instead, as the experiments have steadily improved, more and more models of inflation have been ruled out.

Furthermore, during inflation, different regions of space can experience very different amounts of expansion. On very large scales, this produces a multiverse of post-inflationary universes, each with different physical properties.

The history of the universe according to the model of cosmic inflation. Image Credit: Wikipedia, CC BY-SA

The inflation scenario is based on assumptions about the forms of energy present and the initial conditions. While these assumptions solve some puzzles, they create others. String and inflation theorists hope that somewhere in the vast inflationary multiverse, a region of space and time exists with just the right properties to match the universe we see.

However, even if this is true (and not one such model has yet been found), a fair comparison of theories should include an “Occam factor,” quantifying Occam’s razor, which penalizes theories with many parameters and possibilities over simpler and more predictive ones. Ignoring the Occam factor amounts to assuming that there is no alternative to the complex, unpredictive hypothesis—a claim I believe has little foundation.

Over the past several decades, there have been many opportunities for experiments and observations to reveal specific signals of string theory or inflation. But none have been seen. Again and again, the observations turned out simpler and more minimal than anticipated.

It is high time, I believe, to acknowledge and learn from these failures and to start looking seriously for better alternatives.

A Simpler Alternative

Recently, my colleague Latham Boyle and I have tried to build simpler and more testable theories that do away with inflation and string theory. Taking our cue from the observations, we have attempted to tackle some of the most profound cosmic puzzles with a bare minimum of theoretical assumptions.

Our first attempts succeeded beyond our most optimistic hopes. Time will tell whether they survive further scrutiny. However, the progress we have already made convinces me that, in all likelihood, there are alternatives to the standard orthodoxy—which has become a straitjacket we need to break out of.

I hope our experience encourages others, especially younger researchers, to explore novel approaches guided strongly by the simplicity of the observations—and to be more skeptical about their elders’ preconceptions. Ultimately, we must learn from the universe and adapt our theories to it rather than vice versa.

Boyle and I started out by tackling one of cosmology’s greatest paradoxes. If we follow the expanding universe backward in time, using Einstein’s theory of gravity and the known laws of physics, space shrinks away to a single point, the “initial singularity.”

In trying to make sense of this infinitely dense, hot beginning, theorists including Nobel laureate Roger Penrose pointed to a deep symmetry in the basic laws governing light and massless particles. This symmetry, called “conformal” symmetry, means that neither light nor massless particles actually experience the shrinking away of space at the big bang.

By exploiting this symmetry, one can follow light and particles all the way back to the beginning. Doing so, Boyle and I found we could describe the initial singularity as a “mirror”: a reflecting boundary in time (with time moving forward on one side, and backward on the other).

Picturing the big bang as a mirror neatly explains many features of the universe which might otherwise appear to conflict with the most basic laws of physics. For example, for every physical process, quantum theory allows a “mirror” process in which space is inverted, time is reversed, and every particle is replaced with its anti-particle (a particle similar to it in almost all respects, but with the opposite electric charge).

According to this powerful symmetry, called CPT symmetry, the “mirror” process should occur at precisely the same rate as the original one. One of the most basic puzzles about the universe is that it appears to violate CPT symmetry because time always runs forward and there are more particles than anti-particles.
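The content of the symmetry is easy to state concretely. Here is a toy sketch in code (a bookkeeping illustration only, not actual quantum field theory): represent a process by a particle’s charge, position, and direction in time; CPT flips all three, and applying it twice returns the original.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Particle:
    charge: int       # +1 or -1: particle vs. antiparticle
    position: tuple   # spatial coordinates
    time_dir: int     # +1 forward in time, -1 backward

def cpt(p: Particle) -> Particle:
    """Apply charge conjugation (C), parity (P), and time reversal (T)."""
    return Particle(
        charge=-p.charge,                        # C: particle <-> antiparticle
        position=tuple(-x for x in p.position),  # P: invert space
        time_dir=-p.time_dir,                    # T: reverse time
    )

electron = Particle(charge=-1, position=(1.0, 2.0, 3.0), time_dir=+1)
mirror = cpt(electron)
print(mirror)                   # antiparticle, space inverted, time reversed
print(cpt(mirror) == electron)  # True: applying CPT twice is the identity
```

The point of the toy model is the last line: CPT is its own inverse, so a CPT-symmetric universe pairs every process with its mirror twin.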

Our mirror hypothesis restores the symmetry of the universe. When you look in a mirror, you see your mirror image behind it: if you are left-handed, the image is right-handed and vice versa. The combination of you and your mirror image is more symmetrical than you are alone.

Likewise, when Boyle and I extrapolated our universe back through the big bang, we found its mirror image, a pre-bang universe in which (relative to us) time runs backward and antiparticles outnumber particles. For this picture to be true, we don’t need the mirror universe to be real in the classical sense (just as your image in a mirror isn’t real). Quantum theory, which rules the microcosmos of atoms and particles, challenges our intuition, so at this point the best we can do is to think of the mirror universe as a mathematical device which ensures that the initial condition for the universe does not violate CPT symmetry.

Surprisingly, this new picture provided an important clue to the nature of the unknown cosmic substance called dark matter. Neutrinos are very light, ghostly particles which, typically, move at close to the speed of light and which spin as they move along, like tiny tops. If you point the thumb of your left hand in the direction the neutrino moves, then your four fingers indicate the direction in which it spins. The observed, light neutrinos are called “left-handed” neutrinos.

Heavy “right-handed” neutrinos have never been seen directly, but their existence has been inferred from the observed properties of light, left-handed neutrinos. Stable, right-handed neutrinos would be the perfect candidate for dark matter because they don’t couple to any of the known forces except gravity. Before our work, it was unknown how they might have been produced in the hot early universe.

Our mirror hypothesis allowed us to calculate exactly how many would form and to show they could explain the cosmic dark matter.

A testable prediction followed: If the dark matter consists of stable, right-handed neutrinos, then one of the three known light neutrinos must be exactly massless. Remarkably, this prediction is now being tested using observations of the gravitational clustering of matter made by large-scale galaxy surveys.

The Entropy of Universes

Encouraged by this result, we set about tackling another big puzzle: Why is the universe so uniform and spatially flat, not curved, on the largest visible scales? The cosmic inflation scenario was, after all, invented by theorists to solve this problem.

Entropy is a concept which quantifies the number of different ways a physical system can be arranged. For example, if we put some air molecules in a box, the most likely configurations are those which maximize the entropy—with the molecules more or less smoothly spread throughout space and sharing the total energy more or less equally. These kinds of arguments are used in statistical physics, the field which underlies our understanding of heat, work, and thermodynamics.
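That counting argument can be made concrete with a few lines of code. The sketch below is plain combinatorics, with the molecule count chosen purely for illustration: it counts the ways N molecules can be split between the two halves of a box, and the entropy, the logarithm of that count, peaks at the even split, which is why the smooth arrangement is the most likely one.

```python
from math import comb, log

def entropy(n_total, n_left):
    # Number of ways to place n_left of n_total molecules in the
    # left half of the box (the multiplicity), and its logarithm
    # (Boltzmann entropy in units of k_B: S = ln omega).
    omega = comb(n_total, n_left)
    return log(omega)

N = 100
entropies = {k: entropy(N, k) for k in range(N + 1)}

# The even split maximizes the entropy: the smooth, uniform
# arrangement is overwhelmingly the most likely one.
most_likely = max(entropies, key=entropies.get)
print(most_likely)  # 50
```

The same logic, generalized to include gravity, is what lets one assign an entropy to an entire universe and ask which universe is most likely.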

The late physicist Stephen Hawking and collaborators famously generalized statistical physics to include gravity. Using an elegant argument, they calculated the temperature and the entropy of black holes. Using our “mirror” hypothesis, Boyle and I managed to extend their arguments to cosmology and to calculate the entropy of entire universes.

To our surprise, the universe with the highest entropy (meaning it is the most likely, just like the atoms spread out in the box) is flat and expands at an accelerated rate, just like the real one. So statistical arguments explain why the universe is flat and smooth and has a small, positive rate of accelerated expansion, with no need for cosmic inflation.

How would the primordial density variations, usually attributed to inflation, have been generated in our symmetrical mirror universe? Recently, we showed that a specific type of quantum field (a dimension zero field) generates exactly the type of density variations we observe, without inflation. Importantly, these density variations aren’t accompanied by the long wavelength gravitational waves which inflation predicts—and which haven’t been seen.

These results are very encouraging. But more work is needed to show that our new theory is both mathematically sound and physically realistic.

Even if our new theory fails, it has taught us a valuable lesson. There may well be simpler, more powerful and more testable explanations for the basic properties of the universe than those the standard orthodoxy provides.

By facing up to cosmology’s deep puzzles, guided by the observations and exploring directions as yet unexplored, we may be able to lay more secure foundations for both fundamental physics and our understanding of the universe.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: The mirror universe, with the big bang at the center / Neil Turok, CC BY-SA

Category: Transhumanismus

The Legally Blind See Again With an Implant the Size of a Grain of Salt

Singularity HUB - 29 October, 2024 - 00:12

Seeing is believing. Our perception of the world heavily relies on vision.

What we see depends on cells in the retina, which sit behind the eyes. These delicate cells transform light into electrical pulses that go to the brain for further processing.

But because of age, disease, or genetics, retinal cells often break down. In people with geographic atrophy—a disease that gradually destroys retinal cells—the eyes struggle to focus on text, recognize faces, and decipher color or textures in the dark. The disease especially attacks central vision, which lets our eyes focus on specific things.

The result is seeing the world through a blurry lens. Walking down the street in dim light becomes a nightmare, each surface looking like a distorted version of itself. Reading a book or watching a movie is more frustrating than relaxing.

But the retina is hard to regenerate, and the number of transplant donors can’t meet demand. A small clinical trial may have a solution. Led by Science Corporation, a brain-machine interface company headquartered in Alameda, California, the study implanted a tiny chip that acts like a replacement retina in 38 participants who were legally blind.

In the trial, dubbed PRIMAvera, volunteers wore custom-designed eyewear with a camera acting as a “digital eye.” Captured images were then transmitted to the implanted artificial retina, which translated the information into electrical signals for the brain to decipher.

Preliminary results found a boost in the participants’ ability to read an eye exam chart—a common test of random letters, with each line smaller than the last. Some could even read longer texts in a dim environment at home with the camera’s “zoom-and-enhance” function.

The trial is ongoing, with final results expected in 2026—three years after the implant. But according to Frank Holz of the University of Bonn in Germany, the study’s scientific coordinator, the results are a “milestone” for treating age-related geographic atrophy.

“Prior to this, there have been no real treatment options for these patients,” he said in a press release.

Max Hodak, CEO of Science Corp and former president of Elon Musk’s Neuralink, said, “To my knowledge, this is the first time that restoration of the ability to fluently read has ever been definitively shown in blind patients.”

Eyes Wide Open

The eye is a biological wonder. The eyeball’s layers act as a lens focusing light onto the retina—the eye’s visual “sensor.” The retina contains two types of light-sensitive cells: Rods and cones.

The rods mostly line the outer edges of the retina, letting us see shapes and shadows in the dark or at the periphery. But these cells can’t detect color or sharpen their focus, which is why night vision feels blurrier. However, rods readily pick up action at the edges of sight—such as seeing rapidly moving things out of the corner of your eye.

Cones pick up the slack. These cells are mostly in the center of the retina and can detect vibrant colors and sharply focus on specific things, like the words you’re currently reading.

Both cell types rely on a layer of support cells to flourish. These cells coat the retina and, like soil in a garden, provide a solid foundation in which the rods and cones can grow.

With age, all these cells gradually deteriorate, sometimes resulting in age-related macular degeneration and the gradual loss of central vision. It’s a common condition that affects nearly 20 million Americans aged 40 or older. Details become hard to see; straight lines may seem crooked; colors look dim, especially in low-light conditions. Later stages, called geographic atrophy, result in legal blindness.

Scientists have long searched for a treatment. One idea is to use a 3D-printed stem cell patch made out of the base “garden soil” cells that support light-sensitive rods and cones. Here, doctors transform a patient’s own blood cells into healthy retinal support cells, attach them to a biodegradable scaffold, and transplant them into the eye.

Initial results showed the patch integrated into the retina and slowed and even reversed the disease. But this can take six months and is tailored for each patient, making it difficult to scale.

A New Vision

The Prima system eschews regeneration for a wireless microchip that replaces parts of the retina. The two-millimeter square implant—roughly the size of a grain of salt—is surgically inserted under the retina. The procedure may sound daunting, but according to Wired, it takes only 80 minutes, less time than your average movie. Each chip contains nearly 400 light-sensitive pixels, which convert light patterns into electrical pulses the brain can interpret. The system also includes a pair of glasses with a camera to capture visual information and beam it to the chip using infrared light.

Together, the components work like our eyes do: Images from the camera are sent to the artificial retina “chip,” which transforms them into electrical signals for the brain.

Initial results were promising. According to the company, the patients had improved visual acuity a year after the implant. At the beginning of the study, most were considered legally blind with an average vision of 20/450, compared to the normal 20/20. When challenged with an eye exam test, the patients could read, on average, roughly 23 more letters—or five more lines down the chart—compared to tests taken before they received the implant. One patient especially excelled, improving their performance by 59 letters—over 11 lines.
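The letters-to-lines arithmetic in these figures is easy to check. The sketch below assumes a standard ETDRS-style chart with five letters per line (an assumption about how the trial scored vision, not a detail reported in the study) and also converts the 20/450 baseline to logMAR, the scale clinicians use.

```python
from math import log10

LETTERS_PER_LINE = 5  # standard ETDRS chart layout (assumed here)

def letters_to_lines(letters_gained):
    """Convert letters gained on the chart to lines gained."""
    return letters_gained / LETTERS_PER_LINE

print(letters_to_lines(23))  # 4.6 -> "roughly five more lines"
print(letters_to_lines(59))  # 11.8 -> "over 11 lines"

# The 20/450 baseline acuity expressed in logMAR (0.0 is normal 20/20):
print(round(log10(450 / 20), 2))  # 1.35
```

Under those conventions the reported numbers are internally consistent: 23 letters is just under five chart lines, and the standout patient’s 59 letters is nearly 12.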

The Prima implant also impacted their daily lives. Participants were able to read, play cards, and tackle crossword puzzles—all activities that require central vision.

While impressive, the system didn’t work for everyone. The implant caused serious side effects in some participants—such as a small tear in the retina—which, according to the company, were mostly resolved. Some people also experienced blood leaks under the retina that were promptly treated. However, few details about the injuries or treatments were released.

The trial is ongoing, with the goal of following participants for three years to track improvements and monitor side effects. The team is also looking to measure their quality of life—how the system affects daily activities that require vision and mental health.

The trial “represents an enormous turning point for the field, and we’re incredibly excited to bring this important technology to market over the next few years,” said Hodak.

Image Credit: Arteum.ro on Unsplash

Category: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through October 26)

Singularity HUB - 26 October, 2024 - 15:00
ARTIFICIAL INTELLIGENCE

Anthropic Wants Its AI Agent to Control Your Computer
Will Knight | Wired
“It took a while for people to adjust to the idea of chatbots that seem to have minds of their own. The next leap into the unknown may involve trusting artificial intelligence to take over our computers, too. Anthropic, a high-flying competitor to OpenAI, announced [this week] that it has taught its AI model Claude to do a range of things on a computer, including search the web, open applications, and input text using the mouse and keyboard.”

BIOTECH

A Neuralink Rival Says Its Eye Implant Restored Vision in Blind People
Emily Mullin | Wired
“For years, they had been losing their central vision—what allows people to see letters, faces, and details clearly. The light-receiving cells in their eyes had been deteriorating, gradually blurring their sight. But after receiving an experimental eye implant as part of a clinical trial, some study participants can now see well enough to read from a book, play cards, and fill in a crossword puzzle despite being legally blind.”

COMPUTING

DNA Has Been Modified to Make It Store Data 350 Times Faster
Karmela Padavic-Callaghan | New Scientist
“DNA has been used for years to store data, but encoding information into the molecule is painstaking work. Now, researchers have drastically sped it up by mimicking a natural biological process that drives gene expression. This could lead to durable, do-it-yourself DNA data storage technologies.”

TRANSPORTATION

Air Taxis and Other Electric-Powered Aircraft Cleared for Takeoff With Final FAA Rules
Andrew J. Hawkins | The Verge
“The FAA says these ‘powered-lift’ vehicles will be the first completely new category of aircraft since helicopters were introduced in 1940. These aircraft will be used for a variety of services, including air taxis, cargo delivery, and rescue and retrieval operations. The final rules published today contain guidelines for pilot training as well as operational requirements regarding minimum safe altitudes and visibility.”

ENERGY

Cheap Solar Panels Are Changing the World
Zoë Schlanger | The Atlantic
“‘In a single year, in a single technology, we’re providing as much new electricity as the entirety of global growth the year before,’ Kingsmill Bond, a senior energy strategist at RMI, a clean-energy nonprofit, told me. A decade or two ago, analysts ‘did not imagine in their wildest dreams that solar by the middle of the 2020s would already be supplying all of the growth of global electricity demand,’ he said. Yet here we are.”

ENVIRONMENT

GMOs Could Reboot Chestnut Trees
Anya Kamenetz | MIT Technology Review
“The sprouts, no higher than our knees, are samples of likely the first genetically modified trees to be considered for federal regulatory approval as a tool for ecological restoration. American Castanea’s founders, and all the others here today, hope that the American chestnut (Castanea dentata) will be the first tree species ever brought back from functional extinction—but, ideally, not the last.”

FUTURE

Here’s What the Regenerative Cities of Tomorrow Could Look Like
Kotaro Okada | Wired
“Wired Japan collaborated with the urban design studio For Cities to highlight some of the world’s best sustainable urban developments, which are harbingers of what is to come. From using local materials and construction methods to restoring ecosystems, these projects go beyond merely making green spaces and provide hints of how cities of the future will function as well as how they will be built. Here are some places where the future is now.”

AUTOMATION

How Wayve’s Driverless Cars Will Meet One of Their Biggest Challenges Yet
Will Douglas Heaven | MIT Technology Review
“The move to the US will be a test of Wayve’s technology, which the company claims is more general-purpose than what many of its rivals are offering. Wayve’s approach has attracted massive investment—including a $1 billion funding round that broke UK records this May—and partnerships with Uber and online grocery firms such as Asda and Ocado. But it will now go head to head with the heavyweights of the growing autonomous-car industry, including Cruise, Waymo, and Tesla.”

PRIVACY

Two Students Created Face Recognition Glasses. It Wasn’t Hard.
Kashmir Hill | The New York Times
“Mr. Nguyen and a fellow Harvard student, Caine Ardayfio, had built glasses used for identifying strangers in real time, and had demonstrated them on two ‘real people’ at the subway station, including Mr. Hoda, whose name was incorrectly transcribed in the video captions as ‘Vishit.’ Mr. Nguyen and Mr. Ardayfio, who are both 21 and studying engineering, said in an interview that their system relied on widely available technologies.”

ETHICS

Google Is Now Watermarking Its AI-Generated Text
Eliza Strickland | IEEE Spectrum
“The system, called SynthID-Text, doesn’t compromise ‘the quality, accuracy, creativity, or speed of the text generation,’ says Pushmeet Kohli, vice president of research at Google DeepMind and a coauthor of the paper. But the researchers acknowledge that their system is far from foolproof, and isn’t yet available to everyone—it’s more of a demonstration than a scalable solution.”

Image Credit: Sylwia Bartyzel on Unsplash

Category: Transhumanismus

‘Electric Plastic’ Could Merge Technology With the Body in Future Wearables and Implants

Singularity HUB - 25 October, 2024 - 21:40

Finding ways to connect the human body to technology could have broad applications in health and entertainment. A new “electric plastic” could make self-powered wearables, real-time neural interfaces, and medical implants that merge with our bodies a reality.

While there has been significant progress in the development of wearable and implantable technology in recent years, most electronic materials are hard, rigid, and feature toxic metals. A variety of approaches for creating “soft electronics” has emerged, but finding ones that are durable, power-efficient, and easy to manufacture is a significant challenge.

Organic ferroelectric materials are promising because they exhibit spontaneous polarization, which means they have a stable electric field pointing in a particular direction. This polarization can be flipped by applying an external electrical field, allowing them to function like a bit in a conventional computer.
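The analogy to a bit can be caricatured in a few lines of code. In this toy model (the coercive-field threshold is an arbitrary illustrative number, not a measured property of any material), the polarization sits stably at +1 or -1 and flips only when an opposing field exceeds the threshold.

```python
class FerroelectricBit:
    """Toy model of a ferroelectric memory element: a stable
    polarization that flips only under a strong opposing field."""

    def __init__(self, coercive_field=1.0):
        self.polarization = +1          # stable even with no power applied
        self.coercive_field = coercive_field

    def apply_field(self, e_field):
        # Flip only if the field opposes the current polarization
        # and is strong enough to overcome the coercive threshold.
        if e_field * self.polarization < 0 and abs(e_field) >= self.coercive_field:
            self.polarization = -self.polarization
        return self.polarization

bit = FerroelectricBit()
print(bit.apply_field(-0.5))  # 1: field too weak to flip the state
print(bit.apply_field(-2.0))  # -1: flipped, like writing a "0"
print(bit.apply_field(-2.0))  # -1: same-sign field, no change
```

Lowering the coercive threshold, which is what the peptide-PVDF material described below achieves, means each write takes less voltage and therefore less power.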

The most successful soft ferroelectric is a material called polyvinylidene fluoride (PVDF), which has been used in commercial products like wearable sensors, medical imaging, underwater navigation devices, and soft robots. But PVDF’s electrical properties can break down when exposed to higher temperatures, and it requires high voltages to flip its polarization.

Now, in a paper published in Nature, researchers at Northwestern University have shown that combining the material with short chains of amino acids known as peptides can dramatically reduce power requirements and boost heat tolerance. And the incorporation of biomolecules into the material opens the prospect of directly interfacing electronics with the body.

To create their new “electric plastic,” the team used a type of molecule known as a peptide amphiphile. These molecules feature a water-repelling component that helps them self-assemble into complex structures. The researchers connected these peptides to short strands of PVDF and exposed them to water, causing the peptides to cluster together.

This made the strands coalesce into long, flexible ribbons. In testing, the team found the material could withstand temperatures of 110 degrees Celsius, which is roughly 40 degrees higher than previous PVDF materials. Switching the material’s polarization also required significantly lower voltages, even though the material is 49 percent peptide by weight.

The researchers told Science that as well as being able to store energy or information in the material’s polarization, it’s also biocompatible. This means it could be used in everything from wearable devices that monitor vital signs to flexible implants that can replace pacemakers. The peptides could also be connected to proteins inside cells to record biological activity or even stimulate it.

One challenge is that although PVDF is biocompatible, it can break down into so-called “forever chemicals,” which persist in the environment for centuries and which studies have linked to health and environmental problems. Several other chemicals the researchers used to fabricate their material also fall into this category.

“This advance has enabled a number of attractive properties compared to other organic polymers,” Frank Leibfarth of UNC Chapel Hill told Science. But he pointed out that the researchers had only tested very small amounts of the molecules, and it’s unclear how easy it will be to scale up production.

If the researchers can extend the approach to larger scales, however, it could bring a host of exciting new possibilities at the interface between our bodies and technology.

Image Credit: Mark Seniw/Center for Regenerative Nanomedicine/Northwestern University

Category: Transhumanismus