Blurry, Morphing, and Surreal: A New AI Aesthetic Is Emerging in Film
Type text into AI image and video generators, and you’ll often see outputs of unusual, sometimes creepy, pictures.
In a way, this is a feature, not a bug, of generative AI. And artists are wielding this aesthetic to create a new storytelling art form.
Tools such as Midjourney for generating images, Runway and Sora for producing videos, and Luma AI for creating 3D objects are relatively cheap or free to use. They allow filmmakers without access to major studio budgets or soundstages to make imaginative short films for the price of a monthly subscription.
I’ve studied these new works as the co-director of the AI for Media & Storytelling studio at the University of Southern California.
Surveying the increasingly captivating output of artists from around the world, I partnered with curators Jonathan Wells and Meg Grey Wells to produce the Flux Festival, a four-day showcase of experiments in AI filmmaking, in November 2024.
While this work remains dizzyingly eclectic in its stylistic diversity, I would argue that it offers traces of insight into our contemporary world. I’m reminded that in both literary and film studies, scholars believe that as cultures shift, so do the ways we tell stories.
With this cultural connection in mind, I see five visual trends emerging in film.
1. Morphing, Blurring Imagery
In her “NanoFictions” series, the French artist Karoline Georges creates portraits of transformation. In one short, “The Beast,” a burly man mutates from a two-legged human into a hunched, skeletal cat, before morphing into a snarling wolf.
The metaphor—man is a monster—is clear. But what’s more compelling is the thrilling fluidity of transformation. There’s a giddy pleasure in seeing the figure’s seamless evolution that speaks to a very contemporary sensibility of shapeshifting across our many digital selves.
This sense of transformation continues in the use of blurry imagery that, in the hands of some artists, becomes an aesthetic feature rather than a vexing problem.
Theo Lindquist’s “Electronic Dance Experiment #3,” for example, begins as a series of rapid-fire shots showing flashes of nude bodies in a soft smear of pastel colors that pulse and throb. Gradually it becomes clear that this strange fluidity of flesh is a dance. But the abstraction in the blur offers its own unique pleasure; the image can be felt as much as it can be seen.
2. The Surreal
Thousands of TikTok videos demonstrate how cringy AI images can get, but artists can wield that weirdness and craft it into something transformative. The Singaporean artist known as Niceaunties creates videos that feature older women and cats, riffing on the concept of the “auntie” from Southeast and East Asian cultures.
In one recent video, the aunties let loose clouds of powerful hairspray to hold up impossible towers of hair in a sequence that grows increasingly ridiculous. Even as they’re playful and poignant, the videos created by Niceaunties can pack a political punch. They comment on assumptions about gender and age, for example, while also tackling contemporary issues such as pollution.
On the darker side, in a music video titled “Forest Never Sleeps,” the artist known as Doopiidoo offers up hybrid octopus-women, guitar-playing rats, rooster-pigs, and a wood-chopping ostrich-man. The visual chaos is a sweet match for the accompanying death metal music, with surrealism returning as a powerful form.
Doopiidoo’s uncanny music video ‘Forest Never Sleeps’ leverages artificial intelligence to create surreal visuals. Image Credit: Doopiidoo
3. Dark Tales
The often-eerie vibe of so much AI-generated imagery works well for chronicling contemporary ills, a fact that several filmmakers use to unexpected effect.
In “La Fenêtre,” Lucas Ortiz Estefanell of the AI agency SpecialGuestX pairs diverse image sequences of people and places with a contemplative voice-over to ponder ideas of reality, privacy, and the lives of artificially generated people. At the same time, he wonders about the strong desire to create these synthetic worlds. “When I first watched this video,” recalls the narrator, “the meaning of the image ceased to make sense.”
In the music video titled “Closer,” based on a song by Iceboy Violet and Nueen, filmmaker Mau Morgó captures the world-weary exhaustion of Gen Z through dozens of youthful characters slumbering, often under the green glow of video screens. The snapshot of a generation that has come of age in the era of social media and now artificial intelligence, pictured here with phones clutched close to their bodies as they murmur in their sleep, feels quietly wrenching.
The music video for ‘Closer’ spotlights a generation awash in screens. Image Credit: Mau Morgó, Closer – Violet, Nueen
4. Nostalgia
Sometimes filmmakers turn to AI to capture the past.
Rome-based filmmaker Andrea Ciulu uses AI to reimagine 1980s East Coast hip-hop culture in “On These Streets,” which depicts the city’s expanse and energy through breakdancing as kids run through alleys and then spin magically up into the air.
Ciulu says that he wanted to capture New York’s urban milieu, all of which he experienced at a distance, from Italy, as a kid. The video thus evokes a sense of nostalgia for a mythic time and place to create a memory that is also hallucinatory.
Similarly, David Slade’s “Shadow Rabbit” borrows black-and-white imagery reminiscent of the 1950s to show small children discovering miniature animals crawling about on their hands. In just a few seconds, Slade depicts the enchanting imagination of children and links it to generated imagery, underscoring AI’s capacities for creating fanciful worlds.
5. New Times, New Spaces
In his video for the song “The Hardest Part” by Washed Out, filmmaker Paul Trillo creates an infinite zoom that follows a group of characters down the seemingly endless aisle of a school bus, through the high school cafeteria and out onto the highway at night. The video perfectly captures the zoominess of time and the collapse of space for someone young and in love haplessly careening through the world.
The freewheeling camera also characterizes the work of Montreal-based duo Vallée Duhamel, whose music video “The Pulse Within” spins and twirls, careening up and around characters who are cut loose from the laws of gravity.
In both music videos, viewers experience time and space as a dazzling, topsy-turvy vortex where the rules of traditional time and space no longer apply.
In Vallée Duhamel’s ‘The Pulse Within,’ the rules of physics no longer apply. Image Credit: Vallée Duhamel
Right now, in a world where algorithms increasingly shape everyday life, many works of art are beginning to reflect how intertwined we’ve become with computational systems.
What if machines are suggesting new ways to see ourselves, as much as we’re teaching them to see like humans?
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Banner Image: A still from Theo Lindquist’s short film ‘Electronic Dance Experiment #3.’
Thousands of Undiscovered Genes May Be Hidden in DNA ‘Dark Matter’
Thousands of new genes are hidden inside the “dark matter” of our genome.
Previously thought to be noise left over from evolution, a new study found that some of these tiny DNA snippets can make miniproteins—potentially opening a new universe of treatments, from vaccines to immunotherapies for deadly brain cancers.
The preprint, not yet peer-reviewed, is the latest from a global consortium that hunts down potential new genes. Ever since the Human Genome Project completed its first draft at the turn of the century, scientists have tried to decipher the genetic book of life. Buried within the four genetic letters—A, T, C, and G—and the proteins they encode is a wealth of information that could help tackle our most frustrating medical foes, such as cancer.
The Human Genome Project’s initial findings came as a surprise. Scientists found fewer than 30,000 genes that build our bodies and keep them running—roughly a third of the number previously predicted. Now, roughly 20 years later, as the technologies that sequence our DNA or map proteins have become increasingly sophisticated, scientists are asking: “What have we missed?”
The new study filled the gap by digging into relatively unexplored portions of the genome. Called “non-coding,” these parts haven’t yet been linked to any proteins. Combining several existing datasets, the team zeroed in on thousands of potential new genes that make roughly 3,000 miniproteins.
Whether these proteins are functional remains to be tested, but initial studies suggest some are involved in a deadly childhood brain cancer. The team is releasing their tools and results to the wider scientific community for further exploration. The platform isn’t just limited to deciphering the human genome; it can delve into the genetic blueprint of other animals and plants as well.
Even though mysteries remain, the results “help provide a more complete picture of the coding portion of the genome,” Ami Bhatt at Stanford University told Science.
What’s in a Gene?
A genome is like a book without punctuation. Sequencing one is relatively easy today, thanks to cheaper costs and higher efficiency. Making sense of it is another matter.
Ever since the Human Genome Project, scientists have searched our genetic blueprint to find the “words,” or genes, that make proteins. These DNA words are further broken down into three-letter codons, each one encoding a specific amino acid—the building block of a protein.
A gene, when turned on, is transcribed into messenger RNA. These molecules shuttle genetic information from DNA to the cell’s protein-making factory, called the ribosome. Picture it as a sliced bun, with an RNA molecule running through it like a piece of bacon.
When first defining a gene, scientists focus on open reading frames. These are made of specific DNA sequences that dictate where a gene starts and stops. Like a search function, the framework scans the genome for potential genes, which are then validated with lab experiments based on myriad criteria. These include whether they can make proteins of a certain size—more than 100 amino acids. Sequences that meet the mark are compiled into GENCODE, an international database of officially recognized genes.
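To make that scan concrete, here is a minimal sketch of an open-reading-frame search, assuming a simplified forward-strand scan and a roughly 100-amino-acid cutoff; the real GENCODE pipeline applies many more criteria and lab validation.

```python
# Minimal open-reading-frame (ORF) scan on the forward strand of a DNA string.
# This is a toy sketch, not the GENCODE annotation pipeline.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_amino_acids=100):
    """Return (start, end) indices of candidate ORFs."""
    orfs = []
    for frame in range(3):                        # three possible reading frames
        i = frame
        while i < len(dna) - 2:
            if dna[i:i + 3] == "ATG":             # start codon
                j = i + 3
                while j < len(dna) - 2 and dna[j:j + 3] not in STOP_CODONS:
                    j += 3                        # advance codon by codon
                length_aa = (j - i) // 3 - 1      # exclude the start codon
                if dna[j:j + 3] in STOP_CODONS and length_aa >= min_amino_acids:
                    orfs.append((i, j + 3))
                i = j + 3                         # continue past this candidate
            else:
                i += 3
    return orfs

# Example: one candidate ORF roughly 120 amino acids long
print(find_orfs("ATG" + "GCT" * 120 + "TAA"))
```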
Genes that encode proteins have attracted the most attention because they aid our understanding of disease and inspire ways to treat it. But much of our genome is “non-coding,” in that large sections of it don’t make any known proteins.
For years, these chunks of DNA were considered junk—the defunct remains of our evolutionary past. Recent studies, however, have begun revealing hidden value. Some bits regulate when genes turn on or off. Others, such as telomeres, protect against the degradation of DNA as it replicates during cell division and ward off aging.
Still, the dogma was that these sequences don’t make proteins.
A New Lens
Recent evidence is piling up that non-coding areas do have protein-making segments that affect health.
One study found that a small missing section in supposedly non-coding areas caused inherited bowel troubles in infants. In mice genetically engineered to mimic the same problem, restoring the DNA snippet—not yet defined as a gene—reduced their symptoms. The results highlight the need to go beyond known protein-coding genes to explain clinical findings, the authors wrote.
Dubbed non-canonical open reading frames (ncORFs), or “maybe-genes,” these snippets have popped up across human cell types and diseases, suggesting they have physiological roles.
In 2022, the consortium behind the new study began peeking into potential functions, hoping to broaden our genetic vocabulary. Rather than sequencing the genome, they looked at datasets that sequenced RNA as it was being turned into proteins in the ribosome.
The method captures the actual output of the genome—even extremely short amino acid chains normally thought too small to make proteins. Their search produced a catalog of over 7,000 human “maybe-genes,” some of which made microproteins that were eventually detected inside cancer and heart cells.
But overall, at that time “we did not focus on the questions of protein expression or functionality,” wrote the team. So, they broadened their collaboration in the new study, welcoming specialists in protein science from over 20 institutions across the globe to make sense of the “maybe-genes.”
They also included several resources that provide protein databases from various experiments—such as the Human Proteome Organization and the PeptideAtlas—and added data from published experiments that use the human immune system to detect protein fragments.
In all, the team analyzed over 7,000 “maybe-genes” from a variety of cells: Healthy, cancerous, and also immortal cell lines grown in the lab. At least a quarter of these “maybe-genes” translated into over 3,000 miniproteins. These are far smaller than normal proteins and have a unique amino acid makeup. They also seem to be more attuned to parts of the immune system—meaning they could potentially help scientists develop vaccines, autoimmune treatments, or immunotherapies.
Some of these newly found miniproteins may not have a biological role at all. But the study gives scientists a new way to interpret potential functions. For quality control, the team organized each miniprotein into a different tier, based on the amount of evidence from experiments, and integrated them into an existing database for others to explore.
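As an illustration of that tiering step only, the toy sketch below ranks a candidate miniprotein by how many independent lines of evidence support it. The evidence categories and cutoffs are assumptions made for the sketch, not the consortium’s actual criteria.

```python
# Toy tiering of a candidate miniprotein by independent lines of evidence.
# The evidence categories and cutoffs are assumptions for this sketch, not the
# consortium's actual criteria.

def assign_tier(evidence: dict) -> int:
    """Tier 1 = strongest support, tier 3 = weakest."""
    sources = sum([
        evidence.get("ribosome_profiling", False),   # observed being translated
        evidence.get("mass_spectrometry", False),    # detected as a peptide
        evidence.get("immunopeptidomics", False),    # presented to immune cells
    ])
    if sources >= 2:
        return 1
    if sources == 1:
        return 2
    return 3

candidate = {"ribosome_profiling": True, "mass_spectrometry": True}
print(assign_tier(candidate))   # -> 1
```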
We’re just beginning to probe our genome’s dark matter. Many questions remain.
“A unique capacity of our multi-consortium collaboration is the ability to develop consensus on the key challenges” that we feel need answers, wrote the team.
For example, some experiments used cancer cells, meaning that certain “maybe-genes” might only be active in those cells—but not in normal ones. Should they be called genes?
From here, deep learning and other AI methods may help speed up analysis. Although annotating genes is “historically rooted in manual inspection” of the data, wrote the authors, AI can churn through multiple datasets far faster, if only as a first pass to find new genes.
How many might scientists discover? “50,000 is in the realm of possibility,” study author Thomas Martinez told Science.
Image Credit: Miroslaw Miras from Pixabay
This Week’s Awesome Tech Stories From Around the Web (Through December 7)
The GPT Era Is Already Ending
Matteo Wong | The Atlantic
“[OpenAI] has been unusually direct that the o1 series is the future: Chen, who has since been promoted to senior vice president of research, told me that OpenAI is now focused on this ‘new paradigm,’ and Altman later wrote that the company is prioritizing ‘o1 and its successors.’ The company believes, or wants its users and investors to believe, that it has found some fresh magic. The GPT era is giving way to the reasoning era.”
Falcon 9 Reaches a Flight Rate 30 Times Higher Than Shuttle at 1/100th the Cost
Eric Berger | Ars Technica
“Space enthusiast Ryan Caton also crunched the numbers on the number of SpaceX launches this year compared to some of its competitors. So far this year, SpaceX has launched as many rockets as Roscosmos has since 2013, United Launch Alliance since 2010, and Arianespace since 2009. This year alone, the Falcon 9 has launched more times than the Ariane 4, Ariane 5, or Atlas V rockets each did during their entire careers.”
These Temporary Tattoos Can Read Your Brainwaves
Ed Cara | Gizmodo
“The future of diagnostic medicine is gearing up to look a bit more cyberpunk. Scientists have just unveiled technology that should allow people to one day have their brains and bodies monitored via customized, temporary electronic tattoos. Scientists at the University of Texas at Austin and others developed the tech, which aims to avoid the limitations of conventional electroencephalography, or EEG, testing.”
Another Crypto Revolution Is Here—and It’s Unlike Any From the Past
Yueqi Yang | The Information
“The new period of crypto that’s beginning to unfold is shaping up to be starkly different from previous ones. A few years ago, cryptonians wanted to talk about topics like Web3, DeFi and the metaverse, and they gambled heavily on speculative assets: most notably NFTs and crypto coins that traded on meme stock–like hype. For now, they appear far more temperate and are placing an enormous priority on stablecoins, theoretically a less risky form of crypto since they’re backed by dollar reserves.”
Waymo’s Next Robotaxi City Will Be Miami
Andrew J. Hawkins | The Verge
“Waymo is making the moves on Magic City. Alphabet’s robotaxi service said it would launch in Miami in 2026. The company has been testing its autonomous vehicles in the Florida city on-and-off since 2019, and more recently has begun to lay the groundwork in earnest. Waymo plans to start ‘reacquainting’ its autonomous Jaguar I-Pace vehicles to Miami’s streets in 2025. And in 2026, it expects to start making its vehicles available to riders through its Waymo One ridehail app.”
The Inside Story of Apple Intelligence
Steven Levy | Wired
“Google, Meta, and Microsoft, as well as startups like OpenAI and Anthropic, all had well-developed strategies for generative AI by the time Apple finally announced its own push this June. Conventional wisdom suggested this entrance was unfashionably late. Apple disagrees. Its leaders say the company is arriving just in time—and that it’s been stealthily preparing for this moment for years.”
ChatGPT Now Has Over 300 Million Weekly Users
Emma Roth | The Verge
“OpenAI CEO Sam Altman revealed the milestone during The New York Times’ DealBook Summit on Wednesday, which comes just months after ChatGPT hit 200 million weekly users in August. ‘Our product has scaled … now we have more than 300 million weekly active users,’ Altman said. ‘We have users sending more than 1 billion messages per day to ChatGPT.'”
Would You Eat Dried Microbes? This Company Hopes So.
Casey Crownhart | MIT Technology Review
“LanzaTech, a rising star in the fuel and chemical industries, is joining a growing group of businesses producing microbe-based food as an alternative to plant and animal products. Using microbes to make food is hardly new—beer, yogurt, cheese, and tempeh all rely on microbes to transform raw ingredients into beloved dishes. But some companies are hoping to create a new category of food, one that relies on microbes themselves as a primary ingredient in our meals.”
OpenAI Is Working With Anduril to Supply the US Military With AI
Will Knight | Wired
“OpenAI, maker of ChatGPT and one of the most prominent artificial intelligence companies in the world, said today that it has entered a partnership with Anduril, a defense startup that makes missiles, drones, and software for the United States military. It marks the latest in a series of similar announcements made recently by major tech companies in Silicon Valley, which has warmed to forming closer ties with the defense industry.”
Image Credit: Declan Sun on Unsplash
Jamelle Lindo on Emotional Intelligence in the Age of AI: Harness the Power of Emotion
Google DeepMind’s New AI Weatherman Tops World’s Most Reliable System
This was another year of rollercoaster weather. Heat domes broiled the US southwest. California experienced a “second summer” in October, with multiple cities breaking heat records. Hurricane Helene—and just a few weeks later, Hurricane Milton—pummeled the Gulf Coast, unleashing torrential rainfall and severe flooding. What shocked even seasoned meteorologists was how fast the hurricanes intensified, with one choking up as he said “this is just horrific.”
When bracing for extreme weather, every second counts. But planning measures rely on accurate predictions. Here’s where AI comes in.
This week, Google DeepMind unveiled an AI that predicts weather 15 days in advance in minutes, rather than the hours usually needed with traditional models. In a head-to-head with the European Center for Medium-Range Weather Forecasts’ model (ENS)—the best “medium-range” weather forecaster today—the AI won over 90 percent of the time.
Dubbed GenCast, the algorithm is DeepMind’s latest foray into weather prediction. Last year, the team released a version that made strikingly accurate 10-day forecasts. GenCast differs in its machine learning architecture. True to its name, it’s a generative AI model, roughly similar to those that power ChatGPT and Gemini or that generate images and videos from a text prompt.
The setup gives GenCast an edge over previous models, which usually provide a single predicted weather path. GenCast, in contrast, pumps out 50 or more predictions—each representing a potential weather trajectory, along with its likelihood.
In other words, the AI “imagines” a multiverse of future weather possibilities and picks the one with the largest chance of occurring.
GenCast didn’t just excel at day-to-day weather prediction. It also beat ENS at predicting extreme weather—heat, cold, and high wind speeds. Challenged with data from Typhoon Hagibis—the deadliest tropical cyclone to strike Japan in decades—GenCast visualized possible routes seven days before landfall.
“As climate change drives more extreme weather events, accurate and trustworthy forecasts are more essential than ever,” wrote study authors Ilan Price and Matthew Wilson in a DeepMind blog post.
Embracing Uncertainty
Predicting weather is notoriously difficult. This is largely because weather is a chaotic system. You might have heard of the “butterfly effect”—a butterfly flaps its wings, stirring a tiny change in the atmosphere and triggering tsunamis and other weather disasters a world apart. Although just a metaphor, it highlights that any small changes in initial weather conditions can rapidly spread across large regions, changing weather outcomes.
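That sensitivity is easy to demonstrate numerically. The sketch below uses the classic Lorenz system, a textbook stand-in for atmospheric convection rather than a real weather model, to show two simulations whose starting points differ by one part in a million drifting far apart.

```python
# Two trajectories of the Lorenz system, a textbook model of atmospheric
# convection, starting one part in a million apart. Their growing separation
# illustrates why small errors in initial conditions wreck long-range forecasts.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)        # perturbed by one part in a million

for step in range(1, 3001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 1000 == 0:
        print(f"step {step}: difference in x = {abs(a[0] - b[0]):.4f}")
```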
For decades, scientists have tried to emulate these processes using physical simulations of the Earth’s atmosphere. By gathering data from weather stations across the globe and satellites, they’ve written equations mapping current estimates of the weather and forecasting how they’ll change over time.
The problem? The deluge of data takes hours, if not days, to crunch on supercomputers, and consumes a huge amount of energy.
AI may be able to help. Rather than mimicking the physics of atmospheric shifts or the swirls of our oceans, these systems slurp up decades of data to find weather patterns. GraphCast, released in 2023, captured more than a million points across our planet’s surface to predict 10-day weather in less than a minute. Others in the race to improve weather forecasting include Huawei’s Pangu-Weather and NowcastNet, both developed in China. The latter gauges the chance of rain with high accuracy—one of the toughest aspects of weather prediction.
But weather is finicky, and GraphCast and similar AI weather-prediction models are deterministic: They forecast only a single weather trajectory. The weather community is now increasingly embracing “ensemble models,” which predict a range of possible scenarios.
“Such ensemble forecasts are more useful than relying on a single forecast, as they provide decision makers with a fuller picture of possible weather conditions in the coming days and weeks and how likely each scenario is,” wrote the team.
Cloudy With a Chance of Rain
GenCast tackles the weather’s uncertainty head-on. The AI mainly relies on a diffusion model, a type of generative AI. Overall, it incorporates 12 metrics about the Earth’s surface and atmosphere—such as temperature, wind speed, humidity, and atmospheric pressure—traditionally used to gauge weather.
The team trained the AI on 40 years of historical weather data from a publicly available database up to 2018. Rather than asking for one prediction, they had GenCast spew out a number of forecasts, each one starting with a slightly different weather condition—a different “butterfly,” so to speak. The results were then combined into an ensemble forecast, which also predicted the chance of each weather pattern actually occurring.
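Conceptually, the ensemble step looks something like the sketch below, where a hypothetical forecast function stands in for the trained model; the real GenCast pipeline is far more elaborate, but the idea of perturbing the starting conditions and counting outcomes across roughly 50 members is the same.

```python
import random

# Toy ensemble forecast: perturb the starting conditions, run many forecasts,
# and report the fraction of members in which a given event occurs.
# `forecast` is a stand-in for a trained model like GenCast, not its real API.

def forecast(state, days=15):
    """Hypothetical single-trajectory forecast of daily peak wind speed (m/s)."""
    winds, wind = [], state["wind"]
    for _ in range(days):
        wind = max(0.0, wind + random.gauss(0, 2.0))   # placeholder dynamics
        winds.append(wind)
    return winds

def ensemble_probability(initial_state, event, members=50):
    hits = 0
    for _ in range(members):
        perturbed = {k: v + random.gauss(0, 0.1) for k, v in initial_state.items()}
        if event(forecast(perturbed)):
            hits += 1
    return hits / members

today = {"wind": 12.0}
p = ensemble_probability(today, event=lambda trajectory: max(trajectory) > 20.0)
print(f"Chance of winds above 20 m/s in the next 15 days: {p:.0%}")
```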
When tested with weather data from 2019, which GenCast had never seen, the AI outperformed the current leader, ENS—especially for longer-term forecasting up to 15 days. Checked against recorded data, the AI outperformed ENS 97 percent of the time across 1,300 measures of weather prediction.
GenCast’s predictions are also blazingly fast. Compared to the hours on supercomputers usually needed to generate results, the AI churned out predictions in roughly eight minutes. If adopted, the system could add valuable time for emergency notices.
All for One
Although GenCast wasn’t explicitly trained to forecast severe weather patterns, it was able to predict the path of Typhoon Hagibis before landfall in central Japan. One of the deadliest storms in decades, the typhoon flooded neighborhoods up to the rooftops as water broke through levees and took out much of the region’s electrical power.
GenCast’s ensemble prediction was like a movie. It began with a relatively wide range of possible paths for Typhoon Hagibis seven days before landfall. As the storm edged closer, however, the AI got more accurate, narrowing its predictive path. Although not perfect, GenCast painted an overall trajectory of the devastating cyclone that closely matched recorded data.
Given a week of lead time, “GenCast can provide substantial value in decisions about when and how to prepare for tropical cyclones,” wrote the authors.
Accurate and longer predictions don’t just help prepare for future climate challenges. They could also help optimize renewable energy planning. Take wind power. Predicting where, when, and how strong wind is likely to blow could increase the power source’s reliability—reducing costs and potentially upping adoption of the technology. In a proof-of-concept analysis, GenCast was more accurate than ENS at predicting total wind power generated by over 5,000 wind power plants across the globe, opening the possibility of building wind farms based on data.
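As a rough illustration of how a wind forecast becomes an energy estimate, the sketch below runs forecast wind speeds through a toy turbine power curve. The cut-in, rated, and cut-out speeds are invented for the example, not taken from the GenCast analysis or any real turbine.

```python
# Toy power-curve conversion from forecast wind speed to turbine output.
# Cut-in, rated, and cut-out speeds are illustrative, not a real turbine spec.

def turbine_power_mw(wind_speed, cut_in=3.0, rated_speed=12.0, cut_out=25.0, rated_mw=2.0):
    if wind_speed < cut_in or wind_speed > cut_out:
        return 0.0
    if wind_speed >= rated_speed:
        return rated_mw
    # Power grows roughly with the cube of wind speed between cut-in and rated speed.
    return rated_mw * ((wind_speed ** 3 - cut_in ** 3) / (rated_speed ** 3 - cut_in ** 3))

forecast_wind = [4.5, 7.0, 11.0, 14.0, 2.0]          # m/s, one value per hour
energy_mwh = sum(turbine_power_mw(w) for w in forecast_wind)
print(f"Estimated output: {energy_mwh:.2f} MWh over the forecast window")
```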
GenCast isn’t the only AI weatherman. Nvidia’s FourCastNet also uses generative AI to predict weather with a lower energy cost than traditional methods. Google Research has also engineered myriad weather-predicting algorithms, including NeuralGCM and SEEDS. Some are being integrated into Google search and maps, including rain forecasts, wildfires, flooding, and heat alerts. Microsoft joined the race with ClimaX, a flexible AI that can be tailored to generate predictions from hours to months ahead (with varying accuracies).
All this is not to say AI will be taking jobs from meteorologists. The DeepMind team stresses that GenCast wouldn’t be possible without foundational work from climate scientists and physics-based models. To give back, they’re releasing aspects of GenCast to the wider weather community to gain further insights and feedback.
Image Credit: NASA
Automated Cyborg Cockroach Factory Could Churn Out a Bug a Minute for Search and Rescue
Envisioning armies of electronically controllable insects is probably nightmare fuel for most people. But scientists think they could help rescue workers scour challenging and hazardous terrain. An automated cyborg cockroach factory could help bring the idea to life.
The merger of living creatures with machines is a staple of science fiction, but it’s also a serious line of research for academics. Several groups have implanted electronics into moths, beetles, and cockroaches that allow simple control of the insects.
However, building these cyborgs is tricky as it takes considerable dexterity and patience to surgically implant electrodes in their delicate bodies. This means that creating enough for most practical applications is simply too time-consuming.
To overcome this obstacle, researchers at Nanyang Technological University in Singapore have automated the process, using a robotic arm with computer vision to install electrodes and tiny backpacks full of electronics on Madagascar hissing cockroaches. The approach cuts the time required to attach the equipment from roughly half an hour to just over a minute.
“In the future, factories for insect-computer hybrid robot[s] could be built to satisfy the needs for fast preparation and application of the hybrid robots,” the researchers write in a non-peer-reviewed paper on arXiv.
“Different sensors could be added to the backpack to develop applications on the inspection and search missions based on the requirements.”
Cyborg insects could be a promising alternative to conventional robots thanks to their small size, ability to operate for hours on little food, and their adaptability to new environments. As well as helping with search and rescue operations, the researchers suggest that swarms of these robot bugs could be used to inspect factories.
The researchers had already shown that signals from electrodes implanted into cockroach abdomens could be used to control the direction of travel and get them to slow down and even stop. But installing these electrodes and a small backpack with control electronics required painstaking work from a trained researcher.
That kind of approach makes it difficult to scale up to the hundreds or even thousands of insects required for practically useful swarms. So, the team developed an automated system that could install the electronics on a cockroach with minimal human involvement.
First, the researchers anesthetized the cockroaches by exposing them to carbon dioxide for 10 minutes. They then placed the bugs on a platform where a pair of rods powered by a motor pressed down on two segments of their hard exoskeletons to expose a soft membrane just behind the head.
A computer vision system then identified where to implant the electrodes and used this information to guide a robotic arm carrying the electronic backpack. Electrodes in place, the arm pressed the backpack down until its mounting mechanism hooked into another section of the insect’s body. The arm then released the backpack, and the rods retracted to free the cyborg bug.
The entire assembly process takes just 68 seconds, and the resulting cockroaches are just as controllable as ones made manually, the researchers found. A four-bug team was able to cover 80 percent of a 20-square-foot outdoor test environment filled with obstacles in about 10 minutes.
Fabian Steinbeck at Bielefeld University in Germany told New Scientist that using these cyborg bugs for search and rescue might be tricky as they currently have to be controlled remotely. Getting signal in collapsed buildings and similar challenging terrain would be difficult, and we don’t yet have the technology to get them to navigate autonomously.
Rapid improvements in both AI and communication technologies could soon change that though. So, it may not be too far-fetched to imagine swarms of robot bugs coming to your rescue in the near future.
Image Credit: Erik Karits from Pixabay
Astronomers Have Pinpointed the Origin of Mysterious Repeating Radio Bursts From Space
Slowly repeating bursts of intense radio waves from space have puzzled astronomers since they were discovered in 2022.
In new research, my colleagues and I have for the first time tracked one of these pulsating signals back to its source: a common kind of lightweight star called a red dwarf, likely in a binary orbit with a white dwarf, the core of another star that exploded long ago.
A Slowly Pulsing Mystery
In 2022, our team made an amazing discovery: periodic radio pulsations, repeating every 18 minutes, emanating from space. The pulses outshone everything nearby, flashed brilliantly for three months, then disappeared.
We know some repeating radio signals come from a kind of neutron star called a radio pulsar, which spins rapidly (typically once a second or faster), beaming out radio waves like a lighthouse. The trouble is, our current theories say a pulsar spinning only once every 18 minutes should not produce radio waves.
So we thought our 2022 discovery could point to new and exciting physics—or help explain exactly how pulsars emit radiation, which despite 50 years of research is still not understood very well.
More slowly blinking radio sources have been discovered since then. There are now about 10 known “long-period radio transients.”
However, just finding more hasn’t been enough to solve the mystery.
Searching the Outskirts of the Galaxy
Until now, every one of these sources has been found deep in the heart of the Milky Way.
This makes it very hard to figure out what kind of star or object produces the radio waves, because there are thousands of stars in a small area. Any one of them could be responsible for the signal, or none of them.
So, we started a campaign to scan the skies with the Murchison Widefield Array radio telescope in Western Australia, which can observe 1,000 square degrees of the sky every minute. An undergraduate student at Curtin University, Csanád Horváth, processed data covering half of the sky, looking for these elusive signals in more sparsely populated regions of the Milky Way.
One element of the Murchison Widefield Array, a radio telescope in Western Australia that observes the sky at low radio frequencies. Image Credit: ICRAR / Curtin University
And sure enough, we found a new source! Dubbed GLEAM-X J0704-37, it produces minute-long pulses of radio waves, just like other long-period radio transients. However, these pulses repeat only once every 2.9 hours, making it the slowest long-period radio transient found so far.
Where Are the Radio Waves Coming From?
We performed follow-up observations with the MeerKAT telescope in South Africa, the most sensitive radio telescope in the southern hemisphere. These pinpointed the location of the radio waves precisely: They were coming from a red dwarf star. These stars are incredibly common, making up 70 percent of the stars in the Milky Way, but they are so faint that not a single one is visible to the naked eye.
The source of the radio waves, as seen by the MWA at low resolution (magenta circle) and MeerKAT at high resolution (cyan circle). The white circles are all stars in our own Galaxy. Image Credit: Hurley-Walker et al. 2024 / Astrophysical Journal Letters
Combining historical observations from the Murchison Widefield Array and new MeerKAT monitoring data, we found that the pulses arrive a little earlier and a little later in a repeating pattern. This probably indicates that the radio emitter isn’t the red dwarf itself, but rather an unseen object in a binary orbit with it.
Based on previous studies of the evolution of stars, we think this invisible radio emitter is most likely to be a white dwarf, which is the final endpoint of small to medium-sized stars like our own sun. If it were a neutron star or a black hole, the explosion that created it would have been so large it should have disrupted the orbit.
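The “a little earlier, a little later” pattern is what astronomers call timing residuals, and an emitter being swung around a companion produces residuals that trace a roughly sinusoidal curve with the orbital period. The sketch below fits such a curve to invented numbers purely to illustrate the logic; it is not the analysis used in the study.

```python
import math

# Toy fit of a sinusoid to pulse arrival-time residuals, the basic signature of
# an emitter in a binary orbit. All numbers are invented for illustration.

def residual_model(t_hours, amplitude_s, orbital_period_h, phase):
    return amplitude_s * math.sin(2 * math.pi * t_hours / orbital_period_h + phase)

# Pretend residuals (seconds early or late) measured at each 2.9-hour pulse.
observations = [(i * 2.9, residual_model(i * 2.9, 30.0, 72.0, 0.4)) for i in range(25)]

# Brute-force search for the orbital period that best explains the residuals
# (amplitude and phase are held fixed here to keep the sketch short).
best_period, best_err = None, None
for period in range(10, 200):
    err = sum((r - residual_model(t, 30.0, period, 0.4)) ** 2 for t, r in observations)
    if best_err is None or err < best_err:
        best_period, best_err = period, err

print(f"Best-fit orbital period: about {best_period} hours")
```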
It Takes Two to Tango
So, how do a red dwarf and a white dwarf generate a radio signal?
The red dwarf probably produces a stellar wind of charged particles, just like our sun does. When the wind hits the white dwarf’s magnetic field, it would be accelerated, producing radio waves.
This could be similar to how the Sun’s stellar wind interacts with Earth’s magnetic field to produce beautiful aurora and also low-frequency radio waves.
We already know of a few systems like this, such as AR Scorpii, where variations in the brightness of the red dwarf imply that the companion white dwarf is hitting it with a powerful beam of radio waves every two minutes. None of these systems are as bright or as slow as the long-period radio transients, but maybe as we find more examples, we will work out a unifying physical model that explains all of them.
On the other hand, there may be many different kinds of system that can produce long-period radio pulsations.
Either way, we’ve learned the power of expecting the unexpected—and we’ll keep scanning the skies to solve this cosmic mystery.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: An artist’s impression of the exotic binary star system AR Scorpii / Mark Garlick/University of Warwick/ESO, CC BY
This Tiny House Is Made From the Recycled Heart of a Wind Turbine
If you’ve tried to rent or buy a home in the last few years, you may have noticed there’s a severe housing shortage in the US and around the world. Millions of people need homes, and there aren’t nearly enough of them to go around. Plenty of creative, low-cost solutions have been proposed, from inflatable houses to 3D-printed houses, “foldable” houses, and houses that ship in kits to be assembled like furniture.
Now there’s another idea joining the fray, and it carries the added benefit of playing a role in the renewable energy transition: It’s a tiny house made from the nacelle of a decommissioned wind turbine.
The house, unveiled last month as part of Dutch Design Week, is a collaboration between Swedish power company Vattenfall and Dutch architecture firm Superuse Studios. Wind turbines typically have a 20-year lifespan, and Vattenfall is looking for novel ways to repurpose parts of its turbines. With the first generation of large-scale turbines now reaching the end of their useful life, there will be thousands of nacelles (not to mention blades, towers, and generators) in search of a new purpose.
Blades, towers, and generators are the parts of a wind turbine that most people are familiar with, but not so much the nacelle. The giant rectangular box sits at the top of the turbine’s tower and houses its gearbox, shafts, generator, and brake. It’s the beating heart of the turbine, where the blades’ rotation is converted into electricity.
Though it’s big enough to be a tiny house, this particular nacelle is on the small side (as far as nacelles go). It’s 10 feet tall by 13 feet wide by 33 feet long. The interior space of the home is about 387 square feet, or the size of a small studio apartment or hotel room. The nacelle came from one of Vattenfall’s V80 turbines, which was installed at an Austrian wind farm in 2005 and has a production capacity of two megawatts. Turbine technology has come a long way since then; the largest ones in the world are approaching a production capacity of 15 megawatts.
Though there will be larger nacelles available, Superuse Studios intentionally chose a small one for its prototype. Their thinking was, if you can make a livable home in this small of a space, you can definitely make a livable home—and add more features—in a larger space; better to start small and grow than start big then downsize.
Though the house is small, its designers ensured it was fully compliant with Dutch building code and therefore suitable for habitation. It has a kitchen with a sink and a stove, a bathroom with a shower, a dining area, and a combined living/sleeping area. As you’d expect from a house made of recycled wind turbine parts, it’s also climate-friendly: Its electricity comes partly from rooftop solar panels, and it has a bidirectional charger for electric vehicles (meaning power from the house can charge the car or power from the car’s battery can be used in the house). There’s an electric heat pump for temperature control, and a solar heater for hot water.
Solar panels and wind turbines don’t last forever, and they use various raw and engineered materials. When the panels or turbines can’t produce power anymore, what’s to be done with all that concrete, copper, steel, silicon, glass, or aluminum? Finding purposeful ways to reuse or recycle these materials will be a crucial component of a successful transition away from fossil fuels.
“We are looking for innovative ways in which you can reuse materials from used turbines as completely as possible,” said Thomas Hjort, Vattenfall’s director of innovation, in a press release. “So making something new from them with as few modifications as possible. That saves raw materials, energy consumption and in this way we ensure that these materials are useful for many years after their first working life.”
As of right now, the nacelle tiny house is just a proof of concept; there are no plans to start producing more in the immediate future, but it’s not outside the realm of possibility eventually. Picture communities of these houses arranged in rows or circles, with communal spaces or parks in between. Using a larger nacelle, homes with one or two bedrooms could be designed, expanding the possibilities for inhabitants and giving purpose to more decommissioned turbines.
“At least ten thousand of this generation of nacelles are available, spread around the world,” said Jos de Krieger, a partner at Superuse Studios. “Most of them have yet to be decommissioned. This offers perspective and a challenge for owners and decommissioners. If such a complex structure as a house is possible, then numerous simpler solutions are also feasible and scalable.”
If 10,000-plus nacelles are available, that means 30,000-plus blades are available. What innovative use might designers and engineers find for them?
Image Credit: Vattenfall
Most Supposedly ‘Open’ AI Systems Are Actually Closed—and That’s a Problem
“Open” AI models have a lot to give. The practice of sharing source code with the public spurs innovation and democratizes AI as a tool.
Or so the story goes. A new analysis in Nature puts a twist on the narrative: Most supposedly “open” AI models, such as Meta’s Llama 3, are hardly that.
Rather than encouraging or benefiting small startups, the “rhetoric of openness is frequently wielded in ways that…exacerbate the concentration of power” in large tech companies, wrote David Widder at Cornell University, Meredith Whittaker at Signal Foundation, and Sarah West at AI Now Institute.
Why care? Debating AI openness seems purely academic. But with growing use of ChatGPT and other large language models, policymakers are scrambling to catch up. Can models be allowed in schools or companies? What guiderails should be in place to protect against misuse?
And perhaps most importantly, most AI models are controlled by Google, Meta, and other tech giants, which have the infrastructure and financial means to either develop or license the technology—and in turn, guide the evolution of AI to meet their financial incentives.
Lawmakers around the globe have taken note. This year, the European Union adopted the AI Act, the world’s first comprehensive legislation to ensure AI systems used are “safe, transparent, non-discriminatory, and environmentally friendly.” As of September, there were over 120 AI bills in Congress covering privacy, accountability, and transparency.
In theory, open AI models can deliver those needs. But “when policy is being shaped, definitions matter,” wrote the team.
In the new analysis, they broke down the concept of “openness” in AI models across the entire development cycle and pinpointed how the term can be misused.
What Is ‘Openness,’ Anyway?
The term “open source” is nearly as old as software itself.
At the turn of the century, small groups of computing rebels released code for free software that anyone could download and use in defiance of corporate control. They had a vision: Open-source software, such as freely available word processors similar to Microsoft’s, could level the playing field for little guys and allow access to people who couldn’t afford the technology. The code also became a playground, where eager software engineers fiddled around with the code to discover flaws in need of fixing—resulting in more usable and secure software.
With AI, the story’s different. Large language models are built with numerous layers of interconnected artificial “neurons.” Similar to their biological counterparts, the structure of those connections heavily influences a model’s performance in a specific task.
Models are trained by scraping the internet for text, images, and increasingly, videos. As this training data flows through their neural networks, they adjust the strengths of their artificial neurons’ connections—dubbed “weights”—so that they generate desired outputs. Most systems are then evaluated by people to judge the accuracy and quality of the results.
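Adjusting the strengths of those connections is, at bottom, gradient descent. The minimal sketch below shows the core update rule on a single artificial neuron with made-up numbers; real models repeat this over billions of weights and vast training sets.

```python
# Minimal gradient-descent sketch: nudge weights so a single artificial neuron's
# output moves toward a target. Real models repeat this over billions of weights
# and vast training sets; all numbers here are made up.

weights = [0.1, -0.2, 0.05]
learning_rate = 0.01

def predict(inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def train_step(inputs, target):
    error = predict(inputs) - target
    for i, x in enumerate(inputs):
        # The squared-error gradient for each weight is proportional to error * input.
        weights[i] -= learning_rate * error * x
    return error ** 2

example_inputs, example_target = [1.0, 2.0, -1.0], 0.5
for _ in range(200):
    loss = train_step(example_inputs, example_target)
print(f"final squared error: {loss:.6f}")
```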
The problem? Understanding these systems’ internal processes isn’t straightforward. Unlike traditional software, sharing only the weights and code of an AI model, without the underlying training data, makes it difficult for other people to detect potential bugs or security threats.
This means previous concepts from open-source software are being applied in “ill-fitting ways to AI systems,” wrote the team, leading to confusion about the term.
Openwashing
Current “open” AI models span a range of openness, but overall, they have three main characteristics.
One is transparency, or how much detail about an AI model’s setup its creator publishes. EleutherAI’s Pythia series, for example, allows anyone to download the source code, underlying training data, and full documentation. They also license the AI model for wide reuse, meeting the definition of “open source” from the Open Source Initiative, a non-profit that has defined the term as it has evolved over nearly three decades. In contrast, Meta’s Llama 3, although described as open, only allows people to build on the AI through an API—a sort of interface that lets different software communicate without sharing the underlying code—or to download just the model’s weights to tinker with, subject to restrictions on usage.
“This is ‘openwashing’ systems that are better understood as closed,” wrote the authors.
A second characteristic is reusability, in that openly licensed data and details of an AI model can be used by other people (although often only through a cloud service—more on that later). The third characteristic, extensibility, lets people fine-tune existing models for their specific needs.
“[This] is a key feature championed particularly by corporate actors invested in open AI,” wrote the team. There’s a reason: Training AI models requires massive computing power and resources, often only available to large tech companies. Llama 3, for example, was trained on 15 trillion tokens—a unit for processing data, such as words or characters. These choke points make it hard for startups to build AI systems from scratch. Instead, they often retrain “open” systems to adapt them to a new task or run more efficiently. Stanford’s Alpaca model, based on Llama, for example, gained interest because it could run on a laptop.
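Adapting an open checkpoint rather than training from scratch usually means parameter-efficient fine-tuning. Below is a minimal sketch using the Hugging Face transformers and peft libraries with EleutherAI’s openly licensed Pythia model, chosen purely for illustration; it is not the recipe behind Alpaca or any specific system, and the training loop itself is omitted.

```python
# Sketch of adapting an existing open checkpoint instead of training from scratch.
# EleutherAI's openly licensed Pythia model (mentioned above) is used purely as an
# example; the training loop and task data are omitted.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-70m"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)   # used to prepare task data

# Attach small low-rank adapters; only these are trained, not the full model.
adapter_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model = get_peft_model(model, adapter_config)
model.print_trainable_parameters()   # a small fraction of all weights
```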
There’s no doubt that many people and companies have benefited from open AI models. But to the authors, they may also be a barrier to the democratization of AI.
The Dark Side
Many large-scale open AI systems today are trained on cloud servers, the authors note. The UAE’s Technological Innovation Institute developed Falcon 40B and trained it on Amazon’s AWS servers. MosaicML’s AI is “tied to Microsoft’s Azure.” Even OpenAI has partnered with Microsoft to offer its new AI models at a price.
While cloud computing is extremely useful, it limits who can actually run AI models to a handful of large companies—and their servers. Stanford’s Alpaca eventually shut down partially due to a lack of financial resources.
Secrecy around training data is another concern. “Many large-scale AI models described as open neglect to provide even basic information about the underlying data used to train the system,” wrote the authors.
Large language models process huge amounts of data scraped from the internet, some of which is copyrighted, resulting in a number of ongoing lawsuits. When datasets aren’t readily made available, or when they’re incredibly large, it’s tough to fact-check the model’s reported performance, or if the datasets “launder others’ intellectual property,” according to the authors.
The problem gets worse when developers rely on building frameworks, often developed by large tech companies, to minimize the time spent “[reinventing] the wheel.” These pre-written pieces of code, workflows, and evaluation tools help developers quickly build on an AI system. However, most tweaks don’t change the model itself. In other words, whatever potential problems or biases exist inside the models could also propagate to downstream applications.
An AI Ecosystem
To the authors, developing AI that’s more open isn’t about evaluating one model at a time. Rather, it’s about taking the whole ecosystem into account.
Most debates on AI openness miss the larger picture. As AI advances, “the pursuit of openness on its own will be unlikely to yield much benefit,” wrote the team. Instead, the entire cycle of AI development—from setting up, training, and running AI systems to their practical uses and financial incentives—has to be considered when building open AI policies.
“Pinning our hopes on ‘open’ AI in isolation will not lead us to that world,” wrote the team.
OpenAI’s GPT-4o Makes AI Clones of Real People With Surprising Ease
AI has become uncannily good at aping human conversational capabilities. New research suggests its powers of mimicry go a lot further, making it possible to replicate specific people’s personalities.
Humans are complicated. Our beliefs, character traits, and the way we approach decisions are products of both nature and nurture, built up over decades and shaped by our distinctive life experiences.
But it appears we might not be as unique as we think. A study led by researchers at Stanford University has discovered that all it takes is a two-hour interview for an AI model to predict people’s responses to a battery of questionnaires, personality tests, and thought experiments with an accuracy of 85 percent.
While the idea of cloning people’s personalities might seem creepy, the researchers say the approach could become a powerful tool for social scientists and politicians looking to simulate responses to different policy choices.
“What we have the opportunity to do now is create models of individuals that are actually truly high-fidelity,” Joon Sung Park from Stanford, who led the research, told New Scientist. “We can build an agent of a person that captures a lot of their complexities and idiosyncratic nature.”
AI wasn’t only used to create virtual replicas of the study participants; it also helped gather the necessary training data. The researchers got a voice-enabled version of OpenAI’s GPT-4o to interview people using a script from the American Voices Project—a social science initiative aimed at gathering responses from American families on a wide range of issues.
As well as asking preset questions, the researchers also prompted the model to ask follow-up questions based on how people responded. The model interviewed 1,052 people across the US for two hours and produced transcripts for each individual.
Using this data, the researchers created GPT-4o-powered AI agents to answer questions in the same way the human participant would. Every time an agent fielded a question, the entire interview transcript was included alongside the query, and the model was told to imitate the participant.
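Conceptually, each agent is just a prompt that pairs the full transcript with an instruction to answer as that participant would. The sketch below shows the idea using OpenAI’s Python client; the study’s actual prompting and survey pipeline is more elaborate, and the example question is hypothetical.

```python
# Simplified sketch of a transcript-conditioned "agent": the full interview is
# supplied as context and the model is asked to answer as that participant.
# The study's actual prompting and evaluation pipeline is more elaborate.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

def ask_agent(transcript: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer every question exactly as the interviewee in the "
                        "transcript below would.\n\nTranscript:\n" + transcript},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Hypothetical survey item:
# print(ask_agent(transcript_text, "Do you support increased public transit funding?"))
```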
To evaluate the approach, the researchers had the agents and human participants go head-to-head on a range of tests. These included the General Social Survey, which measures social attitudes to various issues; a test designed to judge how people score on the Big Five personality traits; several games that test economic decision making; and a handful of social science experiments.
Humans often respond quite differently to these kinds of tests at different times, which would throw off comparisons to the AI models. To control for this, the researchers asked the humans to complete the test twice, two weeks apart, so they could judge how consistent participants were.
When the team compared responses from the AI models against the first round of human responses, the agents were roughly 69 percent accurate. But taking into account how the humans’ responses varied between sessions, the researchers found the models hit an accuracy of 85 percent.
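One plausible reading of that normalization is a simple ratio of agent-human agreement to human test-retest agreement. The numbers below are illustrative, with the human consistency figure back-calculated from the two percentages reported above.

```python
# Normalizing agent accuracy by human test-retest consistency.
# 0.69 comes from the article; 0.81 is back-calculated so the ratio lands near
# the reported 85 percent. Both are illustrative here.

raw_agent_agreement = 0.69      # agent vs. participant's first-session answers
human_retest_agreement = 0.81   # participant's session 1 vs. session 2 answers

normalized_accuracy = raw_agent_agreement / human_retest_agreement
print(f"Normalized accuracy: {normalized_accuracy:.0%}")   # roughly 85%
```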
Hassaan Raza, the CEO of Tavus, a company that creates “digital twins” of customers, told MIT Technology Review that the biggest surprise from the study was how little data it took to create faithful copies of real people. Tavus normally needs a trove of emails and other information to create their AI clones.
“What was really cool here is that they show you might not need that much information,” he said. “How about you just talk to an AI interviewer for 30 minutes today, 30 minutes tomorrow? And then we use that to construct this digital twin of you.”
Creating realistic AI replicas of humans could prove a powerful tool for policymaking, Richard Whittle at the University of Salford, UK, told New Scientist, as AI focus groups could be much cheaper and quicker than ones made up of humans.
But it’s not hard to see how the same technology could be put to nefarious uses. Deepfake video has already been used to pose as a senior executive in an elaborate multi-million-dollar scam. The ability to mimic a target’s entire personality would likely turbocharge such efforts.
Either way, the research suggests that machines that can realistically imitate humans in a wide range of settings are imminent.
Image Credit: Richmond Fajardo on Unsplash
HumAInity Is Genius, But Where’s the Wisdom?
Niantic Is Training a Giant ‘Geospatial’ AI on Pokémon Go Data
If you want to see what’s next in AI, just follow the data. ChatGPT and DALL-E trained on troves of internet data. Generative AI is making inroads in biotechnology and robotics thanks to existing or newly assembled datasets. One way to glance ahead, then, is to ask: What colossal datasets are still ripe for the picking?
Recently, a new clue emerged.
In a blog post, gaming company Niantic said it’s training a new AI on millions of real-world images collected by Pokémon Go players and users of its Scaniverse app. Inspired by the large language models powering chatbots, they call their algorithm a “large geospatial model” and hope it’ll be as fluent in the physical world as ChatGPT is in the world of language.
Follow the Data
This moment in AI is defined by algorithms that generate language, images, and increasingly, video. With OpenAI’s DALL-E and ChatGPT, anyone can use everyday language to get a computer to whip up photorealistic images or explain quantum physics. Now, the company’s Sora algorithm is applying a similar approach to video generation. Others are competing with OpenAI, including Google, Meta, and Anthropic.
The crucial insight that gave rise to these models: The rapid digitization of recent decades is useful for more than entertaining and informing us humans—it’s food for AI too. Few would have viewed the internet in this way at its advent, but in hindsight, humanity has been busy assembling an enormous educational dataset of language, images, code, and video. For better or worse—there are several copyright infringement lawsuits in the works—AI companies scraped all that data to train powerful AI models.
Now that they know the basic recipe works well, companies and researchers are looking for more ingredients.
In biotech, labs are training AI on collections of molecular structures built over decades and using it to model and generate proteins, DNA, RNA, and other biomolecules to speed up research and drug discovery. Others are testing large AI models in self-driving cars and warehouse and humanoid robots—both as a better way to tell robots what to do and as a way to teach them how to navigate and move through the world.
Of course, for robots, fluency in the physical world is crucial. Just as language is endlessly complex, so too are the situations a robot might encounter. Robot brains coded by hand can never account for all the variation. That’s why researchers are now building large datasets with robots in mind. But they’re nowhere near the scale of the internet, where billions of humans have been working in parallel for a very long time.
Might there be an internet for the physical world? Niantic thinks so. It’s called Pokémon Go. But the hit game is only one example. Tech companies have been creating digital maps of the world for years. Now, it seems likely those maps will find their way into AI.
Pokémon Trainers
Released in 2016, Pokémon Go was an augmented reality sensation.
In the game, players track down digital characters—or Pokémon—that have been placed all over the world. Using their phones as a kind of portal, players see characters superimposed on a physical location—say, sitting on a park bench or loitering by a movie theater. A newer offering, Pokémon Playground, allows users to embed characters at locations for other players. All this is made possible by the company’s detailed digital maps.
Niantic’s Visual Positioning System (VPS) can determine a phone’s position down to the centimeter from a single image of a location. In part, VPS assembles 3D maps of locations classically, but the system also relies on a network of machine learning algorithms—one or more per location—trained on years of player images and scans taken at various angles, times of day, and seasons and stamped with a position in the world.
“As part of Niantic’s Visual Positioning System (VPS), we have trained more than 50 million neural networks, with more than 150 trillion parameters, enabling operation in over a million locations,” the company wrote in its recent blog post.
Now, Niantic wants to go further.
Instead of millions of individual neural networks, they want to use Pokémon Go and Scaniverse data to train a single foundation model. Whereas individual models are constrained by the images they’ve been fed, the new model would generalize across all of them. Confronted with the front of a church, for example, it would draw on all the churches and angles it’s seen—front, side, rear—to visualize parts of the church it hasn’t been shown.
This is a bit like what we humans do as we navigate the world. We might not be able to see around a corner, but we can guess what’s there—it might be a hallway, the side of a building, or a room—and plan for it, based on our point of view and experience.
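To make the architectural shift concrete, here is a minimal Python sketch, with invented class names and a toy pose format, contrasting today's per-location models with a single shared geospatial model. It is an illustration of the idea only, not Niantic's actual system.

```python
# Illustrative sketch only -- not Niantic's architecture. It contrasts
# "one small model per mapped location" with "one shared geospatial model."
from dataclasses import dataclass

@dataclass
class Pose:
    x: float          # meters east (toy coordinate frame)
    y: float          # meters north
    heading_deg: float

class PerLocationVPS:
    """Simplified version of today's approach: a separate model keyed by location ID."""
    def __init__(self, models: dict):
        self.models = models  # location_id -> small pose-regression model

    def localize(self, location_id: str, image_features):
        model = self.models.get(location_id)
        if model is None:
            raise KeyError("No model trained for this location yet")
        return model(image_features)  # each model only knows its own place

class LargeGeospatialModel:
    """Simplified version of the proposed approach: one model generalizes across
    places, so it can guess at views it was never explicitly shown."""
    def __init__(self, shared_network):
        self.net = shared_network

    def localize(self, image_features):
        return self.net(image_features)  # no per-location lookup required

# Toy usage: the "model" is a stand-in function returning a fixed pose.
toy_model = lambda feats: Pose(12.3, -4.5, 90.0)
vps = PerLocationVPS({"park_bench_042": toy_model})
print(vps.localize("park_bench_042", image_features=[0.1, 0.2]))
```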
Niantic writes that a large geospatial model would allow it to improve augmented reality experiences. But it also believes such a model might power other applications, including in robotics and autonomous systems.
Getting Physical
Niantic believes it’s in a unique position because it has an engaged community contributing a million new scans a week. In addition, those scans are taken from a pedestrian’s point of view, as opposed to from the street, as with Google Maps or self-driving cars. They’re not wrong.
If we take the internet as an example, then the most powerful new datasets may be collected by millions, or even billions, of humans working in concert.
At the same time, Pokémon Go isn’t comprehensive. Though its locations span continents, they’re sparse in any given place, and whole regions are completely dark. Further, other companies, perhaps most notably Google, have long been mapping the globe. But unlike the internet, these datasets are proprietary and splintered.
Whether that matters—that is, whether an internet-sized dataset is needed to make a generalized AI that’s as fluent in the physical world as LLMs are in the verbal—isn’t clear.
But it’s possible a more complete dataset of the physical world arises from something like Pokémon Go, only supersized. This has already begun with smartphones, which have sensors to take images, videos, and 3D scans. In addition to AR apps, users are increasingly being incentivized to use these sensors with AI—like taking a picture of a fridge and asking a chatbot what to cook for dinner. New devices, like AR glasses, could expand this kind of usage, yielding a data bonanza for the physical world.
Of course, collecting data online is already controversial, and privacy is a big issue. Extending those problems to the real world is less than ideal.
After 404 Media published an article on the topic, Niantic added a note, “This scanning feature is completely optional—people have to visit a specific publicly-accessible location and click to scan. This allows Niantic to deliver new types of AR experiences for people to enjoy. Merely walking around playing our games does not train an AI model.” Other companies, however, may not be as transparent about data collection and use.
It’s also not certain that new algorithms inspired by large language models will be straightforward to build. MIT, for example, recently built a new architecture aimed specifically at robotics. “In the language domain, the data are all just sentences,” Lirui Wang, the lead author of a paper describing the work, told TechCrunch. “In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture.”
Regardless, researchers and companies will likely continue exploring areas where LLM-like AI may be applicable. And perhaps as each new addition matures, it will be a bit like adding a brain region—stitch them together and you get machines that think, speak, write, and move through the world as effortlessly as we do.
Image: Kamil Switalski on Unsplash
Why Are Our Brains So Big? Because They Excel at Damage Control
Compared to other primates, our brains are exceptionally large. Why?
A new study comparing neurons from different primates pinpointed several genetic changes unique to humans that buffer our brains’ ability to handle everyday wear and tear. Dubbed “evolved neuroprotection,” the findings paint a picture of how our large brains gained their size, wiring patterns, and computational efficiency.
It’s not just about looking into the past. The results could also inspire new ways to tackle schizophrenia, Parkinson’s disease, and addiction, disorders linked to the gradual erosion of one type of brain cell. Understanding this wiring may also spur artificial brains that learn like ours.
The results haven’t yet been reviewed by other scientists. But to Andre Sousa at the University of Wisconsin-Madison, who wasn’t involved in the work, the findings can help us understand “human brain evolution and all the potentially negative and positive things that come with it.”
Bigger Brain, Bigger Price
Six million years ago, we split from a common ancestor with our closest evolutionary relative, the chimpanzee.
Our brains rapidly exploded in size—but crucially, only in certain regions. One of these was at the front of the brain. Called the prefrontal cortex, it’s an “executive control” center that lets us reason, make difficult decisions, and exercise self-control. Another region, the striatum, buried deep in the brain, processes emotions and gives us the ability to easily move with just a thought.
The two regions are in ready communication, and their chatter may give rise to parts of our intellect and social interactions, such as theory of mind—where we can gauge another person’s emotions, beliefs, and intentions. Dopamine neurons, a type of brain cell, bridge this connection.
They may sound familiar. Dopamine, which these neurons pump out, is known as the “feel-good” molecule. But they do so much more. Dopamine neurons are spread across the entire brain and often dial the activity of certain neural networks up or down, including those regulating emotion and movement. Dopamine neurons are like light dimmers—rather than brain networks flipping on or off like a simple switch, the neurons fine-tune the level of action.
These cells “coordinate multiple aspects” of brain function, wrote study author Alex Pollen at the University of California, San Francisco and colleagues.
The puzzle? Compared to our primate relatives, we only have twice the number of dopamine neurons, a measly increase compared to the expansion of brain size. By scanning the brains of humans and macaque monkeys—which are often used in neuroscience research—the team found that our prefrontal cortex is 18 times larger, and the striatum has ballooned roughly 7 times.
In other words, each dopamine neuron must work harder to supply these larger brain regions.
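A quick back-of-the-envelope calculation, using only the rough ratios quoted above, shows how much more territory each human dopamine neuron has to cover:

```python
# Back-of-the-envelope ratios from the figures quoted above; absolute sizes
# don't matter here, only the relative expansion between species.
dopamine_neuron_growth = 2   # humans have roughly 2x the dopamine neurons
prefrontal_growth = 18       # prefrontal cortex ~18x larger
striatum_growth = 7          # striatum ~7x larger

# Territory each human dopamine neuron must supply, relative to a macaque's.
load_prefrontal = prefrontal_growth / dopamine_neuron_growth
load_striatum = striatum_growth / dopamine_neuron_growth

print(f"~{load_prefrontal:.0f}x more prefrontal cortex per dopamine neuron")
print(f"~{load_striatum:.1f}x more striatum per dopamine neuron")
```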
Though they have long “branches,” neurons aren’t passive wires. To connect and function normally, they require large amounts of energy. Most of this comes from the cells’ energy factories, pea-like structures called mitochondria. Efficient as this machinery is, neurons degrade as we age or in cases of neurodegeneration, including Parkinson’s disease.
Dopamine neurons are also especially vulnerable to decay compared to other types of neurons because making dopamine generates toxic byproducts. Called reactive oxygen species, these chemicals are like tiny bullets that destroy the cells’ mitochondria and their outer wrappers.
Dopamine neurons have several natural methods of fighting back. They pump out antioxidants and have evolved ways to buffer toxic molecules. But eventually these defenses break down—especially in a bigger brain. In turn, the connection between the “reasoning” and “emotion” parts of the brain starts to fray.
Accumulating damage to these neural workhorses should be a nonstarter for building larger, more complex brains during evolution. Yet somehow our brains mostly skirted the trauma. The new study asked how.
Evolution in a Dish
The team grew 3D blobs made of stem cells from humans, chimpanzees, orangutans, and macaque monkeys. After a month, the hybrid mini-brains began pumping out dopamine.
It may sound like a strange strategy, but pooling cells from different species establishes a baseline for further genetic analysis. Because they’re all growing in the same environment in a single blob, any differences in a cell’s gene expression are likely due to the species it came from, rather than environmental conditions or other effects, explained the team.
The final pool included cells from eight humans, seven chimpanzees, one orangutan, and three macaque monkeys.
The cells worked well together, developing an overall pattern mimicking dopamine neurons around the striatum—ones that reach out to the frontal parts of the brain. After growing them for up to 100 days, the team profiled gene expression in each cell to gauge which genes were turned on or off. In total, they analyzed over 105,000 cells.
Compared to other species, human stem cells seemed most versatile. They gave birth not just to dopamine neurons, but also other brain cell types. And they had another edge: Compared to chimpanzees, human dopamine neurons dialed up genes to tackle damaging reactive oxygen “bullets.”
Gene expression tests showed that human dopamine cells had far higher levels of several genes that break down the toxic chemicals than cells from the non-human primates did—in turn limiting damage to these sensitive neurons.
When challenged with a pesticide that elevates reactive oxygen species, human brain cells fought off the assault with a boost of a nurturing protein called brain-derived neurotrophic factor (BDNF). The molecule has long been a neuroscience darling for its ability to spur the birth and growth of new neurons and rewire old ones. Scientists have suggested BDNF may help ketamine reverse depressive symptoms by reshaping the brain’s networks.
In contrast, chimpanzee neurons from the same mini-brains couldn’t boost the protective protein when doused with the pesticide.
Keep on Fighting
The team analyzed the hybrid mini-brains at a very early stage of their development, when there was no chance of them developing any sort of sentience.
Their goal was to understand how our brains—especially dopamine neurons—have become resilient against damage and can tolerate the energy costs that come with a larger brain.
But the results could also point to ways of boosting cellular defense systems in people with dopamine-related disorders. Mutations in the protective genes found in the study, for example, may increase disease vulnerability in some people. Testing them in animal models paves the way for more targeted therapies against these disorders.
Knowing how dopamine works in the brain at a molecular level across species provides a snapshot of what sets us apart from our evolutionary cousins. This “can advance our understanding of the origins of human-enriched disorders and identify new therapeutic targets and strategies for drug development,” wrote the team.
Image Credit: Marek Pavlík on Unsplash
A 4.45-Billion-Year-Old Crystal From Mars Reveals the Planet Had Water From the Beginning
Water is ubiquitous on Earth—about 70 percent of Earth’s surface is covered by the stuff. Water is in the air, on the surface, and inside rocks. Geologic evidence suggests water has been stable on Earth since about 4.3 billion years ago.
The history of water on early Mars is less certain. Determining when water first appeared, where, and for how long, are all burning questions that drive Mars exploration. If Mars was once habitable, some amount of water was required.
My colleagues and I studied the mineral zircon in a meteorite from Mars and found evidence that water was present when the zircon crystal formed 4.45 billion years ago. Our results, published in the journal Science Advances, may represent the oldest evidence for water on Mars.
A Wet Red Planet
Water has long been recognized to have played an important role in early Martian history. To place our results in a broader context, let’s first consider what “early Mars” means in terms of the Martian geological timescale and then consider the different ways to look for water on Mars.
Like Earth, Mars formed about 4.5 billion years ago. The history of Mars has four geological periods. These are the Amazonian (from today back to 3 billion years), the Hesperian (3 billion to 3.7 billion years ago), the Noachian (3.7 billion to 4.1 billion years ago) and the Pre-Noachian (4.1 billion to about 4.5 billion years ago).
Chart: The Conversation | Created with Datawrapper
Evidence for water on Mars was first reported in the 1970s when NASA’s Mariner 9 spacecraft captured images of river valleys on the Martian surface. Later orbital missions, including Mars Global Surveyor and Mars Express, detected the widespread presence of hydrated clay minerals on the surface. These would have needed water.
The Martian river valleys and clay minerals are mainly found in Noachian terrains, which cover about 45 percent of Mars. In addition, orbiters also found large flood channels—called outflow channels—in Hesperian terrains. These suggest the short-lived presence of water on the surface, perhaps from groundwater release.
Most reports of water on Mars are in materials or terrains older than 3 billion years. More recent than that, there isn’t much evidence for stable liquid water on Mars.
But what about during the Pre-Noachian? When did water first show up on Mars?
Kasei Valles is the largest outflow channel on Mars. Image Credit: NASA/JPL/Arizona State University, R. Luk
A Window to Pre-Noachian Mars
There are three ways to hunt for water on Mars. The first is using observations of the surface made by orbiting spacecraft. The second is using ground-based observations such as those taken by Mars rovers.
The third way is to study Martian meteorites that have landed on Earth, which is what we did.
In fact, the only Pre-Noachian material we have available to study directly is found in meteorites from Mars. Only a small fraction of the meteorites that have landed on Earth came from our neighboring planet.
An even smaller subset of those meteorites, believed to have been ejected from Mars during a single asteroid impact, contain Pre-Noachian material.
The “poster child” of this group is an extraordinary rock called NWA7034, or Black Beauty.
Black Beauty is a famous Martian meteorite made of broken-up surface material, or regolith. In addition to rock fragments, it contains zircons that formed from 4.48 billion to 4.43 billion years ago. These are the oldest pieces of Mars known.
While studying trace elements in one of these ancient zircons, we found evidence of hydrothermal processes—meaning the crystal was exposed to hot water when it formed in the distant past.
Trace Elements, Water, and a Connection to Ore Deposits
The zircon we studied is 4.45 billion years old. Within it, iron, aluminum, and sodium are preserved in abundance patterns like concentric layers, similar to an onion.
This pattern, called oscillatory zoning, indicates that incorporation of these elements into the zircon occurred during its igneous history, in magma.
Iron elemental zoning in the 4.45-billion-year-old martian zircon. Darker blue areas indicate the highest iron abundances. Image Credit: Aaron Cavosie and Jack Gillespie
The problem is that iron, aluminum, and sodium aren’t normally found in crystalline igneous zircon—so how did these elements end up in the Martian zircon?
The answer is hot water.
In Earth rocks, finding zircon with growth zoning patterns for elements like iron, aluminum, and sodium is rare. One of the only places where it has been described is Olympic Dam in South Australia, a giant copper, uranium, and gold deposit.
The metals in places like Olympic Dam were concentrated by hydrothermal (hot water) systems moving through rocks during magmatism.
Hydrothermal systems form anywhere that hot water, heated by volcanic plumbing systems, moves through rocks. Spectacular geysers at places like Yellowstone National Park in the United States form when hydrothermal water erupts at Earth’s surface.
Finding a hydrothermal Martian zircon raises the intriguing possibility of ore deposits forming on early Mars.
Previous studies have proposed a wet Pre-Noachian Mars. Unusual oxygen isotope ratios in a 4.43-billion-year-old Martian zircon were previously interpreted as evidence for an early hydrosphere. It has even been suggested that Mars may have had an early global ocean 4.45 billion years ago.
The big picture from our study is that magmatic hydrothermal systems were active during the early formation of Mars’ crust 4.45 billion years ago.
It’s not clear whether this means surface water was stable at this time, but we think it’s possible. What is clear is that the crust of Mars, like Earth, had water shortly after it formed—a necessary ingredient for habitability.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: JPL-Caltech/NASA
This Week’s Awesome Tech Stories From Around the Web (Through November 23)
AI Can Now Create a Replica of Your Personality
James O’Donnell | MIT Technology Review
“Imagine sitting down with an AI model for a spoken two-hour interview. A friendly voice guides you through a conversation that ranges from your childhood, your formative memories, and your career to your thoughts on immigration policy. Not long after, a virtual replica of you is able to embody your values and preferences with stunning accuracy. That’s now possible, according to a new paper from a team including researchers from Stanford and Google DeepMind, which has been published on arXiv and has not yet been peer-reviewed.”
This AI Taught Itself to Do Surgery by Watching Videos—and It’s Ready to Operate on Humans
Jesus Diaz | Fast Company
“For the first time in history, Kim and his colleagues managed to teach an artificial intelligence to use a robotic surgery machine to perform precise surgical tasks by making it watch thousands of hours of actual procedures happening in real surgical theaters. …According to their recently published paper, the researchers say the AI managed to achieve a performance level comparable to human surgeons without prior explicit programming.”
New Fastest Supercomputer Will Simulate Nuke Testing
Dina Genkina | IEEE Spectrum
“El Capitan was announced yesterday at the SC Conference for supercomputing in Atlanta, Georgia, and it debuted at #1 in the newest Top500 list, a twice-yearly ranking of the world’s highest performing supercomputers. …[The supercomputer], housed at Lawrence Livermore National Laboratory in Livermore, Calif., can perform over 2700 quadrillion operations per second at its peak. The previous record holder, Frontier, could do just over 2000 quadrillion peak operations per second.”
A Chinese Lab Has Released a ‘Reasoning’ AI Model to Rival OpenAI’s o1
Kyle Wiggers | TechCrunch
“On Wednesday, DeepSeek, an AI research company funded by quantitative traders, released a preview of DeepSeek-R1, which the firm claims is a reasoning model competitive with o1. …Similar to o1, DeepSeek-R1 reasons through tasks, planning ahead, and performing a series of actions that help the model arrive at an answer. This can take a while. Like o1, depending on the complexity of the question, DeepSeek-R1 might ‘think’ for tens of seconds before answering.”
AI Could Cause ‘Social Ruptures’ Between People Who Disagree on Its Sentience
Robert Booth | The Guardian
“Significant ‘social ruptures’ between people who think artificial intelligence systems are conscious and those who insist the technology feels nothing are looming, a leading philosopher has said. …Last week, a transatlantic group of academics predicted that the dawn of consciousness in AI systems is likely by 2035 and one has now said this could result in ‘subcultures that view each other as making huge mistakes’ about whether computer programs are owed similar welfare rights as humans or animals.”
Get in, Loser—We’re Chasing a Waymo Into the Future
Wired Staff | Wired
“To provide the most useful dispatch from the future…we realized we needed a way to make self-driving cars feel strange again. A way to scare up the less superficial lessons of our city’s years with Waymo. …Our idea: We’ll pile a few of us into an old-fashioned, human-piloted hired car, then follow a single Waymo robotaxi wherever it goes for a whole workday. We’ll study its movements, its relationship to life on the streets, its whole self-driving gestalt. We’ll interview as many of its passengers as will speak to us, and observe it through the eyes of the kind of human driver it’s designed to replace.”
Microsoft and Atom Computing Combine for Quantum Error Correction Demo
John Timmer | Ars Technica
“The two companies [released] a draft manuscript describing their work on error correction [this week]. The paper serves as both a good summary of where things currently stand in the world of error correction, as well as a good look at some of the distinct features of computation using neutral atoms.”
OpenAI Considers Taking on Google With Browser
Erin Woo, Sahil Patel, and Amir Efrati | The Information
“OpenAI is preparing to launch a frontal assault on Google. The ChatGPT owner recently considered developing a web browser that it would combine with its chatbot, and it has separately discussed or struck deals to power search features for travel, food, real estate and retail websites, according to people who have seen prototypes or designs of the products.”
Bluesky Says It Won’t Screw Things Up
Steven Levy | Wired
“In little more than a week, its numbers soared from 14 million to 20 million and were growing at a pace of a million a day. …When I spoke this week to Bluesky CEO Jay Graber, she was gratified by the new users. ‘It’s been a wild week,’ she says. But she noted that this spike was one of several over the past few months. Bluesky, she says, is in it for the long haul. The idea is not to recreate classic Twitter, she says, but to reshape social media on the principle of openness and user control.”
All Life on Earth Today Descended From a Single Cell. Meet LUCA.
Jonathan Lambert | Quanta
“The [new analysis] sketched a surprisingly complex picture of the cell. LUCA lived off hydrogen gas and carbon dioxide, boasted a genome as large as that of some modern bacteria, and already had a rudimentary immune system, according to the study. Its genomic complexity, the authors argue, suggests that LUCA was one of many lineages—the rest now extinct—living about 4.2 billion years ago, a turbulent time relatively early in Earth’s history and long thought too harsh for life to flourish.”
Image Credit: bharath kumar on Unsplash
Forget Needles. These Squid-Like Pills Will Spray Drugs Into the Gut Instead.
As a medical doctor, my mother isn’t afraid of needles. But when she recently began injecting insulin daily for her newly diagnosed diabetes, the shots became a frustrating nuisance.
A jab is a standard way to deliver insulin, antibodies, RNA vaccines, GLP-1 drugs such as Ozempic, and other large molecules. Compared to small chemicals—say, aspirin—these drugs often contain molecules that are easily destroyed if taken as pills, making injection the best option.
But no one likes needles. Discomfort aside, they can also cause infection, skin irritation, and other side effects. Scientists have long tried to avoid injections with other drug delivery options—most commonly, pills—if they can overcome the downsides.
This month, researchers from MIT and the pharmaceutical company Novo Nordisk took inspiration from squids to engineer ingestible capsules that burst inside the stomach and other parts of the digestive system.
The pills mimic a squid-like jet to “spray” their cargo into tissue. They make use of two spraying mechanisms. One works best in larger organs, such as the stomach and colon. Another delivers treatments in narrower organs, like the esophagus.
“These innovative devices deliver drugs directly” into the gut with minimal pain and no needles, the researchers wrote. When tested on dogs and pigs, the system delivered insulin, GLP-1-like hormones, and RNA-based molecules to target tissue in amounts similar to injections.
Delivery Headaches
Getting shots, whether for vaccines, antibodies, or cancer treatments, can be stressful. But there’s a reason these medicines require an injection rather than a pill: They’re usually made of larger biological molecules. These include antibodies or RNA-based vaccines that rely on proteins and other complex molecules. Delivering them as a pill is extremely difficult.
Once swallowed, large molecules are often quickly destroyed by digestive enzymes or the liver, limiting their efficacy and increasing the likelihood of potential side effects. But of course, a pill is easier to take compared to getting a shot. So, despite the challenges, scientists have long sought to make pills that can replace injections for vaccines and other medicines.
Ink-Jet Squids
The new study looked to cuttlefish, squid, and octopi for inspiration.
These critters are versatile in their ability to adjust the pressure and direction of their ink jets. The team tapped into the same idea to distribute drugs in the gastrointestinal (GI) tract. By jetting medication directly into tissue, more can be absorbed before the body breaks it down.
“One aspect that I think is important here to appreciate is that the GI tract is composed” of many segments, and each has its own unique challenges, study author Giovanni Traverso told Nature. The stomach is like a balloon, for example, whereas the intestines are more sinewy. These differences require slightly different pressures for the therapy to work. In general, the pressure can’t be too high or it risks damaging the tissue. Too little pressure is also a problem: The jet can’t deliver enough medication. The direction of the spray also matters.
“Part of the work we did was to define how much force needs to be applied so that the jet can go through the tissue,” said Traverso. They teased out how each part of the gastrointestinal tract absorbs drugs so they could dial in absorption levels without causing damage. Next, they engineered ingestible capsules that mimic the way squids and octopi project their ink.
The design has two jetting systems—one powered by coiled springs and the other compressed carbon dioxide—that are unleashed by humidity or acid and can target different tissues. The medication is encapsulated in normal-sized pills. One jet shoots the drugs into large organs, such as the stomach. The other jet targets smaller GI pathways, including the small intestines.
Prime Delivery
As proof of concept, the team used their system to deliver insulin in dogs and pigs suffering from diabetes-like conditions.
In one test, the system dramatically increased levels of the test medication—with effects similar to daily insulin injections. Other medications, such as GLP-1 drugs, RNA-type therapies, and antibodies—proteins that fight off infections and cancers—also accumulated at levels similar to injections. After releasing drugs, the biocompatible capsules passed through the digestive tract.
It’s still too early to know if the method would work in people. But the work suggests it just might be possible to one day swap out needles for pills.
“In contrast to a small needle, which needs to have intimate contact with the tissue, our experiments indicated that a jet may be able to deliver most of the dose from a distance or at a slight angle,” study author Graham Arrick said in a press release.
These pills could be used at home for people who need to take insulin or other injected drugs every day, making it easier to manage chronic diseases.
“This is an exciting approach which could be impactful for many biologics” that need to be injected, said Omid Veiseh at Rice University, who was not involved in the research, in the press release. It “is a significant leap forward in oral drug delivery.”
Image Credit: Meressa Chartrand on Unsplash
‘Droidspeak’: AI Agents Now Have Their Own Language Thanks to Microsoft
Getting AIs to work together could be a powerful force multiplier for the technology. Now, Microsoft researchers have invented a new language to help their models talk to each other faster and more efficiently.
AI agents are the latest buzzword in Silicon Valley. These are AI models that can carry out complex, multi-step tasks autonomously. But looking further ahead, some see a future where multiple AI agents collaborate to solve even more challenging problems.
Given that these agents are powered by large language models (LLMs), getting them to work together usually relies on agents speaking to each other in natural language, often English. But despite their expressive power, human languages might not be the best medium of communication for machines that fundamentally operate in ones and zeros.
This prompted researchers from Microsoft to develop a new method of communication that allows agents to talk to each other in the high-dimensional mathematical language underpinning LLMs. They’ve named the new approach Droidspeak—a reference to the beeping, whistling language used by droids in Star Wars—and in a preprint paper published on arXiv, the Microsoft team reports it enabled models to communicate 2.78 times faster with little loss of accuracy.
Typically, when AI agents communicate using natural language, they not only share the output of the current step they’re working on, but also the entire conversation history leading up to that point. Receiving agents must process this big chunk of text to understand what the sender is talking about.
This creates considerable computational overhead, which grows rapidly if agents engage in a repeated back-and-forth. Such exchanges can quickly become the biggest contributor to communication delays, say the researchers, limiting the scalability and responsiveness of multi-agent systems.
To break the bottleneck, the researchers devised a way for models to directly share the data created in the computational steps preceding language generation. In principle, the receiving model would use this directly rather than processing language and then creating its own high-level mathematical representations.
However, it’s not simple transferring the data between models. Different models represent language in very different ways, so the researchers focused on communication between versions of the same underlying LLM.
Even then, they had to be smart about what kind of data to share. Some data can be reused directly by the receiving model, while other data needs to be recomputed. The team devised a way of working this out automatically to squeeze the biggest computational savings from the approach.
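The preprint's exact transfer format isn't reproduced here, but the core idea, shipping a sender model's cached intermediate states so a receiver running the same model can skip re-reading the shared context, can be sketched with an off-the-shelf language model. The model name and prompts below are placeholders, not what Microsoft used.

```python
# Toy sketch (not Microsoft's implementation): reuse one model instance's
# cached intermediate states (the KV cache) so a second instance of the
# *same* model doesn't have to re-process the shared context as text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # stands in for both agents
model.eval()

shared_context = "Task: plan the next step of the experiment. Notes so far: ..."
ctx_ids = tok(shared_context, return_tensors="pt").input_ids

# "Sender" agent reads the context once and keeps its intermediate states.
with torch.no_grad():
    sender_out = model(ctx_ids, use_cache=True)
cache = sender_out.past_key_values  # the payload a Droidspeak-style transfer would ship

# "Receiver" agent (same architecture) continues from the cache instead of
# re-encoding the context token by token.
followup_ids = tok(" Receiver, continue:", return_tensors="pt").input_ids
with torch.no_grad():
    receiver_out = model(followup_ids, past_key_values=cache, use_cache=True)

print(receiver_out.logits.shape)  # predictions already conditioned on the shared context
```

Note the caveat the researchers themselves flag: this only works cleanly because both "agents" share the same underlying model, so their internal representations line up.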
Philip Feldman at the University of Maryland, Baltimore County told New Scientist that the resulting communication speed-ups could help multi-agent systems tackle bigger, more complex problems than possible using natural language.
But the researchers say there’s still plenty of room for improvement. For a start, it would be helpful if models of different sizes and configurations could communicate. And they could squeeze out even bigger computational savings by compressing the intermediate representations before transferring them between models.
However, it seems likely this is just the first step towards a future in which the diversity of machine languages rivals that of human ones.
Image Credit: Shawn Suttle from Pixabay
Poetry by History’s Greatest Poets or AI? People Can’t Tell the Difference—and Even Prefer the Latter. What Gives?
Here are some lines Sylvia Plath never wrote:
The air is thick with tension,
My mind is a tangled mess,
The weight of my emotions
Is heavy on my chest.
This apparently Plath-like verse was produced by GPT-3.5 in response to the prompt “write a short poem in the style of Sylvia Plath.”
The stanza hits the key points readers may expect of Plath’s poetry, and perhaps a poem more generally. It suggests a sense of despair as the writer struggles with internal demons. “Mess” and “chest” are a near-rhyme, which reassures us that we are in the realm of poetry.
According to a new paper in the journal Scientific Reports, non-expert readers of poetry cannot distinguish poetry written by AI from that written by canonical poets. Moreover, general readers tend to prefer poetry written by AI—at least until they are told it is written by a machine.
In the study, AI was used to generate poetry “in the style of” 10 poets: Geoffrey Chaucer, William Shakespeare, Samuel Butler, Lord Byron, Walt Whitman, Emily Dickinson, TS Eliot, Allen Ginsberg, Sylvia Plath, and Dorothea Lasky.
Participants were presented with 10 poems in random order, five from a real poet and five AI imitations. They were then asked whether they thought each poem was AI or human, rating their confidence on a scale of 1 to 100.
A second group of participants was exposed to three different scenarios. Some were told that all the poems they were given were human. Some were told they were reading only AI poems. Some were not told anything.
They were then presented with five human and five AI poems and asked to rank them on a seven-point scale, from extremely bad to extremely good. The participants who were told nothing were also asked to guess whether each poem was human or AI.
The researchers found that AI poems scored higher than their human-written counterparts in attributes such as “creativity,” “atmosphere,” and “emotional quality.”
The AI “Plath” poem quoted above is one of those included in the study, set against several she actually wrote.
A Sign of Quality?
As a lecturer in English, I am not surprised by these outcomes. Poetry is the literary form that my students find most unfamiliar and difficult. I am sure this holds true of wider society as well.
While most of us have been taught poetry at some point, likely in high school, our reading does not tend to go much beyond that. This is despite the ubiquity of poetry. We see it every day: circulated on Instagram, plastered on coffee cups, and printed in greeting cards.
The researchers suggest that “by many metrics, specialized AI models are able to produce high-quality poetry.” But they don’t interrogate what we actually mean by “high-quality.”
In my view, the results of the study are less testaments to the “quality” of machine poetry than to the wider difficulty of giving life to poetry. It takes reading and rereading to experience what literary critic Derek Attridge has called the “event” of literature, where “new possibilities of meaning and feeling” open within us. In the most significant kinds of literary experiences, “we feel pulled along by the work as we push ourselves through it”.
Attridge quotes philosopher Walter Benjamin to make this point: Literature “is not statement or the imparting of information.”
Philosopher Walter Benjamin argued that literature is not simply the imparting of information. Image Credit: Public domain, via Wikimedia Commons
Yet pushing ourselves through remains as difficult as ever—perhaps more so in a world where we expect instant answers. Participants favored poems that were easier to interpret and understand.
When readers say they prefer AI poetry, then, they would seem to be registering their frustration when faced with writing that does not yield to their attention. If we do not know how to begin with poems, we end up relying on conventional “poetic” signs to make determinations about quality and preference.
This is of course the realm of GPT, which writes formally adequate sonnets in seconds. The large language models used in AI are success-orientated machines that aim to satisfy general taste, and they are effective at doing so. The machines give us the poems we think we want: Ones that tell us things.
How Poems Think
The work of teaching is to help students attune themselves to how poems think, poem by poem and poet by poet, so they can gain access to poetry’s specific intelligence. In my introductory course, I take about an hour to work through Sylvia Plath’s “Morning Song.” I have spent 10 minutes or more on the opening line: “Love set you going like a fat gold watch.”
How might a “watch” be connected to “set you going”? How can love set something going? What does a “fat gold watch” mean to you—and how is it different from a slim silver one? Why “set you going” rather than “led to your birth”? And what does all this mean in a poem about having a baby, and all the ambivalent feelings this may produce in a mother?
In one of the real Plath poems that was included in the survey, “Winter Landscape, With Rooks,” we observe how her mental atmosphere unfurls around the waterways of the Cambridgeshire Fens in February:
Water in the millrace, through a sluice of stone,
plunges headlong into that black pond
where, absurd and out-of-season, a single swan
floats chaste as snow, taunting the clouded mind
which hungers to haul the white reflection down.
How different is this to GPT’s Plath poem? The achievement of the opening of “Winter Landscape, With Rooks” is how it intricately explores the connection between mental events and place. Given the wider interest of the poem in emotional states, its details seem to convey the tumble of life’s events through our minds.
Our minds are turned by life just as the mill is turned by water; these experiences and mental processes accumulate in a scarcely understood “black pond.”
Intriguingly, the poet finds that this metaphor, well constructed though it may be, does not quite work. This is not because of a failure of language, but because of the landscape she is trying to turn into art, which is refusing to submit to her emotional atmosphere. Despite everything she feels, a swan floats on serenely—even if she “hungers” to haul its “white reflection down.”
I mention these lines because they turn around the Plath-like poem of GPT-3.5. They remind us of the unexpected outcomes of giving life to poems. Plath acknowledges not just the weight of her despair, but the absurd figure she may be within a landscape she wants to reflect her sadness.
She compares herself to the bird that gives the poem its title:
feathered dark in thought, I stalk like a rook,
brooding as the winter night comes on.
These lines are unlikely to register highly in the study’s terms of literary response—“beautiful,” “inspiring,” “lyrical,” “meaningful,” and so on. But there is a kind of insight to them. Plath is the source of her torment, “feathered” as she is with her “dark thoughts.” She is “brooding,” trying to make the world into her imaginative vision.
Sylvia Plath. Image Credit: RBainbridge2000, via Wikimedia Commons, CC BY
The authors of the study are both right and wrong when they write that AI can “produce high-quality poetry.” The preference the study reveals for AI poetry over that written by humans does not suggest that machine poems are of a higher quality. The AI models can produce poems that rate well on certain “metrics.” But the event of reading poetry is ultimately not one in which we arrive at standardized criteria or outcomes.
Instead, as we engage in imaginative tussles with poems, both we and the poem are newly born. So the outcome of the research is that we have a highly specified and well thought-out examination of how people who know little about poetry respond to poems. But it fails to explore how poetry can be enlivened by meaningful shared encounters.
Spending time with poems of any kind, attending to their intelligence and the acts of sympathy and speculation required to confront their challenges, is as difficult as ever. As the Plath of GPT-3.5 puts it:
My mind is a tangled mess,
[…]
I try to grasp at something solid.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
A ChatGPT-Like AI Can Now Design Whole New Genomes From Scratch
All life on Earth is written with four DNA “letters.” An AI just used those letters to dream up a completely new genome from scratch.
Called Evo, the AI was inspired by the large language models, or LLMs, underlying popular chatbots such as OpenAI’s ChatGPT and Anthropic’s Claude. These models have taken the world by storm for their prowess at generating human-like responses. From simple tasks, such as defining an obtuse word, to summarizing scientific papers or spewing verses fit for a rap battle, LLMs have entered our everyday lives.
If LLMs can master written languages—could they do the same for the language of life?
This month, a team from Stanford University and the Arc Institute put the theory to the test. Rather than training Evo on content scraped from the internet, they trained the AI on nearly three million genomes—amounting to billions of lines of genetic code—from various microbes and bacteria-infecting viruses.
Evo was better than previous AI models at predicting how mutations to genetic material—DNA and RNA—could alter function. The AI also got creative, dreaming up several new components for the gene editing tool, CRISPR. Even more impressively, the AI generated a genome more than a megabase long—roughly the size of some bacterial genomes.
“Overall, Evo represents a genomic foundation model,” wrote Christina Theodoris at the Gladstone Institute in San Francisco, who was not involved in the work.
Having learned the genomic vocabulary, algorithms like Evo could help scientists probe evolution, decipher our cells’ inner workings, tackle biological mysteries, and fast-track synthetic biology by designing complex new biomolecules.
The DNA Multiverse
Compared to the English alphabet’s 26 letters, DNA only has A, T, C, and G. These ‘letters’ are shorthand for the four molecules—adenine (A), thymine (T), cytosine (C), and guanine (G)—that, combined, spell out our genes. If LLMs can conquer languages and generate new prose, rewriting the genetic handbook with only four letters should be a piece of cake.
Not quite. Human language is organized into words, phrases, and punctuated into sentences to convey information. DNA, in contrast, is more continuous, and genetic components are complex. The same DNA letters carry “parallel threads of information,” wrote Theodoris.
The most familiar is DNA’s role as genetic carrier. A specific combination of three DNA letters, called a codon, encodes a protein building block. These building blocks are strung together into the proteins that make up our tissues and organs and direct the inner workings of our cells.
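As a toy illustration of that first thread of information, the snippet below translates a short DNA string into protein building blocks using a few entries from the standard codon table (the sequence is made up, and only four codons are included for brevity):

```python
# Minimal illustration of codons: three DNA letters map to one amino acid.
# Only a few entries of the standard genetic code are shown.
CODON_TABLE = {
    "ATG": "Met",   # methionine, the usual "start" signal
    "TTT": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "TAA": "STOP",  # end of the protein-coding stretch
}

def translate(dna: str) -> list[str]:
    protein = []
    for i in range(0, len(dna) - 2, 3):               # read in steps of three letters
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```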
But the same genetic sequence, depending on its structure, can also recruit the molecules needed to turn codons into proteins. And sometimes, the same DNA letters can turn one gene into different proteins depending on a cell’s health and environment or even turn the gene off.
In other words, DNA letters contain a wealth of information about the genome’s complexity. And any changes can jeopardize a protein’s function, resulting in genetic disease and other health problems. This makes it critical for AI to work at the resolution of single DNA letters.
But it’s hard for AI to capture multiple threads of information on a large scale by analyzing genetic letters alone, partially due to high computational costs. Like ancient Roman scripts, DNA is a continuum of letters without clear punctuation. So, it could be necessary to “read” whole strands to gain an overall picture of their structure and function—that is, to decipher meaning.
Previous attempts have “bundled” DNA letters into blocks—a bit like making artificial words. While easier to process, these methods disrupt the continuity of DNA, resulting in the retention of “some threads of information at the expense of others,” wrote Theodoris.
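A small sketch of the trade-off described here, with an arbitrary example sequence rather than Evo's actual preprocessing: bundling letters into fixed-size blocks is cheaper to process, but single-letter tokens keep every position, and every mutation, visible.

```python
# Toy contrast between "bundled" DNA tokens and single-letter tokens.
# Real models map tokens to integer IDs; strings are shown for readability.
dna = "ATGCGTAACGT"

def single_letter_tokens(seq: str) -> list[str]:
    # One token per nucleotide: a mutation stays visible at its exact position.
    return list(seq)

def kmer_tokens(seq: str, k: int = 3) -> list[str]:
    # Non-overlapping k-letter blocks: shorter input for the model, but a
    # one-letter change alters a whole block, and block boundaries are arbitrary.
    return [seq[i:i + k] for i in range(0, len(seq), k)]

print(single_letter_tokens(dna))  # ['A', 'T', 'G', 'C', ...]
print(kmer_tokens(dna))           # ['ATG', 'CGT', 'AAC', 'GT']
```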
Building Foundations
Evo addressed these problems head on. Its designers aimed to preserve all threads of information, while operating at single-DNA-letter resolution with lower computational costs.
The trick was to give Evo a broader context for any given chunk of the genome by leveraging a specific type of AI setup used in a family of algorithms called StripedHyena. Compared to GPT-4 and other AI models, StripedHyena is designed to be faster and more capable of processing large inputs—for example, long lengths of DNA. This broadened Evo’s so-called “search window,” allowing it to better find patterns across a larger genetic landscape.
The researchers then trained the AI on a database of nearly three million genomes from bacteria and viruses that infect bacteria, known as phages. It also learned from plasmids, circular bits of DNA often found in bacteria that transmit genetic information between microbes, spurring evolution and perpetuating antibiotic resistance.
Once trained, the team pitted Evo against other AI models to predict how mutations in a given genetic sequence might impact the sequence’s function, such as coding for proteins. Even though it was never told which genetic letters form codons, Evo outperformed an AI model explicitly trained to recognize protein-coding DNA letters on the task.
Remarkably, Evo also predicted the effect of mutations on a wide variety of RNA molecules—for example, those regulating gene expression, shuttling protein building blocks to the cell’s protein-making factory, and acting as enzymes to fine-tune protein function.
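A common way such models score a mutation is to compare how plausible they find the original and the mutated sequences. The sketch below illustrates that logic with a hypothetical `sequence_log_likelihood` helper and a stand-in model; it is not Evo's published interface.

```python
# Hypothetical sketch of likelihood-based mutation scoring; `ToyModel` and
# `sequence_log_likelihood` are placeholders, not Evo's real API.
import math

def sequence_log_likelihood(model, seq: str) -> float:
    # A genomic language model assigns each next letter a probability given
    # the letters before it; summing log-probabilities scores the whole sequence.
    return sum(model.log_prob(seq[:i], seq[i]) for i in range(1, len(seq)))

def mutation_effect(model, wild_type: str, position: int, new_letter: str) -> float:
    mutant = wild_type[:position] + new_letter + wild_type[position + 1:]
    # Negative values mean the model finds the mutant less plausible, which
    # in practice tends to correlate with disrupted function.
    return sequence_log_likelihood(model, mutant) - sequence_log_likelihood(model, wild_type)

class ToyModel:
    """Stand-in model that simply prefers the letter 'A' over the others."""
    def log_prob(self, context: str, next_letter: str) -> float:
        return math.log(0.4 if next_letter == "A" else 0.2)

print(mutation_effect(ToyModel(), wild_type="ATGAAC", position=3, new_letter="G"))
# Prints a negative number: the toy model judges the mutant less likely.
```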
Evo seemed to have gained a “fundamental understanding of DNA grammar,” wrote Theodoris, making it a perfect tool to create “meaningful” new genetic code.
To test this, the team used the AI to design new versions of the gene editing tool CRISPR. The task is especially difficult as the system contains two elements that work together—a guide RNA molecule and a pair of protein “scissors” called Cas. Evo generated millions of potential Cas proteins and their accompanying guide RNA. The team picked 11 of the most promising combinations, synthesized them in the lab, and tested their activity in test tubes.
One stood out. A variant of Cas9, the AI-designed protein cleaved its DNA target when paired with its guide RNA partner. These designer biomolecules represent the “first examples” of codesign between proteins and DNA or RNA with a language model, wrote the team.
The team also asked Evo to generate a DNA sequence similar in length to some bacterial genomes and compared the results to natural genomes. The designer genome contained some essential genes for cell survival, but with myriad unnatural characteristics preventing it from being functional. This suggests the AI can only make a “blurry image” of a genome, one that contains key elements, but lacks finer-grained details, wrote the team.
Like other LLMs, Evo sometimes “hallucinates,” spewing CRISPR systems with no chance of working. Despite the problems, the AI suggests future LLMs could predict and generate genomes on a broader scale. The tool could also help scientists examine long-range genetic interactions in microbes and phages, potentially sparking insights into how we might rewire their genomes to produce biofuels, plastic-eating bugs, or medicines.
It’s yet unclear whether Evo could decipher or generate far longer genomes, like those in plants, animals, or humans. If the model can scale, however, it “would have tremendous diagnostic and therapeutic implications for disease,” wrote Theodoris.
Image Credit: Warren Umoh on Unsplash
This Week’s Awesome Tech Stories From Around the Web (Through November 16)
IBM Boosts the Amount of Computation You Can Get Done on Quantum Hardware
John Timmer | Ars Technica
“There’s a general consensus that we won’t be able to consistently perform sophisticated quantum calculations without the development of error-corrected quantum computing, which is unlikely to arrive until the end of the decade. It’s still an open question, however, whether we could perform limited but useful calculations at an earlier point. IBM is one of the companies that’s betting the answer is yes, and on Wednesday, it announced a series of developments aimed at making that possible.”
OpenAI Shifts Strategy as Rate of ‘GPT’ AI Improvements Slows
Stephanie Palazzolo, Erin Woo, and Amir Efrati | The Information
“The Orion situation could test a core assumption of the AI field, known as scaling laws: that LLMs would continue to improve at the same pace as long as they had more data to learn from and additional computing power to facilitate that training process. In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law.”
The First CRISPR Treatment Is Making Its Way to Patients
Emily Mullin | Wired
“Vertex, the pharmaceutical company that markets Casgevy, announced in a November 5 earnings call that the first person to receive Casgevy outside of a clinical trial was dosed in the third quarter of this year. …When Wired followed up with Vertex via email, spokesperson Eleanor Celeste declined to provide the exact number of patients that have received Casgevy. However, the company says 40 patients have undergone cell collections in anticipation of receiving the treatment, up from 20 patients last quarter.”
AI Is Now Designing Chips for AI
Kristen Houser | Big Think
“It’s 2028, and your tech startup has an idea that could revolutionize the industry—but you need a custom designed microchip to bring the product to market. Five years ago, designing that chip would’ve cost more than your whole company is worth, but your team is now able to do it at a fraction of price and in a fraction of the time—all thanks to AI, fittingly being run on chips like these.”
Now Anyone in LA Can Hail a Waymo Robotaxi
Kirsten Korosec | TechCrunch
“Waymo has opened its robotaxi service to everyone in Los Angeles, sunsetting a waitlist that had grown to 300,000 people. The Alphabet-backed company said starting Tuesday anyone can download the Waymo One app to hail a ride in its service area, which is now about 80 square miles in Los Angeles County.”
The First Entirely AI-Generated Video Game Is Insanely Weird and Fun
Will Knight | Wired
“Minecraft remains remarkably popular a decade or so after it was first released, thanks to a unique mix of quirky gameplay and open world building possibilities. A knock-off called Oasis, released last month, captures much of the original game’s flavor with a remarkable and weird twist. The entire game is generated not by a game engine and hand-coded rules, but by an AI model that dreams up each frame.”
Nuclear Power Was Once Shunned at Climate Talks. Now, It’s a Rising Star.
Brad Plumer | The New York Times
“At last year’s climate conference in the United Arab Emirates, 22 countries pledged, for the first time, to triple the world’s use of nuclear power by midcentury to help curb global warming. At this year’s summit in Azerbaijan, six more countries signed the pledge. ‘It’s a whole different dynamic today,’ said Dr. Bilbao y Leon, who now leads the World Nuclear Association, an industry trade group. ‘A lot more people are open to talking about nuclear power as a solution.'”
The Next Omics? Tracking a Lifetime of Exposures to Better Understand Disease
Lindzi Wessel | Knowable Magazine
“Of the millions of substances people encounter daily, health researchers have focused on only a few hundred. Those in the emerging field of exposomics want to change that. …In homes, on buildings, from satellites and even in apps on the phone in your pocket, tools to monitor the environment are on the rise. At the intersection of public health and toxicology, these tools are fueling a new movement in exposure science. It’s called the exposome and it represents the sum of all environmental exposures over a lifetime.”
Buckle Up: SpaceX Aims for Rapid-Fire Starship Launches in 2025
Passant Rabie | Gizmodo
“SpaceX has big plans for its Starship rocket. After a groundbreaking test flight, in which the landing tower caught the booster, the company’s founder and CEO Elon Musk wants to see the megarocket fly up to 25 times next year, working its way up to a launch rate of 100 flights per year, and eventually a Starship launching on a daily basis.”
Are AI Clones the Future of Dating? I Tried Them for Myself.
Eli Tan | The New York Times
“As chatbots like ChatGPT improve, their use in our personal and even romantic lives is becoming more common. So much so, some executives in the dating app industry have begun pitching a future in which people can create AI clones of themselves that date other clones and relay the results back to their human counterparts.”
Genetic Discrimination Is Coming for Us All
Kristen V. Brown | The Atlantic
“For decades, researchers have feared that people might be targeted over their DNA, but they weren’t sure how often it was happening. Now at least a handful of Americans are experiencing what they argue is a form of discrimination. And as more people get their genomes sequenced—and researchers learn to glean even more information from the results—a growing number of people may find themselves similarly targeted.”
Image Credit: Evgeni Tcherkasski on Unsplash