Transhumanism

Neuralink Rival’s Biohybrid Implant Connects to the Brain With Living Neurons

Singularity HUB - 19 December, 2024 - 16:00

Brain implants have improved dramatically in recent years, but they’re still invasive and unreliable. A new kind of brain-machine interface using living neurons to form connections could be the future.

While companies like Neuralink have recently provided some flashy demos of what could be achieved by hooking brains up to computers, the technology still has serious limitations preventing wider use.

Non-invasive approaches like electroencephalograms (EEGs) provide only coarse readings of neural signals, limiting their functionality. Directly implanting electrodes in the brain can provide a much clearer connection, but such risky medical procedures are hard to justify for all but the most serious conditions.

California-based startup Science Corporation thinks that an implant using living neurons to connect to the brain could better balance safety and precision. In recent non-peer-reviewed research posted on bioRxiv, the group showed a prototype device could connect with the brains of mice and even let them detect simple light signals.

“The principal advantages of a biohybrid implant are that it can dramatically change the scaling laws of how many neurons you can interface with versus how much damage you do to the brain,” Alan Mardinly, director of biology at Science Corporation, told New Scientist.

The company’s CEO Max Hodak is a former president of Neuralink, and his company also produces a retinal implant using more conventional electronics that can restore vision in some patients. But the company has been experimenting with so-called “biohybrid” approaches, which Hodak thinks could provide a more viable long-term solution for brain-machine interfaces.

“Placing anything into the brain inevitably destroys some amount of brain tissue,” he wrote in a recent blog post. “Destroying 10,000 cells to record from 1,000 might be perfectly justified if you have a serious injury and those thousand neurons create a lot of value—but it really hurts as a scaling characteristic.”

Instead, the company has developed a honeycomb-like structure made of silicon featuring more than 100,000 “microwells”—cylindrical holes roughly 15 micrometers deep. Individual neurons are inserted into each of these microwells, and the array can then be surgically implanted onto the surface of the brain.

The idea is that while the neurons remain housed in the implant, their axons—long strands that carry nerve signals away from the cell body—and their dendrites—the branched structures that form synapses with other cells—will be free to integrate with the host’s brain cells.

To see if the idea works in practice, they implanted the device in mice, using neurons genetically modified to react to light. Three weeks after implantation, they carried out a series of experiments where they trained the mice to respond whenever a light was shone on the device. The mice were able to detect when this happened, suggesting the light-sensitive neurons had merged with their native brain cells.

While it’s early days, the approach has significant benefits. You can squeeze a lot more neurons than electrodes into a millimeter-scale chip, and each of those neurons can form many connections. That means the potential bandwidth of a biohybrid device could be much higher than that of a conventional neural implant. The approach is also much less damaging to the patient’s brain.

However, the lifetime of these kinds of devices could be a concern—after 21 days, only 50 percent of the neurons had survived. And the company needs to find a way to ensure the neurons don’t elicit a negative immune response in the patient.

If the approach works, though, it could be an elegant and potentially safer way to merge man and machine.

Image Credit: Science Corporation

Category: Transhumanism

How to Be Healthy at 100: Centenarian Stem Cells Could Hold the Key

Singularity HUB - 18 December, 2024 - 16:00

When Jeanne Calment died at the age of 122, her longevity had researchers scratching their heads. Although physically active for most of her life, she was also a regular smoker and enjoyed wine—lifestyle choices that are generally thought to decrease healthy lifespan.

Teasing apart the intricacies of human longevity is complicated. Diet, exercise, and other habits can change the trajectory of a person’s health as they grow older. Genetics also plays a role—especially during the twilight years. But experiments to test these ideas are difficult, in part because of our relatively long lifespan. Following a large population of people as they age is prohibitively expensive, and results could take decades. So, most studies have turned to animal aging models—including flies, rodents, and dogs—with far shorter lives.

But what if we could model human “aging in a dish” using cells derived from people with exceptionally long lives?

A new study, published in Aging Cell, did just that. Leveraging blood draws from the New England Centenarian Study—the largest and most comprehensive database of centenarians—they transformed blood cells into induced pluripotent stem cells (iPSCs).

These cells contain their donor’s genetic blueprint. In essence, the team created a biobank of cells that could aid researchers in their search for longevity-related genes.

“Models of human aging, longevity, and resistance to and/or resilience against disease that allow for the functional testing of potential interventions are virtually non-existent,” wrote the team.

They’ve already shared these “super-aging” stem cells with the rest of the longevity community to advance understanding of the genes and other factors contributing to a healthier, longer life.

“This bank is really exciting,” Chiara Herzog, a longevity researcher at King’s College London, who was not involved in the study, told Nature.

Precious Resource

Centenarians are rare. According to the Pew Research Center, based on data from the US Census Bureau, they make up only 0.03 percent of the country’s population. Across the globe, roughly 722,000 people have celebrated their 100th birthday—a tiny fraction of the over eight billion people currently on Earth.

Centenarians don’t just live longer. They’re also healthier, even in extreme old age, and less likely to suffer age-related diseases, such as dementia, Type 2 diabetes, cancer, or stroke. Some evade these dangerous health problems altogether until the very end.

What makes them special? In the last decade, several studies have begun digging into their genes to see which are active (or not) and how this relates to healthy aging. Others have developed aging clocks, which use myriad biomarkers to determine a person’s biological age—that is, how well their bodies are working. Centenarians frequently stood out, with a genetic landscape and bodily functions resembling people far younger than expected for their chronological age.

Realizing the potential for studying human aging, the New England Centenarian Study launched in 1995. Now based at Boston University and led by Tom Perls and Stacy Andersen, both authors of the new study, the project has recruited centenarians through a variety of methods—voter registries, news articles, or mail to elderly care facilities.

Because longevity may have a genetic basis, their children were also invited to join, with spouses serving as controls. All participants reported on their socioeconomic status and medical history. Researchers assessed their cognition on video calls and screened for potential mental health problems. Finally, some participants had blood samples taken. Despite their age, many centenarians remained sharp and could take care of themselves.

Super-Ager Stem Cells

The team first tested participants with a variety of aging clocks. These measured methylation, which shuts genes down without changing their DNA sequences. Matching previous results, centenarians were, on average, biologically six and a half years younger than their chronological age.
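
To make that comparison concrete, here is a minimal, purely illustrative sketch of how a methylation-based aging clock works: a weighted sum of methylation levels at a handful of CpG sites yields a predicted biological age, which is then compared with chronological age. The site names, weights, and values below are invented for illustration; real clocks are trained on hundreds of sites across large cohorts.

```python
# Toy "epigenetic clock": a weighted sum of methylation levels (0-1) at a few
# CpG sites gives a predicted biological age. All numbers here are made up;
# real clocks (e.g., Horvath-style models) use hundreds of trained weights.

cpg_weights = {"cg0001": 25.0, "cg0002": -18.0, "cg0003": 40.0}  # hypothetical
intercept = 50.0

def biological_age(methylation):
    """Methylation values are fractions between 0 and 1 per CpG site."""
    return intercept + sum(w * methylation[site] for site, w in cpg_weights.items())

# A hypothetical centenarian whose methylation profile "reads" younger.
sample = {"cg0001": 0.90, "cg0002": 0.55, "cg0003": 0.80}
chronological_age = 101
predicted = biological_age(sample)    # ~94.6 with these toy numbers
gap = predicted - chronological_age   # negative gap = biologically "younger"
print(f"predicted biological age {predicted:.1f}, gap {gap:+.1f} years")
```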

The anti-aging boost wasn’t as prominent in their children. Some had higher biological ages and others lower. This could be because of variation in who inherited a genetic “signature” associated with longevity, wrote the team.

They then transformed blood cells from 45 centenarians into iPSCs. The people they chose were “at the extremes of health and functionality,” the team wrote. Because of their age, they initially expected that turning back the clock might not work on old blood cells.

Luckily, they were wrong. Several proteins showed the iPSCs were healthy and capable of making other cells. They also mostly maintained their genomic integrity—although surprisingly, cells from three male centenarians showed a slight loss of the Y chromosome.

Previous studies have found a similar deletion pattern in blood cells from males over 70 years of age. It could be a marker for aging and a potential risk factor for age-related conditions such as cancer and heart disease. Women, on average, live longer than men. The findings “allow for interesting research opportunities” to better understand why Y chromosome loss happens.

Unraveling Aging

Turning blood cells into stem cells erases signs of aging, especially those related to the cells’ epigenetic state. This controls whether genes are turned on or off, and it changes with age. But the underlying genetic code remains the same.

If the secrets to longevity are, even only partially, hidden in the genes, these super-aging stem cells could help researchers figure out what’s protective or damaging, in turn prompting new ideas that slow the ticking of the clock.

In one example, the team nudged the stem cells to become cortical neurons. These neurons form the outermost part of the brain responsible for sensing and reasoning. They’re also the first to decay in dementia or Alzheimer’s disease. Those derived from centenarians better fought off damage, such as rapidly limiting the spread of toxic proteins that accumulate with age.

Researchers are also using the cells to test for resilience against Alzheimer’s. Another experiment observed cell cultures made of healthy neurons, immune cells, and astrocytes—supporting cells that help keep brains healthy—with the astrocytes derived from centenarian stem cells. Astrocytes have increasingly been implicated in Alzheimer’s, but their role has been hard to study in humans; these centenarian-derived cells offer a way forward.

Each line of centenarian stem cells is linked to its donor—their demographics and their cognitive and physical state. This additional information could guide researchers in choosing the best centenarian cell line for their investigations into different aspects of aging. And because the cells can be transformed into a wide variety of tissues that decline with age—muscles, heart, or immune cells—they offer a new way to explore how aging affects different organs, and at what pace.

“The result of this work is a one-of-a-kind resource for studies of human longevity and resilience that can fuel the discovery and validation of novel therapeutics for aging-related disease,” wrote the authors.

Image Credit: Danie Franco on Unsplash

Category: Transhumanism

The Tech World Is ‘Disrupting’ Book Publishing. But Do We Want Effortless Art?

Singularity HUB - 17 December, 2024 - 16:00

Publishing is one of many fields poised for disruption by tech companies and artificial intelligence. New platforms and approaches, like a book imprint by Microsoft and a self-publishing tech startup that uses AI, promise to make publishing faster and more accessible than ever.

But they also may threaten jobs—and demand a reconsideration of the status and role of books as cultural objects. And what will be the impact of TikTok owner ByteDance’s move into traditional book publishing?

Microsoft’s 8080 Books

Last month, Microsoft announced a new book imprint, 8080 Books. It will focus on nonfiction titles relating to technology, science, and business.

8080 Books plans “to test and experiment with the latest tech to accelerate and democratize book publishing,” though as some skeptics have noted, it is not yet entirely clear what this will entail.

The first title, No Prize for Pessimism, by Sam Schillace (Microsoft’s deputy chief technology officer), arguably sets the tone for the imprint. These “letters from a messy tech optimist” urge readers to embrace the disruptive potential of new technologies (AI is name-checked in the blurb), arguing optimism is essential for innovation and creativity. You can even discuss the book with its bespoke chatbot.

Elsewhere, in the self-publishing space, tech startup Spines aims to bring 8,000 new books to market each year. For a fee, authors can use the publishing platform’s AI to edit, proofread, design, format, and distribute their books.

The move has been condemned by some authors and publishers, but Spines (like Microsoft) states its aim is to make publishing more open and accessible. Above all, it aims to make it faster, reducing the time it takes to publish to just a fortnight—rather than the long months of editing, negotiating, and waiting required by traditional publishing.

TikTok Is Publishing Books Too

Technological innovations are not just being used to speed up the publishing process, but also to identify profitable audiences, emerging authors, and genres that will sell. Chinese tech giant and TikTok owner ByteDance launched its publishing imprint 8th Note Press (initially digital only) last year.

The imprint is now partnering with Zando (an independent publishing company whose other imprints include one by actor Sarah Jessica Parker and another by Crooked Media, from the Pod Save America team) to produce a fiction range targeted at Gen Z readers. It will produce print books, to be sold in bookshops, from February.

8th Note Press focuses on the fantasy and romance genres (and authors) generating substantial followings on BookTok, the TikTok community that has proved invaluable for marketing and promoting new fiction. In the United States, authors with a strong presence on BookTok saw 23 percent growth in print sales in 2024, compared to 6 percent growth overall.

Access to TikTok’s data and the ability to engineer viral videos could give 8th Note Press a serious advantage over legacy publishers in this space.

Hundreds of AI Self-Publishing Startups

These initiatives reflect some broader industry trends. Since OpenAI first demoed ChatGPT in 2022, approximately 320 publishing startups have emerged. Almost all of them revolve around AI in some way. There is speculation that the top five global publishers all have their own proprietary internal AI systems in the works.

Spotify entered the audiobook market in 2023—a move its CEO has described as a game changer—and is now using AI to recommend books to listeners. Other companies, like Storytel and Nuanxed, are using AI to autogenerate audiobook narration and expedite translations.

The embrace of AI may produce some useful innovations and efficiencies in publishing processes. It will almost certainly help publishers promote their authors and connect books with invested audiences. But it will have an impact on people working in the sector.

Companies like Storytel are using AI to narrate audiobooks. Image Credit: Karolina Grabowska/Pexels

Publishing houses have been consistently reducing in-house staff since the 1990s and relying more heavily on freelancers for editorial and design tasks. It would be naïve to think AI and other emerging technologies won’t be used to further reduce costs.

We are moving rapidly towards a future where once-important roles in the publishing sector—editing, translation, narration and voice acting, book design—will be increasingly performed by machines.

When queried, Spines’ CEO and cofounder, Yehuda Niv, has said, “We are not here to replace human creativity.” He emphasized his belief that this automation will allow more writers to access the book market.

Storytel and Nuanxed have both suggested the growth of audiobook circulation will compensate for the replacement of human actors and translators. Exactly who will benefit the most from this growth—authors or faceless shareholders—remains to be seen.

Side Hustles, Grifts, and ‘Easy’ Writing

I appreciate Schillace’s genuine, thoughtful optimism about AI and other new technologies. (I will admit to not having read his book yet, but did have a stimulating conversation with its bot.) But my mind is drawn back to the techno-utopianists of the 19th century, like Edward Bellamy.

In his 1888 novel, Looking Backward, Bellamy speculates on a future in which art and literature flourish once advanced automation has freed people from the drudgery of miserable labor, leaving them with more time for cultural pursuits.

The inverse seems to be occurring now. Previously important and meaningful forms of cultural work are being increasingly automated.

I could be shortsighted about this, of course. The publishing disruption is just getting underway, and we’ve already made some great strides towards dispensing with the admittedly often quite miserable labor of writing itself.

We’re moving closer to ‘dispensing with the admittedly often quite miserable labour of writing itself’. Image Credit: Polina Zimmerman/Pexels

Soon after the launch of ChatGPT, science fiction magazines in the US had to close submissions after being inundated with AI-generated short stories, many of them almost identical. Today, so many AI-assisted books are being published on Amazon that the company has had to limit self-publishing authors to just three uploads per day.

AI-assisted publishing enterprises range from side hustles focusing on republishing editions of texts in the public domain to grifts targeting unsuspecting readers and writers. All these schemes are premised on the idea writing can be rendered easy and effortless.

The use of AI may have other, delayed costs, though.

Can AI Be a ‘Thinking Partner’?

When I was younger, writing and publishing a lousy short story just obliterated my time and personal relationships. Now, I can do so with a one-sentence prompt, if I have a mind to—but apparently, this will destroy a lake somewhere.

Of course, as the No Prize for Pessimism bookbot takes pains to remind me, using AI in the writing process needn’t be a matter of lazy auto generation. It can be used for generative drafting, which is then revised, again and again, and integrated into the text.

AI can operate as a “thinking partner,” helping the writer with ideation and brainstorming. The technology is in its infancy, after all: There is bound to be some initial mess. But whatever way it is used, AI will help writers get to publication faster.

8080 Books’ charter offers a lot of rhetorical praise for the form of the book. We are told that books “matter,” that they impart “knowledge and wisdom,” that they “build empathy.” 8080 Books also wants to “accelerate the publishing process” and see less “lag” between the manuscript submission and its arrival in the marketplace. It wants books that are immediate and timely.

Slow Can Be Good

But what is a book if it arrives easily and at speed? Regardless of whether it is AI-generated or AI-assisted, it won’t be quite the same medium.

For much of their history, books have been defined by slowness and effort, both in writing and the journey towards publication. A book doesn’t always need to be up to date or of the moment.

Indeed, the hope might be that the slowness and effort of its production can lead to the book outlasting its immediate context and remaining relevant in other times and places.

Greater speed and broader access may be laudable aims for these publishing innovations. But they will also likely lead to greater disposability—at least in the short term—for both publishing professionals and the books themselves.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Muhammed ÖÇAL on Unsplash

Category: Transhumanism

Study Suggests an mRNA Shot Could Reverse This Deadly Pregnancy Condition

Singularity HUB - 16 December, 2024 - 16:00

With a single shot, scientists protected pregnant mice from a deadly complication called pre-eclampsia. The shot, inspired by mRNA vaccines, contains mRNA instructions to make a protein that reverses damage to the placenta—which occurs in the condition—protecting both mother and growing fetus.

Pre-eclampsia causes 75,000 maternal deaths and 500,000 fetal and newborn deaths every year around the globe. Hallmarks of the condition include extremely high blood pressure, reduced blood flow to the placenta, and sometimes seizures. Existing drugs, such as those that lower blood pressure, manage the symptoms but not the underlying causes.

“There aren’t any therapeutics that address the underlying problem, which is in the placenta,” study author Kelsey Swingle at the University of Pennsylvania told Nature.

Thanks to previous studies in mice, scientists already have an idea of what triggers pre-eclampsia: The placenta struggles to produce a protein crucial to maintaining its structure and growth. In the condition, the activity of this protein—called vascular endothelial growth factor (VEGF)—is inhibited, interfering with the maternal blood vessels that support placental health.

Restoring the protein could treat the condition at its core. The challenge is delivering it.

The team developed a lipid-nanoparticle system that directly targets the placenta. As in Covid vaccines, these fatty “bubbles” are loaded with mRNA molecules that instruct cells to make the missing protein. But compared to standard lipid nanoparticles used in mRNA vaccines, the new bubbles—dubbed LNP 55—were 150 times more likely to home in on their target.

In two mouse models of pre-eclampsia, a single shot of the treatment boosted VEGF levels in the placenta, spurred growth of healthy blood vessels, and prevented symptoms. The treatment didn’t harm the fetuses. Rather, it helped them grow, and the newborn mouse pups were closer to a healthy weight.

The new approach is “an innovative method,” wrote Ravi Thadhani at Emory University and Ananth Karumanchi at the Cedars-Sinai Medical Center, who were not involved in the study.

A Surprising Start

The team didn’t originally focus on treating pre-eclampsia.

“We’re a drug delivery lab,” study author Michael Mitchell told Nature. But his interest was piqued when he started receiving emails from pregnant mothers, asking whether Covid-19 mRNA vaccines were safe for fetuses.

A quick recap: Covid vaccines contain two parts.

One is a strand of mRNA encoding the spike protein attached to the surface of the virus. Once in the body, the cell’s machinery processes the mRNA, makes the protein, and this triggers an immune response—so the body recognizes the actual virus after infection.

The other part is a lipid nanoparticle to deliver the mRNA cargo. These fatty bubbles are bioengineering wonders with multiple components. Some of these grab onto the mRNA; others stabilize the overall structure. A bit of cholesterol and other modified lipids lower the chance of immune attack.

Previously, scientists found that most lipid nanoparticles zoom towards the liver and release their cargo. But “being able to deliver lipid nanoparticles to parts of the body other than the liver is desirable, because it would allow designer therapeutics to be targeted specifically to the organ or tissue of interest,” wrote Thadhani and Karumanchi.

Inspired by the emails, the team first engineered a custom lipid nanoparticle that targets the placenta. They designed nearly 100 delivery bubbles—each with a slightly different lipid recipe—injected them into the bloodstream of pregnant mice, and tracked where they went.

One candidate, called LNP 55, especially stood out. The particles collected in the placenta, without going into the fetus. This is “ideal because the fetus is an ‘innocent bystander’ in pre-eclampsia” and likely not involved in triggering the complication, wrote Thadhani and Karumanchi. It could also lower any potential side effects to the fetus.

Compared to standard lipid nanoparticles, LNP 55 was 150 times more likely to move into multiple placental cell types, rather than the liver. The results got the team wondering: Can we use LNP 55 to treat pregnancy conditions?

Load It Up

The next step was finding the right cargo to tackle pre-eclampsia. The team decided on VEGF mRNA, which can fortify blood vessels in the placenta.

In two mouse models of pre-eclampsia, a single injection given in the middle of pregnancy reduced high blood pressure almost immediately, and blood pressure remained stable until the pups were delivered. The treatment also lowered “toxins” secreted by the damaged placenta.

“This is a really exciting outcome, and it suggests that perhaps we’re remolding the vasculature [blood vessel structure] to kind of see a really sustained therapeutic effect,” said Swingle.

The treatment also benefited the developing pups. Moms with pre-eclampsia often give birth to babies that weigh less. This is partly because doctors induce early delivery as a mother’s health declines. But an unhealthy placenta also contributes. Standard care for the condition can manage the mother’s symptoms, but it doesn’t change birth weight. The fetuses look almost “shriveled up” because of poor nutrition and lack of oxygen, said Mitchell.

Pups from moms treated with VEGF mRNA were far larger and healthier, looking almost exactly the same as normal mice born without pre-eclampsia.

A Long Road Ahead

Though promising, there are a few roadblocks before the treatment can help pregnant humans.

Human placentas are vastly different from those of mice, especially in their cellular makeup. The team is considering guinea pigs—which, surprisingly, have placentas more like humans’—for future testing. Higher doses of VEGF may also trigger side effects, such as making blood vessels leakier—although the problem wasn’t seen in this study.

Dosing schedule is another problem. Mice are pregnant for roughly 20 days, a sliver of time compared to a human’s 40 weeks. While a single dose worked in mice, the effects may not last for longer pregnancies.

Then there’s timing. In humans, pre-eclampsia begins early when the placenta is just taking shape. Starting the treatment earlier, rather than in the middle of a pregnancy, could have different results.

Regardless, the study is welcome. Research into pregnancy complications has lagged behind research into cancer, heart conditions, metabolic disorders, and even some rare diseases. Limited funding aside, developing drugs for pregnancy is far more difficult because of stringent regulations in place to protect mother and fetus from unexpected and potentially catastrophic side effects.

The new work “offers a promising opportunity to tackle pre-eclampsia, one of the most common and devastating medical complications in pregnancy, and one that is in dire need of intervention,” wrote Thadhani and Karumanchi.

Image Credit: Isaac Quesada on Unsplash

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through December 14)

Singularity HUB - 14 December, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

Google’s New Project Astra Could Be Generative AI’s Killer App
Will Douglas Heaven | MIT Technology Review
“Last week I was taken through an unmarked door on an upper floor of a building in London’s King’s Cross district into a room with strong secret-project vibes. The word ‘ASTRA’ was emblazoned in giant letters across one wall. …’The pitch to my mum is that we’re building an AI that has eyes, ears, and a voice. It can be anywhere with you, and it can help you with anything you’re doing,’ says Greg Wayne, co-lead of the Astra team. ‘It’s not there yet, but that’s the kind of vision.'”

COMPUTING

Graphene Interconnects Aim to Give Moore’s Law New Life 
Dina Genkina | IEEE Spectrum
“Destination 2D, a startup based in Milpitas, Calif., claims to have solved [two challenges associated with using graphene in chips]. Destination 2D’s team has demonstrated a technique to deposit graphene interconnects onto chips at 300 °C, which is still cool enough to be done by traditional CMOS techniques. They have also developed a method of doping graphene sheets that offers current densities 100 times as dense as copper, according to Kaustav Banerjee, co-founder and CTO of Destination 2D.”

AUTOMATION

Wayve’s AI Self-Driving System Is Here to Drive Like a Human and Take On Waymo and Tesla
Ben Oliver | Wired
“[In contrast to Waymo’s hybrid system] Wayve’s AI operates without high-definition maps or coded interventions, and learns unsupervised from vast quantities of unlabeled real-life or simulated driving videos. ‘I think the gap between that geofenced robotaxi model and what an embodied AI solution can do is stark and game-changing,’ Wayve founder Alex Kendall says. ‘The market’s now somewhat swinging in our direction, but there’s no prizes for having the right idea eight years ago. Now it’s all down to execution.'”

ARTIFICIAL INTELLIGENCE

Harvard Makes 1 Million Books Available to Train AI Models
Kate Knibbs | Wired
“Harvard University announced Thursday it’s releasing a high-quality dataset of nearly 1 million public-domain books that could be used by anyone to train large language models and other AI tools. …In addition to the trove of books, the Institutional Data Initiative is also working with the Boston Public Library to scan millions of articles from different newspapers now in the public domain, and it says it’s open to forming similar collaborations down the line.”

TRANSPORTATION

Electric Cars Could Last Much Longer Than You Think
James Morris | Wired
“Rather than having a shorter lifespan than internal combustion engines, EV batteries are lasting way longer than expected, surprising even the automakers themselves. …A 10-year-old EV could be almost as good as new, and a 20-year-old one still very usable. That could be yet another disruption to an automotive industry that relies on cars mostly heading to the junkyard after 15 years.”

ENVIRONMENT

AI’s Emissions Are About to Skyrocket Even Further
James O’Donnell | MIT Technology Review
“AI models are rapidly moving from fairly simple text generators like ChatGPT toward highly complex image, video, and music generators. Until now, many of these ‘multimodal’ models have been stuck in the research phase, but that’s changing. ‘As we scale up to images and video, the data sizes increase exponentially,’ says Gianluca Guidi, a PhD student in artificial intelligence at University of Pisa and IMT Lucca, who is the paper’s lead author. Combine that with wider adoption, he says, and emissions will soon jump.”

SPACE

NASA’s Boss-to-Be Proclaims We’re About to Enter an ‘Age of Experimentation’
Stephen Clark | Ars Technica
“‘If the launch doesn’t cost a half-billion dollars, we don’t need to spend many, many years and lots of billions to get it right with some super exquisite asset, when you can get into a rhythm of using all of these providers to get things up very quickly to see what works and what doesn’t, and then evolve into something else,’ Jared Isaacman said. ‘What happens when industry starts cranking out spaceships out of multiple factories? …You’re going to have lots and lots of people in space at one time, and that’s why I call it a light switch-like moment, where a lot of things are going to change.'”

AUTOMATION

The End of Cruise Is the Beginning of a Risky New Phase for Autonomous Vehicles
Andrew J. Hawkins | The Verge
“Eight years and $10 billion later, GM has decided to pull the plug on its grand robotaxi experiment. The automaker’s CEO, Mary Barra, made the surprise announcement late on Tuesday, arguing that a shared autonomous mobility service was never really in its ‘core business.’ It was too expensive and had too many regulatory hurdles to overcome to make it a viable revenue stream. Instead, GM would pivot to ‘privately owned’ driverless cars—because, after all, that’s what the people really wanted.”

FUTURE

Galactic Civilizations May Be Impossible. Here’s Why.
Adam Frank | Big Think
“For galactic-scale civilizations to exist in our Universe, they would have to overcome two major hurdles related to physics and biology. One is the sheer distance between each society. The other is biological life span. …Do the laws of physics and the dynamics of social arrangements (even alien ones) allow for galactic societies? As much as I love them (how else could I become a Space Pirate?), I fear the answer may be ‘no.'”

Image Credit: Thomas Chan on Unsplash

Category: Transhumanism

The Secret to Predicting How Your Brain Will Age May Be in Your Blood

Singularity HUB - 13 December, 2024 - 22:31

Brain aging occurs in distinctive phases. Its trajectory could be hidden in our blood—paving the way for early diagnosis and intervention.

A new study published in Nature Aging analyzed brain imaging data from nearly 11,000 healthy adults, middle-aged and older, using AI to gauge their “brain age.” Roughly half of participants had their blood proteins analyzed to fish out those related to aging.

Scientists have long looked for the markers of brain aging in blood proteins, but this study had a unique twist. Rather than mapping protein profiles to a person’s chronological age—the number of years on your birthday card—they used biological brain age, which better reflects the actual working state of the brain as the clock ticks on.

Thirteen proteins popped up—eight associated with faster brain aging and five that slowed down the clock. Most alter the brain’s ability to handle inflammation or are involved in cells’ ability to form connections.

From these, three unique “signatures” emerged at 57, 70, and 78 years of age. Each showed a combination of proteins in the blood marking a distinct phase of brain aging. Those related to neuron metabolism peaked early, while others spurring inflammation were more dominant in the twilight years.

These spikes signal a change in the way the brain functions with age. They may be points of intervention, wrote the authors. Rather than relying on brain scans, which aren’t readily available to many people, the study suggests that a blood test for these proteins could one day be an easy way to track brain health as we age.

The protein markers could also help us learn to prevent age-related brain disorders, such as dementia, Alzheimer’s disease, stroke, or problems with movement. Early diagnosis is key. Although the protein “hallmarks” don’t test for the disorders directly, they offer insight into the brain’s biological age, which often—but not always—correlates with signs of aging.

The study helps bridge gaps in our understanding of how brains age, the team wrote.

Treasure Trove

Many people know folks who are far sharper than expected at their age. A dear relative of mine, now in their mid-80s, eagerly adopted ChatGPT, AI-assisted hearing aids, and “Ok Google.” Their eyes light up anytime they get to try a new technology. Meanwhile, I watched another relative—roughly the same age—rapidly lose their wit, sharp memory, and eventually, the ability to realize they were no longer logical.

My experiences are hardly unique. With the world rapidly aging, many of us will bear witness to, and experience, the brain aging process. Projections suggest that by 2050, over 1.5 billion people will be 65 or older, with many potentially experiencing age-related memory or cognitive problems.

But chronological age doesn’t reflect the brain’s actual functions. For years, scientists studying longevity have focused on “biological age” to gauge bodily functions, rather than the year on your birth certificate. This has led to the development of multiple aging clocks, with each measuring a slightly different aspect of cell aging. Hundreds of these clocks are now being tested, as clinical trials use them to gauge the efficacy of potential anti-aging treatments.

Many of the clocks were built by taking tiny samples from the body and analyzing certain gene expression patterns linked to the aging process. It’s tough to do that with the brain. Instead, scientists have largely relied on brain scans, showing structure and connectivity across regions, to build “brain clocks.” These networks gradually erode as we age.

The studies calculate the “brain age gap”—the difference between the age predicted from the brain’s structure and a person’s chronological age. A ten-year gap, for example, means your brain’s networks are more similar to those of people a decade younger, or older, than you.

Most studies have had a small number of participants. The new study tapped into the UK Biobank, a comprehensive dataset of roughly half a million people with regular checkups—including brain scans and blood draws—offering up a deluge of data for analysis.

The Brain Age Gap

Using machine learning, the study first sorted through brain scans of almost 11,000 people aged 45 to 82 to calculate their biological brain age. The AI model was trained on hundreds of structural features of the brain, such as overall size, thickness of the cortex—the outermost region—and the amount and integrity of white matter.

They then calculated the brain age gap for each person. On average, the gap was roughly three years, swinging both ways, meaning some people had either a slightly “younger” or “older” brain.
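
As a rough illustration of this kind of pipeline, the sketch below trains a simple regression to predict chronological age from a few synthetic structural features and then computes each person’s brain age gap. It is not the study’s actual model: the feature names and data are invented stand-ins, and the paper used hundreds of structural features and a more sophisticated learner.

```python
# Minimal sketch of a "brain age" model: regress chronological age on
# structural MRI features, then define the gap as predicted minus actual age.
# Features and values here are synthetic, not the study's real data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 1000
age = rng.uniform(45, 82, n)                      # chronological ages
features = np.column_stack([
    1500 - 5 * age + rng.normal(0, 40, n),        # e.g., total brain volume
    3.0 - 0.01 * age + rng.normal(0, 0.1, n),     # e.g., mean cortical thickness
    0.9 - 0.003 * age + rng.normal(0, 0.05, n),   # e.g., white-matter integrity
])

# Out-of-sample predictions so no one's brain age is fit on their own scan.
predicted_age = cross_val_predict(Ridge(alpha=1.0), features, age, cv=5)
brain_age_gap = predicted_age - age               # positive = "older" brain

print(f"mean absolute gap: {np.mean(np.abs(brain_age_gap)):.1f} years")
```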

Next, the team tried to predict the brain age gap by measuring proteins in plasma, the liquid part of blood. Longevity research in mice has uncovered many plasma proteins that age or rejuvenate the brain.

After screening nearly 3,000 plasma proteins from 4,696 people, they matched each person’s protein profile to the participant’s brain age. They found 13 proteins associated with the brain age gap, with most involved in inflammation, movement, and cognition.

Two proteins particularly stood out.

One, called Brevican, or BCAN, helps maintain the brain’s wiring and overall structure and supports learning and memory. The protein dwindles in Alzheimer’s disease. Higher levels, in contrast, were associated with slower brain aging and lower risk of dementia and stroke.

The other protein, growth differentiation factor 15 (GDF15), is released by the body when it senses damage. Higher levels correlated with a higher risk of age-related brain disease, likely because it sparks chronic inflammation—a “hallmark” of aging.

There was also a surprising result.

Plasma protein levels didn’t change linearly with age. Instead, changes peaked at three chronological ages—57, 70, and 78—with each stage marking a distinctive phase of brain aging.

At 57, for example, proteins related to brain metabolism and wound healing changed markedly, suggesting early molecular signs of brain aging. By 70, proteins that support the brain’s ability to rewire itself—some strongly associated with dementia and stroke—changed rapidly. Another peak, at 78, showed protein changes mostly related to inflammation and immunity.

“Our findings thus emphasize the importance and necessity of intervention and prevention at brain age 70 years to reduce the risk of multiple brain disorders,” wrote the authors.

To be clear: These are early results. The participants are largely of European ancestry, and the results may not translate to other populations. The 13 proteins also need further testing in animals before any can be validated as biomarkers. But the study paves the way.

Their results, the authors conclude, suggest the possibility of earlier, simpler diagnosis of age-related brain disorders and the development of personalized therapies to treat them.

Category: Transhumanism

Dr. Jad Tarifi of Integral AI: “We Now Have All the Ingredients for AGI”

Singularity Weblog - 13 December, 2024 - 19:58
In this thought-provoking episode of Singularity.FM, I sit down with Dr. Jad Tarifi, CEO and co-founder of Integral AI, to explore the cutting-edge developments at the intersection of artificial intelligence and human potential. Dr. Tarifi shares insights into Integral AI’s mission to “Give Humankind A True Magic Wand” and the profound implications of achieving artificial […]
Category: Transhumanism

Google’s Latest Quantum Computing Breakthrough Shows Practical Machines Are Within Reach

Singularity HUB - 12 December, 2024 - 22:38

One of the biggest barriers to large-scale quantum computing is the error-prone nature of the technology. This week, Google announced a major breakthrough in quantum error correction, which could lead to quantum computers capable of tackling real-world problems.

Quantum computing promises to solve problems that are beyond classical computers by harnessing the strange effects of quantum mechanics. But to do so we’ll need processors made up of hundreds of thousands, if not millions, of qubits (the quantum equivalent of bits).

Having just crossed the 1,000-qubit mark, today’s devices are a long way off, but more importantly, their qubits are incredibly unreliable. The devices are highly susceptible to errors, which can derail any attempt to carry out calculations long before an algorithm has run its course.

That’s why error correction has been a major focus for quantum computing companies in recent years. Now, Google’s new Willow quantum processor, unveiled Monday, has crossed a critical threshold suggesting that as the company’s devices get larger, their ability to suppress errors will improve exponentially.

“This is the most convincing prototype for a scalable logical qubit built to date,” Hartmut Neven, founder and lead of Google Quantum AI, wrote in a blog post. “It’s a strong sign that useful, very large quantum computers can indeed be built.”

Quantum error-correction schemes typically work by spreading the information needed to carry out calculations across multiple qubits. This introduces redundancy to the systems, so that even if one of the underlying qubits experiences an error, the information can be recovered. Using this approach, many “physical qubits” can be combined to create a single “logical qubit.”

In general, the more physical qubits you use to create each logical qubit, the more resistant it is to errors. But this is only true if the error rate of the individual qubits is below a certain threshold. Otherwise, the increased chance of an error from adding more faulty qubits outweighs the benefits of redundancy.

While other groups have demonstrated error correction that produces modest accuracy improvements, Google’s results are definitive. In a series of experiments reported in Nature, they encoded logical qubits into increasingly large arrays—starting with a three-by-three grid—and found that each time they increased the size, the error rate halved. Crucially, the team found that the logical qubits they created lasted more than twice as long as the physical qubits that make them up.
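
To see why that halving matters, here is a small back-of-the-envelope sketch of exponential error suppression. It assumes a constant suppression factor of roughly two each time the code distance grows by two (3x3 to 5x5 to 7x7 grids), as reported for Willow; the starting error rate below is a placeholder, not a measured figure.

```python
# Illustrative model of exponential error suppression in a surface code.
# Assumes the logical error rate drops by a constant factor (~2, as reported
# for Willow) each time the code distance increases by two. The base rate is
# a made-up placeholder, not Google's measured value.

base_logical_error = 3e-3   # hypothetical rate at distance 3
suppression_factor = 2.0    # rate halves per distance step (d -> d + 2)

def logical_error_rate(distance, base=base_logical_error, lam=suppression_factor):
    steps = (distance - 3) // 2      # number of d -> d + 2 increases from d = 3
    return base / lam**steps

for d in (3, 5, 7, 9, 11):
    print(f"distance {d:2d}: ~{logical_error_rate(d):.2e} errors per cycle")

# Reaching ~1e-7 (one error per ten million steps) under these assumptions
# takes many more doublings, which is why estimates point to on the order of
# 1,000 physical qubits per logical qubit.
```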

“The more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes,” wrote Neven.

This was made possible by significant improvements in the underlying superconducting qubit technology Google uses to build its processors. In the company’s previous Sycamore processor, the average operating lifetime of each physical qubit was roughly 20 microseconds. But thanks to new fabrication techniques and circuit optimizations, Willow’s qubits have more than tripled this to 68 microseconds.

As well as showing off the chip’s error-correction prowess, the company’s researchers also demonstrated its speed. They carried out a computation in under five minutes that would take the world’s second fastest supercomputer, Frontier, 10 septillion years to complete. However, the test they used is a contrived one with little practical use. The quantum computer simply has to execute random circuits with no useful purpose, and the classical computer then has to try and emulate it.

The big test for companies like Google is to go from such proofs of concept to solving commercially relevant problems. The new error-correction result is a big step in the right direction, but there’s still a long way to go.

Julian Kelly, who leads the company’s quantum hardware division, told Nature that solving practical challenges will likely require error rates of around one per ten million steps. Achieving that will necessitate logical qubits made of roughly 1,000 physical qubits each, though breakthroughs in error-correction schemes could bring this down by several hundred qubits.

More importantly, Google’s demonstration simply involved storing information in its logical qubits rather than using them to carry out calculations. Speaking to MIT Technology Review in September, when a preprint of the research was posted to arXiv, Kenneth Brown from Duke University noted that carrying out practical calculations would likely require a quantum computer to perform roughly a billion logical operations.

So, despite the impressive results, there’s still a long road ahead to large-scale quantum computers that can do anything useful. However, Google appears to have reached an important inflection point that suggests this vision is now within reach.

Image Credit: Google

Category: Transhumanism

Blurry, Morphing, and Surreal: A New AI Aesthetic Is Emerging in Film

Singularity HUB - 10 December, 2024 - 22:06

Type text into AI image and video generators, and you’ll often see outputs of unusual, sometimes creepy, pictures.

In a way, this is a feature, not a bug, of generative AI. And artists are wielding this aesthetic to create a new storytelling art form.

The tools—such as Midjourney for generating images, Runway and Sora for producing videos, and Luma AI for creating 3D objects—are relatively cheap or free to use. They allow filmmakers without access to major studio budgets or soundstages to make imaginative short films for the price of a monthly subscription.

I’ve studied these new works as the co-director of the AI for Media & Storytelling studio at the University of Southern California.

Surveying the increasingly captivating output of artists from around the world, I partnered with curators Jonathan Wells and Meg Grey Wells to produce the Flux Festival, a four-day showcase of experiments in AI filmmaking, in November 2024.

While this work remains dizzyingly eclectic in its stylistic diversity, I would argue that it offers traces of insight into our contemporary world. I’m reminded that in both literary and film studies, scholars believe that as cultures shift, so do the ways we tell stories.

With this cultural connection in mind, I see five visual trends emerging in film.

1. Morphing, Blurring Imagery

In her “NanoFictions” series, the French artist Karoline Georges creates portraits of transformation. In one short, “The Beast,” a burly man mutates from a two-legged human into a hunched, skeletal cat, before morphing into a snarling wolf.

The metaphor—man is a monster—is clear. But what’s more compelling is the thrilling fluidity of transformation. There’s a giddy pleasure in seeing the figure’s seamless evolution that speaks to a very contemporary sensibility of shapeshifting across our many digital selves.

This sense of transformation continues in the use of blurry imagery that, in the hands of some artists, becomes an aesthetic feature rather than a vexing problem.

Theo Lindquist’s “Electronic Dance Experiment #3,” for example, begins as a series of rapid-fire shots showing flashes of nude bodies in a soft smear of pastel colors that pulse and throb. Gradually it becomes clear that this strange fluidity of flesh is a dance. But the abstraction in the blur offers its own unique pleasure; the image can be felt as much as it can be seen.

2. The Surreal

Thousands of TikTok videos demonstrate how cringy AI images can get, but artists can wield that weirdness and craft it into something transformative. The Singaporean artist known as Niceaunties creates videos that feature older women and cats, riffing on the concept of the “auntie” from Southeast and East Asian cultures.

In one recent video, the aunties let loose clouds of powerful hairspray to hold up impossible towers of hair in a sequence that grows increasingly ridiculous. Even as they’re playful and poignant, the videos created by Niceaunties can pack a political punch. They comment on assumptions about gender and age, for example, while also tackling contemporary issues such as pollution.

On the darker side, in a music video titled “Forest Never Sleeps,” the artist known as Doopiidoo offers up hybrid octopus-women, guitar-playing rats, rooster-pigs, and a wood-chopping ostrich-man. The visual chaos is a sweet match for the accompanying death metal music, with surrealism returning as a powerful form.

Doopiidoo’s uncanny music video ‘Forest Never Sleeps’ leverages artificial intelligence to create surreal visuals. Image Credit: Doopiidoo

3. Dark Tales

The often-eerie vibe of so much AI-generated imagery works well for chronicling contemporary ills, a fact that several filmmakers use to unexpected effect.

In “La Fenêtre,” Lucas Ortiz Estefanell of the AI agency SpecialGuestX pairs diverse image sequences of people and places with a contemplative voice-over to ponder ideas of reality, privacy, and the lives of artificially generated people. At the same time, he wonders about the strong desire to create these synthetic worlds. “When I first watched this video,” recalls the narrator, “the meaning of the image ceased to make sense.”

In the music video titled “Closer,” based on a song by Iceboy Violet and Nueen, filmmaker Mau Morgó captures the world-weary exhaustion of Gen Z through dozens of youthful characters slumbering, often under the green glow of video screens. The snapshot of a generation that has come of age in the era of social media and now artificial intelligence, pictured here with phones clutched close to their bodies as they murmur in their sleep, feels quietly wrenching.

The music video for ‘Closer’ spotlights a generation awash in screens. Image Credit: Mau Morgó, Closer – Violet, Nueen

4. Nostalgia

Sometimes filmmakers turn to AI to capture the past.

Rome-based filmmaker Andrea Ciulu uses AI to reimagine 1980s East Coast hip-hop culture in “On These Streets,” which depicts the city’s expanse and energy through breakdancing as kids run through alleys and then spin magically up into the air.

Ciulu says that he wanted to capture New York’s urban milieu, all of which he experienced at a distance, from Italy, as a kid. The video thus evokes a sense of nostalgia for a mythic time and place to create a memory that is also hallucinatory.

Similarly, David Slade’s “Shadow Rabbit” borrows black-and-white imagery reminiscent of the 1950s to show small children discovering miniature animals crawling about on their hands. In just a few seconds, Slade depicts the enchanting imagination of children and links it to generated imagery, underscoring AI’s capacities for creating fanciful worlds.

5. New Times, New Spaces

In his video for the song “The Hardest Part” by Washed Out, filmmaker Paul Trillo creates an infinite zoom that follows a group of characters down the seemingly endless aisle of a school bus, through the high school cafeteria and out onto the highway at night. The video perfectly captures the zoominess of time and the collapse of space for someone young and in love haplessly careening through the world.

The freewheeling camera also characterizes the work of Montreal-based duo Vallée Duhamel, whose music video “The Pulse Within” spins and twirls, careening up and around characters who are cut loose from the laws of gravity.

In both music videos, viewers experience time and space as a dazzling, topsy-turvy vortex where the rules of traditional time and space no longer apply.

In Vallée Duhamel’s ‘The Pulse Within,’ the rules of physics no longer apply. Image Credit: Vallée Duhamel

Right now, in a world where algorithms increasingly shape everyday life, many works of art are beginning to reflect how intertwined we’ve become with computational systems.

What if machines are suggesting new ways to see ourselves, as much as we’re teaching them to see like humans?

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Banner Image: A still from Theo Lindquist’s short film ‘Electronic Dance Experiment #3.’

Category: Transhumanism

Thousands of Undiscovered Genes May Be Hidden in DNA ‘Dark Matter’

Singularity HUB - 9 December, 2024 - 23:05

Thousands of new genes are hidden inside the “dark matter” of our genome.

Previously thought to be noise left over from evolution, a new study found that some of these tiny DNA snippets can make miniproteins—potentially opening a new universe of treatments, from vaccines to immunotherapies for deadly brain cancers.

The preprint, not yet peer-reviewed, is the latest from a global consortium that hunts down potential new genes. Ever since the Human Genome Project completed its first draft at the turn of the century, scientists have tried to decipher the genetic book of life. Buried within the four genetic letters—A, T, C, and G—and the proteins they encode is a wealth of information that could help tackle our most frustrating medical foes, such as cancer.

The Human Genome Project’s initial findings came as a surprise. Scientists found fewer than 30,000 genes that build our bodies and keep them running—roughly a third of the number previously predicted. Now, roughly 20 years later, as the technologies that sequence our DNA or map proteins have become increasingly sophisticated, scientists are asking: “What have we missed?”

The new study filled the gap by digging into relatively unexplored portions of the genome. Called “non-coding,” these parts haven’t yet been linked to any proteins. Combining several existing datasets, the team zeroed in on thousands of potential new genes that make roughly 3,000 miniproteins.

Whether these proteins are functional remains to be tested, but initial studies suggest some are involved in a deadly childhood brain cancer. The team is releasing their tools and results to the wider scientific community for further exploration. The platform isn’t just limited to deciphering the human genome; it can delve into the genetic blueprint of other animals and plants as well.

Even though mysteries remain, the results “help provide a more complete picture of the coding portion of the genome,” Ami Bhatt at Stanford University told Science.

What’s in a Gene?

A genome is like a book without punctuation. Sequencing one is relatively easy today, thanks to cheaper costs and higher efficiency. Making sense of it is another matter.

Ever since the Human Genome Project, scientists have searched our genetic blueprint to find the “words,” or genes, that make proteins. These DNA words are further broken down into three-letter codons, each one encoding a specific amino acid—the building block of a protein.

A gene, when turned on, is transcribed into messenger RNA. These molecules shuttle genetic information from DNA to the cell’s protein-making factory, called the ribosome. Picture it as a sliced bun, with an RNA molecule running through it like a piece of bacon.

When first defining a gene, scientists focus on open reading frames. These are made of specific DNA sequences that dictate where a gene starts and stops. Like a search function, the framework scans the genome for potential genes, which are then validated with lab experiments based on myriad criteria. These include whether they can make proteins of a certain size—more than 100 amino acids. Sequences that meet the mark are compiled into GENCODE, an international database of officially recognized genes.
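
As a toy illustration of that “search function,” the sketch below scans a DNA string for open reading frames: it looks for a start codon, reads forward in steps of three until it hits a stop codon, and keeps only frames long enough to encode more than a chosen number of amino acids (100 being the conventional cutoff mentioned above). Real gene-annotation pipelines layer many more checks—splicing, strand orientation, conservation, experimental evidence—on top of this.

```python
# Minimal open-reading-frame scan (forward strand only): find a start codon
# (ATG), read in steps of three until a stop codon, and keep frames long
# enough to encode more than `min_amino_acids` amino acids.

STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(dna, min_amino_acids=100):
    orfs = []
    for start in range(len(dna) - 2):
        if dna[start:start + 3] != "ATG":
            continue
        for pos in range(start + 3, len(dna) - 2, 3):
            if dna[pos:pos + 3] in STOP_CODONS:
                length_aa = (pos - start) // 3   # codons before the stop
                if length_aa > min_amino_acids:
                    orfs.append((start, pos + 3, length_aa))
                break
    return orfs

# Tiny made-up sequence; lower the cutoff so the toy example yields a hit.
toy = "CCATGAAACCCGGGTTTTAGGG"
print(find_orfs(toy, min_amino_acids=3))   # [(2, 20, 5)]
```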

Genes that encode proteins have attracted the most attention because they aid our understanding of disease and inspire ways to treat it. But much of our genome is “non-coding,” in that large sections of it don’t make any known proteins.

For years, these chunks of DNA were considered junk—the defunct remains of our evolutionary past. Recent studies, however, have begun revealing hidden value. Some bits regulate when genes turn on or off. Others, such as telomeres, protect against the degradation of DNA as it replicates during cell division and ward off aging.

Still, the dogma was that these sequences don’t make proteins.

A New Lens

Recent evidence is piling up that non-coding areas do have protein-making segments that affect health.

One study found that a small missing section in supposedly non-coding areas caused inherited bowel troubles in infants. In mice genetically engineered to mimic the same problem, restoring the DNA snippet—not yet defined as a gene—reduced their symptoms. The results highlight the need to go beyond known protein-coding genes to explain clinical findings, the authors wrote.

Dubbed non-canonical open reading frames (ncORFs), or “maybe-genes,” these snippets have popped up across human cell types and diseases, suggesting they have physiological roles.

In 2022, the consortium behind the new study began peeking into potential functions, hoping to broaden our genetic vocabulary. Rather than sequencing the genome, they looked at datasets that sequenced RNA as it was being turned into proteins in the ribosome.

The method captures the actual output of the genome—even extremely short amino acid chains normally thought too small to make proteins. Their search produced a catalog of over 7,000 human “maybe-genes,” some of which made microproteins that were eventually detected inside cancer and heart cells.

But overall, at that time “we did not focus on the questions of protein expression or functionality,” wrote the team. So, they broadened their collaboration in the new study, welcoming specialists in protein science from over 20 institutions across the globe to make sense of the “maybe-genes.”

They also included several resources that provide protein databases from various experiments—such as the Human Proteome Organization and the PeptideAtlas—and added data from published experiments that use the human immune system to detect protein fragments.

In all, the team analyzed over 7,000 “maybe-genes” from a variety of cells: Healthy, cancerous, and also immortal cell lines grown in the lab. At least a quarter of these “maybe-genes” translated into over 3,000 miniproteins. These are far smaller than normal proteins and have a unique amino acid makeup. They also seem to be more attuned to parts of the immune system—meaning they could potentially help scientists develop vaccines, autoimmune treatments, or immunotherapies.

Some of these newly found miniproteins may not have a biological role at all. But the study gives scientists a new way to interpret potential functions. For quality control, the team organized each miniprotein into a different tier, based on the amount of evidence from experiments, and integrated them into an existing database for others to explore.
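
As a rough illustration of what evidence-based tiering can look like—the tier labels and evidence categories below are placeholder assumptions, not the consortium’s actual definitions—one might rank each candidate by how many independent kinds of experimental support it has:

    # Illustrative only: rank candidate miniproteins by independent lines of evidence.
    # The tier labels and evidence categories are assumptions, not the study's own scheme.
    def assign_tier(evidence):
        """evidence: a set such as {"ribosome_profiling", "mass_spec", "immunopeptidomics"}."""
        n = len(evidence)
        if n >= 3:
            return "Tier 1: multiple independent lines of evidence"
        if n == 2:
            return "Tier 2: two lines of evidence"
        if n == 1:
            return "Tier 3: a single line of evidence"
        return "Tier 4: predicted only"

    print(assign_tier({"ribosome_profiling", "mass_spec"}))   # Tier 2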

We’re just beginning to probe our genome’s dark matter. Many questions remain.

“A unique capacity of our multi-consortium collaboration is the ability to develop consensus on the key challenges” that we feel need answers, wrote the team.

For example, some experiments used cancer cells, meaning that certain “maybe-genes” might only be active in those cells—but not in normal ones. Should they be called genes?

From here, deep learning and other AI methods may help speed up analysis. Although annotating genes is “historically rooted in manual inspection” of the data, wrote the authors, AI can churn through multiple datasets far faster, if only as a first pass to find new genes.

How many might scientists discover? “50,000 is in the realm of possibility,” study author Thomas Martinez told Science.

Image Credit: Miroslaw Miras from Pixabay

Category: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through December 7)

Singularity HUB - 7 December, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

The GPT Era Is Already Ending
Matteo Wong | The Atlantic
“[OpenAI] has been unusually direct that the o1 series is the future: Chen, who has since been promoted to senior vice president of research, told me that OpenAI is now focused on this ‘new paradigm,’ and Altman later wrote that the company is prioritizing o1 and its successors. The company believes, or wants its users and investors to believe, that it has found some fresh magic. The GPT era is giving way to the reasoning era.”

SPACE

Falcon 9 Reaches a Flight Rate 30 Times Higher Than Shuttle at 1/100th the Cost
Eric Berger | Ars Technica
“Space enthusiast Ryan Caton also crunched the numbers on the number of SpaceX launches this year compared to some of its competitors. So far this year, SpaceX has launched as many rockets as Roscosmos has since 2013, United Launch Alliance since 2010, and Arianespace since 2009. This year alone, the Falcon 9 has launched more times than the Ariane 4, Ariane 5, or Atlas V rockets each did during their entire careers.”

NEUROSCIENCE

These Temporary Tattoos Can Read Your Brainwaves
Ed Cara | Gizmodo
“The future of diagnostic medicine is gearing up to look a bit more cyberpunk. Scientists have just unveiled technology that should allow people to one day have their brains and bodies monitored via customized, temporary electronic tattoos. Scientists at the University of Texas at Austin and others developed the tech, which aims to avoid the limitations of conventional electroencephalography, or EEG, testing.”

CRYPTOCURRENCY

Another Crypto Revolution Is Here—and It’s Unlike Any From the Past
Yueqi Yang | The Information
“The new period of crypto that’s beginning to unfold is shaping up to be starkly different from previous ones. A few years ago, cryptonians wanted to talk about topics like Web3, DeFi and the metaverse, and they gambled heavily on speculative assets: most notably NFTs and crypto coins that traded on meme stock–like hype. For now, they appear far more temperate and are placing an enormous priority on stablecoins, theoretically a less risky form of crypto since they’re backed by dollar reserves.”

ROBOTICS

Waymo’s Next Robotaxi City Will Be Miami
Andrew J. Hawkins | The Verge
“Waymo is making the moves on Magic City. Alphabet’s robotaxi service said it would launch in Miami in 2026. The company has been testing its autonomous vehicles in the Florida city on-and-off since 2019, and more recently has begun to lay the groundwork in earnest. Waymo plans to start ‘reacquainting’ its autonomous Jaguar I-Pace vehicles to Miami’s streets in 2025. And in 2026, it expects to start making its vehicles available to riders through its Waymo One ridehail app.”

TECH

The Inside Story of Apple Intelligence
Steven Levy | Wired
“Google, Meta, and Microsoft, as well as startups like OpenAI and Anthropic, all had well-developed strategies for generative AI by the time Apple finally announced its own push this June. Conventional wisdom suggested this entrance was unfashionably late. Apple disagrees. Its leaders say the company is arriving just in time—and that it’s been stealthily preparing for this moment for years.”

ARTIFICIAL INTELLIGENCE

ChatGPT Now Has Over 300 Million Weekly Users
Emma Roth | The Verge
“OpenAI CEO Sam Altman revealed the milestone during The New York Times’ DealBook Summit on Wednesday, which comes just months after ChatGPT hit 200 million weekly users in August. ‘Our product has scaled … now we have more than 300 million weekly active users,’ Altman said. ‘We have users sending more than 1 billion messages per day to ChatGPT.'”

FUTURE

Would You Eat Dried Microbes? This Company Hopes So.
Casey Crownhart | MIT Technology Review
“LanzaTech, a rising star in the fuel and chemical industries, is joining a growing group of businesses producing microbe-based food as an alternative to plant and animal products. Using microbes to make food is hardly new—beer, yogurt, cheese, and tempeh all rely on microbes to transform raw ingredients into beloved dishes. But some companies are hoping to create a new category of food, one that relies on microbes themselves as a primary ingredient in our meals.”

SECURITY

OpenAI Is Working With Anduril to Supply the US Military With AI
Will Knight | Wired
“OpenAI, maker of ChatGPT and one of the most prominent artificial intelligence companies in the world, said today that it has entered a partnership with Anduril, a defense startup that makes missiles, drones, and software for the United States military. It marks the latest in a series of similar announcements made recently by major tech companies in Silicon Valley, which has warmed to forming closer ties with the defense industry.”

Image Credit: Declan Sun on Unsplash

Category: Transhumanismus

Jamelle Lindo on Emotional Intelligence in the Age of AI: Harness the Power of Emotion

Singularity Weblog - 6 December, 2024 - 23:19
In this episode of Singularity FM, I speak with emotional intelligence (EQ) expert, executive coach, and keynote speaker Jamelle Lindo about the evolving role of EQ in our age of rapidly advancing artificial intelligence. While much of today’s discourse around AI focuses on technical prowess, data-driven decision-making, and automation, Jamelle highlights why understanding our inner […]
Category: Transhumanismus

Google DeepMind’s New AI Weatherman Tops World’s Most Reliable System

Singularity HUB - 6 December, 2024 - 19:14

This was another year of rollercoaster weather. Heat domes broiled the US southwest. California experienced a “second summer” in October, with multiple cities breaking heat records. Hurricane Helene—and just a few weeks later, Hurricane Milton—pummeled the Gulf Coast, unleashing torrential rainfall and severe flooding. What shocked even seasoned meteorologists was how fast the hurricanes intensified, with one choking up as he said “this is just horrific.”

When bracing for extreme weather, every second counts. But planning measures rely on accurate predictions. Here’s where AI comes in.

This week, Google DeepMind unveiled an AI that predicts weather 15 days in advance in minutes, rather than the hours usually needed with traditional models. In a head-to-head with the European Center for Medium-Range Weather Forecasts’ model (ENS)—the best “medium-range” weather forecaster today—the AI won over 90 percent of the time.

Dubbed GenCast, the algorithm is DeepMind’s latest foray into weather prediction. Last year, the company released a version that made strikingly accurate 10-day forecasts. GenCast differs in its machine learning architecture. True to its name, it’s a generative AI model, roughly similar to those that power ChatGPT and Gemini or generate images and videos from a text prompt.

The setup gives GenCast an edge over previous models, which usually provide a single weather path prediction. GenCast, in contrast, pumps out 50 or more predictions—each representing a potential weather trajectory—and assigns each one a likelihood.

In other words, the AI “imagines” a multiverse of future weather possibilities and picks the one with the largest chance of occurring.

GenCast didn’t just excel at day-to-day weather prediction. It also beat ENS at predicting extreme weather—heat, cold, and high wind speeds. Challenged with data from Typhoon Hagibis—the deadliest tropical cyclone to strike Japan in decades—GenCast visualized possible routes seven days before landfall.

“As climate change drives more extreme weather events, accurate and trustworthy forecasts are more essential than ever,” wrote study authors Ilan Price and Matthew Wilson in a DeepMind blog post.

Embracing Uncertainty

Predicting weather is notoriously difficult, largely because weather is a chaotic system. You might have heard of the “butterfly effect”—a butterfly flaps its wings, stirring a tiny change in the atmosphere and triggering storms and other weather disasters a world apart. Although just a metaphor, it highlights that small changes in initial weather conditions can rapidly spread across large regions, changing weather outcomes.
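
The point is easy to see numerically. In the toy sketch below—the Lorenz system, a classic three-equation caricature of atmospheric convection, not an actual weather model—two simulations start from conditions differing by one part in a million yet end up in very different states:

    # Two runs of the Lorenz system from nearly identical starting points.
    # A toy demonstration of chaotic divergence, not a real weather model.
    def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
        dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - b * z
        return x + dx * dt, y + dy * dt, z + dz * dt

    run_a = (1.0, 1.0, 1.0)
    run_b = (1.000001, 1.0, 1.0)           # differs by one part in a million
    for _ in range(3000):                  # 30 time units of simple Euler stepping
        run_a = lorenz_step(*run_a)
        run_b = lorenz_step(*run_b)

    print("trajectory A ends at:", run_a)
    print("trajectory B ends at:", run_b)  # far from A despite the tiny initial nudge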

For decades, scientists have tried to emulate these processes using physical simulations of the Earth’s atmosphere. By gathering data from weather stations across the globe and satellites, they’ve written equations mapping current estimates of the weather and forecasting how they’ll change over time.

The problem? The deluge of data takes hours, if not days, to crunch on supercomputers, and consumes a huge amount of energy.

AI may be able to help. Rather than mimicking the physics of atmospheric shifts or the swirls of our oceans, these systems slurp up decades of data to find weather patterns. GraphCast, released in 2023, captured more than a million points across our planet’s surface to predict 10-day weather in less than a minute. Others in the race to improve weather forecasting include Huawei’s Pangu-Weather and NowcastNet, both developed in China. The latter gauges the chance of rain with high accuracy—one of the toughest aspects of weather prediction.

But weather is finicky, and GraphCast and similar weather-prediction AI models are deterministic: They forecast only a single weather trajectory. The weather community is now increasingly embracing “ensemble models,” which predict a range of possible scenarios.

“Such ensemble forecasts are more useful than relying on a single forecast, as they provide decision makers with a fuller picture of possible weather conditions in the coming days and weeks and how likely each scenario is,” wrote the team.

Cloudy With a Chance of Rain

GenCast tackles the weather’s uncertainty head-on. The AI mainly relies on a diffusion model, a type of generative AI. Overall, it incorporates 12 metrics about the Earth’s surface and atmosphere—such as temperature, wind speed, humidity, and atmospheric pressure—traditionally used to gauge weather.

The team trained the AI on 40 years of historical weather data from a publicly available database up to 2018. Rather than asking for one prediction, they had GenCast spew out a number of forecasts, each one starting with a slightly different weather condition—a different “butterfly,” so to speak. The results were then combined into an ensemble forecast, which also predicted the chance of each weather pattern actually occurring.
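
A hedged sketch of how such an ensemble becomes a probability—here sample_forecast is a stand-in for GenCast’s actual diffusion sampler, and the wind numbers are invented: each member is one sampled trajectory, and the chance of an event is simply the fraction of members in which it occurs.

    import random

    # Stand-in for a generative forecast sampler such as GenCast's diffusion model.
    # Here it just returns a fake 15-day series of daily peak wind speeds in m/s.
    def sample_forecast(initial_conditions, days=15):
        return [random.gauss(12, 4) for _ in range(days)]

    members = [sample_forecast(None) for _ in range(50)]       # a 50-member ensemble

    # Estimated probability of "high wind" (arbitrarily, above 20 m/s) on day 10:
    hits = sum(1 for m in members if m[9] > 20)
    print(f"chance of high wind on day 10: {hits / len(members):.0%}")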

When tested with weather data from 2019, which GenCast had never seen, the AI outperformed the current leader, ENS—especially for longer-term forecasting up to 15 days. Checked against recorded data, the AI outperformed ENS 97 percent of the time across 1,300 measures of weather prediction.

GenCast’s predictions are also blazingly fast. Compared to the hours on supercomputers usually needed to generate results, the AI churned out predictions in roughly eight minutes. If adopted, the system could add valuable time for emergency notices.

All for One

Although GenCast wasn’t explicitly trained to forecast severe weather patterns, it was able to predict the path of Typhoon Hagibis before landfall in central Japan. One of the deadliest storms in decades, the typhoon flooded neighborhoods up to the rooftops as water broke through levees and took out much of the region’s electrical power.

GenCast’s ensemble prediction was like a movie. It began with a relatively wide range of possible paths for Typhoon Hagibis seven days before landfall. As the storm edged closer, however, the AI got more accurate, narrowing its predictive path. Although not perfect, GenCast painted an overall trajectory of the devastating cyclone that closely matched recorded data.

Given a week of lead time, “GenCast can provide substantial value in decisions about when and how to prepare for tropical cyclones,” wrote the authors.

Accurate and longer predictions don’t just help prepare for future climate challenges. They could also help optimize renewable energy planning. Take wind power. Predicting where, when, and how strong wind is likely to blow could increase the power source’s reliability—reducing costs and potentially upping adoption of the technology. In a proof-of-concept analysis, GenCast was more accurate than ENS at predicting total wind power generated by over 5,000 wind power plants across the globe, opening the possibility of building wind farms based on data.

GenCast isn’t the only AI weatherman. Nvidia’s FourCastNet also uses generative AI to predict weather with a lower energy cost than traditional methods. Google Research has also engineered myriad weather-predicting algorithms, including NeuralGCM and SEEDS. Some are being integrated into Google search and maps, including rain forecasts, wildfires, flooding, and heat alerts. Microsoft joined the race with ClimaX, a flexible AI that can be tailored to generate predictions from hours to months ahead (with varying accuracies).

All this is not to say AI will be taking jobs from meteorologists. The DeepMind team stresses that GenCast wouldn’t be possible without foundational work from climate scientists and physics-based models. To give back, they’re releasing aspects of GenCast to the wider weather community to gain further insights and feedback.

Image Credit: NASA

Category: Transhumanismus

Automated Cyborg Cockroach Factory Could Churn Out a Bug a Minute for Search and Rescue

Singularity HUB - 5 December, 2024 - 19:19

Envisioning armies of electronically controllable insects is probably nightmare fuel for most people. But scientists think they could help rescue workers scour challenging and hazardous terrain. An automated cyborg cockroach factory could help bring the idea to life.

The merger of living creatures with machines is a staple of science fiction, but it’s also a serious line of research for academics. Several groups have implanted electronics into moths, beetles, and cockroaches that allow simple control of the insects.

However, building these cyborgs is tricky as it takes considerable dexterity and patience to surgically implant electrodes in their delicate bodies. This means that creating enough for most practical applications is simply too time-consuming.

To overcome this obstacle, researchers at Nanyang Technological University in Singapore have automated the process, using a robotic arm with computer vision to install electrodes and tiny backpacks full of electronics on Madagascar hissing cockroaches. The approach cuts the time required to attach the equipment from roughly half an hour to just over a minute.

“In the future, factories for insect-computer hybrid robot[s] could be built to satisfy the needs for fast preparation and application of the hybrid robots,” the researchers write in a non-peer-reviewed paper on arXiv.

“Different sensors could be added to the backpack to develop applications on the inspection and search missions based on the requirements.”

Cyborg insects could be a promising alternative to conventional robots thanks to their small size, ability to operate for hours on little food, and their adaptability to new environments. As well as helping with search and rescue operations, the researchers suggest that swarms of these robot bugs could be used to inspect factories.

The researchers had already shown that signals from electrodes implanted into cockroach abdomens could be used to control the direction of travel and get them to slow down and even stop. But installing these electrodes and a small backpack with control electronics required painstaking work from a trained researcher.

That kind of approach makes it difficult to scale up to the hundreds or even thousands of insects required for practically useful swarms. So, the team developed an automated system that could install the electronics on a cockroach with minimal human involvement.

First, the researchers anesthetized the cockroaches by exposing them to carbon dioxide for 10 minutes. They then placed the bugs on a platform where a pair of rods powered by a motor pressed down on two segments of their hard exoskeletons to expose a soft membrane just behind the head.

A computer vision system then identified where to implant the electrodes and used this information to guide a robotic arm carrying the electronic backpack. Electrodes in place, the arm pressed the backpack down until its mounting mechanism hooked into another section of the insect’s body. The arm then released the backpack, and the rods retracted to free the cyborg bug.

The entire assembly process takes just 68 seconds, and the resulting cockroaches are just as controllable as ones made manually, the researchers found. A four-bug team was able to cover 80 percent of a 20-square-foot outdoor test environment filled with obstacles in about 10 minutes.

Fabian Steinbeck at Bielefeld University in Germany told New Scientist that using these cyborg bugs for search and rescue might be tricky as they currently have to be controlled remotely. Getting signal in collapsed buildings and similar challenging terrain would be difficult, and we don’t yet have the technology to get them to navigate autonomously.

Rapid improvements in both AI and communication technologies could soon change that though. So, it may not be too far-fetched to imagine swarms of robot bugs coming to your rescue in the near future.

Image Credit: Erik Karits from Pixabay

Category: Transhumanismus

Astronomers Have Pinpointed the Origin of Mysterious Repeating Radio Bursts From Space

Singularity HUB - 3 December, 2024 - 19:13

Slowly repeating bursts of intense radio waves from space have puzzled astronomers since they were discovered in 2022.

In new research, my colleagues and I have for the first time tracked one of these pulsating signals back to its source: a common kind of lightweight star called a red dwarf, likely in a binary orbit with a white dwarf, the core of another star that exploded long ago.

A Slowly Pulsing Mystery

In 2022, our team made an amazing discovery: periodic radio pulsations, repeating every 18 minutes, emanating from space. The pulses outshone everything nearby, flashed brilliantly for three months, then disappeared.

We know some repeating radio signals come from a kind of neutron star called a radio pulsar, which spins rapidly (typically once a second or faster), beaming out radio waves like a lighthouse. The trouble is, our current theories say a pulsar spinning only once every 18 minutes should not produce radio waves.

So we thought our 2022 discovery could point to new and exciting physics—or help explain exactly how pulsars emit radiation, which despite 50 years of research is still not understood very well.

More slowly blinking radio sources have been discovered since then. There are now about 10 known “long-period radio transients.”

However, just finding more hasn’t been enough to solve the mystery.

Searching the Outskirts of the Galaxy

Until now, every one of these sources has been found deep in the heart of the Milky Way.

This makes it very hard to figure out what kind of star or object produces the radio waves, because there are thousands of stars in a small area. Any one of them could be responsible for the signal, or none of them.

So, we started a campaign to scan the skies with the Murchison Widefield Array radio telescope in Western Australia, which can observe 1,000 square degrees of the sky every minute. An undergraduate student at Curtin University, Csanád Horváth, processed data covering half of the sky, looking for these elusive signals in more sparsely populated regions of the Milky Way.

One element of the Murchison Widefield Array, a radio telescope in Western Australia that observes the sky at low radio frequencies. Image Credit: ICRAR / Curtin University

And sure enough, we found a new source! Dubbed GLEAM-X J0704-37, it produces minute-long pulses of radio waves, just like other long-period radio transients. However, these pulses repeat only once every 2.9 hours, making it the slowest long-period radio transient found so far.

Where Are the Radio Waves Coming From?

We performed follow-up observations with the MeerKAT telescope in South Africa, the most sensitive radio telescope in the southern hemisphere. These pinpointed the location of the radio waves precisely: They were coming from a red dwarf star. These stars are incredibly common, making up 70 percent of the stars in the Milky Way, but they are so faint that not a single one is visible to the naked eye.

The source of the radio waves, as seen by the MWA at low resolution (magenta circle) and MeerKAT at high resolution (cyan circle). The white circles are all stars in our own Galaxy. Image Credit: Hurley-Walker et al. 2024 / Astrophysical Journal Letters

Combining historical observations from the Murchison Widefield Array and new MeerKAT monitoring data, we found that the pulses arrive a little earlier and a little later in a repeating pattern. This probably indicates that the radio emitter isn’t the red dwarf itself, but rather an unseen object in a binary orbit with it.
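
A back-of-the-envelope way to see why orbital motion shows up in pulse timing (all numbers below are assumptions for illustration, not measured parameters of GLEAM-X J0704-37): as the emitter swings between the near and far sides of its orbit, its pulses reach us earlier or later by roughly the light-travel time across that orbit.

    # Illustrative light-travel-time ("Roemer") delay across a compact binary orbit.
    # The orbital radius is an assumed, round number—not a measured value for this system.
    c = 3.0e8                    # speed of light, m/s
    orbital_radius_m = 1.0e9     # assumed radius of the emitter's orbit (~1.4 solar radii)

    max_delay_s = orbital_radius_m / c
    print(f"pulses arrive up to ~{max_delay_s:.1f} s early or late over one orbit")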

Based on previous studies of the evolution of stars, we think this invisible radio emitter is most likely to be a white dwarf, which is the final endpoint of small to medium-sized stars like our own sun. If it were a neutron star or a black hole, the explosion that created it would have been so large it should have disrupted the orbit.

It Takes Two to Tango

So, how do a red dwarf and a white dwarf generate a radio signal?

The red dwarf probably produces a stellar wind of charged particles, just like our sun does. When the wind hits the white dwarf’s magnetic field, it would be accelerated, producing radio waves.

This could be similar to how the Sun’s stellar wind interacts with Earth’s magnetic field to produce beautiful aurora and also low-frequency radio waves.

We already know of a few systems like this, such as AR Scorpii, where variations in the brightness of the red dwarf imply that the companion white dwarf is hitting it with a powerful beam of radio waves every two minutes. None of these systems are as bright or as slow as the long-period radio transients, but maybe as we find more examples, we will work out a unifying physical model that explains all of them.

On the other hand, there may be many different kinds of system that can produce long-period radio pulsations.

Either way, we’ve learned the power of expecting the unexpected—and we’ll keep scanning the skies to solve this cosmic mystery.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: An artist’s impression of the exotic binary star system AR Scorpii / Mark Garlick/University of Warwick/ESO, CC BY

Category: Transhumanismus

This Tiny House Is Made From the Recycled Heart of a Wind Turbine

Singularity HUB - 2 December, 2024 - 16:00

If you’ve tried to rent or buy a home in the last few years, you may have noticed there’s a severe housing shortage in the US and around the world. Millions of people need homes, and there aren’t nearly enough of them to go around. Plenty of creative, low-cost solutions have been proposed, from inflatable houses to 3D-printed houses, “foldable” houses, and houses that ship in kits to be assembled like furniture.

Now there’s another idea joining the fray, and it carries the added benefit of playing a role in the renewable energy transition: It’s a tiny house made from the nacelle of a decommissioned wind turbine.

The house, unveiled last month as part of Dutch Design Week, is a collaboration between Swedish power company Vattenfall and Dutch architecture firm Superuse Studios. Wind turbines typically have a 20-year lifespan, and Vattenfall is looking for novel ways to repurpose parts of its turbines. With the first generation of large-scale turbines now reaching the end of their useful life, there will be thousands of nacelles (not to mention blades, towers, and generators) in search of a new purpose.

Blades, towers, and generators are the parts of a wind turbine that most people are familiar with, but not so much the nacelle. The giant rectangular box sits at the top of the turbine’s tower and houses its gearbox, shafts, generator, and brake. It’s the beating heart of the turbine, where the blades’ rotation is converted into electricity.

Though it’s big enough to be a tiny house, this particular nacelle is on the small side (as far as nacelles go). It’s 10 feet tall by 13 feet wide by 33 feet long. The interior space of the home is about 387 square feet, or the size of a small studio apartment or hotel room. The nacelle came from one of Vattenfall’s V80 turbines, which was installed at an Austrian wind farm in 2005 and has a production capacity of two megawatts. Turbine technology has come a long way since then; the largest ones in the world are approaching a production capacity of 15 megawatts.

Though there will be larger nacelles available, Superuse Studios intentionally chose a small one for its prototype. The thinking was that if you can make a livable home in a space this small, you can definitely make a livable home—and add more features—in a larger one; better to start small and grow than start big and then downsize.

Though the house is small, its designers ensured it was fully compliant with Dutch building code and therefore suitable for habitation. It has a kitchen with a sink and a stove, a bathroom with a shower, a dining area, and a combined living/sleeping area. As you’d expect from a house made of recycled wind turbine parts, it’s also climate-friendly: Its electricity comes partly from rooftop solar panels, and it has a bidirectional charger for electric vehicles (meaning power from the house can charge the car or power from the car’s battery can be used in the house). There’s an electric heat pump for temperature control, and a solar heater for hot water.

Solar panels and wind turbines don’t last forever, and they use various raw and engineered materials. When the panels or turbines can’t produce power anymore, what’s to be done with all that concrete, copper, steel, silicon, glass, or aluminum? Finding purposeful ways to reuse or recycle these materials will be a crucial component of a successful transition away from fossil fuels.

“We are looking for innovative ways in which you can reuse materials from used turbines as completely as possible,” said Thomas Hjort, Vattenfall’s director of innovation, in a press release. “So making something new from them with as few modifications as possible. That saves raw materials, energy consumption and in this way we ensure that these materials are useful for many years after their first working life.”

As of right now, the nacelle tiny house is just a proof of concept; there are no plans to start producing more in the immediate future, but it’s not outside the realm of possibility eventually. Picture communities of these houses arranged in rows or circles, with communal spaces or parks in between. Using a larger nacelle, homes with one or two bedrooms could be designed, expanding the possibilities for inhabitants and giving purpose to more decommissioned turbines.

“At least ten thousand of this generation of nacelles are available, spread around the world,” said Jos de Krieger, a partner at Superuse Studios. “Most of them have yet to be decommissioned. This offers perspective and a challenge for owners and decommissioners. If such a complex structure as a house is possible, then numerous simpler solutions are also feasible and scalable.”

If 10,000-plus nacelles are available, that means 30,000-plus blades are available. What innovative use might designers and engineers find for them?

Image Credit: Vattenfall

Category: Transhumanismus

Most Supposedly ‘Open’ AI Systems Are Actually Closed—and That’s a Problem

Singularity HUB - 30 November, 2024 - 16:00

“Open” AI models have a lot to give. The practice of sharing source code with the public spurs innovation and democratizes AI as a tool.

Or so the story goes. A new analysis in Nature puts a twist on the narrative: Most supposedly “open” AI models, such as Meta’s Llama 3, are hardly that.

Rather than encouraging or benefiting small startups, the “rhetoric of openness is frequently wielded in ways that…exacerbate the concentration of power” in large tech companies, wrote David Widder at Cornell University, Meredith Whittaker at Signal Foundation, and Sarah West at AI Now Institute.

Why care? Debating AI openness may seem purely academic. But with the growing use of ChatGPT and other large language models, policymakers are scrambling to catch up. Can models be allowed in schools or companies? What guardrails should be in place to protect against misuse?

And perhaps most importantly, most AI models are controlled by Google, Meta, and other tech giants, which have the infrastructure and financial means to either develop or license the technology—and in turn, guide the evolution of AI to meet their financial incentives.

Lawmakers around the globe have taken note. This year, the European Union adopted the AI Act, the world’s first comprehensive legislation to ensure AI systems used are “safe, transparent, non-discriminatory, and environmentally friendly.” As of September, there were over 120 AI bills in the US Congress, covering privacy, accountability, and transparency.

In theory, open AI models can deliver those needs. But “when policy is being shaped, definitions matter,” wrote the team.

In the new analysis, they broke down the concept of “openness” in AI models across the entire development cycle and pinpointed how the term can be misused.

What Is ‘Openness,’ Anyway?

The term “open source” is nearly as old as software itself.

At the turn of the century, small groups of computing rebels released code for free software that anyone could download and use in defiance of corporate control. They had a vision: Open-source software, such as freely available word processors similar to Microsoft’s, could level the playing field for little guys and allow access to people who couldn’t afford the technology. The code also became a playground, where eager software engineers fiddled around with the code to discover flaws in need of fixing—resulting in more usable and secure software.

With AI, the story’s different. Large language models are built with numerous layers of interconnected artificial “neurons.” Similar to their biological counterparts, the structure of those connections heavily influences a model’s performance in a specific task.

Models are trained by scraping the internet for text, images, and increasingly, videos. As this training data flows through their neural networks, they adjust the strengths of their artificial neurons’ connections—dubbed “weights”—so that they generate desired outputs. Most systems are then evaluated by people to judge the accuracy and quality of the results.
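
In miniature, that weight adjustment is just a gradient step, as in the toy below: one “connection strength” is nudged until a single artificial neuron reproduces its targets. Real language models repeat this across billions of weights and trillions of tokens, but the principle is the same.

    # Toy version of training: nudge one weight to shrink the prediction error.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # inputs x with desired outputs y = 2x
    w, lr = 0.0, 0.05                             # one connection strength and a learning rate

    for epoch in range(200):
        for x, y in data:
            error = w * x - y                     # how far the output is from the target
            w -= lr * error * x                   # adjust the weight to reduce the error

    print(f"learned weight: {w:.3f}")             # converges toward 2.0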

The problem? Understanding these systems’ internal processes isn’t straightforward. Unlike traditional software, sharing only the weights and code of an AI model, without the underlying training data, makes it difficult for other people to detect potential bugs or security threats.

This means previous concepts from open-source software are being applied in “ill-fitting ways to AI systems,” wrote the team, leading to confusion about the term.

Openwashing

Current “open” AI models span a range of openness, but overall, they have three main characteristics.

One is transparency, or how much detail about an AI model’s setup its creator publishes. EleutherAI’s Pythia series, for example, allows anyone to download the source code, underlying training data, and full documentation. It also licenses the AI model for wide reuse, meeting the definition of “open source” from the Open Source Initiative, a non-profit that has defined the term as it has evolved over nearly three decades. In contrast, Meta’s Llama 3, although described as open, only allows people to build on the AI through an API—a sort of interface that lets different software communicate, without sharing the underlying code—or download just the model’s weights to tinker with, but with restrictions on usage.

“This is ‘openwashing’ systems that are better understood as closed,” wrote the authors.

A second characteristic is reusability, in that openly licensed data and details of an AI model can be used by other people (although often only through a cloud service—more on that later). The third characteristic, extensibility, lets people fine-tune existing models for their specific needs.

“[This] is a key feature championed particularly by corporate actors invested in open AI,” wrote the team. There’s a reason: Training AI models requires massive computing power and resources, often only available to large tech companies. Llama 3, for example, was trained on 15 trillion tokens—a unit for processing data, such as words or characters. These choke points make it hard for startups to build AI systems from scratch. Instead, they often retrain “open” systems to adapt them to a new task or run more efficiently. Stanford’s Alpaca model, based on Llama, for example, gained interest because it could run on a laptop.
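
In practice, that kind of reuse often looks like the hedged sketch below, which uses the widely used Hugging Face transformers library to take released weights and run one fine-tuning step on new data; the model name is a placeholder for whichever openly licensed checkpoint a team actually has access to.

    # Sketch: adapting released model weights to a new task instead of training from scratch.
    # Assumes the `transformers` and `torch` packages; the checkpoint name is a placeholder.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "some-org/some-open-model"           # placeholder for an openly licensed checkpoint
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    batch = tokenizer(["Example text from the new task domain."], return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

    loss = model(**batch, labels=batch["input_ids"]).loss   # next-token prediction loss
    loss.backward()
    optimizer.step()                                        # one fine-tuning step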

There’s no doubt that many people and companies have benefited from open AI models. But to the authors, they may also be a barrier to the democratization of AI.

The Dark Side

Many large-scale open AI systems today are trained on cloud servers, the authors note. The UAE’s Technological Innovation Institute developed Falcon 40B and trained it on Amazon’s AWS servers. MosaicML’s AI is “tied to Microsoft’s Azure.” Even OpenAI has partnered with Microsoft to offer its new AI models at a price.

While cloud computing is extremely useful, it limits who can actually run AI models to a handful of large companies—and their servers. Stanford’s Alpaca eventually shut down partially due to a lack of financial resources.

Secrecy around training data is another concern. “Many large-scale AI models described as open neglect to provide even basic information about the underlying data used to train the system,” wrote the authors.

Large language models process huge amounts of data scraped from the internet, some of which is copyrighted, resulting in a number of ongoing lawsuits. When datasets aren’t readily made available, or when they’re incredibly large, it’s tough to fact-check the model’s reported performance, or if the datasets “launder others’ intellectual property,” according to the authors.

The problem gets worse with building frameworks, often developed by large tech companies, that minimize the time spent “[reinventing] the wheel.” These pre-written pieces of code, workflows, and evaluation tools help developers quickly build on an AI system. However, most tweaks don’t change the model itself. In other words, whatever problems or biases exist inside the models could also propagate to downstream applications.

An AI Ecosystem

To the authors, developing AI that’s more open isn’t about evaluating one model at a time. Rather, it’s about taking the whole ecosystem into account.

Most debates on AI openness miss the larger picture. As AI advances, “the pursuit of openness on its own will be unlikely to yield much benefit,” wrote the team. Instead, the entire cycle of AI development—from setting up, training, and running AI systems to their practical uses and financial incentives—has to be considered when building open AI policies.

“Pinning our hopes on ‘open’ AI in isolation will not lead us to that world,” wrote the team.


Category: Transhumanismus

OpenAI’s GPT-4o Makes AI Clones of Real People With Surprising Ease

Singularity HUB - 29 November, 2024 - 16:00

AI has become uncannily good at aping human conversational capabilities. New research suggests its powers of mimicry go a lot further, making it possible to replicate specific people’s personalities.

Humans are complicated. Our beliefs, character traits, and the way we approach decisions are products of both nature and nurture, built up over decades and shaped by our distinctive life experiences.

But it appears we might not be as unique as we think. A study led by researchers at Stanford University has discovered that all it takes is a two-hour interview for an AI model to predict people’s responses to a battery of questionnaires, personality tests, and thought experiments with an accuracy of 85 percent.

While the idea of cloning people’s personalities might seem creepy, the researchers say the approach could become a powerful tool for social scientists and politicians looking to simulate responses to different policy choices.

“What we have the opportunity to do now is create models of individuals that are actually truly high-fidelity,” Stanford’s Joon Sung Park, who led the research, told New Scientist. “We can build an agent of a person that captures a lot of their complexities and idiosyncratic nature.”

AI wasn’t used only to create virtual replicas of the study participants, it also helped gather the necessary training data. The researchers got a voice-enabled version of OpenAI’s GPT-4o to interview people using a script from the American Voices Project—a social science initiative aimed at gathering responses from American families on a wide range of issues.

As well as asking preset questions, the researchers also prompted the model to ask follow-up questions based on how people responded. The model interviewed 1,052 people across the US for two hours and produced transcripts for each individual.

Using this data, the researchers created GPT-4o-powered AI agents to answer questions in the same way the human participant would. Every time an agent fielded a question, the entire interview transcript was included alongside the query, and the model was told to imitate the participant.
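
In outline, querying such an agent can be as simple as the sketch below; the prompt wording and the OpenAI client call are illustrative assumptions, not the paper’s exact setup.

    # Sketch of asking a GPT-4o-based agent to answer as one specific participant.
    # Requires the `openai` package and an API key; the prompt wording is an assumption.
    from openai import OpenAI

    client = OpenAI()

    def ask_agent(transcript: str, question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Answer every question exactly as the person interviewed below would.\n\n"
                            + transcript},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    # e.g. ask_agent(open("participant_0001.txt").read(), "Do you favor raising the minimum wage?")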

To evaluate the approach, the researchers had the agents and human participants go head-to-head on a range of tests. These included the General Social Survey, which measures social attitudes to various issues; a test designed to judge how people score on the Big Five personality traits; several games that test economic decision making; and a handful of social science experiments.

Humans often respond quite differently to these kinds of tests at different times, which would throw off comparisons to the AI models. To control for this, the researchers asked the humans to complete the test twice, two weeks apart, so they could judge how consistent participants were.

When the team compared responses from the AI models against the first round of human responses, the agents were roughly 69 percent accurate. But taking into account how the humans’ responses varied between sessions, the researchers found the models hit an accuracy of 85 percent.
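
Those figures are consistent with a simple normalization—dividing how often the agent matched a participant’s first answers by how often the participant matched themselves two weeks later. The 81 percent self-consistency below is an assumed value chosen to illustrate the arithmetic; the paper defines its own normalization per task.

    # Illustrative normalization of agent accuracy by human test-retest consistency.
    raw_agreement = 0.69      # agent vs. a participant's first session
    self_agreement = 0.81     # participant's second session vs. first (assumed for illustration)

    print(f"normalized accuracy: {raw_agreement / self_agreement:.0%}")   # about 85 percent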

Hassaan Raza, the CEO of Tavus, a company that creates “digital twins” of customers, told MIT Technology Review that the biggest surprise from the study was how little data it took to create faithful copies of real people. Tavus normally needs a trove of emails and other information to create its AI clones.

“What was really cool here is that they show you might not need that much information,” he said. “How about you just talk to an AI interviewer for 30 minutes today, 30 minutes tomorrow? And then we use that to construct this digital twin of you.”

Creating realistic AI replicas of humans could prove a powerful tool for policymaking, Richard Whittle at the University of Salford, UK, told New Scientist, as AI focus groups could be much cheaper and quicker than ones made up of humans.

But it’s not hard to see how the same technology could be put to nefarious uses. Deepfake video has already been used to pose as a senior executive in an elaborate multi-million-dollar scam. The ability to mimic a target’s entire personality would likely turbocharge such efforts.

Either way, the research suggests that machines that can realistically imitate humans in a wide range of settings are imminent.

Image Credit: Richmond Fajardo on Unsplash

Category: Transhumanismus

HumAInity Is Genius, But Where’s the Wisdom?

Singularity Weblog - 29 November, 2024 - 13:36
It was human genius that took us to Mars and beyond. A lack of wisdom, however, might stop us from going further. For genius is not wisdom. In fact, talent often undermines character. Just look at the mess some of history’s most gifted business leaders, scientists, musicians, artists, and writers have made of their lives. […]
Category: Transhumanismus

Niantic Is Training a Giant ‘Geospatial’ AI on Pokémon Go Data

Singularity HUB - 27 November, 2024 - 16:00

If you want to see what’s next in AI, just follow the data. ChatGPT and DALL-E trained on troves of internet data. Generative AI is making inroads in biotechnology and robotics thanks to existing or newly assembled datasets. One way to glance ahead, then, is to ask: What colossal datasets are still ripe for the picking?

Recently, a new clue emerged.

In a blog post, gaming company Niantic said it’s training a new AI on millions of real-world images collected by Pokémon Go players and users of its Scaniverse app. Inspired by the large language models powering chatbots, the company calls its algorithm a “large geospatial model” and hopes it’ll be as fluent in the physical world as ChatGPT is in the world of language.

Follow the Data

This moment in AI is defined by algorithms that generate language, images, and increasingly, video. With OpenAI’s DALL-E and ChatGPT, anyone can use everyday language to get a computer to whip up photorealistic images or explain quantum physics. Now, the company’s Sora algorithm is applying a similar approach to video generation. Others are competing with OpenAI, including Google, Meta, and Anthropic.

The crucial insight that gave rise to these models: The rapid digitization of recent decades is useful for more than entertaining and informing us humans—it’s food for AI too. Few would have viewed the internet in this way at its advent, but in hindsight, humanity has been busy assembling an enormous educational dataset of language, images, code, and video. For better or worse—there are several copyright infringement lawsuits in the works—AI companies scraped all that data to train powerful AI models.

Now that they know the basic recipe works well, companies and researchers are looking for more ingredients.

In biotech, labs are training AI on collections of molecular structures built over decades and using it to model and generate proteins, DNA, RNA, and other biomolecules to speed up research and drug discovery. Others are testing large AI models in self-driving cars and warehouse and humanoid robots—both as a better way to tell robots what to do, but also to teach them how to navigate and move through the world.

Of course, for robots, fluency in the physical world is crucial. Just as language is endlessly complex, so too are the situations a robot might encounter. Robot brains coded by hand can never account for all the variation. That’s why researchers are now building large datasets with robots in mind. But they’re nowhere near the scale of the internet, where billions of humans have been working in parallel for a very long time.

Might there be an internet for the physical world? Niantic thinks so. It’s called Pokémon Go. But the hit game is only one example. Tech companies have been creating digital maps of the world for years. Now, it seems likely those maps will find their way into AI.

Pokémon Trainers

Released in 2016, Pokémon Go was an augmented reality sensation.

In the game, players track down digital characters—or Pokémon—that have been placed all over the world. Using their phones as a kind of portal, players see characters superimposed on a physical location—say, sitting on a park bench or loitering by a movie theater. A newer offering, Pokémon Playground, allows users to embed characters at locations for other players. All this is made possible by the company’s detailed digital maps.

Niantic’s Visual Positioning System (VPS) can determine a phone’s position down to the centimeter from a single image of a location. In part, VPS assembles 3D maps of locations classically, but the system also relies on a network of machine learning algorithms—one or more per location—trained on years of player images and scans taken at various angles, times of day, and seasons and stamped with a position in the world.

“As part of Niantic’s Visual Positioning System (VPS), we have trained more than 50 million neural networks, with more than 150 trillion parameters, enabling operation in over a million locations,” the company wrote in its recent blog post.

Now, Niantic wants to go further.

Instead of millions of individual neural networks, they want to use Pokémon Go and Scaniverse data to train a single foundation model. Whereas individual models are constrained by the images they’ve been fed, the new model would generalize across all of them. Confronted with the front of a church, for example, it would draw on all the churches and angles it’s seen—front, side, rear—to visualize parts of the church it hasn’t been shown.

This is a bit like what we humans do as we navigate the world. We might not be able to see around a corner, but we can guess what’s there—it might be a hallway, the side of a building, or a room—and plan for it, based on our point of view and experience.

Niantic writes that a large geospatial model would allow it to improve augmented reality experiences. But it also believes such a model might power other applications, including in robotics and autonomous systems.

Getting Physical

Niantic believes it’s in a unique position because it has an engaged community contributing a million new scans a week. In addition, those scans are from the view of pedestrians, as opposed to the street, like in Google Maps or for self-driving cars. They’re not wrong.

If we take the internet as an example, then the most powerful new datasets may be collected by millions, or even billions, of humans working in concert.

At the same time, Pokémon Go isn’t comprehensive. Though locations span continents, they’re sparse in any given place and whole regions are completely dark. Further, other companies, perhaps most notably, Google, have long been mapping the globe. But unlike the internet, these datasets are proprietary and splintered.

Whether that matters—that is, whether an internet-sized dataset is needed to make a generalized AI that’s as fluent in the physical world as LLMs are in the verbal—isn’t clear.

But it’s possible a more complete dataset of the physical world arises from something like Pokémon Go, only supersized. This has already begun with smartphones, which have sensors to take images, videos, and 3D scans. In addition to AR apps, users are increasingly being incentivized to use these sensors with AI—like taking a picture of a fridge and asking a chatbot what to cook for dinner. New devices, like AR glasses, could expand this kind of usage, yielding a data bonanza for the physical world.

Of course, collecting data online is already controversial, and privacy is a big issue. Extending those problems to the real world is less than ideal.

After 404 Media published an article on the topic, Niantic added a note, “This scanning feature is completely optional—people have to visit a specific publicly-accessible location and click to scan. This allows Niantic to deliver new types of AR experiences for people to enjoy. Merely walking around playing our games does not train an AI model.” Other companies, however, may not be as transparent about data collection and use.

It’s also not certain new algorithms inspired by large language models will be straightforward. MIT, for example, recently built a new architecture aimed specifically at robotics. “In the language domain, the data are all just sentences,” Lirui Wang, the lead author of a paper describing the work, told TechCrunch.  “In robotics, given all the heterogeneity in the data, if you want to pretrain in a similar manner, we need a different architecture.”

Regardless, researchers and companies will likely continue exploring areas where LLM-like AI may be applicable. And perhaps as each new addition matures, it will be a bit like adding a brain region—stitch them together and you get machines that think, speak, write, and move through the world as effortlessly as we do.

Image Credit: Kamil Switalski on Unsplash

Category: Transhumanismus