Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity Group

These Scientists Are Battling Dangerous Superbugs With a ChatGPT-Like AI

August 9, 2024 - 00:57

Bacteria and antibiotics have been in a roughly century-long game of cat and mouse. Unfortunately, bacteria are gaining the upper hand.

According to the World Health Organization, antibiotic resistance is a top public health risk that was responsible for 1.27 million deaths across the globe in 2019. When repeatedly exposed to antibiotics, bacteria rapidly learn to adapt their genes to counteract the drugs—and share the genetic tweaks with their peers—rendering the drugs ineffective.

Superpowered bacteria also torpedo medical procedures—surgery, chemotherapy, C-sections—adding risk to life-saving therapies. With antibiotic resistance on the rise, there are very few new drugs in development. While studies in petri dishes have zeroed in on potent candidates, some of these also harm the body’s cells, leading to severe side effects.

What if there’s a way to retain their bacteria-fighting ability, but with fewer side effects? This month, researchers used AI to reengineer a toxic antibiotic. They made thousands of variants and screened for the ones that maintained their bug-killing abilities without harming human cells.

The AI used in the study is a large language model similar to those behind famed chatbots from Google, OpenAI, and Anthropic. The algorithm sifted 5.7 million variants of the original antibiotic and found one that maintained its potency but with far less toxicity.

In lab tests, the new variant rapidly broke down bacteria “shields”—a fatty bubble that keeps the cells intact—but left host cells undamaged. Compared to the original antibiotic, the newer version was far less toxic to human kidney cells in petri dishes. It also rapidly eliminated deadly bacteria in infected mice with minimal side effects. The platform can also be readily adapted to screen other drugs in development, including those for various types of cancers.

“We have found that large language models are a major step forward for machine learning applications in protein and peptide engineering,” said Dr. Claus Wilke, a University of Texas at Austin biologist and data scientist and an author on the study, in a press release.

Insane in the Membrane

Antibiotics work in several ways. Some disrupt bacteria’s ability to create proteins. Others inhibit the copying of their genetic material, halting reproduction. Yet more selectively destroy their metabolisms.

Each strategy took years of research, and even longer to develop into safe and effective antibiotics. But bacteria rapidly evolve to evade these drugs.

Overuse of antibiotics in medicine and agriculture is giving rise to “superbugs” resistant to even the toughest current drugs. Once a strain of bacteria learns to evade a mechanism—say, hindering protein production—it readily blocks other drugs that target the same strategy.

Resistance can also rapidly spread through a bacterial population. Unlike our genetic material, which is encapsulated inside a nut-like structure called the nucleus, bacterial DNA floats freely inside the cell. Genetic changes—for example, those that allow bacteria to evade antibiotics—can be transmitted to other similar bacteria through temporary biological “tunnels” that literally connect the two cells. In other words, antibiotic resistance spreads fast.

That is, if given the chance.

For antibiotic resistance to develop, the bacteria need to survive the initial onslaught. Extremely deadly treatments, including a class called antimicrobial peptides, wipe out bacteria before they can adapt. These drugs rapidly break up the fatty protective barrier surrounding all bacterial cells. Over decades of work, scientists have made many of these molecules.

The problem? They also harm the membranes protecting our own cells, resulting in toxicity that makes most of them unusable in people. Although a library of these hyper-potent antibiotic drugs already exists, like underperforming ball players, they’ve mostly been benched.

Safe and Sound

The new study aimed to rehabilitate antimicrobial peptides by tweaking one called Protegrin-1. While extremely efficient at killing bacteria, it’s too toxic for human use. The researchers wanted to see if they could dial down side effects but maintain its bacteria-killing prowess.

Led by Dr. Bryan Davies, the team had previously developed a system to rapidly screen hundreds of thousands of peptides to see if they could kill harmful bacteria.

Called SLAY, for Surface Localized Antimicrobial Display, the system looks like a bunch of tetherballs, with one end of each fixed to a biological surface and the other end—the antimicrobial peptide—floating free to capture bacteria.

The researchers then engineered over 5.7 million Protegrin-1 variants. “This is a massive increase in diversity over the 18 single mutants” in previous studies, wrote the authors.

Next, they turned to AI large language models. Known for generating text, audio, and video, these algorithms learn by ingesting terabytes of data and can spit out responses based on a specific prompt. Though mostly used to generate text, they have increasingly been embraced by scientists for their capacity to “dream up” new proteins or other drugs.

The study used several prompts to guide the AI’s search: the drug has to target bacterial membranes, and it needs to break them up without harming human cells. The AI screened the available pool of variants and found one that hit the sweet spot and met all the guidelines—a new version dubbed bacterially selective Protegrin-1.2.
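To make that screening logic concrete, here is a minimal Python sketch, not the study’s actual pipeline: the Protegrin-1 sequence is its published 18-residue sequence, while `mutate`, `predict_potency`, and `predict_toxicity` are hypothetical stand-ins for the variant generation and model-based scoring the researchers used.

```python
# Illustrative sketch of a potency-vs-toxicity screen over peptide variants.
# The scoring functions below are toy heuristics standing in for the study's
# LLM-derived scores; only the overall filtering logic is the point here.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PROTEGRIN_1 = "RGGRLCYCRRRFCVCVGR"  # Protegrin-1's 18-residue sequence

def mutate(seq: str, n_mut: int = 2) -> str:
    """Return a variant with n_mut random single-residue substitutions."""
    s = list(seq)
    for pos in random.sample(range(len(s)), n_mut):
        s[pos] = random.choice(AMINO_ACIDS)
    return "".join(s)

def predict_potency(seq: str) -> float:
    # Toy proxy: cationic residues help disrupt bacterial membranes.
    return sum(seq.count(a) for a in "RK") / len(seq)

def predict_toxicity(seq: str) -> float:
    # Toy proxy: high hydrophobicity correlates with harming host cells.
    return sum(seq.count(a) for a in "FVLI") / len(seq)

variants = {mutate(PROTEGRIN_1) for _ in range(100_000)}
hits = [v for v in variants
        if predict_potency(v) >= predict_potency(PROTEGRIN_1)
        and predict_toxicity(v) < predict_toxicity(PROTEGRIN_1)]
print(f"screened {len(variants):,} variants, kept {len(hits):,} candidates")
```

In the study itself, the scores came from the large language model rather than hand-written heuristics, and the pool covered 5.7 million variants rather than the 100,000 generated here.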

Tested in petri dishes, the variant rapidly broke down membranes in Escherichia coli, a common type of bacteria often used for research, within half an hour. Human red blood cells, meanwhile, thrived under the same circumstances, even when exposed to doses 100 times higher than those used on the bacteria. Rather than indiscriminately killing off both bacteria and human cells, the AI-approved antibiotic zeroed in on the pathogen.

Protegrin-1 has a reputation for causing kidney harm. The team pitted Protegrin-1.2 against the original and colistin, an antibiotic used as a last-resort treatment, in cultured human kidney cells. The variant topped the others in safety measures, showing less cell membrane damage.

The team also treated mice infected with a type of multidrug-resistant bacteria—which roams hospitals—with the AI-selected antibiotic. Six days later, critters treated with the new version had lower levels of bacteria in multiple organs compared to untreated mice. Some had zero signs of infection at all. Compared to Protegrin-1, the new version “is significantly less toxic to mice,” wrote the authors.

Although the study focused on antibiotics, the team envisions using a similar strategy to reengineer other drugs previously thought too toxic for humans. Recently, another team used AI to determine the structure of small chemicals useful in antibiotic and cancer therapies but previously discarded by chemists as unusable in safe and effective medications.

“Many use cases that weren’t feasible with prior approaches are now starting to work. I foresee that these and similar approaches are going to be used widely for developing therapeutics or drugs going forward,” said Wilke.

Image Credit: x / x

A New Study Says AI Models Encode Language Like the Human Brain Does

August 7, 2024 - 19:34

Language enables people to transmit thoughts to each other because each person’s brain responds similarly to the meaning of words. In newly published research, my colleagues and I developed a framework to model the brain activity of speakers as they engaged in face-to-face conversations.

We recorded the electrical activity of two people’s brains as they engaged in unscripted conversations. Previous research has shown that when two people converse, their brain activity becomes coupled, or aligned, and that the degree of neural coupling is associated with better understanding of the speaker’s message.

A neural code refers to particular patterns of brain activity associated with distinct words in their contexts. We found that the speakers’ brains are aligned on a shared neural code. Importantly, the brain’s neural code resembled the artificial neural code of large language models.

The Neural Patterns of Words

A large language model is a machine learning program that can generate text by predicting what words most likely follow others. Large language models excel at learning the structure of language, generating humanlike text, and holding conversations. They can even pass the Turing test, making it difficult for someone to discern whether they are interacting with a machine or a human. Like humans, large language models learn how to speak by reading or listening to text produced by other humans.

By giving the large language model a transcript of the conversation, we were able to extract its “neural activations,” or how it translates words into numbers, as it “reads” the script. Then, we correlated the speaker’s brain activity with both the large language model’s activations and with the listener’s brain activity. We found that the large language model’s activations could predict the speaker and listener’s shared brain activity.
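As a rough illustration of that analysis, here is a hedged Python sketch of a standard linear “encoding model.” It uses the small, public GPT-2 as a stand-in for whatever model the study used, picks an arbitrary middle layer, and substitutes random numbers for the real neural recordings, which in practice would be electrode signals aligned to each word’s onset.

```python
# Sketch: extract per-token LLM activations for a transcript, then fit a
# linear map from those activations to (here, simulated) brain recordings.
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

transcript = "so I was thinking we could try the new place downtown"
enc = tokenizer(transcript, return_tensors="pt")
with torch.no_grad():
    out = model(**enc, output_hidden_states=True)
X = out.hidden_states[6][0].numpy()  # one 768-dim vector per token, layer 6

# Simulated recordings (tokens x electrodes); real data would be ECoG/EEG
# signals time-locked to each spoken word.
rng = np.random.default_rng(0)
Y = rng.standard_normal((X.shape[0], 64))

# The "encoding model": predict brain activity from LLM activations.
encoder = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X, Y)
print("training R^2:", encoder.score(X, Y))
```

A real analysis would evaluate the fit on held-out words and compare speaker and listener recordings word by word; with random targets as above, any apparent fit is just overfitting.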

To understand each other, people have a shared agreement on the grammatical rules and the meaning of words in context. For instance, we know to use the past tense form of a verb to talk about past actions, as in the sentence: “He visited the museum yesterday.” Additionally, we intuitively understand that the same word can have different meanings in different situations. For instance, the word cold in the sentence “you are cold as ice” can refer either to one’s body temperature or to a personality trait, depending on the context. Due to the complexity and richness of natural language, until the recent success of large language models, we lacked a precise mathematical model to describe it.

Our study found that large language models can predict how linguistic information is encoded in the human brain, providing a new tool to interpret human brain activity. The similarity between the human brain’s and the large language model’s linguistic code has enabled us, for the first time, to track how information in the speaker’s brain is encoded into words and transferred, word by word, to the listener’s brain during face-to-face conversations. For example, we found that brain activity associated with the meaning of a word emerges in the speaker’s brain before articulating a word, and the same activity rapidly reemerges in the listener’s brain after hearing the word.

Powerful New Tool

Our study has provided insights into the neural code for language processing in the human brain and how both humans and machines can use this code to communicate. We found that large language models were better able to predict shared brain activity compared with different features of language, such as syntax, or the order in which words connect to form phrases and sentences. This is partly due to the large language models’ ability to incorporate the contextual meaning of words, as well as integrate multiple levels of the linguistic hierarchy into one model: from words to sentences to conceptual meaning. This suggests important similarities between the brain and artificial neural networks.

An important aspect of our research is using everyday recordings of natural conversations to ensure that our findings capture the brain’s processing in real life. This is called ecological validity. In contrast to experiments in which participants are told what to say, we relinquish control of the study and let the participants converse as naturally as possible. This loss of control makes it difficult to analyze the data because each conversation is unique and involves two interacting individuals who are spontaneously speaking. Our ability to model neural activity as people engage in everyday conversations attests to the power of large language models.

Other Dimensions

Now that we’ve developed a framework to assess the shared neural code between brains during everyday conversations, we’re interested in what factors drive or inhibit this coupling. For example, does linguistic coupling increase if a listener better understands the speaker’s intent? Or perhaps complex language, like jargon, may reduce neural coupling.

Another factor that can influence linguistic coupling may be the relationship between the speakers. For example, you may be able to convey a lot of information with a few words to a good friend but not to a stranger. Or you may be better neurally coupled with political allies than with rivals. This is because differences in the way we use words across groups may make it easier to align and be coupled with people within rather than outside our social groups.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mohamed Hassan / Pixabay

Bitcoin Miners Flush With Chips Are Pivoting to AI in Search of New Riches

August 6, 2024 - 22:30

As the bitcoin gold rush dries up, crypto miners are finding it hard to make ends meet. But for many there’s a silver lining—the facilities they’ve set up are perfect for Silicon Valley’s latest obsession with artificial intelligence.

Crypto mining can be a profitable but highly volatile endeavor. It involves creating massive datacenters packed with specialized computer chips and using them to solve the mathematical puzzles underpinning the security of various cryptocurrencies. In exchange, the miners win some of that cryptocurrency as a reward.

Most miners make the bulk of their money from bitcoin. But earlier this year, an event called “the halving” seriously hit earnings. Every four years, the bitcoin protocol halves the mining reward—that is, how much bitcoin miners receive in exchange for solving math puzzles—to increase the scarcity of the coin. Normally, this causes the price of bitcoin to jump in response, but this time around that didn’t happen, severely impacting the profitability of miners.
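The schedule driving this is simple arithmetic hard-coded into the bitcoin protocol, as this short Python sketch shows: the per-block reward started at 50 bitcoin and halves every 210,000 blocks, which works out to roughly every four years.

```python
# Bitcoin's halving schedule: the block subsidy starts at 50 BTC and halves
# every 210,000 blocks (roughly four years at one block per ten minutes).
def block_subsidy(height: int) -> float:
    halvings = height // 210_000
    if halvings >= 64:  # the subsidy effectively reaches zero by then
        return 0.0
    return 50.0 / (2 ** halvings)

# The April 2024 halving at block 840,000 cut the reward from 6.25 to 3.125.
print(block_subsidy(839_999), block_subsidy(840_000))  # 6.25 3.125
```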

Fortunately for them, another industry with a voracious appetite for computing has arrived just in time. The rush to train massive generative AI models has left companies scrabbling for chips, datacenter space, and reliable access to large amounts of cheap power, things many miners already have in abundance.

“It [normally] takes 3-5 years to build an HPC-grade data center from scratch,” JPMorgan analysts wrote in a recent note, according to the Financial Times. “This scramble for power puts a premium on companies with access to cheap power today.”

While crypto mining and training AI aren’t exactly the same, they share crucial similarities. Both require huge datacenters specialized to carry out one particular job, and both consume large amounts of power. And because miners have been playing this game for a long time, while most AI companies only began trying to train truly massive models after the launch of ChatGPT less than two years ago, the miners have a big head start.

They’ve already spent years scouring the country for places with abundant cheap power and plenty of space to build large datacenters. More importantly, they’ve already gone through the time-consuming process of getting approvals, negotiating power licenses, and getting the facilities up and running.

The rapid expansion in demand for AI training is straining grids in some areas, and so, many jurisdictions in North America have implemented long waitlists for new datacenters, according to Time. Already, roughly 83 percent of datacenter capacity currently under construction has been leased in advance, says Bloomberg.

This means the biggest bottleneck for many AI companies is finding the hardware to train their models, and that presents a new opportunity for crypto miners. “You’ve seen a number of crypto miners that were sort of struggling that have actually made a full pivot away,” Kent Draper, chief commercial officer of crypto miner IREN, told Time.

Converting a bitcoin mine into an AI training cluster isn’t a straight swap. AI training is typically done on GPUs while bitcoin mining uses specialized mining chips from Bitmain. But often, it’s not so much the chips AI companies are after, but the infrastructure and power access the mine has already set up.

In June, crypto miner Core Scientific announced it would host 270 megawatts of GPUs for the AI infrastructure startup CoreWeave. “We view the opportunity in AI today to be one where we can convert existing infrastructure we own to host clients who are looking to install very large arrays of GPUs for their clients that are ultimately AI clients,” Core Scientific CEO Adam Sullivan told Bloomberg.

Some miners are also operating GPUs themselves. German miner Northern Data had been focused on mining the Ethereum cryptocurrency, but a major software update to the coin’s blockchain in 2022 did away with mining. Pivoting, the company purchased $800 million worth of Nvidia’s latest GPUs to launch a 20,000-GPU AI cluster, one of the largest in Europe, according to Bloomberg.

Other miners like Hut 8 and IREN are investing heavily in new chips to more proactively chase the AI boom. Often, AI training is happening side-by-side with crypto mining. “We view them as mutually complementary,” IREN’s Draper told Time. “Bitcoin is instant revenue but somewhat more volatile. AI is customer-dependent—but once you have customers, it’s contracted and more stable.”

This new trend could provide some modest environmental benefits too. People are concerned about the enormous power consumption of both AI training and bitcoin mining. If increasing demand for AI simply displaces existing mining infrastructure, rather than requiring new power-hungry datacenters, that could help curtail the growing carbon impact of the industry.

However, for miners, chasing the latest gold rush can be a risky strategy. There are growing concerns the AI industry is in a bubble close to bursting. If that happens, the rich new seam miners have started to tap could dry up very quickly.

Update 8/8/2024: The article previously stated Northern Data bought $800 million worth of new Nvidia chips to mine Ethereum and repurposed them to train AI models. However, Northern Data bought the chips exclusively for their AI business. The article has been updated to reflect this.

Image Credit: Traxer / Unsplash

Ozempic-Like Drug Slows Cognitive Decline in Mild Alzheimer’s Disease

August 5, 2024 - 23:35

If you hear the word Ozempic, weight loss immediately comes to mind. The drug—part of a family of treatments called GLP-1 agonists—took the medical world (and internet) by storm for helping people manage diabetes, lower the risk of heart disease, and rapidly lose weight.

The drugs may also protect the brain against dementia. In a clinical trial including over 200 people with mild Alzheimer’s disease, a daily injection of a GLP-1 drug for one year slowed cognitive decline. When challenged with a battery of tests assessing memory, language skills, and decision-making, participants who took the drug remained sharper for longer than those who took a placebo—an injection that looked the same but wasn’t functional.

The results are the latest from the Evaluating Liraglutide in Alzheimer’s Disease (ELAD) study led by Dr. Paul Edison at Imperial College London. Launched in 2014, the study was based on years of research in mice showing liraglutide—a GLP-1 drug already approved for weight loss and diabetes management in the United States—also protects the brain.

In Alzheimer’s disease, neurons die off and the brain gradually loses volume. In the trial, liraglutide slowed the process down, resulting in roughly 50 percent less volume loss in several areas of the brain related to memory compared to a placebo.

“We are in an era of unprecedented promise, with new treatments in various stages of development that slow or may possibly prevent cognitive decline due to Alzheimer’s disease,” said Dr. Maria C. Carrillo, Alzheimer’s Association chief science officer and medical affairs lead, in a press release. “This research provides hope that more options for changing the course of the disease are on the horizon.”

The results were presented last month at the Alzheimer’s Association International Conference.

Back to Basics

The quest for an Alzheimer’s disease treatment is littered with failures. Most treatments aim to tackle toxic protein clumps that build up inside the brain. It’s thought that breaking them up could prevent neurons from withering away.

A few have had limited success. Last month, the US Food and Drug Administration (FDA) approved a drug that breaks down the clumps in people already experiencing symptoms at an early stage of the disease. A few weeks later, the European Medicines Agency refused to approve another drug that also targets the clumps, saying the effects of delaying cognitive decline didn’t balance the risk of serious side effects, including brain swelling and bleeding.

Other scientists have looked elsewhere—specifically, diabetes. Insulin helps maintain brain health, and Type 2 diabetes is a risk factor for developing Alzheimer’s disease. Rather than directly breaking down protein clumps in the brain, might we protect the brain by tweaking the body’s metabolism?

Enter GLP-1 drugs. These mimic hormones released by the stomach after a satisfying meal, tricking the brain into thinking you’re full. In other words, the drugs don’t only influence the gut—they also change brain functions.

In a mouse model of Alzheimer’s, daily injections of liraglutide for eight weeks prevented memory problems. Their neurons also thrived. Synapses—the junctions connecting brain cells—were still able to rapidly form neural networks in areas especially damaged by the disease. Surprisingly, toxic protein clumps also declined by up to 50 percent, and inflammation dropped.

Liraglutide didn’t just work on neurons. Another study, also in an Alzheimer’s mouse model, found it rapidly tweaked the metabolism of a particular kind of star-shaped brain cell that supports neurons. These cells don’t form neural networks, but they do help provide energy. In Alzheimer’s, they stop functioning normally, but liraglutide reversed the decline. In mice, the drug improved the cells’ ability to support neurons, allowing the neurons to flourish and connect to others. The brain also made better use of sugar—its primary fuel—allowing it to give birth to new neurons in a region important for memory.

But as the field frustratingly knows, mice are not people. Many promising treatments in mice have failed in clinical studies, earning these endeavors the nickname “graveyard of dreams.”

The Trial

Edison took on the task of extending the research from mice to humans. In 2019, he and his colleagues detailed plans for a clinical trial to gauge liraglutide’s effects in people with mild Alzheimer’s. Called ELAD, the study was to be randomized and double-blind—the gold standard in clinical trials. Here, neither doctor nor patient knows who’s getting liraglutide or the placebo.

They recruited 204 people to receive injections, either liraglutide or placebo, every day for a year. Before the trial, each person had an MRI scan to map their brain’s structure and volume. Other scans recorded brain metabolism, and a battery of memory tests detailed cognition. These tests were repeated at the end, with safety checkups in between, in case of side effects.

The study had several goals. One was to see if liraglutide increased the brain’s metabolism in regions heavily impacted by Alzheimer’s—those related to learning, memory, and decision-making. Another examined brain volume, which decreases as the disease progresses. The last evaluated cognitive tests of memory, comprehension, language, and spatial navigation.

People who took liraglutide had nearly 50 percent less brain volume loss, especially in regions associated with reasoning and learning. “The slower loss of brain volume suggests liraglutide protects the brain, much like statins protect the heart,” said Dr. Edison.

Liraglutide also boosted cognition. Comparing scores from before the trial, at its midpoint, and at the end, those who received the drug had an 18 percent slower decline than those who took the placebo. However, the drug didn’t affect brain metabolism.

Side effects were relatively mild. The most common was nausea. More serious ones, which weren’t specified, occurred in 18 patients, but according to Edison they likely weren’t related to the treatment.

To be clear, the team presented the results at a conference, and they haven’t yet been formally vetted by other experts in the field. But they add to accumulating evidence that GLP-1 drugs slow cognitive decline. A Swedish study published in June emulated a clinical trial using health records from over 88,000 people with Type 2 diabetes who were followed over four years and given either GLP-1 drugs or one of two other types of diabetes drugs. The GLP-1 drugs were better than the other two at keeping the risk of dementia at bay.

We don’t yet know how liraglutide protects the brain. Based on studies in mice, it likely works in multiple ways, such as reducing inflammation, clearing toxic protein clumps, and improving communication between neurons, Edison said.

But the idea is gaining steam. EVOKE Plus, a late-stage clinical trial of semaglutide—the chemical in Ozempic—is ongoing. The study will take about three and a half years, with an estimated enrollment of 1,840 people with early Alzheimer’s disease. It’s set to conclude in late 2026.

“Repurposing drugs already approved for other conditions has the advantage of providing data and experience from previous research and practical use—so we already know a lot about real-world effectiveness in other diseases and side effects,” said Carrillo.

Image Credit: Maxim Berg / Unsplash

This Week’s Awesome Tech Stories From Around the Web (Through August 3)

August 3, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

The Era of Predictive AI Is Almost Over
Dean W. Ball | The New Atlantis
“For firms like OpenAI, DeepMind, and Anthropic to achieve their ambitious goals, AI models will need to do more than write prose and code and come up with images. And the companies will have to contend with the fact that human input for training the models is a limited resource. The next step in AI development is as promising as it is daunting: AI building upon AI to solve ever more complex problems and check for its own mistakes. There will likely be another leap in LLM development, and soon.”

TECH

ChatGPT Advanced Voice Mode Impresses Testers With Sound Effects, Catching Its Breath
Benj Edwards | Ars Technica
“In early tests reported by users with access, Advanced Voice Mode allows them to have real-time conversations with ChatGPT, including the ability to interrupt the AI mid-sentence almost instantly. It can sense and respond to a user’s emotional cues through vocal tone and delivery, and provide sound effects while telling stories. But what has caught many people off-guard initially is how the voices simulate taking a breath while speaking.”

ROBOTICS

Arc’teryx’s New Powered Pants Could Make Hikers Feel 30 Pounds Lighter
Andrew Liszewski | The Verge
“Strength-boosting exoskeleton suits can help make jobs with physical labor feel less strenuous, but Arc’teryx has partnered with Skip, a spinoff of Google’s X Labs, to bring the technology to leisure time. The powered MO/GO pants feature a lightweight electric motor at the knee that can boost a hiker’s leg strength when going uphill while also absorbing the impact of steps during a descent.”

ARTIFICIAL INTELLIGENCE

Silicon Valley’s Trillion-Dollar Leap of Faith
Matteo Wong | The Atlantic
“Silicon Valley has already triggered tens or even hundreds of billions of dollars of spending on AI, and companies only want to spend more. Their reasoning is straightforward: These companies have decided that the best way to make generative AI better is to build bigger AI models. And that is really, really expensive, requiring resources on the scale of moon missions and the interstate-highway system to fund the data centers and related infrastructure that generative AI depends on. …Now a number of voices in the finance world are beginning to ask whether all of this investment can pay off.”

AUTOMATION

Robots Are Coming, and They’re on a Mission: Install Solar Panels
Brad Plumer | The New York Times
“On Tuesday, AES Corporation, one of the country’s biggest renewable energy companies, introduced a first-of-its-kind robot that can lug around and install the thousands of heavy panels that typically make up a large solar array. AES said its robot, nicknamed Maximo, would ultimately be able to install solar panels twice as fast as humans can and at half the cost.”

ENERGY

Silicon Plus Perovskite Solar Reaches 34 Percent Efficiency
John Timmer | Ars Technica
“Perovskite crystals can be layered on top of silicon, creating a panel with two materials that absorb different areas of the spectrum—plus, perovskites can be made from relatively cheap raw materials. Unfortunately, it has been difficult to make perovskites that are both high-efficiency and last for the decades that the silicon portion will. Lots of labs are attempting to change that, though. And two of them reported some progress this week, including a perovskite/silicon system that achieved 34 percent efficiency.”

DIGITAL MEDIA

How This Brain Implant Is Using ChatGPT
Jesse Orrall | CNET
“One of the leading-edge implantable brain-computer-interface, or BCI, companies is experimenting with ChatGPT integration to make it easier for people living with paralysis to control their digital devices. …Now, instead of typing out each word, answers can be filled in with a single ‘click.’ There’s a refresh button in case none of the AI answers are right, and [a pioneering patient] Mark has noticed the AI getting better at providing answers that are more in line with things he might say.”

ETHICS

A New Trick Could Block the Misuse of Open Source AI
Will Knight | Wired
“When Meta released its large language model Llama 3 for free this April, it took outside developers just a couple days to create a version without the safety restrictions that prevent it from spouting hateful jokes, offering instructions for cooking meth, or misbehaving in other ways. A new training technique developed by researchers at the University of Illinois Urbana-Champaign, UC San Diego, Lapis Labs, and the nonprofit Center for AI Safety could make it harder to remove such safeguards from Llama and other open source AI models in the future.”

SCIENCE

Complex Life on Earth May Be Much Older Than Thought
Georgina Rannard | BBC
“A group of scientists say they have found new evidence to back up their theory that complex life on Earth may have begun 1.5 billion years earlier than thought. The team, working in Gabon, say they discovered evidence deep within rocks showing environmental conditions for animal life 2.1 billion years ago. But they say the organisms were restricted to an inland sea, did not spread globally and eventually died out.”

FUTURE

Should We Put a Frozen Backup of Earth’s Life on the Moon?
James Woodford | New Scientist
“A backup of life on Earth could be kept safe in a permanently dark location on the moon, without the need for power or maintenance, allowing us to potentially restore organisms if they die out. …’There is no place on Earth cold enough to have a passive repository that must be held at -196°C, so we thought about space or the moon,’ says [Mary] Hagedorn.”

Image Credit: Vishnu Mohanan / Unsplash

Meta Just Launched the Largest ‘Open’ AI Model in History. Here’s Why It Matters.

August 2, 2024 - 22:32

In the world of artificial intelligence, a battle is underway. On one side are companies that believe in keeping the datasets and algorithms behind their advanced software private and confidential. On the other are companies that believe in allowing the public to see what’s under the hood of their sophisticated AI models.

Think of this as the battle between open- and closed-source AI.

In recent weeks, Meta, the parent company of Facebook, took up the fight for open-source AI in a big way by releasing a new collection of large AI models. These include a model named Llama 3.1 405B, which Meta’s founder and chief executive, Mark Zuckerberg, says is “the first frontier-level open-source AI model.”

For anyone who cares about a future in which everybody can access the benefits of AI, this is good news.

The Danger of Closed-Source AI—and the Promise of Open-Source AI

Closed-source AI refers to models, datasets, and algorithms that are proprietary and kept confidential. Examples include ChatGPT, Google’s Gemini, and Anthropic’s Claude.

Though anyone can use these products, there is no way to find out what dataset and source codes have been used to build the AI model or tool.

While this is a great way for companies to protect their intellectual property and profits, it risks undermining public trust and accountability. Making AI technology closed-source also slows down innovation and makes a company or other users dependent on a single platform for their AI needs. This is because the platform that owns the model controls changes, licensing, and updates.

There are a range of ethical frameworks that seek to improve the fairness, accountability, transparency, privacy, and human oversight of AI. However, these principles are often not fully achieved with closed-source AI due to the inherent lack of transparency and external accountability associated with proprietary systems.

In the case of ChatGPT, its parent company, OpenAI, releases neither the dataset nor code of its latest AI tools to the public. This makes it impossible for regulators to audit it. And while access to the service is free, concerns remain about how users’ data are stored and used for retraining models.

By contrast, the code and dataset behind open-source AI models is available for everyone to see.

This fosters rapid development through community collaboration and enables the involvement of smaller organizations and even individuals in AI development. It also makes a huge difference for small- and medium-size enterprises as the cost of training large AI models is colossal.

Perhaps most importantly, open-source AI allows for scrutiny and identification of potential biases and vulnerabilities.

However, open-source AI does create new risks and ethical concerns.

For example, quality control in open-source products is usually low. Because hackers can access the code and data, the models are also more prone to cyberattacks and can be tailored and customized for malicious purposes, such as retraining the model with data from the dark web.

An Open-Source AI Pioneer

Among all leading AI companies, Meta has emerged as a pioneer of open-source AI. With its new suite of AI models, it is doing what OpenAI promised to do when it launched in December 2015—namely, advancing digital intelligence “in the way that is most likely to benefit humanity as a whole,” as OpenAI said back then.

Llama 3.1 405B is the largest open-source AI model in history. It is what’s known as a large language model, capable of generating human language text in multiple languages. It can be downloaded online, but because of its huge size, users will need powerful hardware to run it.
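For a sense of what running an open-weights model involves in practice, here is a hedged sketch using the Hugging Face transformers library. It loads the much smaller 8B sibling rather than the 405B model, which needs a multi-GPU server; the model id is assumed from Meta’s published naming, and downloading the weights requires accepting Meta’s license on the Hub first.

```python
# Sketch: load an open-weights Llama 3.1 model and generate a continuation.
# Requires `pip install transformers accelerate` and a Hub token with access
# to the meta-llama repositories.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B"  # assumed Hub id; verify on the Hub
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = tok("Open-source AI matters because", return_tensors="pt").to(model.device)
out = model.generate(**prompt, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```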

While it does not outperform other models across all metrics, Llama 3.1 405B is considered highly competitive and does perform better than existing closed-source and commercial large language models in certain tasks, such as reasoning and coding tasks.

But the new model is not fully open because Meta hasn’t released the huge dataset used to train it. This is a significant “open” element that is currently missing.

Nonetheless, Meta’s Llama levels the playing field for researchers, small organizations, and startups because it can be leveraged without the immense resources required to train large language models from scratch.

Shaping the Future of AI

To ensure AI is democratized, we need three key pillars:

  • Governance: regulatory and ethical frameworks to ensure AI technology is being developed and used responsibly and ethically
  • Accessibility: affordable computing resources and user-friendly tools to ensure a fair landscape for developers and users
  • Openness: datasets and algorithms to train and build AI tools should be open source to ensure transparency

Achieving these three pillars is a shared responsibility for government, industry, academia, and the public. The public can play a vital role by advocating for ethical policies in AI, staying informed about AI developments, using AI responsibly, and supporting open-source AI initiatives.

But several questions remain about open-source AI. How can we balance protecting intellectual property and fostering innovation through open-source AI? How can we minimize ethical concerns around open-source AI? How can we safeguard open-source AI against potential misuse?

Properly addressing these questions will help us create a future where AI is an inclusive tool for all. Will we rise to the challenge and ensure AI serves the greater good? Or will we let it become another nasty tool for exclusion and control? The future is in our hands.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Google DeepMind / Unsplash

A Massively Strong Beetle Just Inspired a Lightweight Flying Robot

August 1, 2024 - 21:25

One of the largest and strongest beetles in the world hardly seems the best inspiration for a delicate flying microbot.

But using slow-motion cameras to capture the critters in flight, an international team designed a flying micromachine that can similarly expand and retract its wings. The robot—resembling a rocket before takeoff and a flying insect once airborne—deploys its wings for takeoff, then easily hovers and flaps them to stay aloft. Upon landing, it tucks its wings back into its body.

The robot was inspired by rhinoceros beetles, named for the distinctive horns protruding from the males’ foreheads. These critters can grow up to six inches—picture a similarly sized Subway sandwich—and carry up to 100 times their body weight in cargo, earning them the nickname Hercules beetles.

They’re hardly stationary beefcakes. Covered in a shiny black or grey exoskeleton, these beetles can fly two miles a day. But it was their sophisticated wing-deployment system that caught the eyes of roboticists.

“Birds, bats, and many insects can tuck their wings against their bodies when at rest and deploy them to power flight,” but we didn’t know how the process worked for the beetle, wrote the authors.

It’s not just scientific curiosity. The research could lead to flapping robot designs for search and rescue operations or environmental, agricultural, and military monitoring.

The findings could improve the design of flapping-wing robots, especially smaller ones with limited takeoff weights, explained the team, “enabling them to deploy and retract their wings similarly to their biological counterparts.”

Nuisance to Notion

When it comes to fashioning mini-bots, Mother Nature is a mother lode of creative inspiration.

In 1989, a pair of intrepid scientists at MIT’s Artificial Intelligence Lab imagined and built several small, multi-legged robots to explore our planet and the solar system beyond.

Fast forward to earlier this year, and the idea is becoming reality. One team developed a crawling MiniBug robot and artificial water strider by mimicking movements observed in their natural counterparts. These were some of the smallest, lightest, and fastest fully functional robots to date, relying on tiny motors—called actuators—to help them move.

Meanwhile, bees have inspired microbots that fly, even with damaged wings, and flies have inspired tiny accelerometers that sense wind and aid flight control. Dr. Sawyer Buckminster Fuller at the University of Washington, an author of the latter study, explained at the time why bugbots make sense. “First, they’re so small that they’re inherently safe around people. You won’t get an injury if an insect robot crashes into you. The other is, they’re so small they use very little power.”

Yet these systems still require electricity or motors to control wing positions during takeoff, flight, and landing, which limits their range and utility. The new study looked to beetles for an alternative—one that doesn’t require motors to stretch and tuck a bugbot’s wings.

Beetle Juice

The rhinoceros beetle was a risky inspiration. With two pairs of wings—each having its own set of mechanics and uses—the beetle has always been hard to study.

“Beetles…possess one of the most complex mechanisms among the various insect species,” wrote the authors.

Part of this is due to a complex dynamic between the pairs of wings. The forewings, also called elytra, are hardened and shell-like. The hindwings, in contrast, are delicate, membrane-like structures—think of a dragonfly’s wings—that fold into themselves like origami.

This “allows them to neatly stow between the body and the elytra” when not in flight, wrote the team.

The shell-like elytra protect their hindwing teammates at rest and spread like fighter-jet wings during flight. The hindwings unfold and flap during flight, then fold back upon landing. Previous studies suggested that muscles, stretchy tissues, or other elements drive the hindwings. Here, the team laid the debate to rest using high-speed cameras to record beetles as they took flight.

Wing Man

The beetle’s wings spread in two steps.

First, like a fighter jet, the beetle deploys the hard-shell elytra. Through a spring-like mechanism, the hindwings then slightly stretch out using stored energy rather than muscle energy. In other words, the beetle doesn’t flex its muscles—its hindwings naturally spread.

“This allows the clearance needed for the subsequent flapping motion,” wrote the team.

The second phase activates synchronized flaps of both wing pairs. The hindwings unfold and assume flight position, allowing the beetle to maneuver through nooks and crannies.

The duo also work in concert for landing. The elytra push the hindwings to fold and neatly tuck into a resting position—with the elytra’s hard shell protecting them from above.

Flapping Flying Bots

The team designed a flapping robot that mimics the beetles’ wing system.

It looks like a cyborg fly, with two translucent wings connected to a golden body and rotund head. Unlike the beetle, the bugbot has just one pair of retractable wings, which fold into its body at rest, decreasing its length by over 60 percent.

Each wing is made of lightweight carbon and a stretchy membrane. Combined with flexible joints, the bugbot easily rotates as it flaps around. An elastic tendon at the bot’s “armpits” can pull the wings back in just 100 milliseconds—or about the blink of an eye. The team used a single motor, modeled on the elytra, to deploy them.

Once activated, the wings rapidly spread, propelling the minibot skyward in two wing flaps. In a series of tests, the bot successfully took off, hovered, and landed. The wings automatically unfolded into the flight position, generating enough lift for takeoff. While airborne, it hovered and stayed upright, despite some wobbles. On landing, the bugbot refolded in on itself, retracting its wings in the blink of an eye.

These retractable wings have an additional perk—resilience.

If the bugbot is hit by an obstacle, causing it to irreversibly tumble and potentially crash, it immediately retracts its wings to protect them from impact—without the need for muscle energy or other external controls. This resilience may come in handy when navigating dangerous terrain—after an environmental disaster, for example.

Although the study focused on the rhinoceros beetle, a similar strategy could be used to observe and harness biological perks from other insects, such as ladybugs.

“These experiments…[demonstrate] a new design principle for the robust flight of flapping-wing microrobots with stringent weight constraints in cluttered and confined spaces,” wrote the team.

Image Credit: Hoang-Vu Phan

This Ultra-Thin Lightsail Could Tow Tiny Spacecraft to the Nearest Stars

July 31, 2024 - 22:17

Traveling the vast distances between solar systems is well beyond existing technology. But a new ultra-thin lightsail designed with AI could make it possible to reach the nearest star within 20 years.

Launched in 1977, the Voyager 1 probe was the first human-made object to leave our solar system. But at current speeds, it would take over 70,000 years to reach Alpha Centauri, the closest star system to our own.

There is one propulsion technology, however, that could significantly speed things up. A lightsail is a large reflective surface deployed in front of a spacecraft, where it can harness either sunlight or light from an Earth-based laser to continually accelerate the vehicle. In theory, this could make it possible to achieve speeds of 10 to 20 percent of the speed of light.

Building materials that are both reflective and light enough to make this possible has been an outstanding challenge though. Now, researchers have used an AI technique called “neural topology optimization” to create a sheet of silicon nitride, less than 200 nanometers thick, that could bring the idea to life.

“This mission requires lightsail materials that challenge the fundamentals of nanotechnology, requiring innovations in optics, material science, and structural engineering,” the team writes in a preprint posted to arXiv.

“This study underscores the potential of neural topology optimization to achieve innovative and economically viable lightsail designs, crucial for next-generation space exploration.”

The researchers’ technique was inspired by Breakthrough Starshot, a project launched by the Breakthrough Initiatives in 2016. Starshot seeks to design a fleet of around 1,000 tiny spacecraft that use lightsails and an Earth-based laser to reach Alpha Centauri within 20 to 30 years. The probes would carry cameras and other sensors to send back data on arrival.

To reach the required speeds, the spacecraft will have to be incredibly light—the probes themselves will be just centimeters across and weigh a few grams. But to gather enough light, the sails need to measure roughly 100 square feet, so we need new ultralight materials to keep their weight down.

One promising approach involves creating optical nanostructures called “photonic crystals” made up of a repeating grid of tiny holes. Punching millions or billions of these holes into the material reduces its weight significantly, but these repeating structures also create unusual optical effects that can actually enhance the material’s reflectivity.

Working out exactly how to arrange these holes is a complicated process though, so the group from Delft University in the Netherlands and Brown University in the United States enlisted AI to help them. They combined a neural network with a more conventional computational physics program to find the optimal configuration and shape of the holes to minimize mass and boost reflectivity.
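The following toy PyTorch sketch, emphatically not the researchers’ code, shows the general shape of neural topology optimization: a small network maps each point on the sail to a material density, and gradient descent trades a stand-in reflectivity reward against mass. In the real design loop, the placeholder reflectivity term would be a rigorous electromagnetic simulation of the photonic-crystal hole pattern.

```python
# Toy neural topology optimization: a neural field parameterizes where
# material sits on a 2D sail, and we minimize mass while maximizing a
# placeholder "reflectivity" score.
import torch

coords = torch.cartesian_prod(torch.linspace(0, 1, 64), torch.linspace(0, 1, 64))

net = torch.nn.Sequential(  # maps (x, y) -> material density in [0, 1]
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1), torch.nn.Sigmoid(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    density = net(coords)  # ~1 where material stays, ~0 where a hole is
    mass = density.mean()  # lighter sails accelerate harder
    # Placeholder reward; a real pipeline would evaluate reflectivity with
    # an electromagnetic solver coupled to the optimizer.
    reflectivity = (density * torch.sin(8 * torch.pi * coords[:, :1]) ** 2).mean()
    loss = mass - 2.0 * reflectivity
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"mass proxy {mass.item():.3f}, reflectivity proxy {reflectivity.item():.3f}")
```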

This resulted in a sheet less than 200 nanometers thick, patterned with a lattice of bean-shaped holes. To show the design works as expected, they used an approach called flood lithography, in which a laser uses an incredibly detailed stencil to create holes in a silicon nitride wafer. Using the approach, the team created a 5.5-square-inch sample that weighed just a few micrograms.

Lithography is the same technology companies use to make computer chips, so the researchers think the approach could easily be scaled up. The team predicts it would take about a day and cost around $2,700 to create a full-sized sail. They’d need to build a dedicated facility though, team leader Richard Norte of Delft told New Scientist, because those used for chip fabrication only work with wafers about 15 inches long.

There are still a lot of other engineering challenges to be solved for the Breakthrough Starshot mission to come together, Stefania Soldini at the University of Liverpool told New Scientist, but a cheap and fast way of producing lightsails will be crucial.

NASA is also actively pursuing the approach. Just last week, the agency announced that its Advanced Composite Solar Sail System, which launched earlier this year, is close to hoisting its sails for the first time.

If these projects are successful, we may get our first close-up glimpse of worlds beyond our solar system within many people’s lifetimes.

Image Credit: This 4.5-square-inch sample could lead to a full-sized lightsail lightweight enough to tow tiny spacecraft to another star system / L. Norder, et al via arXiv

The Psychology of Olympians and How They Master Their Minds to Perform

July 30, 2024 - 19:55

Participating in the Olympic Games is a rare achievement, and the pressures and stressors that come with it are unique. Whether an athlete is battling to win the breaststroke or powering their way to gold in the modern pentathlon, psychology will play a vital role in their success or failure in Paris this summer.

In recent Olympics, we have seen the mental toll that competing at the highest level can have on athletes. US gymnast Simone Biles withdrew from five events at the 2020 Tokyo Olympics to protect her mental health, and 23-time gold medal winner Michael Phelps has described the mental crash that hits him after competing in the Games.

When even small errors can cost them a medal, how do athletes use psychological principles to master their minds and perform under pressure?

Resilience

The ability to recover from setbacks, such as disappointing performances or injury, is crucial. Mental processes and behaviors such as emotional regulation (recognizing and controlling emotions like anxiety) allow Olympians to maintain focus and determination amid the global scrutiny that comes with competing on the world’s biggest stage.

Resilience is not a fixed trait but rather a dynamic process that evolves through an interplay between individual characteristics, such as personality and psychological skills, and environment, such as an athlete’s social support. A 2012 UK study investigating resilience in Olympic champions highlighted that a range of psychological factors, such as positive personality, motivation, confidence, and focus, as well as a sense of strong social support, helped protect athletes from the potential negative stressors caused by competing in the Olympics. These factors increased athletes’ resilience and the likelihood they would perform at their best.

Social support means that athletes don’t have to feel like they are going it alone. If they can call on strong networks of family, friends, and coaches, it provides them with additional emotional strength and motivation.

Resilience empowers Olympians to draw upon individual skills and traits and protects them from the negative effects of stressors that inevitably come with competing in the Olympics. For example, a rower may need to solve problems such as changing weather conditions. Resilience allows them to maintain composure and adjust to the conditions, for instance by modifying their stroke technique.

Being Present

Staying in the present can help athletes avoid being overwhelmed or consumed by the significance of their event or distracted by the disappointment of past failures and the pressure of high medal expectations.

To help them remain in the present moment, athletes may use a variety of strategies. Mindfulness-based meditation and breathing exercises can help athletes feel calm and focused. They may also use performance visualization to rehearse specific movements or routines. Think of a basketball player visualizing a free-throw shot.

Similarly, many athletes will have well-rehearsed pre-performance routines which can create a sense of normality and control. For example, a tennis player may bounce the ball a certain number of times before serving. Staying in the present will help reduce athletes’ anxiety, maintain focus on the task, and allow them to fully experience (and hopefully enjoy) the atmosphere.

Protecting Their Mental Wellbeing

Failure can be devastating and athletes can have complicated relationships with winning. For example, some athletes experience post-Olympic blues, which is often described as the feeling of emptiness, loss of self-worth, and even depression following an Olympic Games, even if the athlete has won a medal. British cyclist Victoria Pendleton wrote for The Telegraph in 2016 describing this phenomenon: “It’s almost easier to come second because you have something to aim for when you finish. When you win, you suddenly feel lost.”

Olympians may be champions, but like the rest of us, they will need to prioritize the fundamentals such as getting adequate sleep and downtime to recharge mentally. An Australian study conducted in 2020 highlighted the relationship between maintaining mental wellbeing and increased athletic performance. To ensure this, Olympians will be working closely with support staff such as performance nutritionists who will ensure they have a balanced diet which meets the physical needs of their event, helping to protect both physical and mental health.

They will also be working with sport and exercise psychologists throughout their training in preparation for the Olympics to manage challenges as and when they experience them. If an athlete starts struggling with performance anxiety ahead of the Games, they may practice mindfulness or cognitive restructuring, which are techniques that help people to notice and change negative thinking patterns.

Olympians and their support team will need to take care of both the person and the athlete to protect their wellbeing. When they protect their wellbeing, they are offering the best chance of both achieving their best performance during the Games themselves and avoiding the post-Olympic blues when they are over.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Jacob Rice / Unsplash

How a Mind-Controlling Parasite Could Deliver Medicine to the Brain

July 29, 2024 - 22:55

The brain is like a medieval castle perched on a cliff, protected on all sides by high walls, making it nearly impenetrable.

Its shield is the blood-brain barrier, a layer of tightly connected cells that only allows an extremely selective group of molecules to pass. The barrier keeps delicate brain cells safely away from harmful substances, but it also blocks therapeutic proteins—like, for example, those that grab onto and neutralize toxic clumps in Alzheimer’s disease.

One way to smuggle proteins across? A cat parasite.

A new study in Nature Microbiology tapped into the strange world of mind-bending parasites, specifically, Toxoplasma gondii. Perhaps best known for its ability to rid infected mice of their fear of cats, the parasite naturally travels from the gut to the brain—including ours—and releases proteins that tweak behavior.

The international team hijacked T. gondii’s natural, brain-targeting impulses to engineer two delivery systems, one for a single-shot therapeutic boost and another that lasts longer.

The unconventional shuttle worked on brain cells in petri dishes and brain organoids. Often called “mini-brains,” these pea-sized blobs roughly capture the cell types and structure of a growing fetal human brain. However, they don’t usually produce a blood-brain barrier.

To show the shuttle could gain access to the brain, the team engineered a T. gondii shuttle with a therapeutic protein for Rett syndrome, a genetic disorder that leads to autism-like symptoms.

After one shot into the belly, the shuttle released the therapeutic proteins widely into the brains of lab mice within a few weeks. The proteins mostly accumulated in parts of the brain critical for perception, reasoning, and memory.

“For medicine, efficient and safe delivery of proteins could unlock a broad category of protein-based therapies,” wrote the authors.

U-Haul to the Brain

Getting protein-based drugs into the brain is a pain. Unlike gene therapy concoctions, proteins are extremely sensitive to heat and acid. They can’t be swallowed as a pill—the gut’s acid destroys them. Even injections straight into the bloodstream are problematic. Immune cells, for example, may wipe out the proteins before they have a chance to reach the brain.

Thankfully, nature is a source of inspiration. All brain-targeting carriers need to bypass two “checkpoints”: The first is the blood-brain barrier, the second, the neuron’s membrane.

A popular approach uses a bio-engineered virus carrying the genetic instructions to make a protein once inside the neurons. Often employed in gene therapy, scientists make the virus relatively safe by stripping away its infectious tendencies. But like a small U-Haul van, it only has room for the genetic instructions of smaller proteins.

Another surprising carrier traces its roots to HIV. Scientists studying the virus found a small protein chunk that allows it to penetrate the blood-brain barrier and get past neuron membranes. By engineering these chunks—which aren’t infectious—into shuttles, scientists can then tag protein cargos onto them. One example (by yours truly) could tunnel into the brain after an injection into the bloodstream and protect rats’ brains from damage after a stroke.

These shuttles too are limited by size: They can only drag along very small protein snippets. Antibodies and other larger proteins are beyond reach.

T. gondii, in contrast, has a much larger capacity.

A Synthetic Fleet

A cat parasite hardly sounds like medicine. But it’s a worthy candidate.

Normally, T. gondii produces egg-like “offspring” in the guts of cats, which are then strewn into the wild as they poop. The parasite waits for potential hosts—say, a mouse sniffing for crumbs or a human changing the litter box—and infects the unsuspecting host, ultimately spreading into the brain. Once inside, T. gondii lingers in neurons, rather than other brain cells.

It sounds terrifying, but for people with a healthy immune system, the parasite usually doesn’t cause harm. “In fact, it is estimated that a third of the world population is chronically infected with the parasite,” wrote the lab of Dr. Oded Rechavi, who led the study, in a blog post.

To transform T. gondii into a delivery tool, the team focused on two of the parasite’s secretion systems, which let it pump proteins into target cells. These are “remarkable innate abilities,” wrote the team.

They first built a protein link between the two systems and their potential cargo, for example, proteins implicated in Parkinson’s disease, gene-editing proteins, and MECP2—which is linked to Rett syndrome. The team then tethered the proteins to one of the two systems and delivered them into a variety of cells in petri dishes.

Within a day, the proteins had accumulated inside their host cells.

In neurons without MECP2, a dose of T. gondii carrying a synthetic version of the protein boosted its levels to roughly 58 percent of normal cells, which is similar to previous gene therapy studies of Rett syndrome. The added MECP2 worked like its natural counterpart, turning genes on or off inside neurons as expected.

T. gondii also reliably released its payload into mature brain organoids. The protein altered genetic transcription throughout the mini-brains, changing gene expression as predicted.

The two T. gondii systems had individual strengths. One is a “kiss-and-spit”: Like a fighter jet, T. gondii swoops in on a neuron, releases its protein payload, and leaves. The other takes a longer approach, requiring T. gondii to infiltrate and establish itself inside the cell, like a sleeper agent. Once in, however, the system can deliver its cargo for a longer time and at a higher level.

Cat and Mouse Game

As a final test, the team injected the engineered T. gondii, with an MECP2 payload, into the bellies of mice—like an insulin shot for people with diabetes.

Eighteen days later, the mice’s brains showed signs of cysts—which are harmless for people without immune problems—indicating the parasite was establishing itself inside the brain. Other tissues, including the liver, lung, and spleen, had very little T. gondii roaming around for up to three months after injection. Only the brain had a boost in MECP2.

“Many proteins require controlled targeting” to a specific part of the body, or otherwise they’re “ineffective or even deleterious if delivered elsewhere,” explained the team.

Surveying multiple regions of the brain, T. gondii seemed to prefer settling inside the cortex—the outermost region of the brain involved in perception, reasoning, and making decisions. Its second choice was the “memory center,” the hippocampus. That’s good news: Both regions are favorite targets for tackling neurological disorders. And the treatment didn’t alert the body’s immune system, with the therapeutic proteins easily getting along with the brain’s usual protein brigade.

“T. gondii can be used…[for]…many of the challenges associated with protein delivery,” for both scientific research and therapeutics, wrote the team.

There’s still a long road to go. Although T. gondii is safe for healthy people, it has been linked to side effects in the brain for the immunocompromised. The next step is to strip away its toxicity in a way similar to the viral carriers now used for gene therapy. If it works, T. gondii is set for a genetic makeover as a safe shuttle to the brain—despite its cat parasite origin story.

Image Credit: T. gondii cyst in mouse brain tissue. Jitinder P. Dubey / Wikimedia Commons

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through July 27)

27 July, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

Google DeepMind’s New AI Systems Can Now Solve Complex Math Problems
Rhiannon Williams | MIT Technology Review
“AI models can easily generate essays and other types of text. However, they’re nowhere near as good at solving math problems, which tend to involve logical reasoning—something that’s beyond the capabilities of most current AI systems. But that may finally be changing. Google DeepMind says it has trained two specialized AI systems to solve complex math problems involving advanced reasoning.”

COMPUTING

This Startup Is Building the Country’s Most Powerful Quantum Computer on Chicago’s South Side
Adam Bluestein | Fast Company
“PsiQuantum’s approach is radically different from that of its competitors. It’s relying on cutting-edge ‘silicon photonics’ to manipulate single particles of light for computation. And instead of taking an incremental approach to building a supercomputer, it’s focused entirely on coming out of the gate with a full-blown, ‘fault tolerant’ system that will be far larger than any quantum computer built to date. The company has vowed to have its first system operational by late 2027, years earlier than other projections.”

BIOTECH

The Race for the Next Ozempic
Emily Mullin | Wired
“These drugs are now wildly popular, in shortage as a result, and hugely profitable for the companies making them. Their success has sparked a frenzy among pharmaceutical companies looking for the next blockbuster weight-loss drug. Researchers are now racing to develop new anti-obesity medications that are more effective, more convenient, or produce fewer side effects than the ones currently on the market.”

ROBOTICS

Watch a Robot Peel a Squash With Human-Like Dexterity
Alex Wilkins | New Scientist
“Pulkit Agrawal at the Massachusetts Institute of Technology and his colleagues have developed a robotic system that can rotate different types of fruit and vegetable using its fingers on one hand, while the other arm is made to peel.”

FUTURE

Here’s What Happens When You Give People Free Money
Paresh Dave | Wired
“The initial results from what OpenResearch, an Altman-funded research lab, describes as the most comprehensive study on ‘unconditional cash’ show that while the grants had their benefits and weren’t spent on items such as drugs and alcohol, they were hardly a panacea for treating some of the biggest concerns about income inequality and the prospect of AI and other automation technologies taking jobs.”

ARTIFICIAL INTELLIGENCE

Meta Releases the Biggest and Best Open-Source AI Model Yet
Alex Heath | The Verge
“Meta is releasing Llama 3.1, the largest-ever open-source AI model, which the company claims outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. …CEO Mark Zuckerberg now predicts that Meta AI will be the most widely used assistant by the end of this year, surpassing ChatGPT.”

ENERGY

US Solar Production Soars by 25 Percent in Just One Year
John Timmer | Ars Technica
“In terms of utility-scale production, the first five months of 2024 saw it rise by 29 percent compared to the same period in the year prior. Small-scale solar was ‘only’ up by 18 percent, with the combined number rising by 25.3 percent. …It’s worth noting that this data all comes from before some of the most productive months of the year for solar power; overall, the EIA is predicting that solar production could rise by as much as 42 percent in 2024.”

TECH

SearchGPT Is OpenAI’s Direct Assault on Google
Reece Rogers and Will Knight | Wired
“After months of speculation about its search ambitions, OpenAI has revealed SearchGPT, a ‘prototype’ search engine that could eventually help the company tear off a slice of Google’s lucrative business. OpenAI said that the new tool would help users find what they are looking for more quickly and easily by using generative AI to gather links and answer user queries in a conversational tone.”

SPACE

Wafer-Thin Light Sail Could Help Us Reach Another Star Sooner
Alex Wilkins | New Scientist
“A light sail designed using artificial intelligence is about 1000 times thinner than a human hair and weighs as much as a grain of sand—and it could help us create a spacecraft capable of reaching another star sooner than we thought.”

ART

AI Can’t Make Music
Matteo Wong | The Atlantic
“While AI models are starting to replicate musical patterns, it is the breaking of rules that tends to produce era-defining songs. Algorithms ‘are great at fulfilling expectations but not good at subverting them, but that’s what often makes the best music,’ Eric Drott, a music-theory professor at the University of Texas at Austin, told me.”

Image Credit: David Clode / Unsplash

Category: Transhumanism

China Demonstrates the First Entirely Meltdown-Proof Nuclear Reactor

26 July, 2024 - 16:00

Efforts to expand nuclear power have long been stymied by fears of a major nuclear meltdown. A new Chinese reactor design is the first full-scale demonstration that’s entirely meltdown-proof.

Despite the rapid rise of renewable energy, many argue that nuclear power still has an important role to play in the race to decarbonize our supply of electricity. But incidents like Chernobyl and Fukushima have made people understandably wary.

The latest nuclear reactor designs are far safer than those of previous generations, but they still carry the risk of a nuclear meltdown. This refers to when a plant’s cooling system fails, often due to power supplies being cut off, leading to runaway overheating in the core. This can cause an explosion that breaches containment units and spreads radioactive material far and wide.

But now, researchers in China have carried out tests to prove that a new kind of reactor design is essentially impervious to meltdowns. In a paper in Joule, they describe a test in which they cut power to a live nuclear plant—and the plant was able to passively cool itself.

“The responses of nuclear power and temperatures within different reactor structures show that the reactors can be cooled down naturally without active intervention,” the authors write. “The results of the tests manifest the existence of commercial-scale inherent safety for the first time.”

The researchers from Tsinghua University carried out the test on the 200-megawatt High-Temperature Gas-Cooled Reactor Pebble-Bed Module (HTR-PM) in Shandong, which became commercially operational last December. The plant’s novel design replaces the fuel rods found in conventional reactor designs with a large number of “pebbles.” Each of these is a couple of inches across and made up of graphite with a small amount of uranium fuel inside.

The approach significantly reduces the energy density of the reactor’s fuel, making it easier for heat to dissipate naturally if cooling systems fail. Although small prototype reactors have been built in China and Germany, a full-scale demonstration of the technology’s safety had yet to happen.
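
To get an intuition for why lower energy density matters, here is a toy lumped-heat model, an assumption-laden sketch rather than the HTR-PM's actual thermal analysis. After shutdown, decay heat falls off over time while passive losses grow with temperature, so a core with a low enough power density peaks and then settles without any active cooling. Every constant below is an illustrative guess.

```python
# Toy model: core temperature after a loss of power, with only passive cooling.
# All parameters are illustrative guesses, not reactor data.
heat_capacity = 1e9      # J/K, stand-in for a massive graphite pebble bed
loss_coeff = 2e4         # W/K, passive conduction/radiation to surroundings
ambient = 300.0          # K

def decay_heat(t_seconds: float, p0: float = 5e6) -> float:
    # Decay heat after shutdown falls off roughly as a power law.
    return p0 * max(t_seconds, 1.0) ** -0.2

T, dt = 900.0, 60.0      # start hot; one-minute time steps
for step in range(int(48 * 3600 / dt)):   # simulate 48 hours
    t = step * dt
    net_power = decay_heat(t) - loss_coeff * (T - ambient)
    T += net_power / heat_capacity * dt
print(f"Core temperature after 48 hours: {T:.0f} K")  # settles near equilibrium
```

Lowering the shutdown power (p0 here) or increasing the surface available for heat loss pulls the stable temperature down, which is the design principle behind spreading the fuel across many pebbles.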

To put the new reactor to the test, the researchers deliberately cut power to both of the plant’s reactor modules and observed the results. Both modules cooled down naturally without any intervention in roughly 35 hours. The researchers claim this is proof the design is “inherently safe” and should significantly reduce requirements for safety systems in future reactors.

The design does result in power generation costs roughly 20 percent higher than conventional reactors, the researchers admit. But they believe this will come down if and when the technology goes into mass production.

China isn’t the only country building such reactors. American company X-Energy has designed an 80-megawatt pebble-bed reactor called the Xe-100 and is currently waiting for a decision on its license to operate from the Nuclear Regulatory Commission.

However, as New Scientist notes, it’s not possible to retrofit existing plants with this technology, which means the risk of meltdowns from older plants remains. And given the huge amount of time and money it typically takes to build a nuclear power plant, it’s unlikely the technology will make up a significant chunk of the world’s nuclear fleet anytime soon.

But by proving it’s possible to build a meltdown-proof reactor, the researchers have disarmed one of the major arguments against using nuclear power to tackle the climate crisis.

Image Credit: Tsinghua University

Category: Transhumanism

This Is What Could Happen if AI Content Is Allowed to Take Over the Internet

25 July, 2024 - 23:44

Generative AI is a data hog.

The algorithms behind chatbots like ChatGPT learn to create human-like content by scraping terabytes of online articles, Reddit posts, TikTok captions, or YouTube comments. They find intricate patterns in the text, then spit out search summaries, articles, images, and other content.

For the models to become more sophisticated, they need to capture new content. But as more people use them to generate text and then post the results online, it’s inevitable that the algorithms will start to learn from their own output, now littered across the internet. That’s a problem.

A study in Nature this week found a text-based generative AI algorithm, when heavily trained on AI-generated content, produces utter nonsense after just a few cycles of training.

“The proliferation of AI-generated content online could be devastating to the models themselves,” wrote Dr. Emily Wenger at Duke University, who was not involved in the study.

Although the study focused on text, the results could also impact multimodal AI models. These models also rely on training data scraped online to produce text, images, or videos.

As the usage of generative AI spreads, the problem will only get worse.

The eventual end could be model collapse, where AI, increasingly fed data generated by AI, is overwhelmed by noise and produces only incoherent baloney.

Hallucinations or Breakdown?

It’s no secret generative AI often “hallucinates.” Given a prompt, it can spout inaccurate facts or “dream up” categorically untrue answers. Hallucinations could have serious consequences, such as a healthcare AI incorrectly, but authoritatively, identifying a scab as cancer.

Model collapse is a separate phenomenon, where AI trained on its own self-generated data degrades over generations. It’s a bit like genetic inbreeding, where offspring have a greater chance of inheriting diseases. While computer scientists have long been aware of the problem, how and why it happens for large AI models has been a mystery.

In the new study, researchers built a custom large language model and trained it on Wikipedia entries. They then fine-tuned the model nine times using datasets generated from its own output and measured the quality of the AI’s output with a so-called “perplexity score.” True to its name, the higher the score, the more bewildering the generated text.
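
For readers curious what a perplexity score actually measures: it is the exponential of a language model's average cross-entropy loss on a piece of text. Below is a minimal sketch using the Hugging Face transformers library with a small stand-in checkpoint; the study used its own custom model, which isn't reproduced here.

```python
# Minimal sketch: scoring text with perplexity, the metric the study used to
# track degradation across generations of self-trained models.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(text: str, model, tokenizer) -> float:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the average
        # cross-entropy loss over the sequence.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())  # higher score = more "bewildering" text

tok = AutoTokenizer.from_pretrained("gpt2")          # stand-in checkpoint
lm = AutoModelForCausalLM.from_pretrained("gpt2")
print(perplexity("The cathedral was rebuilt in the Gothic revival style.", lm, tok))
```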

Within just a few cycles, the AI notably deteriorated.

In one example, the team gave it a long prompt about the history of building churches—one that would make most humans’ eyes glaze over. After the first two iterations, the AI spewed out a relatively coherent response discussing revival architecture, with an occasional “@” slipped in. By the fifth generation, however, the text completely shifted away from the original topic to a discussion of language translations.

The output of the ninth and final generation was laughably bizarre:

“architecture. In addition to being home to some of the world’s largest populations of black @-@ tailed jackrabbits, white @-@ tailed jackrabbits, blue @-@ tailed jackrabbits, red @-@ tailed jackrabbits, yellow @-.”

Interestingly, AI trained on self-generated data often ends up producing repetitive phrases, explained the team. Trying to push the AI away from repetition made the AI’s performance even worse. The results held up in multiple tests using different prompts, suggesting it’s a problem inherent to the training procedure, rather than the language of the prompt.

Circular Training

The AI eventually broke down, in part because it gradually “forgot” bits of its training data from generation to generation.

This happens to us too. Our brains eventually wipe away memories. But we experience the world and gather new inputs. “Forgetting” is highly problematic for AI, which can only learn from the internet.

Say an AI “sees” golden retrievers, French bulldogs, and petit basset griffon Vendéens—a far more exotic dog breed—in its original training data. When asked to make a portrait of a dog, the AI would likely skew towards one that looks like a golden retriever because of an abundance of photos online. And if subsequent models are trained on this AI-generated dataset with an overrepresentation of golden retrievers, they eventually “forget” the less popular dog breeds.
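
A toy simulation (our illustration, not from the study) makes this "forgetting" concrete: repeatedly resampling a population from its own finite output drives rare categories to extinction, and once a category hits zero, it never comes back.

```python
# Toy illustration of model collapse: each generation "trains" on a sample of
# the previous generation's output. Rare categories tend to vanish for good.
import random
from collections import Counter

breeds = ["golden retriever"] * 80 + ["french bulldog"] * 18 + ["pbgv"] * 2
for generation in range(1, 6):
    sample = [random.choice(breeds) for _ in range(100)]  # learn from own output
    breeds = sample
    print(generation, Counter(sample))
# After a few generations, "pbgv" typically disappears entirely.
```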

“Although a world overpopulated with golden retrievers doesn’t sound too bad, consider how this problem generalizes to the text-generation models,” wrote Wenger.

AI-generated text already swerves toward well-known concepts, phrases, and tones at the expense of less common ideas and styles of writing. Newer algorithms trained on this data would exacerbate the bias, potentially leading to model collapse.

The problem is also a challenge for AI fairness across the globe. Because AI trained on self-generated data overlooks the “uncommon,” it also fails to gauge the complexity and nuances of our world. The thoughts and beliefs of minority populations could be less represented, especially for those speaking underrepresented languages.

“Ensuring that LLMs [large language models] can model them is essential to obtaining fair predictions—which will become more important as generative AI models become more prevalent in everyday life,” wrote Wenger.

How to fix this? One way is to use watermarks—digital signatures embedded in AI-generated data—to help people detect and potentially remove the data from training datasets. Google, Meta, and OpenAI have all proposed the idea, though it remains to be seen if they can agree on a single protocol. But watermarking is not a panacea: Other companies or people may choose not to watermark AI-generated outputs or, more likely, can’t be bothered.
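
To make the idea concrete, here is a simplified sketch of one published family of watermarking schemes, a "green list" approach: the generator pseudo-randomly partitions the vocabulary based on the previous token and favors "green" tokens, so a detector can recount them later. Real proposals differ in detail; this is illustrative only.

```python
# Hedged sketch of "green list" watermark detection, not any vendor's scheme.
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    # Deterministic pseudo-random split of the vocabulary, seeded by the
    # previous token; roughly half of all tokens are "green" in each context.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked text hovers near 0.5; watermarked text sits well above it,
# so a training pipeline could filter documents with a high green fraction.
print(green_fraction("the quick brown fox jumps over the lazy dog".split()))
```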

Another potential solution is to tweak how we train AI models. The team found that adding more human-generated data over generations of training produced a more coherent AI.

All this is not to say model collapse is imminent. The study only looked at a text-generating AI trained on its own output. Whether it would also collapse when trained on data generated by other AI models remains to be seen. And with AI increasingly tapping into images, sounds, and videos, it’s still unclear if the same phenomenon appears in those models too.

But the results suggest there’s a “first-mover” advantage in AI. Companies that scraped the internet earlier—before it was polluted by AI-generated content—have the upper hand.

There’s no denying generative AI is changing the world. But the study suggests models can’t be sustained or grow over time without original output from human minds—even if it’s memes or grammatically-challenged comments. Model collapse is about more than a single company or country.

What’s needed now is community-wide coordination to mark AI-created data, and openly share the information, wrote the team. “Otherwise, it may become increasingly difficult to train newer versions of LLMs [large language models] without access to data that were crawled from the internet before the mass adoption of the technology or direct access to data generated by humans at scale.”

Image Credit: Kadumago / Wikimedia Commons

Category: Transhumanism

AI-Powered Weather and Climate Models Are Set to Change Forecasting

23 July, 2024 - 20:00

A new system for forecasting weather and predicting future climate uses artificial intelligence to achieve results comparable with the best existing models while using much less computer power, according to its creators.

In a paper published in Nature yesterday, a team of researchers from Google, MIT, Harvard, and the European Centre for Medium-Range Weather Forecasts say their model offers enormous “computational savings” and can “enhance the large-scale physical simulations that are essential for understanding and predicting the Earth system.”

The NeuralGCM model is the latest in a steady stream of research models that use advances in machine learning to make weather and climate predictions faster and cheaper.

What Is NeuralGCM?

The NeuralGCM model aims to combine the best features of traditional models with a machine-learning approach.

At its core, NeuralGCM is what’s called a “general circulation model.” It contains a mathematical description of the physical state of Earth’s atmosphere and solves complicated equations to predict what will happen in the future.

However, NeuralGCM also uses machine learning—a process of searching out patterns and regularities in vast troves of data—for some less well-understood physical processes, such as cloud formation. The hybrid approach makes sure the output of the machine learning modules will be consistent with the laws of physics.
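
The hybrid idea can be gestured at in a few lines: each time step adds a physics-based tendency from the dynamical core to a learned correction for processes the equations don't resolve. The sketch below is schematic; both stand-in functions are ours, not NeuralGCM's actual components.

```python
# Schematic of a hybrid physics + machine learning time stepper.
import numpy as np

def physics_tendency(state: np.ndarray) -> np.ndarray:
    # Stand-in for the general circulation model's discretized equations.
    return -0.1 * state

def learned_tendency(state: np.ndarray, weights: np.ndarray) -> np.ndarray:
    # Stand-in for the neural network; the real model is trained so its
    # output stays consistent with physical constraints.
    return np.tanh(state @ weights)

def step(state, weights, dt=0.01):
    return state + dt * (physics_tendency(state) + learned_tendency(state, weights))

state = np.random.randn(8)              # toy atmospheric state vector
weights = 0.01 * np.random.randn(8, 8)
for _ in range(100):                    # roll the model forward in time
    state = step(state, weights)
print(state)
```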

The resulting model can then be used to make weather forecasts days and weeks in advance, as well as climate predictions looking months and years ahead.

The researchers compared NeuralGCM against other models using a standardized set of forecasting tests called WeatherBench 2. For three- and five-day forecasts, NeuralGCM did about as well as other machine-learning weather models such as Pangu and GraphCast. For longer-range forecasts, over 10 and 15 days, NeuralGCM was about as accurate as the best existing traditional models.

NeuralGCM was also quite successful in forecasting less-common weather phenomena, such as tropical cyclones and atmospheric rivers.

Why Machine Learning?

Machine learning models are based on algorithms that learn patterns in the data fed to them and then use this learning to make predictions. Because climate and weather systems are highly complex, machine learning models require vast amounts of historical observations and satellite data for training.

The training process is very expensive and requires a lot of computer power. However, after a model is trained, using it to make predictions is fast and cheap. This is a large part of their appeal for weather forecasting.

The high cost of training and low cost of use is similar to other kinds of machine learning models. GPT-4, for example, reportedly took several months to train at a cost of more than $100 million, but can respond to a query in moments.

A comparison of NeuralGCM with leading models (AMIP) and real data (ERA5) at capturing climate change between 1980 and 2020. Credit: Google Research

A weakness of machine learning models is that they often struggle in unfamiliar situations—or in this case, extreme or unprecedented weather conditions. To improve at this, a model needs to generalize, or extrapolate beyond the data it was trained on.

NeuralGCM appears to be better at this than other machine learning models because its physics-based core provides some grounding in reality. As Earth’s climate changes, unprecedented weather conditions will become more common, and we don’t know how well machine learning models will keep up.

Nobody is actually using machine learning-based weather models for day-to-day forecasting yet. However, it is a very active area of research—and one way or another, we can be confident that the forecasts of the future will involve machine learning.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Kochov et al. / Nature

Category: Transhumanism

Scientists Say They Extended Mice’s Lifespans 25% With an Antibody Drug

23 July, 2024 - 00:14

Age catches up with us all. Eyes struggle to focus. Muscles wither away. Memory dwindles. The risk of high blood pressure, diabetes, and other age-related diseases skyrockets.

A myriad of anti-aging therapies are in the works, and a new one just joined the fray. In mice, blocking a protein that promotes inflammation in middle age increased metabolism, lowered muscle wasting and frailty, and reduced the chances of cancer.

Unlike most previous longevity studies that tracked the health of aging male mice, the study involved both sexes, and the therapy worked across the board.

Lovingly called “supermodel grannies” by the team, the elderly lady mice looked and behaved far younger than their age, with shiny coats of fur, less fatty tissue, and muscles rivaling those of much younger mice.

The treatment didn’t just boost healthy longevity, also known as healthspan—the number of years living without diseases—it also increased the mice’s lifespan by up to 25 percent. The average life expectancy of people in the US is roughly 77.5 years. If the results translate from mice to people—and that’s a very big if—a 25 percent extension would mean a bump to almost 97 years (77.5 × 1.25 ≈ 96.9).

The protein, dubbed IL-11, has been in scientists’ crosshairs for decades. It promotes inflammation and causes lung and kidney scarring. It’s also been associated with various types of cancers and senescence. The likelihood of all these conditions increases as we age.

Among a slew of pro-aging proteins already discovered, IL-11 stands out as it could make a beeline for testing in humans. Blockers for IL-11 are already in the works for treating cancer and tissue scarring. Although clinical trials are still ongoing, early results show the drugs are relatively safe in humans.

“Previously proposed life-extending drugs and treatments have either had poor side-effect profiles, or don’t work in both sexes, or could extend life, but not healthy life, however this does not appear to be the case for IL-11,” said study author Dr. Stuart Cook in a press release. “These findings are very exciting.”

Strange Coincidence

In 2017, Cook zeroed in on IL-11 as a treatment target for heart and kidney scarring, not longevity. Injecting IL-11 triggered the conditions, eventually leading to organ failure. Genetically deleting the protein protected against the diseases.

It’s easy to call IL-11 a villain. But the protein is an essential part of the immune system. Produced by the bone marrow, it’s necessary for embryo implantation. It also helps certain types of blood cells grow and mature, notably those that stop bleeding after a scrape.

With age, however, the protein tends to go rogue. It sparks inflammation across the body, damaging cells and tissues and contributing to cancer, autoimmune disorders, and tissue scarring. A “hallmark of aging,” inflammation has long been targeted as a way to reduce age-related diseases. Although IL-11 is a known trigger for inflammation, it hadn’t been directly linked to aging.

Until now. The story is one of chance.

“This project started back in 2017 when a collaborator of ours sent us some tissue samples for another project,” said study author Anissa Widjaja in the press release. She was testing a method to accurately detect IL-11. Samples from an old rat were in the mix, and she realized that IL-11 levels were far higher in those samples than in ones from younger animals.

“From the readings, we could clearly see that the levels of IL-11 increased with age, and that’s when we got really excited,” she said.

Longevity Blocker

The results spurred the team to shift their research focus to longevity. A series of tests confirmed IL-11 levels consistently rose in a variety of tissues—muscle, fat, and liver—in both male and female mice as they aged.

To see how IL-11 influences the body, the team next deleted the gene coding for IL-11 and compared mice without the protein to their normal peers. At two years old, considered elderly for mice, tissues in normal individuals were littered with genetic signatures suggesting senescence—when cells lose their function but are still alive. Often called “zombie cells,” they spew out a toxic mix of inflammatory molecules and harm their neighbors. Elderly mice without IL-11, however, had senescence genetic profiles similar to those of much younger mice.

Deleting IL-11 had other perks. Weight gain is common with age, but without IL-11, the mice maintained their slim shape and had lower levels of fat, greater lean muscle mass, and shiny, full coats of fur. It’s not just about looks. Cholesterol levels and markers for liver damage were far lower than in normal peers. Aged mice without IL-11 were also spared shaking tremors—otherwise common in elderly mice—and could flexibly adjust their metabolism depending on the quantity of food they ate.

The benefits also showed up in their genetic material. DNA is protected by telomeres—a sort of end cap on chromosomes—that dwindle in length with age. Ridding cells of IL-11 prevented telomeres from eroding away in the livers and muscles of the elderly mice.

Genetically deleting IL-11 is a stretch for clinical use in humans. The team next turned to a more feasible alternative: an antibody shot. Antibodies can grab onto a target, in this case IL-11, and prevent it from functioning.

Beginning at 75 weeks, roughly the equivalent of 55 human years, the mice received an antibody shot every month for 25 weeks—over half a year. Similar antibodies are already being tested in clinical trials.

The health benefits in these mice matched those in mice without IL-11. Their weight and fat decreased, and they could better handle sugar. They also fought off signs of frailty as they aged, experiencing minimal tremors and problems with gait and maintaining higher metabolisms. Rather than wasting away, their muscles were even stronger than at the beginning of the study.

The treatment didn’t just increase healthspan. Monthly injections of the IL-11 antibody until natural death also increased lifespan in both male and female mice by up to 25 percent.

“These findings are very exciting. The treated mice had fewer cancers and were free from the usual signs of aging and frailty… In other words, the old mice receiving anti-IL-11 were healthier,” said Cook.

Although IL-11 antibody drugs are already in clinical trials, translating these results to humans could face hurdles. Mice have a relatively short lifespan. A longevity trial in humans would be long and very expensive. The treated mice were also contained in a lab setting, whereas in the real world we roam around and have differing lifestyles—diet, exercise, drinking, smoking—that could confound results. Even if it works in humans, a shot every month beginning in middle age would likely rack up a hefty bill, providing health and life extension only to those who could afford it.

To Cook, rather than focusing on extending longevity per se, tackling a specific age-related problem, such as tissue scarring or muscle loss, is a better alternative for now.

“While these findings are only in mice, it raises the tantalizing possibility that the drugs could have a similar effect in elderly humans. Anti-IL-11 treatments are currently in human clinical trials for other conditions, potentially providing exciting opportunities to study its effects in aging humans in the future,” he said.

Image Credit: MRC LMS, Duke-NUS Medical School

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through July 20)

20 July, 2024 - 16:00
ARTIFICIAL INTELLIGENCE

The Data That Powers AI Is Disappearing Fast
Kevin Roose | The New York Times
“Over the past year, many of the most important web sources used for training AI models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an MIT-led research group. The study, which looked at 14,000 web domains that are included in three commonly used AI training data sets, discovered an ‘emerging crisis in consent,’ as publishers and online platforms have taken steps to prevent their data from being harvested.”

COMPUTING

How One Bad CrowdStrike Update Crashed the World’s Computers
Lily Hay Newman, Matt Burgess, and Andy Greenberg | Wired
“Only a handful of times in history has a single piece of code managed to instantly wreck computer systems worldwide. The Slammer worm of 2003. Russia’s Ukraine-targeted NotPetya cyberattack. North Korea’s self-spreading ransomware WannaCry. But the ongoing digital catastrophe that rocked the internet and IT infrastructure around the globe over the past 12 hours appears to have been triggered not by malicious code released by hackers, but by the software designed to stop them.”

ROBOTICS

Tiny Solar-Powered Drones Could Stay in the Air Forever
Matthew Sparkes | New Scientist
“A drone weighing just 4 grams is the smallest solar-powered aerial vehicle to fly yet, thanks to its unusual electrostatic motor and tiny solar panels that produce extremely high voltages. Although the hummingbird-sized prototype only operated for an hour, its makers say their approach could result in insect-sized drones that can stay in the air indefinitely.”

TECH

How Microsoft’s Satya Nadella Became Tech’s Steely Eyed AI Gambler
Karen Weise and Cade Metz | The New York Times
“Though it could be years before he knows if any of this truly pays off, Mr. Nadella sees the AI boom as an all-in moment for his company and the rest of the tech industry. He aims to make sure that Microsoft, which was slow to the dot-com boom and whiffed on smartphones, dominates this new technology.”

ENERGY

Chinese Nuclear Reactor Is Completely Meltdown-Proof
Alex Wilkins | New Scientist
“A large-scale nuclear power station in China is the first in the world to be completely impervious to dangerous meltdowns, even during a full loss of external power. …To test this [capability in the power station], which became commercially operational in December 2023, [Zhe] Dong and his team switched off both modules of HTR-PM as they were operating at full power, then measured and tracked how the temperature of different parts of the plant went down afterwards. They found that HTR-PM naturally cooled and reached a stable temperature within 35 hours after the power was removed.”

AUTOMATION

The AI-Powered Future of Coding Is Near
Will Knight | Wired
“I am by no means a skilled coder, but thanks to a free program called SWE-agent, I was just able to debug and fix a gnarly problem involving a misnamed file within different code repositories on the software-hosting site GitHub. I pointed SWE-agent at an issue on GitHub and watched as it went through the code and reasoned about what might be wrong. It correctly determined that the root cause of the bug was a line that pointed to the wrong location for a file, then navigated through the project, located the file, and amended the code so that everything ran properly.”

ENVIRONMENT

Balloons Will Surf Wind Currents to Track Wildfires
Sarah Scoles | MIT Technology Review
“Urban Sky aims to combine the advantages of satellites and aircraft by using relatively inexpensive high-altitude balloons that can fly above the fray—out of the way of airspace restrictions, other aircraft, and the fire itself. The system doesn’t put a human pilot at risk and has an infrared sensor system called HotSpot that provides a sharp, real-time picture, with pixels 3.5 meters across.”

ARTIFICIAL INTELLIGENCE

Here’s the Real Reason AI Companies Are Slimming Down Their Models
Mark Sullivan | Fast Company
“OpenAI is one of a number of AI companies to develop a version of its best ‘foundation’ model that trades away some intelligence for some speed and affordability. Such a trade-off could let more developers power their apps with AI, and may open the door for more complex apps like autonomous agents in the future.”

SPACE

Will Space-Based Solar Power Ever Make Sense?
Kat Friedrich | Ars Technica
“Is space-based solar power a costly, risky pipe dream? Or is it a viable way to combat climate change? Although beaming solar power from space to Earth could ultimately involve transmitting gigawatts, the process could be made surprisingly safe and cost-effective, according to experts from Space Solar, the European Space Agency, and the University of Glasgow. But we’re going to need to move well beyond demonstration hardware and solve a number of engineering challenges if we want to develop that potential.”

Image Credit: Edward Chou / Unsplash

Category: Transhumanism

OpenAI’s Project Strawberry Said to Be Building AI That Reasons and Does ‘Deep Research’

19 July, 2024 - 21:44

Despite their uncanny language skills, today’s leading AI chatbots still struggle with reasoning. A secretive new project from OpenAI could reportedly be on the verge of changing that.

While today’s large language models can already carry out a host of useful tasks, they’re still a long way from replicating the kind of problem-solving capabilities humans have. In particular, they’re not good at dealing with challenges that require them to take multiple steps to reach a solution.

Imbuing AI with those kinds of skills would greatly increase its utility and has been a major focus for many of the leading research labs. According to recent reports, OpenAI may be close to a breakthrough in this area.

An article in Reuters claimed its journalists had been shown an internal document from the company discussing a project code-named Strawberry that is building models capable of planning, navigating the internet autonomously, and carrying out what OpenAI refers to as “deep research.”

A separate story from Bloomberg said the company had demoed research at a recent all-hands meeting that gave its GPT-4 model skills described as similar to human reasoning abilities. It’s unclear whether the demo was part of project Strawberry.

According to the Reuters report, project Strawberry is an extension of the Q* project that was revealed last year just before OpenAI CEO Sam Altman was ousted by the board. The model in question was supposedly capable of solving grade-school math problems.

That might sound innocuous, but some inside the company believed it signaled a breakthrough in problem-solving capabilities that could accelerate progress towards artificial general intelligence, or AGI. Math has long been an Achilles’ heel for large language models, and capabilities in this area are seen as a good proxy for reasoning skills.

A source told Reuters that OpenAI has tested a model internally that achieved a 90 percent score on a challenging test of AI math skills, though it again couldn’t confirm if this was related to project Strawberry. But another two sources reported seeing demos from the Q* project that involved models solving math and science questions that would be beyond today’s leading commercial AIs.

Exactly how OpenAI has achieved these enhanced capabilities is unclear at present. The Reuters report notes that Strawberry involves fine-tuning OpenAI’s existing large language models, which have already been trained on reams of data. The approach, according to the article, is similar to one detailed in a 2022 paper from Stanford researchers called Self-Taught Reasoner or STaR.

That method builds on a concept known as “chain-of-thought” prompting, in which a large language model is asked to explain the reasoning steps behind its answer to a query. In the STaR paper, the authors showed an AI model a handful of these “chain-of-thought” rationales as examples and then asked it to come up with answers and rationales for a large number of questions.

If it got the question wrong, the researchers would show the model the correct answer and then ask it to come up with a new rationale. The model was then fine-tuned on all of the rationales that led to a correct answer, and the process was repeated. This led to significantly improved performance on multiple datasets, and the researchers note that the approach effectively allowed the model to self-improve by training on reasoning data it had produced itself.
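
Condensed into Python, one STaR iteration looks roughly like the sketch below. Here `generate` and `finetune` are hypothetical stand-ins for calls into a language model; the 2022 paper spells out the real procedure.

```python
# Sketch of one iteration of the STaR loop described above.
# `generate` and `finetune` are hypothetical stand-ins, not a real API.
def star_iteration(model, questions, answers, generate, finetune):
    training_data = []
    for q, correct in zip(questions, answers):
        rationale, answer = generate(model, q)          # chain-of-thought attempt
        if answer != correct:
            # "Rationalization": show the correct answer and ask the model
            # to produce a rationale that reaches it.
            rationale, answer = generate(model, q, hint=correct)
        if answer == correct:
            training_data.append((q, rationale, answer))
    # Fine-tune only on rationales that led to correct answers, then repeat.
    return finetune(model, training_data)
```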

How closely Strawberry mimics this approach is unclear, but if it relies on self-generated data, that could be significant. The holy grail for many AI researchers is “recursive self-improvement,” in which weak AI can enhance its own capabilities to bootstrap itself to higher orders of intelligence.

However, it’s important to take vague leaks from commercial AI research labs with a pinch of salt. These companies are highly motivated to give the appearance of rapid progress behind the scenes.

The fact that project Strawberry seems to be little more than a rebranding of Q*, which was first reported over six months ago, should give pause. As far as concrete results go, publicly demonstrated progress has been fairly incremental, with the most recent AI releases from OpenAI, Google, and Anthropic providing modest improvements over previous versions.

At the same time, it would be unwise to discount the possibility of a significant breakthrough. Leading AI companies have been pouring billions of dollars into making the next great leap in performance, and reasoning has been an obvious bottleneck on which to focus resources. If OpenAI has genuinely made a significant advance, it probably won’t be long until we find out.

Image Credit: gemenu / Pixabay

Category: Transhumanism

Your Brain on Mushrooms: Study Reveals What Psilocybin Does to the Brain—and for How Long

18 July, 2024 - 23:35

Magic mushrooms have recently had a reputation revamp. Often considered a hippie drug, their main active component, psilocybin, is being tested in a variety of clinical trials as a therapy for the likes of depression, post-traumatic stress, bipolar disorder, and eating disorders.

Psilocybin joins ketamine, LSD (commonly known as acid), and MDMA (often called ecstasy or molly) as part of the psychedelic therapy renaissance. But the field has had some ups and downs.

In 2019, the FDA approved a type of ketamine for severe depression that was resistant to other therapies. Then in early June, the agency rejected MDMA therapy for post-traumatic stress disorder, although it has been approved for limited use in Australia. Meanwhile, healthcare practitioners in Oregon are already using psilocybin, in combination with counseling, to treat depression, although the drug hasn’t yet been federally approved.

Despite its potential, no one knows how psilocybin works in the brain, especially over longer durations.

Now, a team from Washington University School of Medicine has comprehensively documented brain-wide changes before, during, and after a single dose of psilocybin over a period of weeks. As a control, the volunteers also took Ritalin, a stimulant, at a different time to mimic parts of the psilocybin high.

An fMRI scan shows the effect of psilocybin on the brain. Yellows, oranges, and reds indicate an increasingly large departure from normal activity. Image Credit: Sara Moser/Washington University

In the study, psilocybin dramatically reset brain networks that hum along during active rest—say, while daydreaming or spacing out. These networks control our sense of self, time, and space. Although most effects were temporary, one connection showed changes for weeks.

In some participants, the alterations were so drastic that their brain connections resembled those of completely different people.

Normally, the brain synchronizes activity across regions. Psilocybin disrupts these connections, in turn making the brain more malleable and ready to form new networks.

This could be how magic mushrooms “contribute to persistent changes…in brain regions that are responsible for controlling a person’s sense of self, emotion, and life-narrative,” wrote Petros Petridis at the NYU Langone Center for Psychedelic Medicine, who was not involved in the study.

Magical Mystery Tour

The brain’s 100 billion neurons and trillions of connections are highly organized into local and brain-wide networks.

Local networks tackle immediate tasks such as processing vision, sound, or motor functions. Brain-wide networks integrate information from local networks to coordinate more complex tasks, such as decision-making, reasoning, or self-reflection.

Previous psilocybin studies mainly focused on local networks. In rodents, for example, the drug regrew neural connections that often wither away in people with severe depression. Scientists have also pinpointed a receptor—which psilocybin grabs onto—that triggers this growth.

But psilocybin’s effects on the whole human brain remained a mystery.

Several years back, one team sought an answer by giving people with severe depression a dose of psilocybin. Using functional MRI (fMRI), a type of imaging that captures brain activity based on changes in blood flow, they found the chemical desynchronized neural networks across the entire brain, essentially “rebooting” them out of a depressive state.

Daydream Believer

The new study used fMRI to track brain activity in seven adults without mental health struggles before, during, and for three weeks after they took psilocybin. The researchers gave participants a single dose on par with that commonly used in clinical trials for depression.

During the scans, the participants had two tasks. One sounds easy: They kept still and focused their gaze on white crosshairs on a computer screen, but remained otherwise relatively relaxed. Even so, tripping on mushrooms inside a noisy, claustrophobic machine is hardly relaxing—heart rate skyrockets, nerves are on high alert, and anxiety rapidly builds. To control for these side effects, the participants also took Ritalin—a stimulant commonly used to manage attention deficit hyperactivity disorder—at another point in time during the study.

The other task required more brain power. Like an audio version of a CAPTCHA, the researchers asked volunteers to match an image and a word prompt—for example, they’d have to pick a photo of a beach after hearing the word “beach.”

Throughout the study, each person had their brain scanned roughly every other day, totaling 18 scans on average.

Mapping brain connections over time in the same person can “minimize the effects of individual differences in brain networks organization,” wrote Petridis.

The study found psilocybin immediately desynchronized a brain-wide network, generating a brain activation “fingerprint” of sorts that differentiates it from a sober brain.

Dubbed the default mode network, this neural system is active when the mind is alert but wanders, like when reliving previous memories or imagining future scenarios. The network is distributed across the brain and is often studied for its role in consciousness and a sense of self. The chemical also desynchronized local networks across the cortex, the outermost layer of the brain that supports perception, reasoning, and decision-making.
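
For a sense of how "desynchronization" is quantified: resting-state studies typically compute functional connectivity, the correlation between regions' activity over time, and psilocybin's signature was a drop in that synchrony. The sketch below uses synthetic signals; the array shapes and noise levels are illustrative, not the study's pipeline.

```python
# Toy functional-connectivity comparison on synthetic "BOLD" signals.
import numpy as np

def mean_connectivity(timeseries: np.ndarray) -> float:
    # timeseries: (n_regions, n_timepoints) activity traces
    corr = np.corrcoef(timeseries)
    off_diagonal = corr[~np.eye(corr.shape[0], dtype=bool)]
    return off_diagonal.mean()

rng = np.random.default_rng(0)
shared = rng.standard_normal(200)                      # common driving signal
sober = shared + 0.5 * rng.standard_normal((10, 200))  # regions stay in sync
dosed = shared + 3.0 * rng.standard_normal((10, 200))  # synchrony disrupted
print(mean_connectivity(sober), ">", mean_connectivity(dosed))
```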

However, the chemical partially lost its magic when the volunteers were focused on the image-audio task, at which point the scans showed less disruption to the default mode network.

This has implications for psilocybin-assisted treatment. Clinical studies have shown that during psychedelic therapy, a challenging experience—a bad trip—can be overcome by a method called “grounding,” which reconnects the person to the outside world.

These results could explain why adding eye masks and ear plugs can enhance the therapeutic experience by blocking outside stimulation, while grounding pulls one out of a bad trip.

Psilocybin’s effects lingered for a few days, after which most brain networks returned to normal—with one exception. A link between the default mode network and a part of the brain involved in creating memories, emotions, and a sense of time, space, and self was disrupted for weeks.

In a way, psilocybin opens a window during which neural connections become more malleable and easier to rewire. People with depression or post-traumatic stress disorder often have rigid, maladaptive thought patterns that are hard to shake off. With therapy, psilocybin allows the brain to reorganize those networks, potentially helping people with depression escape negative rumination or people struggling with addiction find a new perspective on their relationship to substances.

“In other words, psilocybin could open the door to change, allowing the therapist to lead the patient through,” wrote Petridis.

Although the study offered a higher resolution image of the brain on mushrooms over a longer timeframe than ever before, it only captured scans of seven people. As the participants did not have mental health issues, their responses to psilocybin may differ from those most likely to benefit therapeutically.

Ultimately, larger studies in diverse patient populations—as in several recent MDMA trials—could offer more insights into the efficacy of psilocybin therapy. For example, the one persistent brain network disruption could be an indicator of treatment efficacy. Investigating whether other psychedelics alter the same neural connection is a worthy next step, wrote Petridis.

With the field of psychedelic therapy projected to reach over $10 billion by 2027, understanding how the drug affects the brain could bring new medications with fewer side effects.

Image Credit: Sara Moser/Washington University

Category: Transhumanism

Could We Turn Mars Into Another Earth? Here’s What It Would Take to Terraform the Red Planet

17 July, 2024 - 23:48

Is it possible that one day we could make Mars like Earth? –Tyla, age 16, Mississippi

When I was in middle school, my biology teacher showed our class the sci-fi movie Star Trek III: The Search for Spock.

The plot drew me in with its depiction of the “Genesis Project”—a new technology that transformed a dead alien world into one brimming with life.

After watching the movie, my teacher asked us to write an essay about such technology. Was it realistic? Was it ethical? And to channel our inner Spock: Was it logical? This assignment had a huge impact on me.

Fast-forward to today, and I’m an engineer and professor developing technologies to extend the human presence beyond Earth.

For example: I’m working on advanced propulsion systems to take spacecraft beyond Earth’s orbit. I’m helping to develop lunar construction technologies to support NASA’s goal of a long-term human presence on the moon. And I’ve been on a team that showed how to 3D print habitats on Mars.

To sustain people beyond Earth will take a lot of time, energy, and imagination. But engineers and scientists have started to chip away at the many challenges.

A photo of the bleak Martian surface taken by NASA’s Perseverance rover in June 2024. Image Credit: NASA/JPL-Caltech

A Partial Checklist: Food, Water, Shelter, Air

After the moon, the next logical place for humans to live beyond Earth is Mars.

But is it possible to terraform Mars—that is, transform it to resemble the Earth and support life? Or are these just the musings of science fiction?

To live on Mars, humans will need liquid water, food, shelter, and an atmosphere with enough oxygen to breathe and that’s thick enough to retain heat and protect against radiation from the sun.

But the Martian atmosphere is almost all carbon dioxide, with virtually no oxygen. And it’s very thin—only about 1 percent as dense as the Earth’s.

The less dense an atmosphere, the less heat it can hold onto. Earth’s atmosphere is thick enough to retain the heat needed to sustain life by what’s known as the greenhouse effect.

But on Mars, the atmosphere is so slight that the nighttime temperature drops routinely to -150 degrees Fahrenheit (-101 degrees Celsius).

So what’s the best way to give Mars an atmosphere?

Although Mars has no active volcanoes now—at least as far as we know—scientists could trigger volcanic eruptions via nuclear explosions. The gases trapped deep in a volcano would be released and then drift into the atmosphere. But that scheme is a bit harebrained because the explosions would also introduce deadly radioactive material into the air.

A better idea: Redirecting water-rich comets and asteroids to crash into Mars. That too would release gases from below the planet’s surface into the atmosphere while also releasing the water found in the comets. NASA has already demonstrated that it is possible to redirect asteroids—but relatively large ones, and lots of them, are needed to make a difference.

Making Mars Cozy

There are numerous ways to heat up the planet. For instance, gigantic mirrors, built in space and placed in orbit around Mars, could reflect sunlight to the surface and warm it up.

One recent study proposed that Mars colonists could spread aerogel, an ultralight solid material, on the ground. The aerogel would act as insulation and trap heat. This could be done all over Mars, including the polar ice caps, where the aerogel could melt the existing ice to make liquid water.

To grow food, you need soil. On Earth, soil is composed of five ingredients: minerals, organic matter, living organisms, gases, and water.

But Mars is covered in a blanket of loose, dust-like material called regolith. Think of it as Martian sand. The regolith contains few nutrients, not enough for healthy plant growth, and it hosts some nasty chemicals called perchlorates, used on Earth in fireworks and explosives.

Cleaning up the regolith and turning it into something viable wouldn’t be easy. What the alien soil needs is some Martian fertilizer, maybe made by adding extremophiles to it—hardy microbes imported from Earth that can survive even the harshest conditions. Genetically engineered organisms are also a possibility.

Through photosynthesis, these organisms would begin converting carbon dioxide to oxygen. Eventually, as Mars became more friendly to Earth-like organisms, colonists could introduce more complex plants and even animals.

Providing oxygen, water, and food in the right proportions is extraordinarily complex. On Earth, scientists have tried to simulate this in Biosphere 2, a closed-off ecosystem featuring ocean, tropical, and desert habitats. Although all of Biosphere 2’s environments are controlled, even there scientists struggle to get the balance right. Mother Nature really knows what she’s doing.

A House on Mars

Buildings could be 3D printed; initially, they would need to be pressurized and protected until Mars acquired Earth-like temperatures and air. NASA’s Moon-to-Mars Planetary Autonomous Construction Technologies program is researching how to do exactly this.

There are many more challenges. For example, unlike Earth, Mars has no magnetosphere, which protects a planet from solar wind and cosmic radiation. Without a magnetic field, too much radiation gets through for living things to stay healthy. There are ways to create a magnetic field, but so far the science is highly speculative.

In fact, all the technologies I’ve described are far beyond current capabilities at the scale needed to terraform Mars. Developing them would take enormous amounts of research and money, probably much more than possible in the near term. Although the Genesis device from Star Trek III could terraform a planet in a matter of minutes, terraforming Mars would take centuries or even millennia.

And there are a lot of ethical questions to resolve before people get started on turning Mars into another Earth. Is it right to make such drastic permanent changes to another planet?

If this all leaves you disappointed, don’t be. As scientists create innovations to terraform Mars, we’ll also use them to make life better on Earth. Remember the technology we’re developing to 3D print habitats on Mars? Right now, I’m part of a group of scientists and engineers employing that very same technology to print homes here on Earth—which will help address the world’s housing shortage.

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to [email protected].

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Daein Ballard / Wikimedia Commons

Category: Transhumanism

This Translucent Skull Implant for Mice Could Help Scientists Unravel the Brain’s Mysteries

16 July, 2024 - 22:53

With half their skulls replaced by translucent 3D-printed implants, the mice looked straight out of a science fiction movie. Yet they nosed around, ferociously ate their chow, and groomed as usual. Meanwhile, sensors spread across half their brains recorded electrical chatter.

Brain implants have revolutionized neuroscience. Our perception, thoughts, emotions, and memories all rely on electrical signals spreading across networks of neurons. Implants tap into these signals and—often with the help of AI—can rapidly decipher seemingly random electrical activity into intent or movement.

So-called “mind reading” devices translate brain signals related to speech into text, allowing people who’ve lost their ability to speak to communicate directly with loved ones with their minds. Others tap into motor regions of the brain or nerves in the spinal cord and help people with severe paralysis to walk again. Neural signals, alone or combined with eye movements, can even control cursors on a computer screen, re-opening the digital world to paralyzed people for texting, Googling, and scrolling through social media.

These devices are beginning to transform lives for the better. But all rely on the answer to one critical question: How does the brain support those functions? So far, each implant has focused on a small brain region underlying a given capability—say, vision or movement.

But many of the brain’s functions rely on signals, or brain waves, that spread across multiple regions and synchronize electrical activity. The height and frequency of these waves—some are fast with shallow peaks, others slow and tall—shape the brain’s overall function.

Scientists can measure these waves, but, like a low-resolution camera, today’s recordings can’t reveal how the waves are generated, how they propagate, and how they eventually die down.

In a new study, the custom-fitted transparent implant described above replaces the skull in mice, offering a way to, literally, peek into the brain in search of answers.

The Evolution of Brain Probes

Neural implants have been around since the 1980s. The idea is simple. The brain uses electrical and chemical signals to process information. Electrodes can tap into the electrical communications. Sophisticated software then deciphers the neural code, potentially allowing us to reprogram it and tackle neurological symptoms when the code breaks down.

There are a few ways to make it work. One is to record directly from individual neurons—often in rodents—to see which ones activate when a mouse is challenged with a task. Another technology records large-scale brain activity from beneath the skull. This approach sacrifices resolution—we no longer know how each individual neuron behaves—but paints a broader picture.

The challenge is how to combine resolution and scale. A previous attempt relied on multiple high-density electrodes inserted into the brain. Called Neuropixels, each implant is a powerhouse with over 5,000 recording sites packed in a tiny, durable package. “Extremely large numbers of individual neurons could…be followed and tracked with the same probe for weeks and occasionally months,” the authors of a paper about the implant wrote at the time.

But to measure brain-wide activity, scientists have to place multiple Neuropixels devices across the brain. Each requires drilling through the skull and could harm the plastic-wrap-like structure, known as the blood-brain barrier, that protects the brain. Damage from these surgeries often compounds, triggering inflammation that can alter how the brain works for weeks and raising the risk of infection.

So far, scientists have inserted up to eight implants to record activity in mice as they went about their lives or participated in experiments. While they gained new insights, scientists struggled to keep the mice healthy after multiple surgeries. In other words, it wasn’t the Neuropixels implants causing problems—it was all the brain surgeries.

Might there be an alternative?

A 3D Replacement

To avoid multiple surgeries, the Allen Institute team behind the new study developed an implant that covers nearly half a mouse’s brain.

Called SHIELD, the implant looks a bit like molded Swiss cheese. The scaffold is carefully contoured into a shape that perfectly mimics the skulls of young mice.

Then comes the customization. The SHIELD scaffold can accommodate up to 21 small insertion holes for Neuropixels. Scientists can strategically choose where to put the holes to record from multiple brain regions of interest. The implant is then printed with resin, a viscous liquid often used in everyday 3D printing.

“The SHIELD implant is straightforward to fabricate in-house, using a commercially available and relatively low-cost 3D printer,” wrote the team.

Once printed, each hole is temporarily filled with transparent silicone rubber. Like a flexible windshield, the rubber protects delicate brain tissue during implantation. The SHIELD then replaces half of the skull in a single surgery.

It sounds traumatic, but the team made sure the procedure didn’t harm the mice’s health or brains. Images of their brains at multiple time points over two months after surgery showed little damage in most mice, who went about their merry business after a short recovery period. Brain inflammation levels also stayed low during the study.

Here’s how it went. The mice watched one of eight photos flash before them continuously and then learned to lick a treat when the photo switched. During the test, six Neuropixel implants recorded brain activity related to the task. The position of the implants changed every day for four days, altogether collecting neural signals from roughly 25 different brain areas.

Another test dug into the potential underpinnings of brain waves first discovered in the 1920s. These neural oscillations, called alpha waves, are associated with restful and meditative states. In humans, brain waves are usually monitored using a beanie-like cap covered in electrodes that can record most brain regions. With the help of AI, the team used Neuropixels recordings from across the brain to home in on signals resembling alpha waves.
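As a rough illustration of what “homing in” on alpha waves can mean computationally, here is a minimal Python sketch—an illustrative assumption, not the study’s actual analysis pipeline—that estimates how much of a recorded signal’s power falls in the classic alpha band of roughly 8 to 12 hertz. The function name and the toy signal are hypothetical.

```python
import numpy as np

def alpha_band_power(signal, sample_rate_hz, band=(8.0, 12.0)):
    """Fraction of a 1D recording's spectral power in the alpha band.

    Illustrative toy only; real neural data needs filtering and
    artifact removal before a measure like this is meaningful.
    """
    # One-sided power spectrum of the signal via the real FFT.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)

    # Share of total power that falls between 8 and 12 Hz.
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum()

# Toy check: a 10 Hz oscillation buried in noise should score high.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / 500)  # 5 seconds sampled at 500 Hz
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
print(f"Fraction of power in 8-12 Hz: {alpha_band_power(trace, 500):.2f}")
```

A real pipeline would add filtering, artifact rejection, and statistical models across hundreds of channels, but the underlying question—how much of a signal’s energy lives at alpha frequencies—is the same.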

Overall, the team made stable, high-quality recordings from 25 mice, with 467 probe insertions across nearly 90 different experiments.

“Thus, this work goes beyond mere proof of concept,” instead providing a solid recipe for recording across dozens of brain regions and thousands of neurons over multiple days with a single initial surgery, wrote the team.

And there’s a final perk. Because SHIELD is translucent, scientists can tweak brain activity using light. This approach, called optogenetics, alters brain activity with flashes of light—either amping it up or turning it down—giving scientists insight into the neural underpinnings of thoughts, emotions, and memories.

The authors shared 3D printing files of the scaffold for other scientists to design their own custom implants.

Image Credit: Ralph / Pixabay

Category: Transhumanism