Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity Group

Google Just Released Two Open AI Models That Can Run on Laptops


Last year, Google united its AI units in Google DeepMind and said it planned to speed up product development in an effort to catch up to the likes of Microsoft and OpenAI. The stream of releases in the last few weeks follows through on that promise.

Two weeks ago, Google announced the release of its most powerful AI to date, Gemini Ultra, and reorganized its AI offerings, including its Bard chatbot, under the Gemini brand. A week later, it introduced Gemini Pro 1.5, an updated Pro model that largely matches Gemini Ultra’s performance and also includes an enormous context window—the amount of data you can prompt it with—for text, images, and audio.

Today, the company announced two new models. Going by the name Gemma, the models are much smaller than Gemini Ultra, weighing in at 2 and 7 billion parameters respectively. Google said the models are strictly text-based (as opposed to multimodal models trained on a variety of data, including text, images, and audio), outperform similarly sized models, and can run on a laptop, desktop, or in the cloud. Before training, Google stripped its datasets of sensitive data like personal information. It also fine-tuned and stress-tested the trained models pre-release to minimize unwanted behavior.

The models were built and trained with the same technology used in Gemini, Google said, but in contrast, they’re being released under an open license.

That doesn’t mean they’re open-source. Rather, the company is making the model weights available so developers can customize and fine-tune them. They’re also releasing developer tools to help keep applications safe and make them compatible with major AI frameworks and platforms. Google says the models can be employed for responsible commercial usage and distribution—as defined in the terms of use—for organizations of any size.

If Gemini is aimed at OpenAI and Microsoft, Gemma likely has Meta in mind. Meta is championing a more open model for AI releases, most notably for its Llama 2 large language model. Though Llama 2 is sometimes mistaken for an open-source model, Meta has not released the dataset or code used to train it. Other more open models, like the Allen Institute for AI’s (AI2) recent OLMo models, do include training data and code. Google’s Gemma release is more akin to Llama 2 than OLMo.

“[Open models have] become pretty pervasive now in the industry,” Google’s Jeanine Banks said in a press briefing. “And it often refers to open weights models, where there is wide access for developers and researchers to customize and fine-tune models but, at the same time, the terms of use—things like redistribution, as well as ownership of those variants that are developed—vary based on the model’s own specific terms of use. And so we see some difference between what we would traditionally refer to as open source and we decided that it made the most sense to refer to our Gemma models as open models.”

Still, Llama 2 has been influential in the developer community, and open models from the likes of French startup Mistral and others are pushing performance toward state-of-the-art closed models, like OpenAI’s GPT-4. Open models may make more sense in enterprise contexts, where developers can better customize them. They’re also invaluable for AI researchers working on a budget. Google wants to support such research with Google Cloud credits. Researchers can apply for up to $500,000 in credits toward larger projects.

Just how open AI should be is still a matter of debate in the industry.

Proponents of a more open ecosystem believe the benefits outweigh the risks. An open community, they say, can not only innovate at scale, but also better understand, reveal, and solve problems as they emerge. OpenAI and others have argued for a more closed approach, contending the more powerful the model, the more dangerous it could be out in the wild. A middle road might allow an open AI ecosystem but more tightly regulate it.

What’s clear is both closed and open AI are moving at a quick pace. We can expect more innovation from big companies and open communities as the year progresses.

Image Credit: Google

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through February 17)

February 17, 2024 - 16:00

OpenAI Teases an Amazing New Generative Video Model Called Sora
Will Douglas Heaven | MIT Technology Review
“OpenAI has built a striking new generative video model called Sora that can take a short text description and turn it into a detailed, high-definition film clip up to a minute long. …The sample videos from OpenAI’s Sora are high-definition and full of detail. OpenAI also says it can generate videos up to a minute long. One video of a Tokyo street scene shows that Sora has learned how objects fit together in 3D: the camera swoops into the scene to follow a couple as they walk past a row of shops.”


Google’s Flagship AI Model Gets a Mighty Fast Upgrade
Will Knight | Wired
“Google says Gemini Pro 1.5 can ingest and make sense of an hour of video, 11 hours of audio, 700,000 words, or 30,000 lines of code at once—several times more than other AI models, including OpenAI’s GPT-4, which powers ChatGPT. …Gemini Pro 1.5 is also more capable—at least for its size—as measured by the model’s score on several popular benchmarks. The new model exploits a technique previously invented by Google researchers to squeeze out more performance without requiring more computing power.”


Surgery in Space: Tiny Remotely Operated Robot Completes First Simulated Procedure at the Space Station
Taylor Nicioli and Kristin Fisher | CNN
“The robot, known as spaceMIRA—which stands for Miniaturized In Vivo Robotic Assistant—performed several operations on simulated tissue at the orbiting laboratory while remotely operated by surgeons from approximately 250 miles (400 kilometers) below in Lincoln, Nebraska. The milestone is a step forward in developing technology that could have implications not just for successful long-term human space travel, where surgical emergencies could happen, but also for establishing access to medical care in remote areas on Earth.”


Our Unbiased Take on Mark Zuckerberg’s Biased Apple Vision Pro Review
Kyle Orland | Ars Technica
“Zuckerberg’s Instagram-posted thoughts on the Vision Pro can’t be considered an impartial take on the device’s pros and cons. Still, Zuckerberg’s short review included its fair share of fair points, alongside some careful turns of phrase that obscure the Quest’s relative deficiencies. To figure out which is which, we thought we’d consider each of the points made by Zuckerberg in his review. In doing so, we get a good viewpoint on the very different angles from which Meta and Apple are approaching mixed-reality headset design.”


Things Get Strange When AI Starts Training Itself
Matteo Wong | The Atlantic
“Over the past few months, Google DeepMind, Microsoft, Amazon, Meta, Apple, OpenAI, and various academic labs have all published research that uses an AI model to improve another AI model, or even itself, in many cases leading to notable improvements. Numerous tech executives have heralded this approach as the technology’s future.”


Single-Dose Gene Therapy May Stop Deadly Brain Disorders in Their Tracks
Paul McClure | New Atlas
“Researchers have developed a single-dose genetic therapy that can clear protein blockages that cause motor neurone disease, also called amyotrophic lateral sclerosis, and frontotemporal dementia, two incurable neurodegenerative diseases that eventually lead to death. …The researchers found that, in mice, a single dose of CTx1000 targeted only the ‘bad’ [version of the protein] TDP-43, leaving the healthy version of it alone. Not only was it safe, it was effective even when symptoms were present at the time of treatment.”


Spike Jonze’s Her Holds Up a Decade Later
Sheon Han | The Verge
“Spike Jonze’s sci-fi love story is still a better depiction of AI than many of its contemporaries. …Upon rewatching it, I noticed that this pre-AlphaGo film holds up beautifully and still offers a wealth of insight. It also doesn’t shy away from the murky and inevitably complicated feelings we’ll have toward AI, and Jonze first expressed those over a decade ago.”


OpenAI Wants to Eat Google Search’s Lunch
Maxwell Zeff | Gizmodo
“OpenAI is reportedly developing a search app that would directly compete with Google Search, according to The Information on Wednesday. The AI search engine could be a new feature for ChatGPT, or a potentially separate app altogether. Microsoft Bing would allegedly power the service from Sam Altman, which could be the most serious threat Google Search has ever faced.”


Here’s What a Solar Eclipse Looks Like on Mars
Isaac Schultz | Gizmodo
“Typically, the Perseverance rover is looking down, scouring the Martian terrain for rocks that may reveal aspects of the planet’s ancient past. But over the last several weeks, the intrepid robot looked up and caught two remarkable views: solar eclipses on the Red Planet, as the moons Phobos and Deimos passed in front of the sun.”

Image Credit: Neeqolah Creative Works / Unsplash


Why the New York Times’ AI Copyright Lawsuit Will Be Tricky to Defend

February 16, 2024 - 20:38

The New York Times’ (NYT) legal proceedings against OpenAI and Microsoft have opened a new frontier in the ongoing legal challenges brought on by the use of copyrighted data to “train” or improve generative AI.

There are already a variety of lawsuits against AI companies, including one brought by Getty Images against Stability AI, which makes the Stable Diffusion online text-to-image generator. Authors George R.R. Martin and John Grisham have also brought legal cases against ChatGPT owner OpenAI over copyright claims. But the NYT case is not “more of the same” because it throws interesting new arguments into the mix.

The legal action focuses on the value of the training data and a new question relating to reputational damage. It is a potent mix of trademark and copyright claims, one that may test the fair-use defenses typically relied upon.

It will, no doubt, be watched closely by media organizations looking to challenge the usual “let’s ask for forgiveness, not permission” approach to training data. Training data is used to improve the performance of AI systems and generally consists of real-world information, often drawn from the internet.

The lawsuit also presents a novel argument—not advanced by other, similar cases—that’s related to something called “hallucinations,” where AI systems generate false or misleading information but present it as fact. This argument could in fact be one of the most potent in the case.

The NYT case in particular raises three interesting takes on the usual approach. First, that due to their reputation for trustworthy news and information, NYT content has enhanced value and desirability as training data for use in AI.

Second, that due to the NYT’s paywall, the reproduction of articles on request is commercially damaging. Third, that ChatGPT hallucinations are causing reputational damage to the New York Times through, effectively, false attribution.

This is not just another generative AI copyright dispute. The first argument presented by the NYT is that the training data used by OpenAI is protected by copyright, and so they claim the training phase of ChatGPT infringed copyright. We have seen this type of argument run before in other disputes.

Fair Use?

The challenge for this type of attack is the fair-use shield. In the US, fair use is a doctrine in law that permits the use of copyrighted material under certain circumstances, such as in news reporting, academic work, and commentary.

OpenAI’s response so far has been very cautious, but a key tenet in a statement released by the company is that their use of online data does indeed fall under the principle of “fair use.”

Anticipating some of the difficulties such a fair-use defense could cause, the NYT has adopted a slightly different angle. In particular, it seeks to differentiate its data from standard data, relying on what it claims is the accuracy, trustworthiness, and prestige of its reporting. It claims this creates a particularly desirable dataset.

It argues that as a reputable and trusted source, its articles have additional weight and reliability in training generative AI and are part of a data subset that is given additional weighting in that training.

It argues that by largely reproducing articles upon prompting, ChatGPT can deny the paywalled NYT visitors and revenue it would otherwise receive. This introduction of commercial competition and commercial advantage seems intended to head off the usual fair-use defense common to these claims.

It will be interesting to see whether the assertion of special weighting in the training data has an impact. If it does, it sets a path for other media organizations to challenge the use of their reporting in the training data without permission.

The final element of the NYT’s claim presents a novel angle to the challenge. It suggests that damage is being done to the NYT brand through the material that ChatGPT produces. While almost presented as an afterthought in the complaint, it may yet be the claim that causes OpenAI the most difficulty.

This is the argument related to AI hallucinations. The NYT argues the damage from hallucinated content is compounded because ChatGPT presents the false information as having come from the NYT.

The newspaper further suggests that consumers may act based on the summary given by ChatGPT, thinking the information comes from the NYT and is to be trusted. The reputational damage is caused because the newspaper has no control over what ChatGPT produces.

This is an interesting challenge to conclude with. Hallucination is a recognized issue with AI-generated responses, and the NYT is arguing that the resulting reputational harm may not be easy to rectify.

The NYT claim opens a number of novel lines of attack, which shift the focus from copyright to how the copyrighted data is presented to users by ChatGPT and to the value of that data to the newspaper. This is much trickier for OpenAI to defend.

This case will be watched closely by other media publishers, especially those behind paywalls, and with particular regard to how it interacts with the usual fair-use defense.

If the NYT dataset is recognized as having the “enhanced value” it claims to, it may pave the way for monetization of that dataset in training AI rather than the “forgiveness, not permission” approach prevalent today.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: AbsolutVision / Unsplash 


Scientists Say New Hybrid Beef Rice Could Cost Just a Dollar per Pound

February 15, 2024 - 22:22

Here’s a type of fusion food you don’t see every day: fluffy, steamed grains of rice, chock-full of beef cells.

It sounds Frankenstein. But the hybrid plant-animal concoction didn’t require any genetic engineering—just a hefty dose of creativity. Devised by Korean scientists, the avant-garde grains are like lab-grown meat with a dose of carbohydrates.

The hybrid rice includes grains grown with beef muscle cells and fatty tissue. Steamed together, the resulting bowl has a light pink hue and notes of cream, butter, coconut oil, and a rich beefy umami.

The rice also packs a nutritional punch, with more carbohydrates, protein, and fat than normal rice. It’s like eating rice with a small bite of beef brisket. Compared to lab-grown meat, the hybrid rice is relatively easy to grow, taking less than a week to make a small batch.

It is also surprisingly affordable. One analysis showed the market price of hybrid rice with full production would be roughly a dollar per pound. All ingredients are edible and meet food safety guidelines in Korea.

Rice is a staple food in much of the world. Protein, however, isn’t. Hybrid rice could supply a dose of much-needed protein without raising more livestock.

“Imagine obtaining all the nutrients we need from cell-cultured protein rice,” said study author Sohyeon Park at Yonsei University in a press release.

The study is the latest entry into a burgeoning field of “future foods”—with lab-grown meat being a headliner—that seek to cut down carbon dioxide emissions while meeting soaring global demand for nutritious food.

“There has been a surge of interest over the past five years in developing alternatives to conventional meat with lower environmental impacts,” said Dr. Neil Ward, an agri-food and climate specialist at the University of East Anglia who was not involved in the study. “This line of research holds promise for the development of healthier and more climate-friendly diets in future.”

Future Food

Many of us share a love for a juicy steak or a glistening burger.

But raising livestock puts enormous pressure on the environment. Their digestion and manure produce significant greenhouse gas emissions, contributing to climate change. They consume copious amounts of resources and land. With standards of living rising across many countries and an ever-increasing global population, demand for protein is rapidly growing.

How can we balance the need to feed a growing world with long-term sustainability? Here’s where “future foods” come in. Scientists have been cooking up all sorts of new-age recipes. Algae, cricket-derived proteins, and 3D-printed food are heading to a futuristic cookbook near you. Lab-grown chicken has already graced menus in upscale restaurants in Washington DC and San Francisco. Meat grown inside soybeans and nuts has been approved in Singapore.

The problem with nut-based scaffolds, explained the team in their paper, is that they can trigger allergies. Rice, in contrast, has very few allergens. The grain grows rapidly and is a culinary staple for much of the world. While often viewed as a carbohydrate, rice also contains fats, proteins, and minerals such as calcium and magnesium.

“Rice already has a high nutrient level,” said Park. But better yet, it has a structure that can accommodate other cells—including those from animals.

Rice, Rice, Baby

The structure of a single grain of rice is like an urban highway system inside a dome. “Roads” crisscross the grain, intersecting at points but also leaving an abundance of empty space.

This structure provides lots of surface area and room for beef cells to grow, wrote the team. Like a 3D scaffold, the “roads” nudge cells in a certain direction, eventually populating most of the rice grain.

Animal cells and rice proteins don’t normally mix well. To get beef cells to stick to the rice scaffold, the team added a layer of glue made of fish gelatin, a neutral-tasting ingredient commonly used as a thickener in cooking in many Asian countries. The coating linked starchy molecules inside the rice grains to the beef cells and melted away after steaming the grains.

The study used muscle and fat cells. For seven days, the cells rested at the bottom of the rice, mingling with the grains. They thrived, growing twice as fast as they would in a petri dish.

“I didn’t expect the cells to grow so well in the rice,” said Park in the press release.

Rice can rapidly go soft and mushy inside liquids. But the fishy coating withstood the nutrient bath and supported the rice’s internal scaffolds, allowing the beef cells—either muscle or fat—to grow.

Beefy Rice

Future foods need to be tasty to catch on. This includes texture.

Like variations of pasta, different types of rice have a different bite. The hybrid rice expanded after cooking, but with more chew. When boiled or steamed, it was a bit harder and more brittle than normal rice, but with a nutty, slightly sweet and savory taste.

Compared to normal supermarket rice, the hybrid rice packed a nutritious punch. Its carbohydrate, protein, and fat levels all increased, with protein getting the biggest boost.

Eating 100 grams (3.5 ounces) of the hybrid rice is like eating the same amount of plain rice with a bite of lean beef, the authors wrote in the paper.

For all future foods, cost is the elephant in the room. The team did their homework. Their hybrid rice could have a production cycle of just three months, perhaps even shorter with optimized growing procedures. It’s also cost-effective. Rice is far more affordable than beef, and if commercialized, they estimate the price could be around a dollar a pound.

Although the scientists used beef cells in this study, a similar strategy could be used to grow chicken, shrimp, or other proteins inside rice.

Future foods offer a path towards sustainability (although some researchers have questioned the climate impact of lab-grown meat). The new study suggests engineered food can reduce the environmental impact of raising livestock. Even with lab procedures, the carbon footprint of growing hybrid rice is a fraction of that of livestock farming.

While beef-scented rice may not be for everyone, the team is already envisioning “microbeef sushi” using the beef-rice hybrid or producing the grain as a “complete meal.” Because the ingredients are food safe, hybrid rice may easily navigate food regulations on its way to a supermarket near you.

“Now I see a world of possibilities for this grain-based hybrid food. It could one day serve as food relief for famine, military ration, or even space food,” said Park.

Image Credit: Dr. Jinkee Hong / Yonsei University


These Glow-in-the-Dark Flowers Will Make Your Garden Look Like Avatar

February 14, 2024 - 23:22

The sci-fi dream that gardens and parks would one day glow like Pandora, the alien moon in Avatar, is decades old. Early attempts to splice genes into plants to make them glow date back to the 1980s, but experiments emitted little light and required special food.

Then in 2020, scientists made a breakthrough. Adding genes from luminous mushrooms yielded brightly glowing specimens that needed no special care. The team has refined the approach—writing last month they’ve increased their plants’ luminescence as much as 100-fold—and spun out a startup called Light Bio to sell them.

Light Bio received USDA approval in September and this month announced the first continuously glowing plant, named the firefly petunia, is officially available for purchase in the US. The petunias look and grow like their ordinary cousins—green leaves, white flowers—but after sunset, they glow a gentle green. The company is selling the plants for $29 on its website and says a crop of 50,000 will ship in April.

“This is an incredible achievement for synthetic biology. Light Bio is bringing us leaps and bounds closer to our solarpunk dream of living in Avatar’s Pandora,” Jason Kelly, CEO and co-founder of Ginkgo Bioworks, a Light Bio partner, said in a statement.

Glow Up

In synthetic biology, glowing plants and animals have been a staple for years. Scientists will often insert a gene to make an organism glow as visual proof that some intended biological process has taken effect. Keith Wood, Light Bio cofounder and CEO, was a pioneer of the approach in plants. In 1986, he gave tobacco plants a firefly gene for luciferase, the enzyme behind the bugs’ signature glow. Those plants glowed weakly, but they needed special plant food to supply luciferin, the fuel for the light-producing reaction. Later work tried genes from bioluminescent bacteria instead, but the plants were similarly dim.

Then in 2020, a team including Light Bio cofounders Karen Sarkisyan and Ilia Yampolsky turned to the luminous mushroom, Neonothopanus nambi. The mushroom runs a chemical reaction involving caffeic acid—a molecule also commonly found in plants—to produce luciferin and light. The scientists spliced the associated genes into tobacco plants and found the plants glowed too, no extra ingredients needed.

They later tried the genes in petunias, found the effect was even more pronounced, and began refining their work. In a paper published in Nature Methods in January, the team added genes from other mushrooms and employed directed evolution to further enhance the luminescence. After experimentation with a few collections of genes, they landed on a combination that worked in multiple species and significantly upped the brightness.

From here, they hope to further increase the luminescence by as much as 10-fold, add different colors to the lineup, and expand their work into different plant varieties.

Lab to Living Room

The plants are a scientific achievement, but the creation and approval of a commercial product is also noteworthy. Prior attempts to offer people glowing plants, including a popular 2013 Kickstarter, failed to materialize.

Last fall, the USDA gave Light Bio the go-ahead to sell their firefly petunias to the general public. The approval concluded the plants as described didn’t pose new risks to agriculture compared to naturally occurring petunias.

Jennifer Kuzma, codirector of the Genetic Engineering and Society Center at North Carolina State University, told Wired last year she would have liked the USDA to do a more thorough review. But scientists recently contacted by Nature did not voice major concerns. The plants are largely grown indoors or in gardens and aren’t considered invasive, lowering the risk the new genes would make their way into other species. Though, as Kuzma noted, that risk may depend on how many are grown and where they take root.

Beyond household appeal, the system at work here could also find its way into agricultural applications. Diego Orzáez, a plant biologist in Spain, is extending the luciferase system to other plants. He envisions such plants beginning to glow only when they’re in trouble, allowing farmers to take quick visual stock of crop health with drones or satellites.

Other new genetically modified plants are headed our way soon too. As of this month, gardeners can buy seeds for bioengineered purple tomatoes high in antioxidants. Another startup is developing a genetically engineered houseplant to filter harmful chemicals from the air. And Pairwise is using CRISPR to make softer kale, seedless berries, and pitless cherries.

“People’s reactions to genetically modified plants are complicated,” Steven Burgess, a plant biologist at the University of Illinois Urbana–Champaign, told Nature. That’s due, in part, to the association with controversial corporations and worry about what we put in our bodies. The new glow-in-the-dark petunias are neither the product of a big company—indeed, Sarkisyan said Light Bio doesn’t plan to be overly combative when it comes to people sharing plant cuttings—nor are they food. But they are compelling.

“They invite people to experience biotechnology from a position of wonder,” Drew Endy told Wired. Apart from conjuring popular sci-fi, perhaps such examples can introduce a wider audience to the possibilities and risks of synthetic biology, kickstart thoughtful conversations, and help people decide for themselves where to draw lines.

Image Credit: Light Bio


AI Is Everywhere—Including Countless Applications You’ve Likely Never Heard Of

February 13, 2024 - 19:46

Artificial intelligence is seemingly everywhere. Right now, generative AI in particular—tools like Midjourney, ChatGPT, Gemini (previously Bard), and others—is at the peak of hype.

But as an academic discipline, AI has been around for much longer than just the last couple of years. When it comes to real-world applications, many have stayed hidden or relatively unknown. These AI tools are much less glossy than fantasy-image generators—yet they are also ubiquitous.

As these technologies continue to progress, we’ll only see AI use increase across industries. This includes healthcare and consumer tech, but also more concerning uses, such as warfare. Here’s a rundown of some of the wide-ranging AI applications you may be less familiar with.

AI in Healthcare

Various AI systems are already being used in the health field, both to improve patient outcomes and to advance health research.

One of the strengths of computer programs powered by artificial intelligence is their ability to sift through and analyze truly enormous data sets in a fraction of the time it would take a human—or even a team of humans—to accomplish.

For example, AI is helping researchers comb through vast genetic data libraries. By analyzing large data sets, geneticists can home in on genes that could contribute to various diseases, which in turn will help develop new diagnostic tests.

AI is also helping to speed up the search for medical treatments. Selecting and testing treatments for a particular disease can take ages, so leveraging AI’s ability to comb through data can be helpful here, too.

For example, United States-based non-profit Every Cure is using AI algorithms to search through medical databases to match up existing medications with illnesses they might potentially work for. This approach promises to save significant time and resources.

The Hidden AIs

Outside medical research, other fields not directly related to computer science are also benefiting from AI.

At CERN, home of the Large Hadron Collider, a recently developed advanced AI algorithm is helping physicists tackle some of the most challenging aspects of analyzing the particle data generated in their experiments.

Last year, astronomers used an AI algorithm for the first time to identify a “potentially hazardous” asteroid—a space rock that might one day collide with Earth. This algorithm will be a core part of the operations of the Vera C. Rubin Observatory currently under construction in Chile.

One major area of our lives that uses largely “hidden” AI is transportation. Millions of flights and train trips are coordinated by AI all over the world. These AI systems are meant to optimize schedules to reduce costs and maximize efficiency.

Artificial intelligence can also manage real-time road traffic by analyzing traffic patterns, volume, and other factors, and then adjusting traffic lights and signals accordingly. Navigation apps like Google Maps also use optimization algorithms to find the best route.
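Under the hood, “finding the best route” is a shortest-path search over a weighted road graph. As a rough illustration (the tiny road network and travel times below are invented, and real systems layer live traffic data and much more sophisticated heuristics on top), here is a minimal sketch of Dijkstra’s algorithm in Python:

```python
import heapq

def dijkstra(graph, start, goal):
    """Return (cost, path) for the cheapest route from start to goal.

    graph: dict mapping node -> list of (neighbor, travel_time) pairs.
    """
    # Priority queue of (cost_so_far, node, path_taken); cheapest route first.
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# Hypothetical road network: travel times in minutes between intersections.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}

cost, path = dijkstra(roads, "A", "D")
print(cost, path)  # 7 ['A', 'C', 'B', 'D']
```

Production routing engines precompute hierarchies over continent-scale graphs so queries return in milliseconds, but the core idea of always expanding the cheapest known route first is the same.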

AI is also present in various everyday items. Robot vacuum cleaners use AI software to process all their sensor inputs and deftly navigate our homes.

The most cutting-edge cars use AI in their suspension systems so passengers can enjoy a smooth ride.

Of course, there is also no shortage of more quirky AI applications. A few years ago, UK-based brewery startup IntelligentX used AI to make custom beers for its customers. Other breweries are also using AI to help them optimize beer production.

And Meet the Ganimals is a “collaborative social experiment” from MIT Media Lab, which uses generative AI technologies to come up with new species that have never existed before.

AI Can Also Be Weaponized

On a less lighthearted note, AI also has many applications in defense. In the wrong hands, some of these uses can be terrifying.

For example, some experts have warned AI can aid the creation of bioweapons. This could happen through gene sequencing, helping non-experts easily produce risky pathogens such as novel viruses.

Where active warfare is taking place, military powers can design warfare scenarios and plans using AI. If a power uses such tools without applying ethical considerations or even deploys autonomous AI-powered weapons, it could have catastrophic consequences.

AI has been used in missile guidance systems to maximize the effectiveness of a military’s operations. It can also be used to detect covertly operating submarines.

In addition, AI can be used to predict and identify the activities and movements of terrorist groups, allowing intelligence agencies to devise preventive measures. Because these AI systems have complex structures, they require high processing power to deliver real-time insights.

Much has also been said about how generative AI is supercharging people’s abilities to produce fake news and disinformation. This has the potential to affect the democratic process and sway the outcomes of elections.

AI is present in our lives in so many ways, it is nearly impossible to keep track. Its myriad applications will affect us all.

This is why ethical and responsible use of AI, along with well-designed regulation, is more important than ever. This way we can reap the many benefits of AI while making sure we stay ahead of the risks.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Michael Dziedzic / Unsplash

Kategorie: Transhumanismus

An Antibiotic You Inhale Can Deliver Medication Deep Into the Lungs

12 Únor, 2024 - 23:28

We’ve all been more aware of lung health since Covid-19.

However, for people with asthma and chronic obstructive pulmonary disease (COPD), dealing with lung problems is a lifelong struggle. Those with COPD suffer from highly inflamed lung tissue that swells and obstructs airways, making it hard to breathe. The disease is common, with more than three million annual cases in the US alone.

Although manageable, there is no cure. One problem is that lungs with COPD pump out tons of viscous mucus, which forms a barrier preventing treatments from reaching lung cells. The slimy substance—when not coughed out—also attracts bacteria, further aggravating the condition.

A new study in Science Advances describes a potential solution. Scientists have developed a nanocarrier to shuttle antibiotics into the lungs. Like a biological spaceship, the carrier has “doors” that open and release antibiotics inside the mucus layer to fight infections.

The “doors” themselves are also deadly. Made from a small protein, they rip apart bacterial membranes and clean up their DNA to rid lung cells of chronic infection.

The team engineered an inhalable version of an antibiotic using the nanocarrier. In a mouse model of COPD, the treatment revived their lung cells in just three days. Their blood oxygen levels returned to normal, and previous signs of lung damage slowly healed.

“This immunoantibacterial strategy may shift the current paradigm of COPD management,” the team wrote in the article.

Breathe Me

Lungs are extremely delicate. Picture thin but flexible layers of cells separated into lobes to help coordinate oxygen flow into the body. Once air flows through the windpipe, it rapidly disperses among a complex network of branches, filling thousands of air sacs that supply the body with oxygen while ridding it of carbon dioxide.

These structures are easily damaged, and smoking is a common trigger. Cigarette smoke causes surrounding cells to pump out a slimy substance that obstructs the airway and coats air sacs, making it difficult for them to function normally.

In time, the mucus builds a sort of “glue” that attracts bacteria and condenses into a biofilm. The barrier further blocks oxygen exchange and changes the lung’s environment into one favorable for bacteria growth.

One way to stop the downward spiral is to obliterate the bacteria. Broad-spectrum antibiotics are the most widely used treatment. But because of the slimy protective layer, they can’t easily reach bacteria deep inside lung tissues. Even worse, long-term treatment increases the chance of antibiotic resistance, making it even more difficult to wipe out stubborn bacteria.

But the protective layer has a weakness: It’s just a little bit too sour. Literally.

Open-Door Policy

Like a lemon, the slimy layer is slightly more acidic compared to healthy lung tissue. This quirk gave the team an idea for an ideal antibiotic carrier that would only release its payload in an acidic environment.

The team made hollow nanoparticles out of silica—a flexible biomaterial—filled them with a common antibiotic, and added “doors” to release the drugs.

These openings are controlled by additional short protein sequences that work like “locks.” In normal airway and lung environments, they fold up at the door, essentially sequestering the antibiotics inside the bubble.

Released in lungs with COPD, the local acidity changes the structure of the lock protein, so the doors open and release antibiotics directly into the mucus and biofilm—essentially breaking through the bacterial defenses and targeting them on their home turf.

In one test, the concoction penetrated a lab-grown biofilm in a petri dish. It was far more effective than a previous type of nanoparticle, largely because the carrier’s doors opened once inside the biofilm—in other nanoparticles, the antibiotics remained trapped.

The carriers could also dig deeper into infected areas. Cells have electrical charges. The carrier and mucus both have negative charges, which—like similarly charged ends of two magnets—push the carriers deeper into and through the mucus and biofilm layers.

Along the way, the acidity of the mucus slowly changes the carrier’s charge to positive, so that once past the biofilm, the “lock” mechanism opens and releases medication.

The team also tested the nanoparticle’s ability to obliterate bacteria. In a dish, they wiped out multiple common types of infectious bacteria and destroyed their biofilms. The treatment appeared relatively safe. Tests in human fetal lung cells in a dish found minimal signs of toxicity.

Surprisingly, the carrier itself could also destroy bacteria. Inside an acidic environment, its positive charge broke down bacterial membranes. Like popped balloons, the bugs released genetic material into their surroundings, which the carrier swept up.

Damping the Fire

Bacterial infections in the lungs attract overactive immune cells, which leads to swelling. Blood vessels surrounding air sacs also become permeable, making it easier for dangerous molecules to get through. These changes cause inflammation, making it hard to breathe.

In a mouse model of COPD, the inhalable nanoparticle treatment quieted the overactive immune system. Multiple types of immune cells returned to a healthy level of activation—allowing the mice to switch from a highly inflammatory profile to one that combats infections and inflammation.

Mice treated with the inhalable nanoparticle had about 98 percent fewer bacteria in their lungs compared to those given the same antibiotic without the carrier.

Wiping out bacteria gave the mice a sigh of relief. They breathed easier. Their blood oxygen levels went up, and blood acidity—a sign of dangerously low oxygen—returned to normal.

Under the microscope, treated lungs regained their normal structure, with sturdier air sacs that slowly recovered from COPD damage. The treated mice also had less swelling in their lungs from the fluid buildup commonly seen in lung injuries.

The results, while promising, are only for a smoking-related COPD model in mice. There’s still much we don’t know about the treatment’s long-term consequences.

Although there were no signs of side effects so far, it’s possible the nanoparticles could accumulate inside the lungs over time, eventually causing damage. And though the carrier itself damages bacterial membranes, the therapy mostly relies on the encapsulated antibiotic. With antibiotic resistance on the rise, some drugs are already losing their effectiveness against COPD infections.

Then there’s the chance of mechanical damage over time. Repeatedly inhaling silica-based nanoparticles could cause lung scarring in the long term. So, while nanoparticles could shift strategies for COPD management, it’s clear we need follow-up studies, the team wrote.

Image Credit: crystal light /

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through February 10)

10 Únor, 2024 - 16:00

Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
Keach Hagey | The Wall Street Journal
“The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world’s chip-building capacity, expand its ability to power AI, among other things, and cost several trillion dollars, according to people familiar with the matter. The project could require raising as much as $5 trillion to $7 trillion, one of the people said.”


AI Is Rewiring Coders’ Brains. Yours May Be Next
Will Knight | Wired
“GitHub’s owner, Microsoft, said in its latest quarterly earnings that there are now 1.3 million paid Copilot accounts—a 30 percent increase over the previous quarter—and noted that 50,000 different companies use the software. Dohmke says the latest usage data from Copilot shows that almost half of all the code produced by users is AI-generated. At the same time, he claims there is little sign that these AI programs can operate without human oversight.”


Google Prepares for a Future Where Search Isn’t King
Lauren Goode | Wired
“[Sundar] Pichai is…experimenting with a new vision for what Google offers—not replacing search, not yet, but building an alternative to see what sticks. ‘This is how we’ve always approached search, in the sense that as search evolved, as mobile came in and user interactions changed, we adapted to it,’ Pichai says, speaking with Wired ahead of the Gemini launch. ‘In some cases we’re leading users, as we are with multimodal AI. But I want to be flexible about the future, because otherwise we’ll get it wrong.'”


Turbocharged CAR-T Cells Melt Tumors in Mice—Using a Trick From Cancer Cells
Asher Mullard | Nature
“The team treated mice carrying blood and solid cancers with several T-cell therapies boosted with CARD11–PIK3R3, and watched the animals’ tumors melt away. Researchers typically use around one million cells to treat these mice, says Choi, but even 20,000 of the cancer-mutation-boosted T cells were enough to wipe out tumors. ‘That’s an impressively small number of cells,’ says Nick Restifo, a cell-therapy researcher and chief scientist of the rejuvenation start-up company Marble Therapeutics in Boston, Massachusetts.”


OpenAI Wants to Control Your Computer
Maxwell Zeff | Gizmodo
“OpenAI is reportedly developing ‘agent software,’ that will effectively take over your device and complete complex tasks on your behalf, according to The Information. OpenAI’s agent would work between multiple apps on your computer, performing clicks, cursor movements, and text typing. It’s really a new type of operating system, and it could change the way you interact with your computer altogether.”


The New Car Batteries That Could Power the Electric Vehicle Revolution
Nicola Jones | Nature
“Researchers are experimenting with different designs that could lower costs, extend vehicle ranges and offer other improvements. …Chinese manufacturers have announced budget cars for 2024 featuring batteries based not on the lithium that powers today’s best electric vehicles (EVs), but on cheap sodium—one of the most abundant elements in Earth’s crust. And a US laboratory has surprised the world with a dream cell that runs in part on air and could pack enough energy to power airplanes.”


I Stopped Using Passwords. It’s Great—and a Total Mess
Matt Burgess | Wired
“For the past month, I’ve been converting as many of my accounts as possible—around a dozen for now—to use passkeys and start the move away from the password for good. Spoiler: When passkeys work seamlessly, it’s a glimpse of a more secure future for millions, if not billions, of people, and a reinvention of how we sign in to websites and services. But getting there for every account across the internet is still likely to prove a minefield and take some time.”


Momentary Fusion Breakthroughs Face Hard Reality
Edd Gent | IEEE Spectrum
“The dream of fusion power inched closer to reality in December 2022, when researchers at Lawrence Livermore National Laboratory (LLNL) revealed that a fusion reaction had produced more energy than what was required to kick-start it. According to new research, the momentary fusion feat required exquisite choreography and extensive preparations, whose high degree of difficulty reveals a long road ahead before anyone dares hope a practicable power source could be at hand.”


Meet ‘Smaug-72B’: The New King of Open-Source AI
Michael Nuñez | VentureBeat
“What’s most noteworthy about today’s release is that Smaug-72B outperforms GPT-3.5 and Mistral Medium, two of the most advanced proprietary large language models developed by OpenAI and Mistral, respectively, in several of the most popular benchmarks. While the model still falls short of the 90-100 point average indicative of human-level performance, its birth signals that open-source AI may soon rival Big Tech’s capabilities, which have long been shrouded in secrecy.”


AI-Generated Voices in Robocalls Can Deceive Voters. The FCC Just Made Them Illegal
Ali Swenson | Associated Press
“The [FCC] on Thursday outlawed robocalls that contain voices generated by artificial intelligence, a decision that sends a clear message that exploiting the technology to scam people and mislead voters won’t be tolerated. …The agency’s chairwoman, Jessica Rosenworcel, said bad actors have been using AI-generated voices in robocalls to misinform voters, impersonate celebrities, and extort family members. ‘It seems like something from the far-off future, but this threat is already here,’ Rosenworcel told The AP on Wednesday as the commission was considering the regulations.”

Image Credit: NASA Hubble Space Telescope / Unsplash

Kategorie: Transhumanismus

It Will Take Only a Single SpaceX Starship to Launch a Space Station

9 Únor, 2024 - 18:42

SpaceX’s forthcoming Starship rocket will make it possible to lift unprecedented amounts of material into orbit. One of its first customers will be a commercial space station, which will be launched fully assembled in a single mission.

Measuring 400 feet tall and capable of lifting 150 tons to low-Earth orbit, Starship will be the largest and most powerful rocket ever built. But with its first two test launches ending in “rapid unscheduled disassembly”—SpaceX’s euphemism for an explosion—the spacecraft is still a long way from commercial readiness.

That hasn’t stopped customers from signing up for launches. Now, a joint venture between Airbus and Voyager Space that’s building a private space station called Starlab has inked a contract with SpaceX to get it into orbit. The venture plans to put the impressive capabilities of the new rocket to full use by launching the entire 26-foot-diameter space station in one go.

“Starlab’s single-launch solution continues to demonstrate not only what is possible, but how the future of commercial space is happening now,” SpaceX’s Tom Ochinero said in a statement. “The SpaceX team is excited for Starship to launch Starlab to support humanity’s continued presence in low-Earth orbit on our way to making life multiplanetary.”

Starlab is one of several private space stations currently under development as NASA looks to find a replacement for the International Space Station, which is due to be retired in 2030. In 2021, the agency awarded $415 million in funding for new orbital facilities to Voyager Space, Northrop Grumman, and Jeff Bezos’ company Blue Origin. Axiom Space also has a contract with NASA to build a commercial module that will be attached to the ISS in 2026 and then be expanded to become an independent space station around the time its host is decommissioned.

Northrop Grumman and Voyager have since joined forces and brought Airbus on board to develop Starlab together. The space station will have only two modules: a service module providing solar power and propulsion, and a habitat module housing quarters for a crew of four and a laboratory. That compares to the 16 modules that make up the ISS. But at roughly twice the diameter of the ISS’s modules, those two modules will still provide half the total volume of the ISS.

The station is designed to provide an orbital base for space agencies like NASA but also private customers and other researchers. The fact that Hilton is helping design the crew quarters suggests they will be catering to space tourists too.

Typically, space stations are launched in parts and assembled in space, but Starlab will instead be fully assembled on the ground. This not only means it will be habitable almost immediately after launch, but it also greatly simplifies the manufacturing process, Voyager CEO Dylan Taylor told TechCrunch recently.

“Let’s say you have a station that requires multiple launches, and then you’re taking the hardware and you’re assembling it [on orbit],” he said. “Not only is that very costly, but there’s a lot of execution risk around that as well. That’s what we were trying to avoid and we’re convinced that that’s the best way to go.”

As Starship is the only rocket big enough to carry such a large payload in one go, it’s not surprising Voyager has chosen SpaceX, even though the vehicle they’re supposed to fly is still under development. The companies didn’t give a timeline for the launch.

If they pull it off, it would be a major feat of space engineering. But it’s still unclear how economically viable this new generation of private space stations will be. Ars Technica points out that it cost NASA more than $100 billion to build the ISS and another $3 billion a year to operate it.

The whole point of NASA encouraging the development of private space stations is to slash that bill, so the agency is unlikely to offer anywhere near that much cash. The commercial applications for space stations are fuzzy at best, so whether space tourists and researchers will provide enough money to make up the difference remains to be seen.

But spaceflight is much cheaper these days thanks to SpaceX driving down launch costs, and the ability to launch pre-assembled space stations could further slash the overall bill. So, Starlab may well prove the doubters wrong and usher in a new era of commercial space flight.

Image Credit: Voyager Space

Kategorie: Transhumanismus

Partially Synthetic Moss Paves the Way for Plants With Designer Genomes

8 Únor, 2024 - 22:20

Synthetic biology is already rewriting life.

In late 2023, scientists revealed yeast cells with half their genetic blueprint replaced by artificial DNA. It was a “watershed” moment in an 18-year-long project to design alternate versions of every yeast chromosome. Despite having seven and a half synthetic chromosomes, the cells reproduced and thrived.

A new study moves us up the evolutionary ladder to designer plants.

For a project called SynMoss, a team in China redesigned part of a single chromosome in a type of moss. The resulting part-synthetic plant grew normally and produced spores, making it one of the first living things with multiple cells to carry a partially artificial chromosome.

The custom changes in the plant’s chromosomes are relatively small compared to the synthetic yeast. But it’s a step towards completely redesigning genomes in higher-level organisms.

In an interview with Science, synthetic biologist Dr. Tom Ellis of Imperial College London said it’s a “wake-up call to people who think that synthetic genomes are only for microbes.”

Upgrading Life

Efforts to rewrite life aren’t just to satisfy scientific curiosity.

Tinkering with DNA can help us decipher evolutionary history and pinpoint critical stretches of DNA that keep chromosomes stable or cause disease. The experiments could also help us better understand DNA’s “dark matter.” Littered across the genome, mysterious sequences that don’t encode proteins have long baffled scientists: Are they useful or just remnants of evolution?

Synthetic organisms also make it easier to engineer living things. Bacteria and yeast, for example, are already used to brew beer and pump out life-saving medications such as insulin. By adding, switching, or deleting parts of the genome, it’s possible to give these cells new capabilities.

In one recent study, for example, researchers reprogrammed bacteria to synthesize proteins using amino acid building blocks not seen in nature. In another study, a team turned bacteria into plastic-chomping Terminators that recycle plastic waste into useful materials.

While impressive, bacteria are made of cells unlike ours—their genetic material floats around, making them potentially easier to rewire.

The Synthetic Yeast Project was a breakthrough. Unlike bacteria, yeast is a eukaryotic cell. Plants, animals, and humans all fall into this category. Our DNA is protected inside a nut-like bubble called a nucleus, making it more challenging for synthetic biologists to tweak.

And as far as eukaryotes go, plants are harder to manipulate than yeast—a single-cell organism—as they contain multiple cell types that coordinate growth and reproduction. Chromosomal changes can play out differently depending on how each cell functions and, in turn, affect the health of the plant.

“Genome synthesis in multicellular organisms remains uncharted territory,” the team wrote in their paper.

Slow and Steady

Rather than building a whole new genome from scratch, the team tinkered with the existing moss genome.

This green fuzz has been extensively studied in the lab. An early analysis of the moss genome found it has 35,000 potential genes—strikingly complex for a plant. All 26 of its chromosomes have been completely sequenced.

For this reason, the plant is a “broadly used model in evolutionary developmental and cell biological studies,” wrote the team.

Moss genes readily adapt to environmental changes, especially those that repair DNA damage from sunlight. Compared to other plants—such as thale cress, another model biologists favor—moss has the built-in ability to tolerate large DNA changes and regenerate faster. Both aspects are “essential” when rewriting the genome, explained the team.

Another perk? The moss can grow into a full plant from a single cell. This ability is a dream scenario for synthetic biologists because altering genes or chromosomes in just one cell can potentially change an entire organism.

Like our own, plant chromosomes look like an “X” with two crossed arms. For this study, the team decided to rewrite the shortest chromosome arm in the plant—chromosome 18. It was still a mammoth project. Previously, the largest replacement was only about 5,000 DNA letters; the new study needed to replace over 68,000 letters.

Replacing natural DNA sequences with “the redesigned large synthetic fragments presented a formidable technical challenge,” wrote the team.

They took a divide-and-conquer strategy. They first designed mid-sized chunks of synthetic DNA before combining them into a single DNA “mega-chunk” of the chromosome arm.
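In spirit, this assembly step resembles stitching together fragments that share overlapping ends. A toy sketch of the idea (the sequences and overlap length here are invented for illustration; the actual laboratory protocol is far more involved):

```python
def stitch(chunks, overlap=4):
    """Join DNA chunks whose adjacent ends share an `overlap`-letter
    sequence, keeping only one copy of each shared region."""
    assembled = chunks[0]
    for chunk in chunks[1:]:
        # The end of what we've built must match the start of the next chunk.
        if assembled[-overlap:] != chunk[:overlap]:
            raise ValueError("chunks do not overlap as expected")
        assembled += chunk[overlap:]
    return assembled

# Hypothetical mid-sized chunks sharing 4-letter overlaps.
chunks = ["ATGCCGTA", "CGTAGGAT", "GGATTTCA"]
print(stitch(chunks))  # ATGCCGTAGGATTTCA
```

The divide-and-conquer payoff is that each mid-sized chunk can be synthesized and verified independently before the overlaps are used to combine them into the final "mega-chunk."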

The newly designed chromosome had several notable changes. It was stripped of transposons, or “jumping genes.” These DNA blocks move around the genome, and scientists are still debating if they’re essential for normal biological functions or if they contribute to disease. The team also added DNA “tags” to the chromosome to mark it as synthetic and made changes to how it regulates the manufacturing of certain proteins.

Overall, the changes reduced the size of the chromosome by nearly 56 percent. After inserting the designer chromosome into moss cells, the team nurtured them into adult plants.

A Half-Synthetic Blossom

Even with a heavily edited genome, the synthetic moss was surprisingly normal. The plants readily grew into leafy bushes with multiple branches and eventually produced spores. All reproductive structures were like those found in the wild, suggesting the half-synthetic plants had a normal life cycle and could potentially reproduce.

The plants also maintained their resilience against highly salty environments—a useful adaptation also seen in their natural counterparts.

But the synthetic moss did have some unexpected epigenetic quirks. Epigenetics is the science of how cells turn genes on or off. The synthetic part of the chromosome had a different epigenetic profile compared to natural moss, with more activated genes than usual. This could potentially be harmful, according to the team.

The moss also offered potential insights into DNA’s “dark matter,” including transposons. Deleting these jumping genes didn’t seem to harm the partially synthetic plants, suggesting they might not be essential to their health.

More practically, the results could boost biotechnology efforts using moss to produce a wide range of therapeutic proteins, including ones that combat heart disease, heal wounds, or treat stroke. Moss is already used to synthesize medical drugs. A partially designer genome could alter its metabolism, boost its resilience against infections, and increase yield.

The next step is to replace the entirety of chromosome 18’s short arm with synthetic sequences. They’re aiming to generate an entire synthetic moss genome within 10 years.

It’s an ambitious goal. The moss genome is 40 times bigger than the yeast genome, which took a global collaboration 18 years to rewrite just half of. But with increasingly efficient and cheaper DNA reading and synthesis technologies, the goal isn’t beyond reach.

Similar techniques could also inspire other projects to redesign chromosomes in organisms beyond bacteria and yeast, from plants to animals.

Image Credit: Pyrex / Wikimedia Commons

Kategorie: Transhumanismus

Scientists ‘Astonished’ Yet Another of Saturn’s Moons May Be an Ocean World

8 Únor, 2024 - 00:21

Liquid water is a crucial prerequisite for life as we know it. When astronomers first looked out into the solar system, it seemed Earth was a special case in this respect. They found enormous balls of gas, desert worlds, blast furnaces, and airless hellscapes. But evidence is growing that liquid water isn’t rare at all—it’s just extremely well-hidden.

The list of worlds with subsurface oceans in our solar system is getting longer by the year. Of course, many people are familiar with the most obvious cases: The icy moons Enceladus and Europa are literally bursting at the seams with water. But other less obvious candidates have joined their ranks, including Callisto, Ganymede, Titan, and even, perhaps, Pluto.

Now, scientists argue in a paper in Nature that we may have reason to add yet another long shot to the list: Saturn’s “Death Star” moon, Mimas. Nicknamed for the giant impact crater occupying around a third of its diameter, Mimas has been part of the conversation for years. But a lack of clear evidence on its surface made scientists skeptical it could be hiding an interior ocean.

The paper, which contains fresh analysis of observations made by the Cassini probe, says changes in the moon’s orbit over time are best explained by the presence of a global ocean deep below its icy crust. The team believes the data also suggests the ocean is very young, explaining why it has yet to make its presence known on the surface.

“The major finding here is the discovery of habitability conditions on a solar system object which we would never, never expect to have liquid water,” said Valery Lainey, first author and a scientist at the Observatoire de Paris. “It’s really astonishing.”

The Solar System Is Sopping

How exactly do frozen moons on the outskirts of the solar system come to contain whole oceans of liquid water?

In short: Combine heat and a good amount of ice and you get oceans. We know there is an abundance of ice in the outer solar system, from moons to comets. But heat? Not so much. The further out you go, the more the sun fades into the starry background.

Interior ocean worlds depend on another source of heat—gravity. As they orbit Jupiter or Saturn, enormous gravitational shifts flex and warp their insides. The friction from this grinding, called tidal flexing, produces heat which melts ice to form salty oceans.

And the more we look, the more we find evidence of hidden oceans throughout the outer solar system. Some are thought to have more liquid water than Earth, and where there’s liquid water, there just might be life—at least, that’s what we want to find out.

Yet Another Ocean World?

Speculation that Mimas might be an ocean world isn’t new. A decade ago, small shifts in the moon’s orbit measured by Cassini suggested it either had a strangely pancake-shaped core or an interior ocean. Scientists thought the latter was a long shot because—unlike the cracked but largely crater-free surfaces of Enceladus and Europa—Mimas’s surface is pocked with craters, suggesting it has been largely undisturbed for eons.

The new study aimed for a more precise look at the data to better weigh the possibilities. According to modeling using more accurate calculations, the team found a pancake-shaped core is likely impossible. To fit observations, its ends would have to extend beyond the surface: “This is incompatible with observations,” they wrote.

So they looked to the interior ocean hypothesis and modeled a range of possibilities. The models not only fit Mimas’s orbit well, they also suggest the ocean likely begins 20 to 30 kilometers below the surface. The team believes the ocean is relatively young, somewhere between a few million and 25 million years old. The combination of depth and youth could explain why the moon’s surface remains largely undisturbed.

But what accounts for this youth? The team suggests relatively recent gravitational encounters—perhaps with other moons or during the formation of Saturn’s ring system, which some scientists believe to be relatively young also—may have changed the degree of tidal flexing inside Mimas. The associated heat only recently became great enough to melt ice into oceans.

Take Two

It’s a compelling case, but still unproven. Next steps would involve more measurements taken by a future mission. If these measurements match predictions made in the paper, scientists might confirm the ocean’s existence as well as its depth below the surface.

Studying a young, still-evolving interior ocean could give us clues about how older, more stable oceans formed in eons past. And the more liquid water we find in our own solar system, the more likely it’s common through the galaxy. If water worlds—either in the form of planets or moons—are a dime a dozen, what does that say about life?

This is, of course, still one of the biggest questions in science. But each year, thanks to clues gathered in our solar system and beyond, we’re stepping closer to an answer.

Image Credit: NASA/JPL/Space Science Institute

Kategorie: Transhumanismus

This AI Is Learning to Decode the ‘Language’ of Chickens

6 Únor, 2024 - 19:36

Have you ever wondered what chickens are talking about? Chickens are quite the communicators—their clucks, squawks, and purrs are not just random sounds but a complex language system. These sounds are their way of interacting with the world and expressing joy, fear, and social cues to one another.

Like humans, the “language” of chickens varies with age, environment, and surprisingly, domestication, giving us insights into their social structures and behaviors. Understanding these vocalizations can transform our approach to poultry farming, enhancing chicken welfare and quality of life.

At Dalhousie University, my colleagues and I are conducting research that uses artificial intelligence to decode the language of chickens. It’s a project that’s set to revolutionize our understanding of these feathered creatures and their communication methods, offering a window into their world that was previously closed to us.

Chicken Translator

The use of AI and machine learning in this endeavor is like having a universal translator for chicken speech. AI can analyze vast amounts of audio data. As our research (yet to be peer-reviewed) documents, our algorithms are learning to recognize patterns and nuances in chicken vocalizations. This isn’t a simple task—chickens have a range of sounds that vary in pitch, tone, and context.

But by using advanced data analysis techniques, we’re beginning to crack their code. This breakthrough in animal communication is not just a scientific achievement; it’s a step towards more humane and empathetic treatment of farm animals.

One of the most exciting aspects of this research is understanding the emotional content behind these sounds. Using natural language processing (NLP), a technology often used to decipher human languages, we’re learning to interpret the emotional states of chickens. Are they stressed? Are they content? By understanding their emotional state, we can make more informed decisions about their care and environment.
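As a rough illustration of how acoustic features can feed a classifier, here is a toy sketch in Python. The features (zero-crossing rate and RMS energy) and the thresholds are hypothetical stand-ins for illustration only, not the actual pipeline used in the research:

```python
# Illustrative sketch only: a toy pipeline for labeling vocalizations
# from simple acoustic features. Feature choices and cutoffs are
# hypothetical, not the Dalhousie team's actual method.
import math

def extract_features(samples):
    """Compute two simple acoustic features from a mono waveform."""
    # Zero-crossing rate: a rough proxy for pitch/harshness.
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    zcr = crossings / max(len(samples) - 1, 1)
    # Root-mean-square energy: a rough proxy for loudness.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return zcr, rms

def classify_call(samples, zcr_cutoff=0.3, rms_cutoff=0.5):
    """Label a call 'distress' if it is both high-pitched and loud."""
    zcr, rms = extract_features(samples)
    return "distress" if zcr > zcr_cutoff and rms > rms_cutoff else "content"

# A loud, rapidly alternating signal vs. a quiet, slowly varying one.
harsh = [(-1.0) ** i for i in range(1000)]            # ZCR ~1.0, RMS = 1.0
soft = [0.1 * math.sin(i / 50) for i in range(1000)]  # low ZCR, low RMS

print(classify_call(harsh))  # distress
print(classify_call(soft))   # content
```

A real system would learn such boundaries from labeled recordings rather than hand-set thresholds, but the feature-then-classify structure is the same.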

Non-Verbal Chicken Communication

In addition to vocalizations, our research delves into non-verbal cues to gauge emotions in chickens. In a preprint (not-yet-peer-reviewed) paper, we examine whether chickens’ eye blinks and facial temperatures might be reliable indicators of their emotional states.

By using non-invasive methods like video and thermal imaging, we’ve observed changes in temperature around the eye and head regions, as well as variations in blinking behavior, which appear to be responses to stress. These preliminary findings are opening new avenues in understanding how chickens express their feelings, both behaviorally and physiologically, providing us with additional tools to assess their well-being.

Happier Fowl

This project isn’t just about academic curiosity; it has real-world implications. In the agricultural sector, understanding chicken vocalizations can lead to improved farming practices. Farmers can use this knowledge to create better living conditions, leading to healthier and happier chickens. This, in turn, can impact the quality of produce, animal health, and overall farm efficiency.

The insights gained from this research can also be applied to other areas of animal husbandry, potentially leading to breakthroughs in the way we interact with and care for a variety of farm animals.

But our research goes beyond just farming practices. It has the potential to influence policies on animal welfare and ethical treatment. As we grow to understand these animals better, we’re compelled to advocate for their well-being. This research is reshaping how we view our relationship with animals, emphasizing empathy and understanding.

Understanding animal communication and behavior can impact animal welfare policies. Image Credit: Unsplash/Zoe Schaeffer

Ethical AI

The ethical use of AI in this context sets a precedent for future technological applications in animal science. We’re demonstrating that technology can and should be used for the betterment of all living beings. It’s a responsibility that we take seriously, ensuring that our advancements in AI are aligned with ethical principles and the welfare of the subjects of our study.

The implications of our research extend to education and conservation efforts as well. By understanding the communication methods of chickens, we gain insights into avian communication in general, providing a unique perspective on the complexity of animal communication systems. This knowledge can be vital for conservationists working to protect bird species and their habitats.

As we continue to make strides in this field, we are opening doors to a new era in animal-human interaction. Our journey into decoding chicken language is more than just an academic pursuit: It’s a step towards a more empathetic and responsible world.

By leveraging AI, we’re not only unlocking the secrets of avian communication but also setting new standards for animal welfare and ethical technological use. It’s an exciting time, as we stand on the cusp of a new understanding between humans and the animal world, all starting with the chicken.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ben Moreland / Unsplash 

Category: Transhumanism

A One-and-Done Injection to Slow Aging? New Study in Mice Opens the Possibility

February 5, 2024 - 22:31

A preventative anti-aging therapy seems like wishful thinking.

Yet a new study led by Dr. Corina Amor Vegas at Cold Spring Harbor Laboratory describes a treatment that brings the dream to life—at least for mice. Given a single injection in young adulthood, they aged more slowly compared to their peers.

By the equivalent of roughly 65 years of age in humans, the mice were slimmer, could better regulate blood sugar and insulin levels, and had lower inflammation and a more youthful metabolic profile. They even kept up their love for running, whereas untreated seniors turned into couch potatoes.

The shot is made up of CAR (chimeric antigen receptor) T cells. These cells are genetically engineered from the body’s T cells—a type of immune cell adept at hunting down particular targets in the body.

CAR T cells first shot to fame as a revolutionary therapy for previously untreatable blood cancers. They’re now close to tackling other medical problems, such as autoimmune disorders, asthma, liver and kidney diseases, and even HIV.

The new study took a page out of CAR T’s cancer-fighting playbook. But instead of targeting cancer cells, the researchers engineered the cells to hunt down and destroy senescent cells, a type of cell linked to age-related health problems. Often dubbed “zombie cells,” they accumulate with age and pump out a toxic chemical brew that damages surrounding tissues. Zombie cells have been in the crosshairs of longevity researchers and investors alike. Drugs that destroy these cells, called senolytics, are now a multi-billion-dollar industry.

The new treatment, called senolytic CAR T, also turned back the clock when given to elderly mice. As in humans, the risk of diabetes increases with age in mice. By clearing out zombie cells in multiple organs, the mice could handle sugar rushes without a hitch. Their metabolism improved, and they began jumping around and running like much younger mice.

“If we give it to aged mice, they rejuvenate. If we give it to young mice, they age slower. No other therapy right now can do this,” said Amor Vegas in a press release.

The Walking Dead

Zombie cells aren’t always evil.

They start out as regular cells. As damage to their DNA and internal structures accumulates over time, the body “locks” the cells into a special state called senescence. When young, this process helps prevent cells from turning cancerous by limiting their ability to divide. Although still living, the cells can no longer perform their usual jobs. Instead, they release a complex cocktail of chemicals that alerts the body’s immune system—including T cells—to clear them out. Like spring cleaning, this helps keep the body functioning normally.

With age, however, zombie cells linger. They amp up inflammation, leading to age-related diseases such as cancer, tissue scarring, and blood vessel and heart conditions. Senolytics—drugs that destroy these cells—improve these conditions and increase life span in mice.

But like a pill of Advil, senolytics don’t last long inside the body. To keep zombie cells at bay, repeated doses are likely necessary.

A Perfect Match

Here’s where CAR T cells come in. Back in 2020, Amor Vegas and colleagues designed a “living” senolytic T cell that tracks down and kills zombie cells.

All cells are dotted with protein “beacons” that stick out from their surfaces. Different cell types have unique assortments of these proteins. The team found a protein “beacon” on zombie cells called uPAR. The protein normally occurs at low levels in most organs, but it ramps up in zombie cells, making it a perfect target for senolytic CAR T cells.

In a test, the therapy eliminated senescent cells in mouse models with liver and lung cancers. But surprisingly, the team also found that young mice receiving the treatment had better liver health and metabolism—both of which contribute to age-related diseases.

Can a similar treatment also extend health during aging?

A Living Anti-Aging Drug

The team first injected senolytic CAR T cells into elderly mice aged the equivalent of roughly 65 human years. Within 20 days, they had lower numbers of zombie cells throughout their bodies, particularly in their livers, fatty tissues, and pancreases. Inflammation levels caused by zombie cells went down, and the mice’s immune profiles reversed to a more youthful state.

In both mice and humans, metabolism tends to go haywire with age. Our ability to handle sugars and insulin decreases, which can lead to diabetes.

With senolytic CAR T therapy, the elderly mice could regulate their blood sugar levels far better than non-treated peers. They also had lower baseline insulin levels after fasting, which rapidly increased when given a sugary treat—a sign of a healthy metabolism.

A potentially dangerous side effect of CAR T is an overzealous immune response. Although the team saw signs of the side effect in young animals at high doses, lowering the amount of the therapy was safe and effective in elderly mice.

Young and Beautiful

Chemical senolytics only last a few hours inside the body. Practically, this means they may need to be consistently taken to keep zombie cells at bay.

CAR T cells, on the other hand, are far longer-lived and can persist in the body for over 10 years after an initial infusion. They also “train” the immune system to learn about a new threat—in this case, senescent cells.

“T cells have the ability to develop memory and persist in your body for really long periods, which is very different from a chemical drug,” said Amor Vegas. “With CAR T cells, you have the potential of getting this one treatment, and then that’s it.”

To test how long senolytic CAR T cells can persist in the body, the team infused them into young adult mice and monitored their health as they aged. The engineered cells were dormant until senescent cells began to build up, then they reactivated and readily wiped out the zombie cells.

With just a single shot, the mice aged gracefully. They had lower blood sugar levels, better insulin responses, and were more physically active well into old age.

But mice aren’t people. Their life spans are far shorter than ours. The effects of senolytic CAR T cells may not last as long in our bodies, potentially requiring multiple doses. The treatment can also be dangerous, sometimes triggering a violent immune response that damages organs. Then there’s the cost factor. CAR T therapies are out of reach for most people—a single dose is priced at hundreds of thousands of dollars for cancer treatments.

Despite these problems, the team is cautiously moving forward.

“With CAR T cells, you have the potential of getting this one treatment, and then that’s it,” said Amor Vegas. For chronic age-related diseases, that’s a potential life-changer. “Think about patients who need treatment multiple times per day versus you get an infusion, and then you’re good to go for multiple years.”

Image Credit: Senescent cells (blue) in healthy pancreatic tissue samples from an old mouse treated with CAR T cells as a pup / Cold Spring Harbor Laboratory

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through February 3)

February 3, 2024 - 16:00

I Tested a Next-Gen AI Assistant. It Will Blow You Away
Will Knight | Wired
“When the fruits of the recent generative AI boom get properly integrated into…legacy assistant bots [like Siri and Alexa], they will surely get much more interesting. ‘A year from now, I would expect the experience of using a computer to look very different,’ says Shah, who says he built vimGPT in only a few days. ‘Most apps will require less clicking and more chatting, with agents becoming an integral part of browsing the web.'”


CRISPR Gene Therapy Seems to Cure Dangerous Inflammatory Condition
Clare Wilson | New Scientist
“Ten people who had the one-off gene treatment that is given directly into the body saw their number of ‘swelling attacks’ fall by 95 percent in the first six months as the therapy took effect. Since then, all but one have had no further episodes for at least a further year, while one person who had the lowest dose of the treatment had one mild attack. ‘This is potentially a cure,’ says Padmalal Gurugama at Cambridge University Hospitals in the UK, who worked on the new approach.”


Apple Vision Pro Review: Magic, Until It’s Not
Nilay Patel | The Verge
“The Vision Pro is an astounding product. It’s the sort of first-generation device only Apple can really make, from the incredible display and passthrough engineering, to the use of the whole ecosystem to make it so seamlessly useful, to even getting everyone to pretty much ignore the whole external battery situation. …But the shocking thing is that Apple may have inadvertently revealed that some of these core ideas are actually dead ends—that they can’t ever be executed well enough to become mainstream.”


Allen Institute for AI Releases ‘Truly Open Source’ LLM to Drive ‘Critical Shift’ in AI Development
Sharon Goldman | VentureBeat
“While other models have included the model code and model weights, OLMo also provides the training code, training data and associated toolkits, as well as evaluation toolkits. In addition, OLMo was released under an open source initiative (OSI) approved license, with AI2 saying that ‘all code, weights, and intermediate checkpoints are released under the Apache 2.0 License.’ The news comes at a moment when open source/open science AI, which has been playing catch-up to closed, proprietary LLMs like OpenAI’s GPT-4 and Anthropic’s Claude, is making significant headway.”


This Robot Can Tidy a Room Without Any Help
Rhiannon Williams | MIT Technology Review
“While robots may easily complete tasks like [picking up and moving things] in a laboratory, getting them to work in an unfamiliar environment where there’s little data available is a real challenge. Now, a new system called OK-Robot could train robots to pick up and move objects in settings they haven’t encountered before. It’s an approach that might be able to plug the gap between rapidly improving AI models and actual robot capabilities, as it doesn’t require any additional costly, complex training.”


People Are Worried That AI Will Take Everyone’s Jobs. We’ve Been Here Before.
David Rotman | MIT Technology Review
“[Karl T. Compton’s 1938] essay concisely framed the debate over jobs and technical progress in a way that remains relevant, especially given today’s fears over the impact of artificial intelligence. …While today’s technologies certainly look very different from those of the 1930s, Compton’s article is a worthwhile reminder that worries over the future of jobs are not new and are best addressed by applying an understanding of economics, rather than conjuring up genies and monsters.”


Experimental Drug Cuts Off Pain at the Source, Company Says
Gina Kolata | The New York Times
“Vertex Pharmaceuticals of Boston announced [this week] that it had developed an experimental drug that relieves moderate to severe pain, blocking pain signals before they can get to the brain. It works only on peripheral nerves—those outside the brain and the spinal cord—making it unlike opioids. Vertex says its new drug is expected to avoid opioids’ potential to lead to addiction.”


Starlab—With Half the Volume of the ISS—Will Fit Inside Starship’s Payload Bay
Eric Berger | Ars Technica
“‘We looked at multiple launches to get Starlab into orbit, and eventually gravitated toward single launch options,’ [Voyager Space CTO Marshall Smith] said. ‘It saves a lot of the cost of development. It saves a lot of the cost of integration. We can get it all built and checked out on the ground, and tested and launch it with payloads and other systems. One of the many lessons we learned from the International Space Station is that building and integrating in space is very expensive.’ With a single launch on a Starship, the Starlab module should be ready for human habitation almost immediately, Smith said.”


9 Retrofuturistic Predictions That Came True
Maxwell Zeff | Gizmodo
“Commentators and reporters annually try to predict where technology will go, but many fail to get it right year after year. Who gets it right? More often than not, the world resembles the pop culture of the past’s vision for the future. Looking to retrofuturism, an old version of the future, can often predict where our advanced society will go.”


Can This AI-Powered Search Engine Replace Google? It Has for Me.
Kevin Roose | The New York Times
“Intrigued by the hype, I recently spent several weeks using Perplexity as my default search engine on both desktop and mobile. …Hundreds of searches later, I can report that even though Perplexity isn’t perfect, it’s very good. And while I’m not ready to break up with Google entirely, I’m now more convinced that AI-powered search engines like Perplexity could loosen Google’s grip on the search market, or at least force it to play catch-up.”

Image Credit: Dulcey Lima / Unsplash

Category: Transhumanism

These Technologies Could Axe 85% of CO2 Emissions From Heavy Industry

February 2, 2024 - 16:05

Heavy industry is one of the most stubbornly difficult areas of the economy to decarbonize. But new research suggests emissions could be reduced by up to 85 percent globally using a mixture of tried-and-tested and upcoming technologies.

While much of the climate debate focuses on areas like electricity, vehicle emissions, and aviation, a huge fraction of carbon emissions comes from hidden industrial processes. In 2022, the sector—which includes things like chemicals, iron and steel, and cement—accounted for a quarter of the world’s emissions, according to the International Energy Agency.

While they are often lumped together, these industries are very different, and the sources of their emissions can be highly varied. That means there’s no silver bullet and explains why the sector has proven to be one of the most challenging to decarbonize.

This prompted researchers from the UK to carry out a comprehensive survey of technologies that could help get the sector’s emissions under control. They found that solutions like carbon capture and storage, switching to hydrogen or biomass fuels, or electrification of key industrial processes could cut out the bulk of the heavy industry carbon footprint.

“Our findings represent a major step forward in helping to design industrial decarbonization strategies and that is a really encouraging prospect when it comes to the future health of the planet,” Dr. Ahmed Gailani, from Leeds University, said in a press release.

The researchers analyzed sectors including iron and steel, chemicals, cement and lime, food and drink, pulp and paper, glass, aluminum, refining, and ceramics. They carried out an extensive survey of all the emissions-reducing technologies that had been proposed for each industry, both those that are well-established and emerging ones.

Across all sectors, they identified four key approaches that could help slash greenhouse gases—switching to low-carbon energy supplies like green hydrogen, renewable electricity, or biomass; using carbon capture and storage to mitigate emissions; modifying or replacing emissions-heavy industrial processes; and using less energy and raw materials to produce a product.

Electrification will likely be an important approach across a range of sectors, the authors found. In industries requiring moderate amounts of heat, natural gas boilers and ovens could be replaced with electric ones. Novel technologies like electric arc furnaces and electric steam crackers could help decarbonize the steel and chemicals industries, respectively, though these technologies are still immature.

Green hydrogen could also play a broad role, both as a fuel for heating and an ingredient in various industrial processes that currently rely on hydrogen derived from fossil fuels. Biomass similarly can be used for heating but could also provide more renewable feedstocks for plastic production.

Some industries, such as cement and chemicals, are particularly hard to tackle because carbon dioxide is produced directly by industrial processes rather than as a byproduct of energy needs. For these sectors, carbon capture and storage will likely be particularly important, say the authors.

In addition, they highlight a range of industry-specific alternative production routes that could make a major dent in emissions. Altogether, they estimate these technologies could slash the average emissions of heavy industry by up to 85 percent compared to the baseline.

It’s important to note that the research, which was reported in Joule, only analyzes the technical feasibility of these approaches. The team did not look into the economics or whether the necessary infrastructure was in place, which could have a big impact on how much of a difference they could really make.

“There are of course many other barriers to overcome,” said Gailani. “For example, if carbon capture and storage technologies are needed but the means to transport CO2 are not yet in place, this lack of infrastructure will delay the emissions reduction process. There is still a great amount of work to be done.”

Nonetheless, the research is the first comprehensive survey of what’s possible when it comes to decarbonizing industry. While bringing these ideas to fruition may take a lot of work, the study shows getting emissions from these sectors under control is entirely possible.

Image Credit: Marek Piwnicki / Unsplash

Category: Transhumanism

An AI Just Learned Language Through the Eyes and Ears of a Toddler

February 1, 2024 - 21:30

Sam was six months old when he first strapped a lightweight camera onto his forehead.

For the next year and a half, the camera captured snippets of his life. He crawled around the family’s pets, watched his parents cook, and cried on the front porch with grandma. All the while, the camera recorded everything he heard.

What sounds like a cute toddler home video is actually a daring concept: Can AI learn language like a child? The results could also reveal how children rapidly acquire language and concepts at an early age.

A new study in Science describes how researchers used Sam’s recordings to train an AI to understand language. With just a tiny portion of one child’s life experience over a year, the AI was able to grasp basic concepts—for example, a ball, a butterfly, or a bucket.

The AI, called Child’s View for Contrastive Learning (CVCL), roughly mimics how we learn as toddlers by matching sight to audio. It’s a very different approach than that taken by large language models like the ones behind ChatGPT or Bard. These models’ uncanny ability to craft essays, poetry, or even podcast scripts has thrilled the world. But they need to digest trillions of words from a wide variety of news articles, screenplays, and books to develop these skills.

Kids, by contrast, learn with far less input and rapidly generalize their learnings as they grow. Scientists have long wondered if AI can capture these abilities with everyday experiences alone.

“We show, for the first time, that a neural network trained on this developmentally realistic input from a single child can learn to link words to their visual counterparts,” study author Dr. Wai Keen Vong at NYU’s Center for Data Science said in a press release about the research.

Child’s Play

Children easily soak up words and their meanings from everyday experience.

At just six months old, they begin to connect words to what they’re seeing—for example, a round bouncy thing is a “ball.” By two years of age, they know roughly 300 words and their concepts.

Scientists have long debated how this happens. One theory says kids learn to match what they’re seeing to what they’re hearing. Another suggests language learning requires a broader experience of the world, such as social interaction and the ability to reason.

It’s hard to tease these ideas apart with traditional cognitive tests in toddlers. But we may get an answer by training an AI through the eyes and ears of a child.


The new study tapped a rich video resource called SAYCam, which includes data collected from three kids between 6 and 32 months old using GoPro-like cameras strapped to their foreheads.

Twice every week, the cameras recorded around an hour of footage and audio as the children nursed, crawled, and played. All audible dialogue was transcribed into “utterances”—words or sentences spoken before the speaker or conversation changes. The result is a wealth of multimedia data from the perspective of babies and toddlers.

For the new system, the team designed two neural networks with a “judge” to coordinate them. One translated first-person visuals into the whos and whats of a scene—is it a mom cooking? The other deciphered words and meanings from the audio recordings.

The two systems were then correlated in time so the AI learned to associate correct visuals with words. For example, the AI learned to match an image of a baby to the words “Look, there’s a baby” or an image of a yoga ball to “Wow, that is a big ball.” With training, it gradually learned to separate the concept of a yoga ball from a baby.

“This provides the model a clue as to which words should be associated with which objects,” said Vong.

The team then trained the AI on videos from roughly a year and a half of Sam’s life. Together, it amounted to over 600,000 video frames, paired with 37,500 transcribed utterances. Although the numbers sound large, they’re roughly just one percent of Sam’s daily waking life and peanuts compared to the amount of data used to train large language models.
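The matching objective this kind of training optimizes can be sketched with a toy contrastive loss over hand-made embeddings. Everything below—the vectors, the temperature value—is illustrative, not taken from the paper:

```python
# A minimal sketch of an InfoNCE-style contrastive objective for
# image-utterance matching. The 3-d embeddings here are hand-made
# stand-ins for neural network outputs.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def contrastive_loss(image_embs, text_embs, temperature=0.1):
    """Average cross-entropy of matching each image to its paired
    utterance against all utterances in the batch."""
    loss = 0.0
    for i, img in enumerate(image_embs):
        logits = [cosine(img, txt) / temperature for txt in text_embs]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)  # true pair sits at index i
    return loss / len(image_embs)

# Aligned pairs (image i matches utterance i) score a low loss...
images = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
texts_aligned = [[0.9, 0.1, 0.0], [0.1, 0.9, 0.0], [0.0, 0.1, 0.9]]
# ...while shuffled pairs (as in the study's control) score a high one.
texts_shuffled = [texts_aligned[1], texts_aligned[2], texts_aligned[0]]

print(contrastive_loss(images, texts_aligned) < contrastive_loss(images, texts_shuffled))  # True
```

The shuffled case mirrors the researchers’ control: when frames and utterances are mismatched, the objective can no longer be driven down, which is why the model broke when trained on shuffled data.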

Baby AI on the Rise

To test the system, the team adapted a common cognitive test used to measure children’s language abilities. They showed the AI four new images—a cat, a crib, a ball, and a lawn—and asked which one was the ball.

Overall, the AI picked the correct image around 62 percent of the time. The performance nearly matched a state-of-the-art algorithm trained on 400 million image and text pairs from the web—orders of magnitude more data than that used to train the AI in the study. They found that linking video images with audio was crucial. When the team shuffled video frames and their associated utterances, the model completely broke down.
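The four-alternative test above can be sketched as a nearest-embedding lookup; the vectors below are made up for illustration, not taken from the trained model:

```python
# Illustrative sketch of the four-alternative test: given a word's
# embedding, pick whichever candidate image embedding is most similar.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def pick_image(word_emb, candidates):
    """Return the label of the candidate image closest to the word."""
    return max(candidates, key=lambda label: cosine(word_emb, candidates[label]))

candidates = {
    "cat":  [0.9, 0.1, 0.0],
    "crib": [0.1, 0.8, 0.2],
    "ball": [0.0, 0.2, 0.9],
    "lawn": [0.3, 0.3, 0.3],
}
ball_word = [0.05, 0.15, 0.95]  # hypothetical embedding for the word "ball"

print(pick_image(ball_word, candidates))  # ball
```

Chance performance on this test is 25 percent, which is the baseline against which the AI’s 62 percent should be read.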

The AI could also “think” outside the box and generalize to new situations.

In another test, it was trained on Sam’s perspective of a picture book as his parent said, “It’s a duck and a butterfly.” Later, he held up a toy butterfly when asked, “Can you do the butterfly?” When challenged with multicolored butterfly images—ones the AI had never seen before—it detected three out of four examples for “butterfly” with above 80 percent accuracy.

Not all word concepts scored the same. For instance, “spoon” was a struggle. But it’s worth pointing out that, like a tough reCAPTCHA, the training images were hard to decipher even for a human.

Growing Pains

The AI builds on recent advances in multimodal machine learning, which combines text, images, audio, or video to train a machine brain.

With input from just a single child’s experience, the algorithm was able to capture how words relate to each other and link words to images and concepts. It suggests that, for toddlers, hearing words and matching them to what they’re seeing helps build their vocabulary.

That’s not to say other brain processes, such as social cues and reasoning, don’t come into play. Adding these components to the algorithm could potentially improve it, the authors wrote.

The team plans to continue the experiment. For now, the “baby” AI only learns from still image frames and has a vocabulary mostly comprised of nouns. Integrating video segments into the training could help the AI learn verbs because video includes movement.

Adding intonation to speech data could also help. Children learn early on that a mom’s “hmm” can have vastly different meanings depending on the tone.

But overall, combining AI and life experiences is a powerful new method to study both machine and human brains. It could help us develop new AI models that learn like children, and potentially reshape our understanding of how our brains learn language and concepts.

Image Credit: Wai Keen Vong

Category: Transhumanism

The First 3D Printer to Use Molten Metal in Space Is Headed to the ISS This Week

January 31, 2024 - 23:11

The Apollo 13 moon mission didn’t go as planned. After an explosion blew off part of the spacecraft, the astronauts spent a harrowing few days trying to get home. At one point, to keep the air breathable, the crew had to cobble together a converter for ill-fitting CO2 scrubbers with duct tape, space suit parts, and pages from a mission manual.

They didn’t make it to the moon, but Apollo 13 was a master class in hacking. It was also a grim reminder of just how alone astronauts are from the moment their spacecraft lifts off. There are no hardware stores in space (yet). So what fancy new tools will the next generation of space hackers use? The first 3D printer to make plastic parts arrived at the ISS a decade ago. This week, astronauts will take delivery of the first metal 3D printer. The machine should arrive at the ISS Thursday as part of the Cygnus NG-20 resupply mission.

The first 3D printer to print metal in space, pictured here, is headed to the ISS. Image Credit: ESA

Built by an Airbus-led team, the printer is about the size of a washing machine—small for metal 3D printers but big for space exploration—and uses high-powered lasers to liquefy metal alloys at temperatures of over 1,200 degrees Celsius (2,192 degrees Fahrenheit). Molten metal is deposited in layers to steadily build small (but hopefully useful) objects, like spare parts or tools.

Astronauts will install the 3D printer in the Columbus Laboratory on the ISS, where the team will conduct four test prints. They then plan to bring these objects home and compare their strength and integrity to prints completed under Earth gravity. They also hope the experiment demonstrates the process—which involves much higher temperatures than prior 3D printers and produces harmful fumes—is safe.

“The metal 3D printer will bring new on-orbit manufacturing capabilities, including the possibility to produce load-bearing structural parts that are more resilient than a plastic equivalent,” Gwenaëlle Aridon, a lead engineer at Airbus said in a press release. “Astronauts will be able to directly manufacture tools such as wrenches or mounting interfaces that could connect several parts together. The flexibility and rapid availability of 3D printing will greatly improve astronauts’ autonomy.”

One of four test prints planned for the ISS mission. Image Credit: Airbus Space and Defence SAS

Taking nearly two days per print job, the machine is hardly a speed demon, and the printed objects will be rough around the edges. Following the first demonstration of partial-gravity 3D printing on the ISS, the development of technologies suitable for orbital manufacturing has been slow. But as the ISS nears the end of its life and private space station and other infrastructure projects ramp up, the technology could find more uses.

The need to manufacture items on-demand will only grow the further we travel from home and the longer we stay there. The ISS is relatively nearby—a mere 200 miles overhead—but astronauts exploring and building a more permanent presence on the moon or Mars will need to repair and replace anything that breaks on their mission.

Ambitiously, and even further out, metal 3D printing could contribute to ESA’s vision of a “circular space economy,” in which material from old satellites, spent rocket stages, and other infrastructure is recycled into new structures, tools, and parts as needed.

Duct tape will no doubt always have a place in every space hacker’s box of tools—but a few 3D printers to whip up plastic and metal parts on the fly certainly won’t hurt the cause.

Image Credit: NASA

Category: Transhumanism

How Much Life Has Ever Existed on Earth, and How Much Ever Will?

30 January, 2024 - 21:36

All organisms are made of living cells. While it is difficult to pinpoint exactly when the first cells came to exist, geologists’ best estimates suggest at least as early as 3.8 billion years ago. But how much life has inhabited this planet since the first cell on Earth? And how much life will ever exist on Earth?

In our new study, published in Current Biology, my colleagues from the Weizmann Institute of Science and Smith College and I took aim at these big questions.

Carbon on Earth

Every year, about 200 billion tons of carbon is taken up through what is known as primary production. During primary production, inorganic carbon—such as carbon dioxide in the atmosphere and bicarbonate in the ocean—is used for energy and to build the organic molecules life needs.

Today, the most notable contributor to this effort is oxygenic photosynthesis, where sunlight and water are key ingredients. However, deciphering past rates of primary production has been a challenging task. In lieu of a time machine, scientists like myself rely on clues left in ancient sedimentary rocks to reconstruct past environments.

In the case of primary production, the isotopic composition of oxygen in the form of sulfate in ancient salt deposits allows for such estimates to be made.

In our study, we compiled all previous estimates of ancient primary production derived through the method above, as well as many others. The outcome of this productivity census was that we were able to estimate that 100 quintillion (or 100 billion billion) tons of carbon have been through primary production since the origin of life.

Big numbers like this are difficult to picture; 100 quintillion tons of carbon is about 100 times the amount of carbon contained within the Earth, a pretty impressive feat for Earth’s primary producers.
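A rough sanity check on that cumulative figure, assuming (purely for illustration) that today's rate of primary production held constant since life began; the study itself relies on time-varying, proxy-based rates:

```python
# Back-of-envelope estimate of total carbon ever fixed by primary
# production, under the simplifying assumption of a constant modern rate.

annual_production_t = 200e9  # tons of carbon fixed per year today
biosphere_age_yr = 3.8e9     # approximate age of life on Earth, in years

cumulative_t = annual_production_t * biosphere_age_yr
print(f"{cumulative_t:.1e} tons of carbon")  # 7.6e+20
```

The constant-rate figure overshoots the study's roughly 10²⁰-ton estimate by a factor of a few, which is the expected direction: primary production ran slower than today for most of Earth's history.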

Primary Production

Today, primary production is mainly achieved by plants on land and marine micro-organisms such as algae and cyanobacteria. In the past, the proportion of these major contributors was very different; in the case of Earth’s earliest history, primary production was mainly conducted by an entirely different group of organisms that doesn’t rely on oxygenic photosynthesis to stay alive.

A combination of different techniques has been able to give a sense of when different primary producers were most active in Earth’s past. Examples of such techniques include identifying the oldest forests or using molecular fossils called biomarkers.

In our study, we used this information to explore what organisms have contributed the most to Earth’s historical primary production. We found that despite being late on the scene, land plants have likely contributed the most. However, it is also very plausible that cyanobacteria contributed the most.

Filamentous cyanobacteria from a tidal pond at Little Sippewissett salt marsh, Falmouth, Mass. Image Credit: Argonne National Laboratory, CC BY-NC-SA

Total Life

By determining how much primary production has ever occurred, and by identifying what organisms have been responsible for it, we were also able to estimate how much life has ever been on Earth.

Today, one may be able to approximate how many humans exist based on how much food is consumed. Similarly, we were able to calibrate a ratio of primary production to how many cells exist in the modern environment.

Despite the large variability in the number of cells per organism and the sizes of different cells, such complications become secondary since single-celled microbes dominate global cell populations. In the end, we were able to estimate that about 10³⁰ (a nonillion) cells exist today, and that between 10³⁹ (a duodecillion) and 10⁴⁰ cells have ever existed on Earth.
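That estimate can be loosely cross-checked by dividing the cumulative carbon fixed by primary production by the carbon content of a small microbial cell. The ~10 femtogram figure and the neglect of carbon recycling here are illustrative simplifications, not the paper's actual calibration:

```python
import math

# Crude cells-ever estimate: total carbon fixed / carbon per microbial cell.
cumulative_carbon_g = 1e20 * 1e6  # 100 quintillion tons of carbon, in grams
carbon_per_cell_g = 1e-14         # ~10 femtograms of carbon per cell

cells_ever = cumulative_carbon_g / carbon_per_cell_g
print(f"~10^{math.log10(cells_ever):.0f} cells")  # ~10^40
```

Even this one-line estimate lands inside the study's 10³⁹ to 10⁴⁰ range, because single-celled microbes dominate the count.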

How Much Life Will Earth Ever Have?

Save for the ability to move Earth into the orbit of a younger star, the lifetime of Earth’s biosphere is limited. This morbid fact is a consequence of our star’s life cycle. Since its birth four and a half billion years ago, the sun has slowly been getting brighter as hydrogen is converted to helium in its core.

Far in the future, about two billion years from now, all of the biogeochemical fail-safes that keep Earth habitable will be pushed past their limits. First, land plants will die off, and then eventually the oceans will boil, and the Earth will return to a largely lifeless rocky planet as it was in its infancy.

But until then, how much life will Earth house over its entire habitable lifetime? Projecting our current levels of primary productivity forward, we estimated that about 10⁴⁰ cells will ever occupy the Earth.

A planetary system 100 light-years away in the constellation Dorado is home to the first Earth-size habitable-zone planet, discovered by NASA’s Transiting Exoplanet Survey Satellite. Image Credit: NASA Goddard Space Flight Center

Earth as an Exoplanet

Only a few decades ago, exoplanets (planets orbiting other stars) were just a hypothesis. Now we are able not only to detect them, but to describe many aspects of thousands of far-off worlds around distant stars.

But how does Earth compare to these bodies? In our new study, we have taken a bird’s-eye view of life on Earth and put forward Earth as a benchmark for comparing other planets.

What I find truly interesting, however, is what could have happened in Earth’s past to produce a radically different trajectory and therefore a radically different amount of life that has been able to call Earth home. For example, what if oxygenic photosynthesis never took hold, or what if endosymbiosis never happened?

Answers to such questions are what will drive my laboratory at Carleton University over the coming years.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Mihály Köles / Unsplash 


AI Can Design Totally New Proteins From Scratch—It’s Time to Talk Biosecurity

29 January, 2024 - 23:08

Two decades ago, engineering designer proteins was a dream.

Now, thanks to AI, custom proteins are a dime a dozen. Made-to-order proteins often have specific shapes or components that give them abilities new to nature. From longer-lasting drugs and protein-based vaccines, to greener biofuels and plastic-eating proteins, the field is rapidly becoming a transformative technology.

Custom protein design depends on deep learning techniques. With large language models—the AI behind OpenAI’s blockbuster ChatGPT—dreaming up millions of structures beyond human imagination, the library of bioactive designer proteins is set to rapidly expand.

“It’s hugely empowering,” Dr. Neil King at the University of Washington recently told Nature. “Things that were impossible a year and a half ago—now you just do it.”

Yet with great power comes great responsibility. As newly designed proteins increasingly gain traction for use in medicine and bioengineering, scientists are now wondering: What happens if these technologies are used for nefarious purposes?

A recent essay in Science highlights the need for biosecurity for designer proteins. Similar to ongoing conversations about AI safety, the authors say it’s time to consider biosecurity risks and policies so custom proteins don’t go rogue.

The essay is penned by two experts in the field. One, Dr. David Baker, the director of the Institute for Protein Design at the University of Washington, led the development of RoseTTAFold—an algorithm that cracked the half-century-old problem of decoding a protein’s structure from its amino acid sequence alone. The other, Dr. George Church at Harvard Medical School, is a pioneer in genetic engineering and synthetic biology.

They suggest synthetic proteins need barcodes embedded into each new protein’s genetic sequence. If any of the designer proteins becomes a threat—say, potentially triggering a dangerous outbreak—its barcode would make it easy to trace back to its origin.

The system basically provides “an audit trail,” the duo write.

Worlds Collide

Designer proteins are inextricably tied to AI. So are potential biosecurity policies.

Over a decade ago, Baker’s lab used software to design and build a protein dubbed Top7. Proteins are made of building blocks called amino acids, each of which is encoded inside our DNA. Like beads on a string, amino acids are then twirled and wrinkled into specific 3D shapes, which often further mesh into sophisticated architectures that support the protein’s function.

Top7 couldn’t “talk” to natural cell components—it didn’t have any biological effects. But even then, the team concluded that designing new proteins makes it possible to explore “the large regions of the protein universe not yet observed in nature.”

Enter AI. Multiple strategies have recently taken off, designing new proteins at speeds far beyond traditional lab work.

One is structure-based AI similar to image-generating tools like DALL-E. These AI systems are trained on noisy data and learn to remove the noise to find realistic protein structures. Called diffusion models, they gradually learn protein structures that are compatible with biology.

Another strategy relies on large language models. Like ChatGPT, the algorithms rapidly find connections between protein “words” and distill these connections into a sort of biological grammar. The protein strands these models generate are likely to fold into structures the body can decipher. One example is ProtGPT2, which can engineer active proteins with shapes that could lead to new properties.
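To make the “biological grammar” idea concrete, here is a toy sketch. It is nothing like ProtGPT2’s transformer architecture, and the training sequences are invented: a bigram model simply records which amino acid tends to follow which, then samples new sequences from those statistics.

```python
import random
from collections import defaultdict

# Invented example sequences standing in for real protein training data.
training_seqs = ["MKTAYIAKQR", "MKVLAAGITK", "MKTIIALSYI"]

# Record which amino acid follows which in the training data.
transitions = defaultdict(list)
for seq in training_seqs:
    for a, b in zip(seq, seq[1:]):
        transitions[a].append(b)

def sample(length=10, start="M", seed=0):
    """Sample a new sequence by walking the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        successors = transitions.get(out[-1])
        if not successors:  # no observed successor: stop early
            break
        out.append(rng.choice(successors))
    return "".join(out)

print(sample())  # a new, plausible-looking (but biologically untested) string
```

A real protein language model learns vastly longer-range dependencies than adjacent pairs, which is what makes its outputs likely to fold into working structures.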

Digital to Physical

These AI protein-design programs are raising alarm bells. Proteins are the building blocks of life—changes could dramatically alter how cells respond to drugs, viruses, or other pathogens.

Last year, governments around the world announced plans to oversee AI safety. The technology wasn’t positioned as a threat. Instead, the legislators cautiously fleshed out policies that ensure research follows privacy laws and bolsters the economy, public health, and national defense. Leading the charge, the European Union agreed on the AI Act to limit the technology in certain domains.

Synthetic proteins weren’t directly called out in the regulations. That’s great news for making designer proteins, which could be kneecapped by overly restrictive regulation, write Baker and Church. However, new AI legislation is in the works, with the United Nations’ advisory body on AI set to share guidelines on international regulation in the middle of this year.

Because the AI systems used to make designer proteins are highly specialized, they may still fly under regulatory radars—if the field unites in a global effort to self-regulate.

At the 2023 AI Safety Summit, which did discuss AI-enabled protein design, experts agreed documenting each new protein’s underlying DNA is key. Like their natural counterparts, designer proteins are also built from genetic code. Logging all synthetic DNA sequences in a database could make it easier to spot red flags for potentially harmful designs—for example, if a new protein has structures similar to known pathogenic ones.
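A minimal sketch of that red-flag idea, assuming a hypothetical database of known-dangerous sequences and a simple shared-k-mer score; real screening would use far more sensitive homology search (BLAST-style alignment):

```python
def kmers(seq, k=8):
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def flag_similarity(new_seq, database, k=8, threshold=0.5):
    """Return database entries sharing at least `threshold` of new_seq's k-mers."""
    new_kmers = kmers(new_seq, k)
    hits = {}
    for name, known in database.items():
        score = len(new_kmers & kmers(known, k)) / max(len(new_kmers), 1)
        if score >= threshold:
            hits[name] = round(score, 2)
    return hits

known_threats = {"toxin_x": "ATGGCGTTACCGGATTACAAGGATGACGAC"}  # placeholder
query = "ATGGCGTTACCGGATTACAAGGATTTTTTT"  # shares a long prefix with toxin_x
print(flag_similarity(query, known_threats, threshold=0.3))  # {'toxin_x': 0.74}
```

An unrelated sequence shares no k-mers and returns no hits, so only designs resembling a logged threat get flagged for review.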

Biosecurity doesn’t squash data sharing. Collaboration is critical for science, but the authors acknowledge it’s still necessary to protect trade secrets. And like in AI, some designer proteins may be potentially useful but too dangerous to share openly.

One way around this conundrum is to directly add safety measures to the process of synthesis itself. For example, the authors suggest adding a barcode—made of random DNA letters—to each new genetic sequence. To build the protein, a synthesis machine searches its DNA sequence, and only when it finds the code will it begin to build the protein.
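A sketch of how such a barcode gate might behave. This is an illustration of the idea only, not an actual synthesizer interface; the registry and the “lab_alpha” designer name are hypothetical:

```python
import random

def make_barcode(rng, length=12):
    """A random DNA barcode to embed in a designed sequence."""
    return "".join(rng.choice("ACGT") for _ in range(length))

registry = {}  # barcode -> designer: the audit trail
rng = random.Random(42)

barcode = make_barcode(rng)
registry[barcode] = "lab_alpha"  # hypothetical designer

def synthesize(dna):
    """Only build sequences carrying a registered barcode."""
    for code, designer in registry.items():
        if code in dna:
            return f"synthesizing (traceable to {designer})"
    return "refused: no registered barcode"

tagged = "ATGGTG" + barcode + "CCATGA"  # barcode embedded in the design
print(synthesize(tagged))          # synthesizing (traceable to lab_alpha)
print(synthesize("ATGGTGCCATGA"))  # refused: no registered barcode
```

The same lookup that gates synthesis doubles as the audit trail: any sequence that builds can be traced back to whoever registered its barcode.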

In other words, the original designers of the protein can choose who to share the synthesis with—or whether to share it at all—while still being able to describe their results in publications.

A barcode strategy that ties making new proteins to a synthesis machine would also amp up security and deter bad actors, making it difficult to recreate potentially dangerous products.

“If a new biological threat emerges anywhere in the world, the associated DNA sequences could be traced to their origins,” the authors wrote.

It will be a tough road. Designer protein safety will depend on global support from scientists, research institutions, and governments, the authors write. However, there have been previous successes. Global groups have established safety and sharing guidelines in other controversial fields, such as stem cell research, genetic engineering, brain implants, and AI. Although not always followed—CRISPR babies are a notorious example—for the most part these international guidelines have helped move cutting-edge research forward in a safe and equitable manner.

To Baker and Church, open discussions about biosecurity will not slow the field. Rather, it can rally different sectors and engage public discussion so custom protein design can further thrive.

Image Credit: University of Washington


This Week’s Awesome Tech Stories From Around the Web (Through January 27)

27 January, 2024 - 16:00

New Theory Suggests Chatbots Can Understand Text
Anil Ananthaswamy | Quanta
“Artificial intelligence seems more powerful than ever, with chatbots like Bard and ChatGPT capable of producing uncannily humanlike text. But for all their talents, these bots still leave researchers wondering: Do such models actually understand what they are saying? ‘Clearly, some people believe they do,’ said the AI pioneer Geoff Hinton in a recent conversation with Andrew Ng, ‘and some people believe they are just stochastic parrots.’ …New research may have intimations of an answer.”


Etching AI Controls Into Silicon Could Keep Doomsday at Bay
Will Knight | Wired
“Even the cleverest, most cunning artificial intelligence algorithm will presumably have to obey the laws of silicon. Its capabilities will be constrained by the hardware that it’s running on. Some researchers are exploring ways to exploit that connection to limit the potential of AI systems to cause harm. The idea is to encode rules governing the training and deployment of advanced algorithms directly into the computer chips needed to run them.”


Google’s Hugging Face Deal Puts ‘Supercomputer’ Power Behind Open-Source AI
Emilia David | The Verge
“Google Cloud’s new partnership with AI model repository Hugging Face is letting developers build, train, and deploy AI models without needing to pay for a Google Cloud subscription. Now, outside developers using Hugging Face’s platform will have ‘cost-effective’ access to Google’s tensor processing units (TPU) and GPU supercomputers, which will include thousands of Nvidia’s in-demand and export-restricted H100s.”


How Microsoft Catapulted to $3 Trillion on the Back of AI
Tom Dotan | The Wall Street Journal
“Microsoft on Thursday became the second company ever to end the trading day valued at more than $3 trillion, a milestone reflecting investor optimism that one of the oldest tech companies is leading an artificial-intelligence revolution. …One of [CEO Satya Nadella’s] biggest gambles in recent years has been partnering with an untested nonprofit startup—generative AI pioneer OpenAI—and quickly folding its technology into Microsoft’s bestselling products. That move made Microsoft a de facto leader in a burgeoning AI field many believe will retool the tech industry.”


Hell Yeah, We’re Getting a Space-Based Gravitational Wave Observatory
Isaac Schultz | Gizmodo
“To put an interferometer in space would vastly reduce the noise encountered by ground-based instruments, and lengthening the arms of the observatory would allow scientists to collect data that is imperceptible on Earth. ‘Thanks to the huge distance traveled by the laser signals on LISA, and the superb stability of its instrumentation, we will probe gravitational waves of lower frequencies than is possible on Earth, uncovering events of a different scale, all the way back to the dawn of time,’ said Nora Lützgendorf, the lead project scientist for LISA, in an ESA release.”


General Purpose Humanoid Robots? Bill Gates Is a Believer
Brian Heater | TechCrunch
“The robotics industry loves a good, healthy debate. Of late, one of the most intense ones centers around humanoid robots. It’s been a big topic for decades, of course, but the recent proliferation of startups like 1X and Figure—along with projects from more established companies like Tesla—have put humanoids back in the spotlight. Humanoid robots can, however, now claim a big tech name among their ranks. Bill Gates this week issued a list of ‘cutting-edge robotics startups and labs that I’m excited about.’ Among the names are three companies focused on developing humanoids.”


Is Cryptocurrency Like Stocks and Bonds? Courts Move Closer to an Answer.
Matthew Goldstein and David Yaffe-Bellany | The New York Times
“How the courts rule could determine whether the crypto industry can burrow deeper into the American financial system. If the SEC prevails, crypto supporters say, it will stifle the growth of a new and dynamic technology, pushing start-ups to move offshore. The government has countered that robust oversight is necessary to end the rampant fraud that cost investors billions of dollars when the crypto market imploded in 2022.”


Solid-State EV Batteries Now Face ‘Production Hell’
Charles J. Murray | IEEE Spectrum
“Producing battery packs that yield 800+ kilometers remains rough going. …‘Solid-state is a great technology,’ noted Bob Galyen, owner of Galyen Energy LLC and former chief technology officer for the Chinese battery giant, Contemporary Amperex Technology Ltd (CATL). ‘But it’s going to be just like lithium-ion was in terms of the length of time it will take to hit the market. And lithium-ion took a long time to get there.’”


I Love My GPT, But I Can’t Find a Use for Anybody Else’s
Emilia David | The Verge
“Though I’ve come to depend on my GPT, it’s the only one I use. It’s not fully integrated into my workflow either, because GPTs live in the ChatGPT Plus tab on my browser instead of inside a program like Google Docs. And honestly, if I wasn’t already paying for ChatGPT Plus, I’d be happy to keep Googling alternative terms. I don’t think I’ll be giving up ‘What’s Another Word For’ any time soon, but unless another hot GPT idea strikes me, I’m still not sure what they’re good for—at least in my job.”

Image Credit: Jonny Caspari / Unsplash
