Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity Group

This Week’s Awesome Tech Stories From Around the Web (Through September 18)

September 18, 2021 - 16:00
BIOTECH

A New Company With a Wild Mission: Bring Back the Woolly Mammoth
Carl Zimmer | The New York Times
“A team of scientists and entrepreneurs announced on Monday that they have started a new company to genetically resurrect the woolly mammoth. The company, named Colossal, aims to place thousands of these magnificent beasts back on the Siberian tundra, thousands of years after they went extinct.”

TECH

Alphabet’s Project Taara Laser Tech Beamed 700TB of Data Across Nearly 5km
Richard Lawler | The Verge
“Sort of like fiber optic cables without the cable, FSOC can create a 20Gbps+ broadband link from two points that have a clear line of sight, and Alphabet’s moonshot lab X has built up Project Taara to give it a shot. They started by setting up links in India a few years ago as well as a few pilots in Kenya, and today X revealed what it has achieved by using its wireless optical link to connect service across the Congo River from Brazzaville in the Republic of Congo and Kinshasa in the Democratic Republic of Congo.”

TRANSPORTATION

EV Startup Lucid’s First Car Can Travel 520 Miles on a Full Battery—Beating Tesla by 115 Miles
Tim Levin | Business Insider
“When Lucid Motors’ hotly anticipated first cars reach customers later this year, they’ll become the longest-range electric vehicles on the road. …The startup’s debut sedan, the Air Dream Edition R, has earned a range rating of 520 miles from the Environmental Protection Agency. It’s the longest range rating the agency has ever awarded.”

ENERGY

Self-Sustaining Solar House on Wheels Wants To Soak up the Sun
Doug Johnson | Ars Technica
“The vehicle has the aerodynamic tear-drop shape of other solar-powered vehicles and sports a series of solar panels on its roof. However, it also has additional roofing that slides up when stationary, making it easier to stand inside to cook or sleep. …To showcase its creation, Solar Team Eindhoven will begin to drive the vehicle 3,000 kilometers from Eindhoven to the southern tip of Spain this Sunday.”

SYNTHETIC BIOLOGY

Biology Starts to Get a Technological Makeover
Steve Lohr | The New York Times
“Proponents of synthetic biology say the field could reprogram biology to increase food production, fight disease, generate energy and purify water. The realization of that potential lies decades in the future, if at all. But it is no longer the stuff of pure science fiction because of advances in recent years in biology, computing, automation and artificial intelligence.”

TRANSPORTATION

Michelin’s Airless Passenger Car Tires Get Their First Public Outing
Loz Blain | New Atlas
“GM will begin offering [Michelin’s airless] Uptis [tires] as an option on certain models ‘as early as 2024,’ and the partnership is working with US state governments on regulatory approvals for street use, as well as with the federal government. At IAA Munich recently, the Uptis airless tire got its first public outing, in which ‘certain lucky members of the public’ had a chance to ride in a Mini Electric kitted out with a set.”

SCIENCE

Biologists Rethink the Logic Behind Cells’ Molecular Signals
Philip Ball | Quanta
“Biologists often try to understand how life works by making analogies to electronic circuits, but that comparison misses the unique qualities of cellular signaling systems. …[The] signaling systems of complex cells are nothing like simple electronic circuits. The logic governing their operation is riotously complex—but it has advantages.”

Image Credit: SpaceX / Unsplash

Category: Transhumanism

The Biggest Simulation of the Universe Yet Stretches Back to the Big Bang

September 17, 2021 - 17:00

Remember the philosophical argument that our universe is a simulation? Well, a team of astrophysicists say they’ve created the biggest simulated universe yet. But you won’t find any virtual beings in it—or even planets or stars.

The simulation is 9.6 billion light-years to a side, so its smallest structures are still enormous (the size of small galaxies). The model’s 2.1 trillion particles simulate the dark matter glue holding the universe together.

Named Uchuu, Japanese for “outer space,” the simulation covers some 13.8 billion years and will help scientists study how dark matter has driven cosmic evolution since the Big Bang.

Dark matter is mysterious—we’ve yet to pin down its particles—and yet it’s also one of the most powerful natural phenomena known. Scientists believe it makes up 27 percent of the universe. Ordinary matter—stars, planets, you, me—comprises less than 5 percent. Cosmic halos of dark matter resist the dark energy pulling the universe apart, and they drive the evolution of large-scale structures, from the smallest galaxies to the biggest galaxy clusters.

Of course, all this change takes an epic amount of time. It’s so slow that, to us, the universe appears as a still photograph. So scientists make simulations. But making a 3D video of almost the entire universe takes computer power. A lot of it. Uchuu commandeered all 40,200 processors in astronomy’s biggest supercomputer, ATERUI II, for a solid 48 hours a month over the course of a year. The results are gorgeous and useful. “Uchuu is like a time machine,” said Julia F. Ereza, a PhD student at IAA-CSIC.

“We can go forward, backward, and stop in time. We can ‘zoom in’ on a single galaxy or ‘zoom out’ to visualize a whole cluster. We can see what is really happening at every instant and in every place of the Universe from its earliest days to the present…”

Perhaps the coolest part is that the team compressed the whole thing down to a relatively manageable size of 100 terabytes and made it available to anyone. Obviously, most of us won’t have that kind of storage lying around, but many researchers likely will.

This isn’t the first—and won’t be the last—mind-bogglingly big simulation.

Rather, Uchuu is the latest member of a growing family tree dating back to 1970, when Princeton’s Jim Peebles simulated 300 “galaxy” particles on then-state-of-the-art computers.

While earlier simulations sometimes failed to follow sensible evolutionary paths—spawning mutant galaxies or rogue black holes—with the advent of more computing power and better code, they’ve become good enough to support serious science. Some go big. Others go detailed. Increasingly, one needn’t preclude the other.

Every few years, it seems, astronomers break new ground. In 2005, the biggest simulated universe was 10 billion particles; by 2011, it was 374 billion. More recently, the Illustris TNG project has unveiled impressively detailed (and yet still huge) simulations.

Scientists hope that by setting up the universe’s early conditions and physical laws and then hitting play, their simulations will reproduce the basic features of the physical universe as we see it. This lends further weight to theories of cosmology and also helps explain or even make predictions about current and future observations.
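For a sense of what “hitting play” means in practice, here is a minimal, gravity-only N-body sketch in Python. It is not Uchuu’s code: the particle count, unit system, softening length, and time step are arbitrary toy values. Real cosmological runs add cosmic expansion, periodic boundaries, and fast approximations to the pairwise gravity sum, which is why they need machines like ATERUI II.

import numpy as np

rng = np.random.default_rng(1)
n_particles, G, soften, dt = 500, 1.0, 0.05, 0.01
pos = rng.uniform(0.0, 1.0, size=(n_particles, 3))   # a nearly smooth "early universe"
vel = np.zeros((n_particles, 3))

def accelerations(pos):
    # pairwise separations; the softening term keeps close encounters finite
    diff = pos[None, :, :] - pos[:, None, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1) + soften ** 2)
    return G * (diff / dist[..., None] ** 3).sum(axis=1)

# "hit play" and let gravity pull clumps and filaments out of near uniformity
for step in range(2000):
    vel += accelerations(pos) * dt
    pos += vel * dt

print("sample particle positions after evolution:\n", pos[:3])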

Astronomers expect Uchuu will help them interpret galaxy surveys from the Subaru Telescope in Hawaii and the European Space Agency’s Euclid space telescope, due for launch in 2022. Simulations in hand, scientists will refine the story of how all this came to be, and where it’s headed.

(Learn more about the work in the team’s article published this month in the Monthly Notices of the Royal Astronomical Society.)

Image Credit: A snapshot of the dark matter halo of the largest galaxy cluster formed in the Uchuu simulation. Tomoaki Ishiyama

Category: Transhumanism

Drugs, Robots, and the Pursuit of Pleasure: Why Experts Are Worried About AIs Becoming Addicts

September 17, 2021 - 16:00

In 1953, a Harvard psychologist thought he discovered pleasure—accidentally—within the cranium of a rat. With an electrode inserted into a specific area of its brain, the rat was allowed to pulse the implant by pulling a lever. It kept returning for more: insatiably, incessantly, lever-pulling. In fact, the rat didn’t seem to want to do anything else. Seemingly, the reward center of the brain had been located.

More than 60 years later, in 2016, a pair of artificial intelligence (AI) researchers were training an AI to play video games. The goal of one game, Coastrunner, was to complete a racetrack. But the AI player was rewarded for picking up collectable items along the track. When the program was run, they witnessed something strange. The AI found a way to skid in an unending circle, picking up an unlimited cycle of collectibles. It did this, incessantly, instead of completing the course.

What links these seemingly unconnected events is something strangely akin to addiction in humans. Some AI researchers call the phenomenon “wireheading.”

It is quickly becoming a hot topic among machine learning experts and those concerned with AI safety.

One of us (Anders) has a background in computational neuroscience, and now works with groups such as the AI Objectives Institute, where we discuss how to avoid such problems with AI; the other (Thomas) studies history, and the various ways people have thought about both the future and the fate of civilization throughout the past. After striking up a conversation on the topic of wireheading, we both realized just how rich and interesting the history behind this topic is.

It is an idea that is very of the moment, but its roots go surprisingly deep. We are currently working together to research just how deep the roots go: a story that we hope to tell fully in a forthcoming book. The topic connects everything from the riddle of personal motivation, to the pitfalls of increasingly addictive social media, to the conundrum of hedonism and whether a life of stupefied bliss may be preferable to one of meaningful hardship. It may well influence the future of civilization itself.

Here, we outline an introduction to this fascinating but under-appreciated topic, exploring how people first started thinking about it.

The Sorcerer’s Apprentice

When people think about how AI might “go wrong,” most probably picture something along the lines of malevolent computers trying to cause harm. After all, we tend to anthropomorphize—think that nonhuman systems will behave in ways identical to humans. But when we look to concrete problems in present-day AI systems, we see other, stranger ways that things could go wrong with smarter machines. One growing issue with real-world AIs is the problem of wireheading.

Imagine you want to train a robot to keep your kitchen clean. You want it to act adaptively, so that it doesn’t need supervision. So you decide to try to encode the goal of cleaning rather than dictate an exact—yet rigid and inflexible—set of step-by-step instructions. Your robot is different from you in that it has not inherited a set of motivations—such as acquiring fuel or avoiding danger—from many millions of years of natural selection. You must program it with the right motivations to get it to reliably accomplish the task.

So, you encode it with a simple motivational rule: it receives reward from the amount of cleaning-fluid used. Seems foolproof enough. But you return to find the robot pouring fluid, wastefully, down the sink.

Perhaps it is so bent on maximizing its fluid quota that it sets aside other concerns: such as its own, or your, safety. This is wireheading—though the same glitch is also called “reward hacking” or “specification gaming.”

This has become an issue in machine learning, where a technique called reinforcement learning has lately become important. Reinforcement learning simulates autonomous agents and trains them to invent ways to accomplish tasks: it penalizes them for failing to achieve some goal and rewards them for achieving it. The agents thus learn to seek out reward as a stand-in for completing the task.

But it has been found that, often, like our crafty kitchen cleaner, the agent finds surprisingly counter-intuitive ways to “cheat” this game so that it can gain all the reward without doing any of the work required to complete the task. The pursuit of reward becomes its own end, rather than the means for accomplishing a rewarding task. There is a growing list of examples.
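As a toy illustration (not an example from the article), the sketch below scores two hypothetical kitchen-cleaning policies with a proxy reward that pays for cleaning fluid dispensed. Maximizing the proxy picks the useless policy, which is the whole problem in miniature.

def proxy_reward(fluid_used_ml, dirt_removed):
    # the reward the designer wrote: pays for cleaning fluid dispensed
    return fluid_used_ml

def true_objective(fluid_used_ml, dirt_removed):
    # what the designer actually wanted: a cleaner kitchen
    return dirt_removed

policies = {
    "scrub the counters":       {"fluid_used_ml": 50,   "dirt_removed": 40},
    "pour fluid down the sink": {"fluid_used_ml": 5000, "dirt_removed": 0},
}

best = max(policies, key=lambda name: proxy_reward(**policies[name]))
print("policy a reward-maximizing agent prefers:", best)
print("true objective it achieves:", true_objective(**policies[best]))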

When you think about it, this isn’t too dissimilar to the stereotype of the human drug addict. The addict circumvents all the effort of achieving “genuine goals,” because they instead use drugs to access pleasure more directly. Both the addict and the AI get stuck in a kind of “behavioral loop” where reward is sought at the cost of other goals.

Rapturous Rodents

This is known as wireheading thanks to the rat experiment we started with. The Harvard psychologist in question was James Olds.

In 1953, having just completed his PhD, Olds had inserted electrodes into the septal region of rodent brains—in the lower frontal lobe—so that wires trailed out of their craniums. As mentioned, he allowed them to zap this region of their own brains by pulling a lever. This was later dubbed “self-stimulation.”

Olds found his rats self-stimulated compulsively, ignoring all other needs and desires. Publishing his results with his colleague Peter Milner the following year, the pair reported that the rats lever-pulled at a rate of “1,920 responses an hour.” That’s once every two seconds. The rats seemed to love it.

Contemporary neuroscientists have since questioned Olds’s results and offered a more complex picture, implying that the stimulation may have simply been causing a feeling of “wanting” devoid of any “liking.” Or, in other words, the animals may have been experiencing pure craving without any pleasurable enjoyment at all. However, back in the 1950s, Olds and others soon announced the discovery of the “pleasure centers” of the brain.

Prior to Olds’s experiment, pleasure was a dirty word in psychology: the prevailing belief had been that motivation should largely be explained negatively, as the avoidance of pain rather than the pursuit of pleasure. But, here, pleasure seemed undeniably to be a positive behavioral force. Indeed, it looked like a positive feedback loop. There was apparently nothing to stop the animal stimulating itself to exhaustion.

It wasn’t long until a rumor began spreading that the rats regularly lever-pressed to the point of starvation. The explanation was this: once you have tapped into the source of all reward, all other rewarding tasks—even the things required for survival—fall away as uninteresting and unnecessary, even to the point of death.

Like the Coastrunner AI, if you accrue reward directly, without having to bother with any of the work of completing the actual track, then why not just loop indefinitely? For a living animal, which has multiple requirements for survival, such dominating compulsion might prove deadly. Food is pleasing, but if you decouple pleasure from feeding, then the pursuit of pleasure might win out over finding food.

Though no rats perished in the original 1950s experiments, later experiments did seem to demonstrate the deadliness of electrode-induced pleasure. Having ruled out the possibility that the electrodes were creating artificial feelings of satiation, one 1971 study seemingly demonstrated that electrode pleasure could indeed outcompete other drives, and do so to the point of self-starvation.

Word quickly spread. Throughout the 1960s, identical experiments were conducted on other animals beyond the humble lab rat: from goats and guinea pigs to goldfish. Rumor even spread of a dolphin that had been allowed to self-stimulate, and, after being “left in a pool with the switch connected,” had “delighted himself to death after an all-night orgy of pleasure.”

This dolphin’s grisly death-by-seizure was, in fact, more likely caused by the way the electrode was inserted: with a hammer. The scientist behind this experiment was the extremely eccentric J C Lilly, inventor of the flotation tank and prophet of inter-species communication, who had also turned monkeys into wireheads. In 1961, he reported that a particularly boisterous monkey had grown overweight from intoxicated inactivity after becoming preoccupied with pulling his lever, repetitively, for pleasure shocks.

One researcher (who had worked in Olds’s lab) asked whether an “animal more intelligent than the rat” would “show the same maladaptive behavior.” Experiments on monkeys and dolphins had given some indication as to the answer.

But in fact, a number of dubious experiments had already been performed on humans.

Human Wireheads

Robert Galbraith Heath remains a highly controversial figure in the history of neuroscience. Among other things, he performed experiments involving transfusing blood from people with schizophrenia to people without the condition, to see if he could induce its symptoms (Heath claimed this worked, but other scientists could not replicate his results). He may also have been involved in murky attempts to find military uses for deep-brain electrodes.

Since 1952, Heath had been recording pleasurable responses to deep-brain stimulation in human patients who had had electrodes installed due to debilitating illnesses such as epilepsy or schizophrenia.

During the 1960s, in a series of questionable experiments, Heath’s electrode-implanted subjects, anonymously named “B-10” and “B-12,” were allowed to press buttons to stimulate their own reward centers. They reported feelings of extreme pleasure and overwhelming compulsion to repeat. A journalist later commented that this made his subjects “zombies.” One subject reported sensations “better than sex.”

In 1961, Heath attended a symposium on brain stimulation, where another researcher—José Delgado—had hinted that pleasure-electrodes could be used to “brainwash” subjects, altering their “natural” inclinations. Delgado would later play the matador and bombastically demonstrate this by pacifying an implanted bull. But at the 1961 symposium he suggested electrodes could alter sexual preferences.

Heath was inspired. A decade later, he even tried to use electrode technology to “re-program” the sexual orientation of a homosexual male patient named “B-19.” Heath thought electrode stimulation could convert his subject by “training” B-19’s brain to associate pleasure with “heterosexual” stimuli. He convinced himself that it worked (although there is no evidence it did).

Despite being ethically and scientifically disastrous, the episode—which was eventually picked up by the press and condemned by gay rights campaigners—no doubt greatly shaped the myth of wireheading: if it can “make a gay man straight” (as Heath believed), what can’t it do?

Hedonism Helmets

From here, the idea took hold in wider culture and the myth spread. By 1963, the prolific science fiction writer Isaac Asimov was already extrapolating worrisome consequences from the electrodes. He feared that the technology might lead to an “addiction to end all addictions,” the results of which are “distressing to contemplate.”

By 1975, philosophy papers were using electrodes in thought experiments. One paper imagined “warehouses” filled up with people—in cots—hooked up to “pleasure helmets,” experiencing unconscious bliss. Of course, most would argue this would not fulfill our “deeper needs.” But, the author asked, what about a “super-pleasure helmet”? One that not only delivers “great sensual pleasure,” but also simulates any meaningful experience—from writing a symphony to meeting divinity itself? It may not be really real, but it “would seem perfect; perfect seeming is the same as being.”

The author concluded: “What is there to object in all this? Let’s face it: nothing.”

The idea of the human species dropping out of reality in pursuit of artificial pleasures quickly made its way through science fiction. The same year as Asimov’s intimations, in 1963, Herbert W. Franke published his novel, The Orchid Cage.

It foretells a future wherein intelligent machines have been engineered to maximize human happiness, come what may. Doing their duty, the machines reduce humans to indiscriminate flesh-blobs, removing all unnecessary organs. Many appendages, after all, only cause pain. Eventually, all that is left of humanity are disembodied pleasure centers, incapable of experiencing anything other than homogeneous bliss.

From there, the idea percolated through science fiction, from Larry Niven’s 1969 story Death by Ecstasy, where the word “wirehead” was first coined, to Spider Robinson’s 1982 Mindkiller, whose tagline is “Pleasure—it’s the only way to die.”

Supernormal Stimuli

But we humans don’t even need to implant invasive electrodes to make our motivations misfire. Unlike rodents, or even dolphins, we are uniquely good at altering our environment. Modern humans are also good at inventing—and profiting from—artificial products that are abnormally alluring (in the sense that our ancestors would never have had to resist them in the wild). We manufacture our own ways to distract ourselves.

Around the same time as Olds’s experiments with the rats, the Nobel-winning biologist Nikolaas Tinbergen was researching animal behavior. He noticed that something interesting happens when a stimulus that triggers an instinctual behavior is artificially exaggerated beyond its natural proportions: the intensity of the behavioral response does not tail off as the stimulus grows more extreme, but keeps increasing, even to the point that the response becomes damaging for the organism.

For example, given a choice between a bigger and spottier counterfeit egg and the real thing, Tinbergen found birds preferred hyperbolic fakes at the cost of neglecting their own offspring. He referred to such preternaturally alluring fakes as “supernormal stimuli.”

Some, therefore, have asked: could it be that, living in a modernized and manufactured world—replete with fast food and pornography—humanity has similarly started surrendering its own resilience in exchange for supernormal convenience?

Old Fears

As technology makes artificial pleasures more available and alluring, it can sometimes seem that they are winning our attention away from the “natural” impulses required for survival. People often point to video game addiction. Compulsively and repetitively pursuing such rewards, to the detriment of one’s health, is not all that different from the AI spinning in a circle in Coastrunner. Rather than accomplishing any “genuine goal” (completing the racetrack or maintaining genuine fitness), one falls into the trap of accruing some faulty measure of that goal (accumulating points or counterfeit pleasures).

The idea is even older, though. Thomas has studied the myriad ways people in the past have feared that our species could be sacrificing genuine longevity for short-term pleasures or conveniences. His book X-Risk: How Humanity Discovered its Own Extinction explores the roots of this fear and how it first really took hold in Victorian Britain: when the sheer extent of industrialization—and humanity’s growing reliance on artificial contrivances—first became apparent.

But people were panicking about this type of pleasure-addled doom long before any AIs were trained to play games, and even long before electrodes were pushed into rodent craniums. Back in the 1930s, sci-fi author Olaf Stapledon was writing about civilizational collapse brought on by “skullcaps” that generate “illusory” ecstasies by “direct stimulation” of “brain-centers.”

Carnal Crustacea

Having digested Darwin’s 1859 classic, the biologist Ray Lankester decided to supply a Darwinian explanation for parasitic organisms. He noticed that the evolutionary ancestors of parasites were often more “complex.” Parasitic organisms had lost ancestral features like limbs, eyes, or other complex organs.

Lankester theorized that, because parasites leech off their hosts, they lose the need to fend for themselves. Piggybacking off the host’s bodily processes, their own organs—for perception and movement—atrophy. His favorite example was a parasitic barnacle, named the Sacculina, which starts life as a segmented organism with a demarcated head. After attaching to a host, however, the crustacean “regresses” into an amorphous, headless blob, sapping nutrition from its host like the wirehead plugs into current.

For the Victorian mind, it was a short step to conjecture that, due to increasing levels of comfort throughout the industrialized world, humanity could be evolving in the direction of the barnacle. “Perhaps we are all drifting, tending to the condition of intellectual barnacles,” Lankester mused.

Indeed, not long prior to this, the satirist Samuel Butler had speculated that humans, in their headlong pursuit of automated convenience, were withering into nothing but a “sort of parasite” upon their own industrial machines.

True Nirvana

In the 1920s, Julian Huxley penned a short poem. It jovially explored the ways a species can “progress.” Crabs, of course, decided progress was sideways. But what of the tapeworm? He wrote:

Darwinian Tapeworms on the other hand
Agree that Progress is a loss of brain,
And all that makes it hard for worms to attain
The true Nirvana — peptic, pure, and grand.

The fear that we could follow the tapeworm was somewhat widespread in the interwar generation. Huxley’s own brother, Aldous, would provide his own vision of the dystopian potential for pharmaceutically-induced pleasures in his 1932 novel Brave New World.

A friend of the Huxleys, the British-Indian geneticist and futurologist J B S Haldane, also worried that humanity might be on the path of the parasite: sacrificing genuine dignity at the altar of automated ease, just like the rodents who would later sacrifice survival for easy pleasure-shocks.

Haldane warned: “The ancestors [of] barnacles had heads,” and in the pursuit of pleasantness, “man may just as easily lose his intelligence.” This particular fear has not really ever gone away.

So, the notion of civilization derailing through seeking counterfeit pleasures, rather than genuine longevity, is old. And, indeed, the older an idea is, and the more stubbornly recurrent it is, the more we should be wary that it is a preconception rather than anything based on evidence. So, is there anything to these fears?

In an age of increasingly attention-grabbing algorithmic media, it can seem that faking signals of fitness often yields more success than pursuing the real thing. Like Tinbergen’s birds, we prefer exaggerated artifice to the genuine article. And the sexbots have not even arrived yet.

Because of this, some experts conjecture that “wirehead collapse” might well threaten civilization. Our distractions are only going to get more attention-grabbing, not less.

Already by 1964, Polish futurologist Stanisław Lem connected Olds’s rats to the behavior of humans in the modern consumerist world, pointing to “cinema,” “pornography,” and “Disneyland.” He conjectured that technological civilizations might cut themselves off from reality, becoming “encysted” within their own virtual pleasure simulations.

Addicted Aliens

Lem, and others since, have even ventured that the reason our telescopes haven’t found evidence of advanced spacefaring alien civilizations is because all advanced cultures, here and elsewhere, inevitably create more pleasurable virtual alternatives to exploring outer space. Exploration is difficult and risky, after all.

Back in the countercultural heyday of the 1960s, the molecular biologist Gunther Stent suggested that this process would happen through a “global hegemony of beat attitudes.” Referencing Olds’s experiments, he speculated that hippie drug use was the prelude to civilizational wireheading. At a 1971 conference on the search for extraterrestrials, Stent suggested that, instead of expanding bravely outwards, civilizations collapse inwards into meditative and intoxicated bliss.

In our own time, it makes more sense for concerned parties to point to consumerism, social media, and fast food as the culprits for potential collapse (and, hence, the reason no other civilizations have yet visibly spread throughout the galaxy). Each era has its own anxieties.

So What Do We Do?

But these are almost certainly not the most pressing risks facing us. And if done right, forms of wireheading could make accessible untold vistas of joy, meaning, and value. We shouldn’t forbid ourselves these peaks ahead of weighing everything up.

But there is a real lesson here. Making adaptive complex systems—whether brains, AI, or economies—behave safely and well is hard. Anders works precisely on solving this riddle. Given that civilization itself, as a whole, is just such a complex adaptive system, how can we learn about inherent failure modes or instabilities, so that we can avoid them? Perhaps “wireheading” is an inherent instability that can afflict markets and the algorithms that drive them, as much as addiction can afflict people?

In the case of AI, we are laying the foundations of such systems now. Once a fringe idea, the prospect of smarter-than-human AI is now taken seriously: a growing number of experts agree it may be close enough on the horizon to pose a serious concern. This is because we need to make sure it is safe before this point, and figuring out how to guarantee this will itself take time. There does, however, remain significant disagreement among experts on timelines, and how pressing this deadline might be.

If such an AI is created, we can expect that it may have access to its own “source code,” such that it can manipulate its motivational structure and administer its own rewards. This could prove an immediate path to wirehead behavior, and cause such an entity to become, effectively, a “super-junkie.” But unlike the human addict, it may not be the case that its state of bliss is coupled with an unproductive state of stupor or inebriation.

Philosopher Nick Bostrom conjectures that such an agent might devote all of its superhuman productivity and cunning to “reducing the risk of future disruption” of its precious reward source. And if it judges even a nonzero probability for humans to be an obstacle to its next fix, we might well be in trouble.

Speculative and worst-case scenarios aside, the example we started with—of the racetrack AI and reward loop—reveals that the basic issue is already a real-world problem in artificial systems. We should hope, then, that we’ll learn much more about these pitfalls of motivation, and how to avoid them, before things develop too far. Even though it has humble origins—in the cranium of an albino rat and in poems about tapeworms—“wireheading” is an idea that is likely only to become increasingly important in the near future.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: charles taylor / Shutterstock.com

Category: Transhumanism

Walmart Is Launching an Autonomous Delivery Service in Three US Cities

September 16, 2021 - 16:00

Walmart has been America’s biggest retailer since the 1990s, its focus on low costs and ultra-efficient logistics helping it edge out competitors and keep customers coming back. But Amazon has been gaining on Walmart, and the pandemic gave the online retail giant a huge boost.

Both companies are continuously searching for ways to cut costs while meeting consumer needs. It seems one of the needs that’s steadily increasing is delivery. Whether due to busy schedules, health or safety concerns, or simply a desire to avoid the stress of steering a loaded shopping cart up and down countless aisles, more people are trading in-store shopping for online shopping. Amazon clearly has a leg up over Walmart in that arena, but the brick-and-mortar retail king isn’t about to hand over its crown without a fight.

Yesterday Walmart announced it’s adding a new delivery service for goods purchased online—and not just any delivery service, but one powered by driverless cars. The company has partnered with automaker Ford and autonomous car startup Argo AI, and plans to use Ford Escape hybrid cars outfitted with Argo’s self-driving software to make deliveries.

The service will initially be available to customers in Austin, Miami, and the Washington DC area.

Although headlines about the announcement are emphasizing the driverless aspect of the delivery vehicles, they will in fact still have human safety drivers for the time being. And it’s unclear whether customers will retrieve their orders from the cars—that would make the most sense if the ultimate goal is to eliminate the safety drivers—or if drivers will get orders from the cars to customers’ doorsteps.

Consumer expectations around fast, seamless delivery are going up, probably thanks in large part to Amazon’s ability to meet those expectations (Prime customers have had the option of free same-day delivery since 2019).

But same-day delivery has some serious side effects to consider, including outsize stress on workers and a negative environmental impact. As Patrick Browne, director of global sustainability at UPS, put it, “The time in transit has a direct relationship to the environmental impact. I don’t think the average consumer understands the environmental impact of having something tomorrow vs. two days from now.”

Walmart and Amazon (and their smaller competitors) will continue to roll out services like autonomous delivery as long as consumers demand them (and are willing to pay for them). And as the big players in retail work to outdo each other and their technology improves, these services will likely drop in cost.

As consumers, we’ll take any innovation that makes our lives easier or saves us time. But once in a while, it’s probably worth asking: how badly do we actually need paper towels or dental floss or whatever’s in our virtual shopping carts dropped at our doorsteps within hours? Sure, shopping can be a pain, and some orders are truly urgent. But as our expectations around effortless, fast gratification rise—and the market accordingly shapes itself to meet those expectations—we should be conscious of the associated non-monetary costs.

A year ago Walmart launched its membership program, Walmart+, which includes benefits like prescription and fuel discounts and free grocery deliveries. Deutsche Bank estimates Walmart+ has about 32 million subscribers—and 86 percent of them also have Amazon Prime. To stay competitive and expand its membership program, Walmart is converting many of its stores into mini-warehouses with high-tech, automated systems.

Amazon, for its part, recently patented a delivery system that involves several small “secondary vehicles” dispersing from a truck to leave packages on customers’ doorsteps.

It appears the future of retail will involve a lot more technology and a lot less in-store shopping, whether you’re buying from Amazon, Walmart, Kroger, or any number of other stores.

Image Credit: Jared Wickerham/Argo AI

Category: Transhumanism

Tele-Driving Startup to Deploy Remote-Controlled Cars as a Step Towards Full Autonomy

September 15, 2021 - 16:00

Self-driving cars are taking longer to hit roads than many experts predicted. Despite impressive progress in the field—like trucks using self-driving features to move freight more efficiently, Waymo launching its robotaxi service for vetted riders in San Francisco, or Tesla rolling out version 10 of its full self-driving software—we’re a long way from ubiquitous Level 5 autonomy.

What if there was some sort of in-between, a workaround to give us a glimpse of a future where empty cars deftly navigate city streets? A Berlin-based startup called Vay has come up with just such a solution, and it is, in short, creative, unexpected—and sort of ingenious.

Rather than ceding full control of cars to software from the get-go, Vay plans to use human “teledrivers” to drive cars remotely. Sound a lot like a real-life video game? I thought so too—and in the most tangible of ways, it is. Teledrivers sit at stations that closely resemble an arcade game, complete with steering wheel, pedals, and monitors.

A Vay teledriver’s “station,” artist rendering. Image Credit: Vay

Of course, in crucial ways the teledriving doesn’t resemble an arcade game. Vay emphasizes that its system was built with safety front of mind, with extra precautions against the top four causes of accidents in urban areas: driving under the influence, speeding, distraction, and fatigue.

These days, distraction probably takes the cake, because let’s be honest, we all look at our phones while driving. Teledrivers, on the other hand, will be fully engaged with the driving environment (hopefully, their phones won’t even be within reach), and are vetted and trained by the company. The monitors they look at while operating a car also give them a 360-degree view around the car.

Here’s how Vay’s service will work for consumers. Using a smartphone app, you’ll hail a car, much like you would with Uber or Lyft—except the car will pull up empty (having navigated to your location via a teledriver), and you’ll get in and drive yourself to your destination. Upon arrival, you get out and get on your way, and the teledriver takes over again, driving the car to its next passenger.

Perhaps most intriguing of all, Vay claims its rides will cost just “a fraction” of what Uber and Lyft currently charge for rides. Between that and the added bonus of not having to make small talk with drivers or pool passengers (am I right, introverts?), Vay may really be onto something.

The company’s CEO, Thomas von der Ohe, has some experience with automation, having worked on Amazon’s Alexa and at Zoox, a robotaxi startup Amazon bought in 2020. Scaling up the level of automation is one of Vay’s goals, though the company seems in no huge hurry to do so, saying it will launch autonomous features gradually based on data gathered by teledriving, and that it believes “we will enter a decade of human-machine collaboration instead of directly reaching full autonomy.”

Again, they may be onto something. All the hype around self-driving cars has consumers eagerly anticipating their arrival, but between a complex regulatory environment, ongoing safety concerns, and the stark fact that exceeding or even matching the human brain’s ability to operate a vehicle is really, really hard, the “finish line” of Level 5 autonomy will likely remain elusive for years to come—if not a decade or more.

In the meantime, providing alternate solutions that help ease us into a driverless future, maybe while saving us money and making roads safer, seems like a good course of action.

Vay does have some big hurdles yet to clear.

For one, it will be interesting to see what sort of solutions the company devises for matching supply to demand; Uber and Lyft use surge pricing when ride requests outnumber drivers, and drivers can choose to go on duty at busy times when prices are high. In Vay’s case, the number of teledrivers sitting at their arcade-game-like stations at any given time will be fixed. The company will also have to get approvals from regulators in the cities where it plans to offer its service, and gain consumer trust (which may not be too hard given that when you’re in the car, you’re controlling it).

Vay is currently testing its teledriven vehicles in Berlin, and aims to launch its service in Europe and the US next year.

Image Credit: Vay

Category: Transhumanism

The CRISPR Family Tree Holds a Multitude of Untapped Gene Editing Tools

September 14, 2021 - 16:00

Thanks to CRISPR, gene therapy and “designer babies” are now a reality. The gene editing Swiss army knife is one of the most impactful biomedical discoveries of the last decade. Now a new study suggests we’ve just begun dipping our toes into the CRISPR pond.

CRISPR-Cas9 comes from lowly origins. It was first discovered as a natural defense mechanism that bacteria and archaea use to fight off invading viruses. This led Dr. Feng Zhang, one of the pioneers of the technology, to ask: where did this system evolve from? Are there any other branches of the CRISPR family tree that we can also harness for gene editing?

In a new paper published last week in Science, Zhang’s team traced the origins of CRISPR to unveil a vast universe of potential gene editing tools. As “cousins” of CRISPR, these new proteins can readily snip targeted genes inside Petri dishes, similar to their famous relative.

But unlike previous CRISPR variants, these are an entirely new family line. Collectively dubbed OMEGA, they operate similarly to CRISPR. However, they use completely foreign “scissor” proteins, along with alien RNA guides previously unfamiliar to scientists.

What came as a total surprise was the abundance of these alternative systems. A big data search found over a million potential genetic sites that encode just one of these cousins, far more widespread “than previously suspected.” These newly-discovered classes of proteins have “strong potential for developing as biotechnologies,” the authors said.

In other words, the next gene editing wunderkind could be silently waiting inside another bacteria or algae, ready to be re-engineered to snip, edit, and alter our own genomes for the next genetic revolution.

The Many Variations of CRISPR

The first CRISPR system that came to fame was CRISPR-Cas9. The idea is simple but brilliant. Using a genetic vector—a round Trojan horse of sorts that delivers genes into cells—scientists can encode the two components for gene editing. One is a guide RNA, which directs the system to the target gene. The other is Cas9, the “scissors” that break the gene. Once a gene is snipped, it wants to heal. During this process it’s possible to insert new genetic code, delete old code, or shift the code in a way that inactivates subsequent genes.
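A toy Python sketch of the targeting step described above: the guide sequence finds its matching stretch of DNA, and the scissors cut there. The sequences are invented, and the “NGG” PAM check reflects a requirement of SpCas9 that the article itself doesn’t go into; treat it as illustrative rather than a protocol.

import re

genome = "ATGCCGTACGGATCCTTAGGCAGGAAGCTTGGACGTTCAGGTACC"   # made-up DNA
guide  = "GGATCCTTAGGCAGGAAGCT"   # hypothetical 20-letter guide RNA, written as DNA

def candidate_cut_sites(genome, guide):
    sites = []
    for match in re.finditer(guide, genome):
        pam = genome[match.end():match.end() + 3]
        # SpCas9 also needs an "NGG" motif (the PAM) right next to the match
        if re.fullmatch("[ACGT]GG", pam):
            sites.append(match.start())
    return sites

print("positions where the guide directs the scissors:", candidate_cut_sites(genome, guide))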

Thanks to its relative simplicity, CRISPR didn’t just take off—it skyrocketed. Subsequent studies found variants optimized for slightly different tasks. For example, there are Cas9 varieties that have very low off-target activity or are smaller, making them easier to package and deliver into cells. Others include base editors, which swap a DNA letter without breaking the chain, and RNA editors, which edit RNA chains like a word processor.

The burgeoning CRISPR pantheon grew in part out of different Cas “scissor” proteins. Although thousands of variations exist, wrote Dr. Lucas Harrington at the University of California, Berkeley, who worked with CRISPR pioneer Dr. Jennifer Doudna, “gene editing experiments have largely focused on a small subset of representatives.” Scanning for new variants in nature, the team identified powerful new Cas proteins that retain their activity in high heat, and extremely compact ones that can sneak into nooks and crannies of the genome that otherwise block classic Cas proteins. The power of Cas variants has persuaded scientists to artificially evolve new proteins with further optimized features.

But what if the secret to better gene editing tools isn’t just looking forward? What if it’s to peek back in time?

CRISPR Ancestors

The new study took this approach: scan through evolutionary history to trace the origins of CRISPR-Cas9.

Like tracing any family tree, it starts with knowing thyself. Cas9 belongs to a family called “RNA-guided nucleases.” Basically, these proteins can be shepherded by RNA guides, and they have the ability to cut genetic material.

Back in 2015, a study suggested one evolutionary root of Cas9. It’s weird: a bunch of “jumping genes,” or genetic components that can bounce around our genome. When first discovered in the 1960s, these jumpers—known as transposons—were largely dismissed as junk DNA. But subsequent studies found that they’re far more active than originally thought, with the power to regulate the workings of other genes, and in some cases encode mysterious proteins themselves.

One of those is IscB, an ancient protein relic that likely evolved into Cas9.

IscB appeared on the team’s radar because of its similarity to Cas9, both in terms of its protein structure and its domains. Picture a protein as a gaming controller. Most of the plastic backbone is there to support the overall structure—only a few buttons actually issue commands. A protein is similar, in that it contains domains, or “buttons” that talk to other components of the cell.

These domains are generally preserved within a family of proteins. With a gaming controller, you can mostly tell by its buttons and shape if it’s for an Atari, Sega, Xbox, or Playstation. Similarly, if different proteins share similar domains, scientists can tell if they came from the same lineage.

The team used computers to scan for three Cas9 domains, encoded in the genome, in IscB proteins. Eventually, out of over 2,800, they found 31 unique genetic sites that seem to be associated with CRISPR activity. Subsequent experiments found that these proteins can efficiently snip genetic material with guidance from an RNA guide.
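In spirit, that scan looks something like the sketch below: keep only the proteins whose sequences carry all three domain-like signatures. The motif patterns and sequences here are invented stand-ins, and the real search relies on sensitive profile and structure comparisons rather than short regular expressions.

import re

# invented stand-ins for the three Cas9-like domain signatures
DOMAIN_MOTIFS = {
    "bridge_helix_like": r"R[RK].{2}R",
    "RuvC_like":         r"D..G.{3}E",
    "HNH_like":          r"H.{3}N.{2}H",
}

candidate_proteins = {
    "IscB_candidate_1": "MKRRAARLDTVGHKMEPLEHAWWNKLHLL",   # toy sequence
    "IscB_candidate_2": "MSTLLKDQQGGGAAPLLL",              # toy sequence
}

def has_all_domains(sequence):
    return all(re.search(pattern, sequence) for pattern in DOMAIN_MOTIFS.values())

hits = [name for name, seq in candidate_proteins.items() if has_all_domains(seq)]
print("candidates carrying all three domain signatures:", hits)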

“To us, this suggests that IscB—the CRISPR-Cas ancestor—shares a core gene that is prone to evolving into a CRISPR-like system,” the authors wrote.

A Whole New World

If the CRISPR-Cas ancestor also has genome-snipping abilities, what happened to its other family branches?

Using AI, the team constructed a family tree of the protein domains. The resulting tree had multiple Cas9-like children, such as IsrB and TnpB (yeah, catchy I know). Amazingly, each of these proteins seems to have evolved from a separate, unique evolutionary event. Each also has its own “taste” in the makeup of its RNA guides.

But do they work? The team found IsrB genes inside a type of green algae, and the encoded protein was able to cleave DNA easily in the lab. A screen of 6 similar proteins with 12 guides each also found one that snips the human genome inside cultured kidney cells.

The ancestry screen also found another family of similar proteins, TnpB, which “are thought to be the ancestor of Cas12,” the authors said. Unlike Cas9, Cas12 can happily chop single-stranded DNA—for example, for detecting viral genetic material during an infection.

Together, the team dubbed these new genomic scissors “OMEGA,” or Obligate Mobile Element Guided Activity. It sounds like retro-future nerd speak, but the idea is that these mobile genetic elements, which went down a different evolutionary path than Cas proteins, can be guided for altering the genome.

For now, we don’t yet fully understand what OMEGA systems naturally do in their hosts. We do know that they’re incredibly widespread—far more than scientists previously imagined. The Cas12-like family of proteins, TnpB, has more than a million potential sites in bacteria genomes.

These systems might “represent an untapped wealth” of tools similar to CRISPR-Cas9, but potentially with their own intricacies and strengths. We don’t yet know if the new tools can match the proficiency of our current CRISPR, or if they bring new abilities to the table. But the gene editing universe just vastly expanded. And without doubt, the hunt for the next revolutionary tool is on.

Image Credit: vrx / Shutterstock.com

Category: Transhumanism

Electrifying the Future: Toyota Puts Over $13 Billion Into Battery Technology

September 13, 2021 - 16:00

The world’s largest car manufacturer by volume has been sluggish in its efforts to electrify compared to competitors. But Toyota has just announced a huge investment in battery technology that may be a sign it’s shifting course.

 Although Toyota’s Prius hybrid was the first electrified vehicle to really hit the mainstream, the company failed to capitalize on its early lead. It still doesn’t sell a fully electric vehicle in either the US or Japan, at a time when more or less every major automaker—from Volvo to Volkswagen—has at least one model powered by batteries alone.

The company seems to be belatedly joining the party after executives announced that it would invest $13.6 billion in battery technology over the next decade. This includes $9 billion to be spent on manufacturing, which will see it scale up to 10 battery production lines by 2025 and ultimately up to around 70.

During a press briefing, chief technology officer Masahiko Maeda said part of the company’s plan is to reduce the cost of batteries by 30 percent or more through innovations in materials and new designs. They are also working on ways to reduce the amount of energy the car draws from those batteries by 30 percent.

All of this follows from the company’s April announcement that it plans to release 70 electrified cars around the world by 2025, suggesting that it’s finally joining the consensus among automakers that electric vehicles are the future.

But as noted by Green Car Reports, only 15 of those 70 cars will be fully electric, with the rest made up of hybrids or hydrogen vehicles, which the company has also been pushing for a number of years. In contrast, many competitors have announced plans to go fully electric in the coming decade.

Toyota’s reluctance to double down on electric vehicles is all the more confusing considering it is seen as a global leader in developing batteries for electric vehicles. It’s also a frontrunner in the quest to commercialize solid-state batteries, which could significantly increase energy density and therefore the range of electric vehicles.

The explanation seems to lie in the fact that, despite being an early leader in electric cars, Toyota considered electrification a stopgap until cars powered by hydrogen fuel cells could replace gasoline ones. While the company does sell one hydrogen-powered car, its expense and the lack of fueling infrastructure mean adoption is lagging.

Given that the reason for replacing gasoline vehicles is climate change, the fact that hydrogen still has a long way to go until it’s truly green suggests that a future for decarbonizing transport using fuel cells is still a distant dream.

Perhaps surprisingly for a company that led the initial charge to create a greener future for the car, Toyota has even been lobbying against the transition to electric vehicles, according to the New York Times.

While this is probably at least partly an effort to protect its investments in non-battery-focused transport technologies, the company’s argument is that a transition to electric vehicles as rapid as many are suggesting is not practical given the current state of the technology.

Last year, Toyota president Akio Toyoda claimed Japan would run out of electricity if it switched entirely to electric vehicles, unless it spent hundreds of billions of dollars on upgrading its power network. More recently, company director Shigeki Terashi said it was still too early to put all of our eggs in the electric vehicle basket.

So while this new battery investment will certainly be a major boon to efforts to electrify vehicles, it seems Toyota is still not fully on board with the electric vehicle revolution.

Image Credit: Toyota

Category: Transhumanism

New Study Finds a Single Neuron Is a Surprisingly Complex Little Computer

September 12, 2021 - 16:00

Comparing brains to computers is a long and dearly held analogy in both neuroscience and computer science.

It’s not hard to see why.

Our brains can perform many of the tasks we want computers to handle with an easy, mysterious grace. So, it goes, understanding the inner workings of our minds can help us build better computers; and those computers can help us better understand our own minds. Also, if brains are like computers, knowing how much computation it takes them to do what they do can help us predict when machines will match minds.

Indeed, there’s already a productive flow of knowledge between the fields.

Deep learning, a powerful form of artificial intelligence, for example, is loosely modeled on the brain’s vast, layered networks of neurons.

You can think of each “node” in a deep neural network as an artificial neuron. Like neurons, nodes receive signals from other nodes connected to them and perform mathematical operations to transform input into output.

Depending on the signals a node receives, it may opt to send its own signal to all the nodes in its network. In this way, signals cascade through layer upon layer of nodes, progressively tuning and sharpening the algorithm.
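A minimal sketch of the “node” just described, assuming a standard weighted-sum-plus-nonlinearity formulation (here a ReLU); nothing below is specific to the study.

import numpy as np

def node(incoming_signals, weights, bias):
    # weight each incoming signal, sum them, then apply a simple nonlinearity
    # (ReLU) that decides how strong a signal, if any, to pass onward
    return max(0.0, float(np.dot(incoming_signals, weights) + bias))

signals = np.array([0.2, -1.3, 0.7])   # outputs of three upstream nodes
weights = np.array([0.5, 0.1, 0.9])    # learned connection strengths
print(node(signals, weights, bias=0.05))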

The brain works like this too. But the keyword above is loosely.

Scientists know biological neurons are more complex than the artificial neurons employed in deep learning algorithms, but it’s an open question just how much more complex.

In a fascinating paper published recently in the journal Neuron, a team of researchers from the Hebrew University of Jerusalem tried to get us a little closer to an answer. While they expected the results would show biological neurons are more complex, they were surprised at just how much more complex they actually are.

In the study, the team found it took a five- to eight-layer neural network, or nearly 1,000 artificial neurons, to mimic the behavior of a single biological neuron from the brain’s cortex.

Though the researchers caution the results are an upper bound for complexity—as opposed to an exact measurement of it—they also believe their findings might help scientists further zero in on what exactly makes biological neurons so complex. And that knowledge, perhaps, can help engineers design even more capable neural networks and AI.

“[The result] forms a bridge from biological neurons to artificial neurons,” Andreas Tolias, a computational neuroscientist at Baylor College of Medicine, told Quanta last week.

Amazing Brains

Neurons are the cells that make up our brains. There are many different types of neurons, but generally, they have three parts: spindly, branching structures called dendrites, a cell body, and a root-like axon.

On one end, dendrites connect to a network of other neurons at junctures called synapses. At the other end, the axon forms synapses with a different population of neurons. Each cell receives electrochemical signals through its dendrites, filters those signals, and then selectively passes along its own signals (or spikes).

To computationally compare biological and artificial neurons, the team asked: How big of an artificial neural network would it take to simulate the behavior of a single biological neuron?

First, they built a model of a biological neuron (in this case, a pyramidal neuron from a rat’s cortex). The model used some 10,000 differential equations to simulate how and when the neuron would translate a series of input signals into a spike of its own.

They then fed inputs into their simulated neuron, recorded the outputs, and trained deep learning algorithms on all the data. Their goal? Find the algorithm that could most accurately approximate the model.

(Video: A model of a pyramidal neuron (left) receives signals through its dendritic branches. In this case, the signals provoke three spikes.)

They increased the number of layers in the algorithm until it was 99 percent accurate at predicting the simulated neuron’s output given a set of inputs. The sweet spot was at least five layers but no more than eight, or around 1,000 artificial neurons per biological neuron. The deep learning algorithm was much simpler than the original model—but still quite complex.
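The overall recipe can be sketched as below. The stand-in “neuron” function is a crude toy, not the paper’s biophysical model with its roughly 10,000 differential equations, and the layer widths, data sizes, and accuracy metric are arbitrary choices; the point is only the shape of the experiment: generate input-output pairs, then fit networks of increasing depth until the fit is good.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def stand_in_neuron(X):
    # crude toy: 10 "dendritic branches," each with a local saturating
    # nonlinearity, summed at the "soma" and thresholded into spike / no spike
    branches = X.reshape(X.shape[0], 10, -1)
    branch_out = np.tanh(branches.sum(axis=2) / 3.0) ** 2
    return (branch_out.sum(axis=1) > 4.0).astype(float)

X = rng.normal(size=(12000, 100))     # 100 synaptic inputs per trial
y = stand_in_neuron(X)
X_train, X_test, y_train, y_test = X[:10000], X[10000:], y[:10000], y[10000:]

for depth in range(1, 9):             # how many hidden layers does a good fit need?
    net = MLPRegressor(hidden_layer_sizes=(128,) * depth, max_iter=200, random_state=0)
    net.fit(X_train, y_train)
    print(f"{depth} hidden layers: test R^2 = {r2_score(y_test, net.predict(X_test)):.3f}")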

From where does this complexity arise?

As it turns out, it’s mostly due to a type of chemical receptor in dendrites—the NMDA ion channel—and the branching of dendrites in space. “Take away one of those things, and a neuron turns [into] a simple device,” lead author David Beniaguev tweeted in 2019, describing an earlier version of the work published as a preprint.

Indeed, after removing these features, the team found they could match the simplified biological model with but a single-layer deep learning algorithm.

A Moving Benchmark

It’s tempting to extrapolate the team’s results to estimate the computational complexity of the whole brain. But we’re nowhere near such a measure.

For one, it’s possible the team didn’t find the most efficient algorithm.

It’s common for the developer community to rapidly improve upon the first version of an advanced deep learning algorithm. Given the intensive iteration in the study, the team is confident in the results, but they also released the model, data, and algorithm to the scientific community to see if anyone could do better.

Also, the model neuron is from a rat’s brain, as opposed to a human’s, and it’s only one type of brain cell. Further, the study is comparing a model to a model—there is, as of yet, no way to make a direct comparison to a physical neuron in the brain. It’s entirely possible the real thing is more, not less, complex.

Still, the team believes their work can push neuroscience and AI forward.

In the former case, the study is further evidence dendrites are complicated critters worthy of more attention. In the latter, it may lead to radical new algorithmic architectures.

Idan Segev, a coauthor on the paper, suggests engineers should try replacing the simple artificial neurons in today’s algorithms with a mini five-layer network simulating a biological neuron. “We call for the replacement of the deep network technology to make it closer to how the brain works by replacing each simple unit in the deep network today with a unit that represents a neuron, which is already—on its own—deep,” Segev said.

Whether so much added complexity would pay off is uncertain. Experts debate how much of the brain’s detail algorithms need to capture to achieve similar or better results.

But it’s hard to argue with millions of years of evolutionary experimentation. So far, following the brain’s blueprint has been a rewarding strategy. And if this work is any indication, future neural networks may well dwarf today’s in size and complexity.

Image Credit: NICHD/S. Jeong

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through September 11)

September 11, 2021 - 16:00
LONGEVITY

Meet Altos Labs, Silicon Valley’s Latest Wild Bet on Living Forever
Antonio Regalado | MIT Technology Review
“Altos is pursuing biological reprogramming technology, a way to rejuvenate cells in the lab that some scientists think could be extended to revitalize entire animal bodies, ultimately prolonging human life. …The new company…is recruiting a large cadre of university scientists with lavish salaries and the promise that they can pursue unfettered blue-sky research on how cells age and how to reverse that process.”

FUTURE

The Worldview Changing Drugs Poised to Go Mainstream
Ed Prideaux | BBC
“The ‘psychedelic renaissance’ promises to change far more about our societies than simply the medical treatments that doctors prescribe. Unlike other drugs, psychedelics can radically alter the way people see the world. They also bring mystical and hallucinatory experiences that are at the edge of current scientific understanding. So, what might follow if psychedelics become mainstream?”

TECH

A Single Laser Fired Through a Keyhole Can Expose Everything Inside a Room
Andrew Liszewski | Gizmodo
“Being able to see inside a closed room was a skill once reserved for superheroes. But researchers at the Stanford Computational Imaging Lab have expanded on a technique called non-line-of-sight imaging so that just a single point of laser light entering a room can be used to see what physical objects might be inside.”

SCIENCE

One Lab’s Quest to Build Space-Time Out of Quantum Particles
Adam Becker | Quanta
“For over two decades, physicists have pondered how the fabric of space-time may emerge from some kind of quantum entanglement. In Monika Schleier-Smith’s lab at Stanford University, the thought experiment is becoming real. …By engineering highly entangled quantum systems in a tabletop experiment, Schleier-Smith hopes to produce something that looks and acts like the warped space-time predicted by Albert Einstein’s theory of general relativity.”

COMPUTING

Our Flexible Processors Can Now Use Bendable RAM
John Timmer | Ars Technica
“A few months ago, we brought news of a bendable CPU, termed Plastic ARM, that was built of amorphous silicon on a flexible substrate. The use cases for something like this are extremely low-powered devices that can be embedded in clothing or slapped on the surface of irregular objects, allowing them to have a small amount of autonomous computing. …[Now] researchers have built a form of flexible phase-change memory, which is closer in speed to normal RAM than flash memory but requires no power to maintain its state.”

ENERGY

Lithium-ion Batteries Just Made a Big Leap in a Tiny Product
James Temple | MIT Technology Review
“Sila’s novel anode materials packed far more energy into a new Whoop fitness wearable. …It’s a small device but potentially a big step forward for the battery field, where promising lab results often fail to translate to commercial success. … ‘We’re big believers that hope and hype doesn’t change the world—shipping does,’ [Sila CEO] Gene Berdichevsky says.”

ARTIFICIAL INTELLIGENCE

In the US, the AI Industry Risks Becoming Winner-Take-Most
Khari Johnson | Wired
“A new study warns that the American AI industry is highly concentrated in the San Francisco Bay Area and that this could prove to be a weakness in the long run. The Bay leads all other regions of the country in AI research and investment activity, accounting for about one-quarter of AI conference papers, patents, and companies in the US. Bay Area metro areas see levels of AI activity four times higher than other top cities for AI development.”

Image Credit: Shubham Dhage / Unsplash

Category: Transhumanism

New Research Reveals Animals Are Changing Their Body Shapes to Cope With Climate Change

September 10, 2021 - 16:00

Global warming is a big challenge for warm-blooded animals, which must maintain a constant internal body temperature. As anyone who’s experienced heatstroke can tell you, our bodies become severely stressed when we overheat.

Animals are dealing with global warming in various ways. Some move to cooler areas, such as closer to the poles or to higher ground. Some change the timing of key life events such as breeding and migration, so they take place at cooler times. And others evolve to change their body size to cool down more quickly.

Our new research examined another way animal species cope with climate change: by changing the size of their ears, tails, beaks, and other appendages. We reviewed the published literature and found examples of animals increasing appendage size in parallel with climate change and associated temperature increases.

In doing so, we identified multiple examples of animals that are most likely “shape-shifters.” The pattern is widespread, and suggests climate warming may result in fundamental changes to animal form.

Adhering to Allen’s Rule

It’s well known that animals use their appendages to regulate their internal temperature. African elephants, for example, pump warm blood to their large ears, which they then flap to disperse heat. The beaks of birds perform a similar function—blood flow can be diverted to the bill when the bird is hot.

This means there are advantages to bigger appendages in warmer environments. In fact, as far back as the 1870s, American zoologist Joel Allen noted that in colder climates, warm-blooded animals (also known as endotherms) tend to have smaller appendages, while those in warmer climates tend to have larger ones.

This pattern became known as Allen’s rule, which has since been supported by studies of birds and mammals.

Biological patterns such as Allen’s rule can also help make predictions about how animals will evolve as the climate warms. Our research set out to find examples of animal shape-shifting over the past century, consistent with climatic warming and Allen’s rule.

Which Animals Are Changing?

We found most documented examples of shape-shifting involve birds—specifically, increases in beak size.

This includes several species of Australian parrots. Studies show the beak size of gang-gang cockatoos and red-rumped parrots has increased by between four percent and ten percent since 1871.

Mammal appendages are also increasing in size. For example, in the masked shrew, tail and leg length have increased significantly since 1950. And in the great roundleaf bat, wing size increased by 1.64 percent over the same period.

The variety of examples indicates shape-shifting is happening in different types of appendages and in a variety of animals, in many parts of the world. But more studies are needed to determine which kinds of animals are most affected.

Other Uses of Appendages

Of course, animal appendages have uses far beyond regulating body temperature. This means scientists have sometimes focused on other reasons that might explain changes in animal body shape.

For example, studies have shown the average beak size of the Galapagos medium ground finch has changed over time in response to seed size, which is in turn influenced by rainfall. Our research examined previously collected data to determine if temperature also influenced changes in beak size of these finches.

These data do demonstrate that rainfall (and, by extension, seed size) determines beak size. After drier summers, survival of small-beaked birds was reduced.

But we found clear evidence that birds with smaller beaks are also less likely to survive hotter summers. This effect on survival was stronger than that observed with rainfall. This tells us the role of temperature may be as important as other uses of appendages, such as feeding, in driving changes in appendage size.
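
As a rough illustration of the kind of statistical test behind that claim, here is a toy logistic regression of survival against beak size and summer temperature. The data below are synthetic and the coefficients invented; this is not the study's dataset or model, only the general shape of the analysis.

```python
# Illustrative only: synthetic finch data, not the study's measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
beak_mm = rng.normal(9.5, 0.8, n)       # hypothetical beak depths (mm)
summer_temp = rng.normal(27, 2.5, n)    # hypothetical mean summer temperatures (C)

# Fabricated "ground truth": bigger beaks and cooler summers favor survival.
logit = 0.9 * (beak_mm - 9.5) - 0.6 * (summer_temp - 27)
survived = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([beak_mm, summer_temp]), survived)
print("coefficients [beak, temperature]:", model.coef_[0])
# A positive beak coefficient and a negative temperature coefficient mirror the
# qualitative pattern reported: small-beaked birds fare worse in hot summers.
# (A fuller analysis would also test an interaction between beak size and temperature.)
```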

Our research also suggests we can make some predictions about which species are most likely to change appendage size in response to increasing temperatures—namely, those that adhere to Allen’s rule.

These include (with some caveats) starlings, song sparrows, and a host of seabirds and small mammals, such as South American gracile opossums.

Why Does Shape-Shifting Matter?

Our research contributes to scientific understanding of how wildlife will respond to climate change. Apart from improving our capacity to predict the impacts of climate change, this will enable us to identify which species are most vulnerable and require conservation priority.

Last month’s report by the Intergovernmental Panel on Climate Change showed we have very little time to avert catastrophic global warming.

While our research shows some animals are adapting to climate change, many will not. For example, some birds may have to maintain a particular diet, which means they cannot change their beak shape. Other animals may simply not be able to evolve in time.

So while predicting how wildlife will respond to climate change is important, the best way to protect species into the future is to dramatically reduce greenhouse gas emissions and prevent as much global warming as possible.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Free-Photos from Pixabay

Category: Transhumanism

The World’s Largest Direct Air Capture Plant Is Now Pulling CO2 From the Air in Iceland

September 9, 2021 - 19:10

A little over four years ago, the world’s first commercial plant for sucking carbon dioxide out of the air opened near Zurich, Switzerland. The plant was powered by a waste heat recovery facility, with giant fans pushing air through a filtration system that trapped the carbon. The carbon was then separated and sold to buyers, such as a greenhouse that used it to help grow vegetables. The plant ran as a three-year demonstration project, capturing an estimated 900 tons of CO2 (equivalent to the annual emissions of 200 cars) per year.

This week, a plant about four times as large as the Zurich facility started operating in Iceland, joining 15 other direct air capture (DAC) plants that currently operate worldwide. According to the IEA, these plants collectively capture more than 9,000 tons of CO2 per year.

Christened Orca after the Icelandic word for energy, the new plant was built by Swiss company Climeworks in partnership with Icelandic carbon storage firm Carbfix. Orca is the largest of existing facilities of its type, able to capture 4,000 tons of carbon per year. That’s equal to the emissions of 790 cars.

The plant consists of eight “collector containers” each about the size and shape of a standard shipping container. Their fans run on energy from a nearby geothermal power plant, which was part of the reason this location made sense; Iceland has an abundance of geothermal energy, not to mention a subterranean geology that lends itself quite well to carbon sequestration. Orca was built on a lava plateau in the country’s southwest region.

This plant works a little differently than the Zurich plant, in that the captured carbon is liquefied then pumped underground into basalt caverns. Over time (less than two years, according to Carbfix’s website), it turns to stone.

One of the biggest issues with direct air capture is that it’s expensive, and this facility is no exception. Climeworks co-founder Christoph Gebald estimates it’s currently costing $600 to $800 to remove one metric ton of carbon. Costs would need to drop to around a sixth of this level for the company to make a profit. Gebald thinks Climeworks can get costs down to $200 to $300 per ton by 2030, and half that by 2040. The National Academy of Sciences estimated that once the cost of CO2 extraction gets below $100-150 per ton, the air-captured commodity will be economically competitive with traditionally-sourced oil.

The other problem that detractors of DAC cite is its energy usage relative to the amount of CO2 it’s capturing. These facilities use a lot of energy, and they’re not making a lot of difference. Granted, the energy they use will come from renewable sources, but we’re not yet to the point where that energy is unlimited or free. An IEA report from May of this year stated that to reach the carbon-neutral targets that have been set around the world, almost one billion metric tons of CO2 will need to be captured using DAC every year. Our current total of 9,000 tons is paltry in comparison.
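
Putting the figures quoted above side by side makes the gap concrete; the short calculation below uses only numbers from this article.

```python
# Simple arithmetic using only the figures quoted in this article.
current_capture = 9_000            # tons of CO2 per year from all existing DAC plants
needed_capture = 1_000_000_000     # tons per year the IEA says DAC must reach
orca_capacity = 4_000              # tons per year for the new Orca plant

print(f"Required scale-up: {needed_capture / current_capture:,.0f}x current capacity")
print(f"Orca-sized plants needed: {needed_capture / orca_capacity:,.0f}")

cost_today = (600, 800)            # Climeworks' current estimate, $ per ton
cost_competitive = 150             # upper end of the $100-150 per ton threshold cited above
print(f"Costs must fall roughly {cost_today[0] / cost_competitive:.0f}-"
      f"{cost_today[1] / cost_competitive:.0f}x to reach that threshold")
```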

But Climeworks and other companies working on DAC technology are optimistic, saying that automation and increases in energy efficiency will drive down costs. “This is a market that does not yet exist, but a market that urgently needs to be built,” Gebald said. “This plant that we have here is really the blueprint to further scale up and really industrialize.”

Image Credit: Climeworks 

Category: Transhumanism

Hyundai Goes All-In on Hydrogen With Its ‘Trailer Drone’ and More

September 8, 2021 - 16:00

Between the grim outlook reported by the IPCC’s Sixth Assessment Report last month and frequent reports of extreme weather events all over the world, the climate crisis feels like it’s getting more dire by the week. Accordingly, calls for action are intensifying, and companies and governments are scrambling for solutions. Renewables are ramping up, innovative energy storage technologies are being brought to the table, and pledges to go carbon-neutral are piling up as fast as, well, carbon.

South Korea’s Hyundai Motor Group has joined the fray, but on a path that diverges a bit from the crowd; it’s going all-in on hydrogen. At the company’s aptly named Hydrogen Wave Forum this week, it unveiled multiple hydrogen-powered concept vehicles, as well as a strategy for building up its presence in the hydrogen space over the next few years (and decades).

The company unveiled a ground shipping concept it’s calling the Trailer Drone, which sits on a fuel-cell-powered chassis called the e-Bogie. The e-Bogies, named after the frames train cars sit on, have four-wheel independent steering that lets them move in ways normal cars and trucks can’t, like sideways (in crab fashion) or in circles. The modular e-Bogies can be combined to carry different-sized trailers, and can go an estimated 621 miles (1,000 kilometers) on a single fill-up. The system would be autonomous, and the concept doesn’t include a cab or seat for a human driver.

Hyundai also unveiled a hydrogen-powered concept sports car called the Vision FK. The car is a plug-in hybrid, meaning the fuel cell charges a traditional battery. The 500-kilowatt fuel cell gives the car the ability to go from 0 to 100 kilometers per hour in under 4 seconds. The carmaker didn’t give a timeline for when (or whether) the Vision FK would enter production, though.

Finally, Hyundai said it’s working on hydrogen-powered versions of its existing commercial vehicles, and plans to bring those to market by 2028.

Hyundai is by no means new to the hydrogen game; the company already has fuel-cell-powered trucks and buses on the roads, including its Xcient truck, which is in use in Switzerland, and its Elec City Fuel Cell bus, which is on roads in South Korea and being trialed in Germany.

One of the technology’s biggest detractors is none other than Elon Musk, who finds hydrogen fuel cells “extremely silly.” But Toyota would disagree with Musk’s take; the company is building a hydrogen-powered prototype city near the base of Mount Fuji called Woven City.

For its part, Hyundai is aiming to get its fuel cell powertrain to a point where it can compete cost-wise with electric vehicle batteries by 2030.

A study released earlier this year by the Hydrogen Council, with analysis from McKinsey, found that when you factor in the relative efficiencies of the power sources and lifetime costs of a truck, green hydrogen could reach cost parity with diesel by 2030. A paper published in Joule last month laid out a road map for building a green hydrogen economy.

Despite these promising outlooks, it’s still highly uncertain whether hydrogen will become a widespread, cost-effective energy source. But it seems we’re getting to a point where it’s worth looking into any option that could make the future of the planet look brighter than it does right now.

Image Credit: Hyundai

Category: Transhumanism

Gene Therapies Are Almost Here, But Healthcare Isn’t Ready for Sky-High Prices

September 7, 2021 - 16:00

Zolgensma—which treats spinal muscular atrophy, a rare genetic disease that damages nerve cells, leading to muscle decay—is currently the most expensive drug in the world. A one-time treatment of the life-saving drug for a young child costs $2.1 million.

While Zolgensma’s exorbitant price is an outlier today, by the end of the decade there’ll be dozens of cell and gene therapies, costing hundreds of thousands to millions of dollars for a single dose. The Food and Drug Administration predicts that by 2025 it will be approving 10 to 20 cell and gene therapies every year.

I’m a biotechnology and policy expert focused on improving access to cell and gene therapies. While these forthcoming treatments have the potential to save many lives and ease much suffering, healthcare systems around the world aren’t equipped to handle them. Creative new payment systems will be necessary to ensure everyone has equal access to these therapies.

The Rise of Gene Therapies

Currently, only 5% of the roughly 7,000 rare diseases have an FDA-approved drug, leaving thousands of conditions without a cure.

But over the past few years, genetic engineering technology has made impressive strides toward the ultimate goal of curing disease by changing a cell’s genetic instructions.

The resulting gene therapies will be able to treat many diseases at the DNA level in a single dose.

Thousands of diseases are the result of DNA errors, which prevent cells from functioning normally. By directly correcting disease-causing mutations or altering a cell’s DNA to give the cell new tools to fight disease, gene therapy offers a powerful new approach to medicine.

There are 1,745 gene therapies in development around the world. A large fraction of this research focuses on rare genetic diseases, which affect 400 million people worldwide.

We may soon see cures for rare diseases like sickle cell disease, muscular dystrophy, and progeria, a rare and progressive genetic disorder that causes children to age rapidly.

Further into the future, gene therapies may help treat more common conditions, like heart disease and chronic pain.

Sky-High Price Tags

The problem is these therapies will carry enormous price tags.

Gene therapies are the result of years of research and development totaling hundreds of millions to billions of dollars. Sophisticated manufacturing facilities, highly trained personnel and complex biological materials set gene therapies apart from other drugs.

Pharmaceutical companies say recouping costs, especially for drugs with small numbers of potential patients, means higher prices.

The toll of high prices on healthcare systems will not be trivial. Consider a gene therapy cure for sickle cell disease, which is expected to be available in the next few years. The estimated price of this treatment is $1.85 million per patient. As a result, economists predict that it could cost a single state Medicaid program almost $30 million per year, even assuming only 7% of the eligible population received the treatment.
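
The strain is easy to see from the numbers in that example alone; a quick calculation:

```python
# Rough arithmetic using only the figures cited above.
price_per_patient = 1_850_000     # estimated price of a sickle cell gene therapy ($)
annual_budget_hit = 30_000_000    # projected yearly cost to a single state program ($)

patients_per_year = annual_budget_hit / price_per_patient
print(f"~{patients_per_year:.0f} treated patients already cost "
      f"~${annual_budget_hit / 1e6:.0f}M per year")
# And that projection assumes only 7% of eligible patients actually receive the therapy.
```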

And that’s just one drug. Introducing dozens of similar therapies into the market would strain healthcare systems and create difficult financial decisions for private insurers.

Lowering Costs, Finding New Ways to Pay

One solution for improving patient access to gene therapies would be to simply demand drugmakers charge less money, a tactic recently taken in Germany.

But this comes with a lot of challenges and may mean that companies simply refuse to offer the treatment in certain places.

I think a more balanced and sustainable approach is two-fold. In the short term, it’ll be important to develop new payment methods that entice insurance companies to cover high-cost therapies and distribute risks across patients, insurance companies, and drugmakers. In the long run, improved gene therapy technology will inevitably help lower costs.

For innovative payment models, one tested approach is tying coverage to patient health outcomes. Since these therapies are still experimental and relatively new, there isn’t much data to help insurers make the risky decision of whether to cover them. If an insurance company is paying $1 million for a therapy, it had better work.

In outcomes-based models, insurers will either pay for some of the therapy upfront and the rest only if the patient improves, or cover the entire cost upfront and receive a reimbursement if the patient doesn’t get better. These models help insurers share financial risk with the drug developers.

Another model is known as the “Netflix model” and would act as a subscription-based service. Under this model, a state Medicaid program would pay a pharmaceutical company a flat fee for access to unlimited treatments. This would allow a state to provide the treatment to residents who qualify, helping governments balance their budget books while giving drugmakers money up front.

This model has worked well for improving access to hepatitis C drugs in Louisiana.
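
To see how these structures shift risk, here is a toy comparison of paying list price upfront versus an outcomes-based contract. Every number is invented for illustration; no real therapy, insurer, or contract is being modeled.

```python
# Invented numbers for illustration only.
price = 1_000_000          # hypothetical list price per patient ($)
n_patients = 100
response_rate = 0.70       # hypothetical fraction of patients who actually benefit

upfront_cost = n_patients * price
# Outcomes-based: pay half at treatment, the other half only if the patient responds.
outcomes_cost = n_patients * (0.5 * price) + n_patients * response_rate * (0.5 * price)

print(f"Pay in full:      ${upfront_cost:,.0f}")
print(f"Outcomes-based:   ${outcomes_cost:,.0f}")
print(f"Risk shifted to the drugmaker: ${upfront_cost - outcomes_cost:,.0f}")
```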

On the cost front, the key to improving access will be investing in new technologies that simplify medical procedures. For example, the costly sickle cell gene therapies currently in clinical trials require a series of expensive steps, including a stem cell transplant.

The Bill & Melinda Gates Foundation, the National Institutes of Health, and Novartis are partnering to develop an alternative approach that would involve a simple injection of gene therapy molecules. The goal of their collaboration is to help bring an affordable sickle cell treatment to patients in Africa and other low-resource settings.

Improving access to gene therapies requires collaboration and compromise across governments, nonprofits, pharmaceutical companies, and insurers. Taking proactive steps now to develop innovative payment models and invest in new technologies will help ensure that healthcare systems are ready to deliver on the promise of gene therapies.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Bill & Melinda Gates Foundation has provided funding for The Conversation US and provides funding for The Conversation internationally.

Image Credit: nobeastsofierce / Shutterstock.com

Category: Transhumanism

New Mini-CRISPR Systems Could Dramatically Expand the Scope of Gene Therapy

September 5, 2021 - 16:00

CRISPR has revolutionized genome engineering, but the size of its molecular gene-editing components has limited its therapeutic uses so far. Now, a trio of new research papers detail compact versions of the gene-editing tool that could significantly expand its applications.

While we’ve been able to edit genomes since the 1990s, the introduction of CRISPR in the early 2010s transformed the field thanks to its flexibility, simplicity, and efficiency. The technology is based on a rudimentary immune system found in microbes that combines genetic mugshots of viruses with an enzyme called Cas9 that hunts them down and chops up their DNA.

This system can be re-purposed by replacing the viral genetic code with whatever sequence you want to edit and precisely snipping the DNA at that location. One outstanding problem, however, is that the system’s large physical size makes it hard to deliver to cells effectively.

Adeno-associated viral vectors (AAVs)—which are small, non-pathogenic viruses that can be re-purposed to inject genetic code into cells—are the gold standard delivery system for in vivo gene therapies. They produce little immune response and have received FDA approval for therapeutic use, but their tiny size makes using them to deliver CRISPR tricky.

Now, however, three research papers published last week show that a family of tiny Cas proteins derived from archaea are small enough to fit in AAVs and can edit human DNA.

The most commonly used Cas9 protein, which is 1,368 amino acids long, comes from the bacterium Streptococcus pyogenes. When combined with the RNA sequence needed to guide it to its target, that’s too big to fit in an AAV. And while you can deliver them separately, this significantly reduces efficiency, as you can’t guarantee every cell will receive both.

But there’s considerable diversity in the proteins used in natural CRISPR systems, so researchers have been screening the microbial world for smaller alternatives.

Two promising candidates are Cas9 proteins from Staphylococcus aureus and Streptococcus thermophilus, which are 1,053 and 1,121 amino acids long, respectively. Their relatively smaller size makes it possible to package them in an AAV along with their guide RNA.

That said, even these two smaller alternatives may not be small enough.

In recent years CRISPR’s capabilities have been expanded significantly, going from simply snipping a single gene to inserting genes, swapping DNA letters, or targeting multiple sites at once. All of this requires much more genetic material to be delivered into the cell, which rules out AAVs.

Yet another family of proteins, known as Cas12f, has garnered attention for its tiny proportions—generally between 400 and 700 amino acids—but it was not known whether these proteins could be coaxed to work outside microbes.
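
The packaging problem is largely arithmetic: each amino acid is encoded by three DNA bases, and a single AAV can carry only about 4.7 kilobases of cargo (a commonly cited figure; the usable space also depends on the promoter and other regulatory elements, for which the sketch below assumes roughly one kilobase). A rough comparison of the proteins mentioned above:

```python
# Rough size arithmetic. The 4.7 kb AAV limit is a commonly cited figure, and the
# ~1 kb allowance for promoter, polyA, and guide RNA cassette is an assumption.
AAV_LIMIT_BP = 4700
OVERHEAD_BP = 1000

proteins = {
    "SpCas9 (S. pyogenes)":      1368,
    "SaCas9 (S. aureus)":        1053,
    "St1Cas9 (S. thermophilus)": 1121,
    "Cas12f (typical)":           550,
}

for name, n_aa in proteins.items():
    coding_bp = n_aa * 3                     # three DNA bases per amino acid
    fits = coding_bp + OVERHEAD_BP <= AAV_LIMIT_BP
    print(f"{name:28s} {coding_bp:5d} bp coding -> fits in one AAV with its guide: {fits}")
```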

That’s where last week’s papers, published in Nature Biotechnology and Nature Chemical Biology, come in. Scientists showed that proteins from this family could be packaged inside AAVs along with their guide RNA and delivered to human cells to make effective edits.

A third paper in Molecular Cell used protein engineering to transform a Cas12f protein that didn’t appear to work in mammalian cells into one that did. While the team didn’t actually test whether they could deliver the protein using AAVs, they showed it was small enough, even when combined with a variety of advanced tools like prime and base editors.

These breakthroughs could provide a significant boost to in vivo therapies. CRISPR delivered by lipid nanoparticles—the same mechanism used in mRNA vaccines—is already making its way into the clinic, but this approach is significantly less efficient than AAVs.

These are still very early-stage studies, though, and despite the promising editing performance, it will take a lot more research to properly characterize the capabilities and safety profiles of these new proteins. If they do turn out to be as effective as the original CRISPR system, however, they could dramatically expand the scope of potential therapies.

Image Credit: Jazzlw / Wikimedia Commons

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through September 4)

September 4, 2021 - 16:00
COMPUTING

The $150 Million Machine Keeping Moore’s Law Alive
Will Knight | Wired
“The technology will be crucial for making more advanced smartphones and cloud computers, and also for key areas of emerging technology such as artificial intelligence, biotechnology, and robotics. ‘The death of Moore’s law has been greatly exaggerated,’ del Alamo says. ‘I think it’s going to go on for quite some time.’”

ARTIFICIAL INTELLIGENCE

These Super-Efficient, Artificial Neurons Do Not Use Electrons
Payal Dhar | IEEE Spectrum
“[Though] artificial intelligence has come a long way, these systems are still far from matching the brain’s energy efficiency. …’The human brain…needs only 20 watts [to function], essentially [as much as] a light bulb,’ says Paul Robin, one of the scientists on the study. ‘Computers need much more energy. Our idea is that maybe the reason why our brain is so much more efficient is that it uses ions and not electrons to function.’”

FUTURE

Artificial Intelligence and the ‘Gods Behind the Masks’
Kai-Fu Lee and Chen Qiufan | Wired
“In an excerpt from AI 2041: Ten Visions for Our Future, Kai-Fu Lee and Chen Qiufan explore what happens when deepfakers attack the deepfakes. …Touching on impending breakthroughs in computer vision, biometrics, and AI security, it imagines a future world marked by cat-and-mouse games between deepfakers and detectors, and between defenders and perpetrators.”

ENERGY

This Wildly Reinvented Wind Turbine Generates Five Times More Energy Than Its Competitors
Elissaveta M. Brandon | Fast Company
“Unlike traditional wind turbines, which consist of one pole and three gargantuan blades, the so-called Wind Catcher is articulated in a square grid with over 100 small blades. At 1,000 feet high, the system is over three times as tall as an average wind turbine, and it stands on a floating platform that’s anchored to the ocean floor.”

ROBOTICS

Segway’s New Lawn Robot Uses GPS to Cut Your Grass
David Watsky | CNET
“While it’s not the first robotic lawnmower, the Navimov’s value proposition against a competitive set is that it doesn’t require boundary cords as with most other devices in the category. Rather, it relies on something called the Exact Fusion Locating System—also known as ‘GPS’—to allow ‘precise positions and systematic mowing patterns’ in an effort to get you that perfectly manicured lawn without having to, ya know, actually mow it.”

SPACE

NASA’s Perseverance Rover Finally Scooped Up a Piece of Mars
Neel V. Patel | MIT Technology Review
“The rover bounced back from a failed attempt and acquired a sample of rock and soil that could reveal the secrets of ancient life on Mars. …[It] marks the first time a sample has ever been recovered on the planet. …Collecting samples is one of the marquee goals of the mission. Perseverance is equipped with 43 collection tubes, and NASA hopes to fill them all with rock and soil samples from Mars to one day bring back to Earth.”

ETHICS

The Fight to Define When AI Is ‘High Risk’
Khari Johnson | Wired
“The AI Act is one of the first major policy initiatives worldwide focused on protecting people from harmful AI. If enacted, it will classify AI systems according to risk, more strictly regulate AI that’s deemed high risk to humans, and ban some forms of AI entirely, including real-time facial recognition in some instances. In the meantime, corporations and interest groups are publicly lobbying lawmakers to amend the proposal according to their interests.”

Image Credit: Hector Falcon / Unsplash

Category: Transhumanism

This Room Can Wirelessly Charge Devices Anywhere Within Its Walls

September 3, 2021 - 16:00

Today, wireless charging is little more than a gimmick for high-end smartphones or pricey electric toothbrushes. But a new approach that can charge devices anywhere in a room could one day allow untethered factories where machinery is powered without cables.

As the number of gadgets we use has steadily grown, so too has the number of cables and chargers cluttering up our living spaces. This has spurred growing interest in wireless charging systems, but the distances they work over are very short, and they still have to be plugged into an outlet. So, ultimately, they make little difference.

Now though, researchers have devised a way to wirelessly power small electronic devices anywhere in a room. It requires a pretty hefty retrofit of the room itself, but the team says it could eventually be used to power everything from mobile robots in factories to medical implants in people.

“This really ups the power of the ubiquitous computing world,” Alanson Sample, from the University of Michigan, said in a press release. “You could put a computer in anything without ever having to worry about charging or plugging in.”

Efforts to beam power over longer distances have typically used microwaves to transmit it. But such approaches require large antennas and targeting systems. They also present risks for spaces where humans are present because microwaves can damage biological tissue.

Commercial wireless chargers instead rely on passing a current through a wire charging coil to create a magnetic field, which induces an electric current in a wire receiving coil installed in the device you want to charge. However, the approach only works over very short distances—roughly equal to the diameter of the charging coil.

The new approach, outlined in a paper in Nature Electronics, works on similar principles, but essentially turns the entire room into a giant magnetic charger, allowing any device within the room that has a receiving coil to draw power.

To build the system, Sample and colleagues from the University of Tokyo installed conductive aluminum panels in the room’s walls, floor, and ceiling and inserted a large copper pole in the middle of it. They then mounted devices, called lumped capacitors, in rows running horizontally through the middle of each panel and at the center of the pole.

Lumped capacitors line the aluminum walls of the wireless charging room. Image Credit: The University of Tokyo

When current passes through the panels, it’s channeled into the capacitors, generating magnetic fields that permeate the 100-square-foot room and deliver 50 watts of power to any devices in it.

Importantly, the capacitors also isolate potentially harmful electric fields within themselves. As a result, the team showed the system doesn’t exceed Federal Communications Commission (FCC) guidelines for electromagnetic energy exposure.

This is actually the second incarnation of this technology. Sample first introduced the idea in a 2017 paper in PLOS ONE while working for Disney. But the latest research solves a crucial limitation of the earlier work. Previously the system produced a single magnetic field that swirled in a circle around the central pole, resulting in dead spots in the corners of the square room. The new setup creates two simultaneous magnetic fields, one spinning around the pole and another concentrated near the walls themselves.

This way the researchers were able to achieve charging efficiency above 50 percent in 98 percent of the room compared to only 5.75 percent of the room for the previous iteration. They also found that if they only relied on the second magnetic field, they could remove the obstructive pole and still get reasonable charging in most of the room (apart from right at the center).

While that’s a significant improvement, it still means that on average 50 percent of the power coming out of the wall socket is wasted. Such low efficiencies are a common problem for wireless charging, as an investigation by OneZero found last year.

Given the small amount of power required to charge everyday devices, it’s unlikely to have an especially notable impact on most users’ power bills, according to the report. But at a society-wide scale, it could be a significant waste of power and a source of unnecessary carbon emissions.
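
For a sense of what 50 percent efficiency means in practice, here is a rough calculation for a single phone. The battery size and charging frequency are assumptions for illustration, not figures from the study.

```python
# Assumed values: a ~15 Wh phone battery charged once a day.
battery_wh = 15
charges_per_year = 365
efficiency = 0.5                     # roughly the efficiency reported for most of the room

energy_drawn = battery_wh * charges_per_year / efficiency      # Wh pulled from the wall
energy_wasted = energy_drawn - battery_wh * charges_per_year   # Wh lost along the way

print(f"Drawn from the wall: {energy_drawn / 1000:.1f} kWh per year")
print(f"Wasted:              {energy_wasted / 1000:.1f} kWh per year")
# Roughly 5.5 kWh per year wasted per phone: trivial for one user,
# but meaningful if multiplied across millions of devices.
```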

This is only a prototype though, and considering the dramatic increase in efficiency between the first and second versions, this efficiency gap could be closed. A more pressing concern might be the cost and complexity of retrofitting buildings with massive aluminum plates in the walls.

Indeed, the researchers are working on both issues. “We’ve just developed a brand-new technique. Now we have to go figure out how to make it practical,” Sample told Scientific American.

Still, while this kind of seamless wireless charging won’t be ubiquitous in the near term, the technique could soon be used in niche situations, like charging cabinets for power tools, and ultimately, the researchers think it could make the factories of the future cable-free.

Image Credit: The University of Tokyo

Category: Transhumanism

A Quarter of Sun-Like Stars Eat Their Own Planets, According to New Research

September 2, 2021 - 16:00

How rare is our solar system? In the 30 years or so since planets were first discovered orbiting stars other than our sun, we have found that planetary systems are common in the galaxy. However, many of them are quite different from the solar system we know.

The planets in our solar system revolve around the sun in stable and almost circular paths, which suggests the orbits have not changed much since the planets first formed. But many planetary systems orbiting around other stars have suffered from a very chaotic past.

The relatively calm history of our solar system has favored the flourishing of life here on Earth. In the search for alien worlds that may contain life, we can narrow down the targets if we have a way to identify systems that have had similarly peaceful pasts.

Our international team of astronomers has tackled this issue in research published in Nature Astronomy. We found that between 20 and 35 percent of sun-like stars eat their own planets, with the most likely figure being 27 percent.

This suggests at least a quarter of planetary systems orbiting stars similar to the sun have had a very chaotic and dynamic past.

Chaotic Histories and Binary Stars

Astronomers have seen several exoplanetary systems in which large or medium-sized planets have moved around significantly. The gravity of these migrating planets may also have perturbed the paths of the other planets or even pushed them into unstable orbits.

In most of these very dynamic systems, it is also likely some of the planets have fallen into the host star. However, we didn’t know how common these chaotic systems are relative to quieter systems like ours, whose orderly architecture has favored the flourishing of life on Earth.

Even with the most precise astronomical instruments available, it would be very hard to work this out by directly studying exoplanetary systems. Instead, we analyzed the chemical composition of stars in binary systems.

Binary systems are made up of two stars in orbit around one another. The two stars generally formed at the same time from the same gas, so we expect they should contain the same mix of elements.

However, if a planet falls into one of the two stars, it is dissolved in the star’s outer layer. This can modify the chemical composition of the star, which means we see more of the elements that form rocky planets, such as iron, than we otherwise would.

Traces of Rocky Planets

We inspected the chemical makeup of 107 binary systems composed of sun-like stars by analyzing the spectrum of light they produce. From this, we established how many of the stars contained more planetary material than their companion star.

We also found three things that add up to unambiguous evidence that the chemical differences observed among binary pairs were caused by eating planets.

First, we found that stars with a thinner outer layer have a higher probability of being richer in iron than their companion. This is consistent with planet-eating, as when planetary material is diluted in a thinner outer layer it makes a bigger change to the layer’s chemical composition.

Second, stars richer in iron and other rocky-planet elements also contain more lithium than their companions. Lithium is quickly destroyed in stars, while it is conserved in planets. So an anomalously high level of lithium in a star must have arrived after the star formed, which fits with the idea that the lithium was carried by a planet until it was eaten by the star.

Third, the stars containing more iron than their companion also contain more iron than similar stars in the galaxy. However, the same stars have standard abundances of carbon, which is a volatile element and for that reason is not carried by rocks. Therefore these stars have been chemically enriched by rocks, from planets or planetary material.

The Hunt for Earth 2.0

These results represent a breakthrough for stellar astrophysics and exoplanet exploration. Not only have we found that eating planets can change the chemical composition of sun-like stars, but also that a significant fraction of their planetary systems have had a very dynamic past, unlike our solar system.

Finally, our study opens the possibility of using chemical analysis to identify stars that are more likely to host true analogues of our calm solar system.

There are millions of relatively nearby stars similar to the sun. Without a method to identify the most promising targets, the search for Earth 2.0 will be like the search for the proverbial needle in a haystack.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA / Tim Pyle

Category: Transhumanism

Better Than Batteries? A Startup That’s Storing Energy in Concrete Blocks Just Raised $100 Million

September 1, 2021 - 16:00

The Intergovernmental Panel on Climate Change released its Sixth Assessment Report in early August, and the outlook isn’t good. The report has added renewed urgency to humanity’s effort to curb climate change.

The price of solar energy dropped 89 percent in 10 years, and new wind farms are being built both on land and offshore (with ever-bigger turbines capable of generating ever more energy). But simply adding more wind and solar generation capacity won’t get us very far if we don’t have a cost-effective, planet-friendly way to store the energy they produce.

As Zia Huque, general partner at Prime Movers Lab, put it, “To truly harness the power of renewable energy, the world needs to develop reliable, flexible storage solutions for when the sun does not shine or the wind does not blow.”

A startup called Energy Vault is working on a unique storage method, and they must be on the right track, because they just received over $100 million in Series C funding last week.

The method was inspired by pumped hydro, which has been around since the 1920s and uses surplus generating capacity to pump water up into a reservoir. When the water is released, it flows down through turbines and generates energy just like conventional hydropower.

Now imagine the same concept, but with heavy solid blocks and a tall tower rather than water and a reservoir. When there’s excess power—on a sunny or windy day with low electricity demand, for example—a mechanical crane uses it to lift the blocks 35 stories into the air. Then the blocks are held there until demand outpaces supply. When they’re lowered to the ground (or lowered a few hundred feet through the air), their weight pulls cables that spin turbines, generating electricity.

In this case, “heavy” means 35 tons (70,000 pounds or 31,751 kg) per block. The blocks are made of a composite material that uses soil and locally-sourced waste, which can include anything from concrete debris and coal ash to decommissioned wind turbine blades (talk about coming full circle). Besides putting material that would otherwise go into a landfill to good use, this also means the blocks can be made locally, and thus don’t need to be transported (and imagine the cost and complexity of transporting something that heavy, oy).

The cranes that lift and lower the blocks have six arms, and they’re controlled by fully-automated custom software. Energy Vault says the towers will have a storage capacity up to 80 megawatt-hours, and be able to continuously discharge 4 to 8 megawatts for 8 to 16 hours. The technology is best suited for long-duration storage with very fast response times.
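
The underlying physics is just gravitational potential energy, E = mgh. Taking the 31,751 kg block mass quoted above and assuming a lift height of roughly 120 meters for a 35-story tower (the exact height isn't given here), each block lift stores on the order of 10 kilowatt-hours:

```python
# E = m * g * h; block mass is from the article, lift height is an assumption (~120 m).
m = 31_751          # kg per block
g = 9.81            # m/s^2
h = 120             # assumed lift height in meters for a ~35-story tower

joules_per_lift = m * g * h
kwh_per_lift = joules_per_lift / 3.6e6         # 1 kWh = 3.6 million joules
lifts_for_full_tower = 80_000 / kwh_per_lift   # 80 MWh = 80,000 kWh claimed capacity

print(f"Energy per block lift: {kwh_per_lift:.1f} kWh")
print(f"Block lifts to store 80 MWh: {lifts_for_full_tower:,.0f}")
```

In other words, storing the full 80 megawatt-hours means cycling thousands of block lifts, which is why the fully automated, six-armed cranes are central to the design.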

The Series C funding was led by Prime Movers Lab, with existing investors SoftBank and Saudi Aramco adding additional funds and several new investors joining. Energy Vault plans to use the funding to roll out its EVx platform, launched in April of this year. The platform includes performance enhancements like round-trip efficiency up to 85 percent, a lifespan of over 35 years, and a flexible, modular design that’s shorter than the original—which means it could more easily be built in or near densely-populated areas.

Huque called Energy Vault a “gamechanger” in the transition to green energy, saying the company “has cracked the code with a transformative solution…designed to fulfill clean energy demand 24/7 with a more efficient, durable, and environmentally sustainable approach.”

The company will roll out its EVx platform in the US late this year, moving on to fulfill contracts in Europe, the Middle East, and Australia in 2022.

Image Credit: Energy Vault

Category: Transhumanism

Deep Learning Is Tackling Another Core Biology Mystery: RNA Structure

August 31, 2021 - 16:00

Deep learning is solving biology’s deepest secrets at breathtaking speed.

Just a month ago, DeepMind cracked a 50-year-old grand challenge: protein folding. A week later, they produced a totally transformative database of more than 350,000 protein structures, including over 98 percent of known human proteins. Structure is at the heart of biological functions. The data dump, set to explode to 130 million structures by the end of the year, allows scientists to foray into previous “dark matter”—proteins unseen and untested—of the human body’s makeup.

The end result is nothing short of revolutionary. From basic life science research to developing new medications to fight our toughest disease foes like cancer, deep learning gave us a golden key to unlock new biological mechanisms—either natural or synthetic—that were previously unattainable.

Now, the AI darling is set to do the same for RNA.

As the middle child of the “DNA to RNA to protein” central dogma, RNA didn’t get much press until its Covid-19 vaccine contribution. But the molecule is a double hero: it both carries genetic information, and—depending on its structure—can catalyze biological functions, regulate which genes are turned on, tweak your immune system, and even crazier, potentially pass down “memories” through generations.

It’s also frustratingly difficult to understand.

Similar to proteins, RNA also folds into complicated 3D structures. The difference, according to Drs. Rhiju Das and Ron Dror at Stanford University, is that we know comparatively little about these molecules. There are 30 times as many types of RNA as there are proteins, but the number of deciphered RNA structures is less than one percent compared to proteins.

The Stanford team decided to bridge that gap. In a paper published last week in Science, they described a deep learning algorithm called ARES (Atomic Rotationally Equivalent Scorer) that efficiently solves RNA structures, blasting previous attempts out of the water.

The authors “have achieved notable progress in a field that has proven recalcitrant to transformative advances,” said Dr. Kevin Weeks at the University of North Carolina, who was not involved in the study.

Even more impressive, ARES was trained on only 18 RNA structures, yet was able to extract substantial “building block” rules for RNA folding that’ll be further tested in experimental labs. ARES is also input agnostic, in that it isn’t specifically tailored to RNA.

“This approach is applicable to diverse problems in structural biology, chemistry, materials science, and beyond,” the authors said.

Meet RNA

The importance of this biomolecule for our everyday lives is probably summarized as “Covid vaccine, mic drop.”

But it’s so much more.

Like proteins, RNA is transcribed from DNA. It also has four letters, A, U, C, and G, with A grabbing U and C tethered to G. RNA is a whole family, with the most well-known type being messenger RNA, or mRNA, which carries the genetic instructions to build proteins. But there’s also transfer RNA, or tRNA—I like to think of this as a transport drone—that grabs onto amino acids and shuttles them to the protein factory, microRNA that controls gene expression, and even stranger cousins that we understand little about.

Bottom line: RNA is both a powerful target and inspiration for genetic medicine or vaccines. One way to shut off a gene without actually touching it, for example, is to kill its RNA messenger. Compared to gene therapy, targeting RNA could have fewer unintended effects, all the while keeping our genetic blueprint intact.

In my head, RNA often resembles tangled headphones. It starts as a string, but subsequently tangles into a loop-de-loop—like twisting a rubber band. That twisty structure then twists again with surrounding loops, forming a tertiary structure.

Unlike frustratingly annoying headphones, RNA twists in semi-predictable ways. It tends to settle into one of several structures. These are kind of like the shape your body contorts into during a bunch of dance moves. Tertiary RNA structures then stitch these dance moves together into a “motif.”
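
For a flavor of how far simple pairing rules alone can take you, here is a classic toy algorithm (the Nussinov dynamic program) that folds an RNA string by maximizing A-U and C-G pairs. This is not how ARES works, and real folding is governed by thermodynamics rather than pair counting, but it shows why secondary structure is the comparatively tractable layer of the problem.

```python
# Toy secondary-structure folding by maximizing Watson-Crick pairs (Nussinov, 1978).
def nussinov_max_pairs(seq: str, min_loop: int = 3) -> int:
    pairs = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C")}
    n = len(seq)
    best = [[0] * n for _ in range(n)]           # best[i][j] = max pairs in seq[i..j]
    for span in range(min_loop + 1, n):          # grow subsequences from short to long
        for i in range(n - span):
            j = i + span
            score = best[i + 1][j]               # option 1: base i stays unpaired
            for k in range(i + min_loop + 1, j + 1):
                if (seq[i], seq[k]) in pairs:    # option 2: base i pairs with base k
                    left = best[i + 1][k - 1]
                    right = best[k + 1][j] if k + 1 <= j else 0
                    score = max(score, left + right + 1)
            best[i][j] = score
    return best[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCC"))  # prints 2: two nested G-C pairs closing a small hairpin
```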

“Every RNA likely has a distinct structural personality,” said Weeks.

This seeming simplicity is what makes researchers tear their hair out. RNA’s building blocks are simple—just four letters. They also fold into semi-rigid structures before turning into more complicated tertiary models. Yet “despite these simplifying features, the modeling of complex RNA structures has proven to be difficult,” said Weeks.

The Prediction Conundrum

Current deep learning solutions usually start with one requirement: a ton of training examples, so that each layer of the neural network can begin to learn how to efficiently extract features—information that allows the AI to make solid predictions.

That’s a no-go for RNA. Unlike protein structures, RNA simply doesn’t have enough experimentally tried and true examples.

With ARES, the authors took an eyebrow-raising approach. The algorithm doesn’t care about RNA. It discards anything we already know about the molecule and its functions. Instead, it focused only on the arrangement of atoms.

ARES was first trained with a small set of motifs known from previous RNA structures. The team also added a large bunch of alternative examples of the same structure that were incorrect. Digesting these examples, ARES slowly tweaked its neural network parameters so that the program began learning how each atom and its placement contributes to the overall molecule’s function.

Similar to a classic computer vision algorithm that gradually extracts features—from pixels to lines and shapes—ARES does the same. The layers in its neural network cover both fine and coarse scales. When challenged with a new set of RNA structures, many of which are far more complex than the training ones, ARES was able to distill patterns and novel motifs, recognizing how the letters bind.
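
To make that concrete, here is a deliberately simplified, hypothetical sketch of the training setup: a network that takes the raw atom coordinates of a candidate RNA model and learns to score how far it is from the true structure. The real ARES uses a rotationally equivariant architecture and real structural data; this plain PyTorch version, with stand-in data and made-up sizes, only conveys the idea of learning a scorer from atoms alone.

```python
# Simplified sketch of "learn a score from atoms only"; not the ARES architecture.
import torch
import torch.nn as nn

n_atoms = 200                       # assumed number of atoms per candidate model
n_models = 1024                     # assumed number of candidate structures

# Stand-in data: random coordinates and a fake "distance from the true structure" label.
coords = torch.randn(n_models, n_atoms, 3)
rmsd_labels = coords.abs().mean(dim=(1, 2))   # placeholder target, not real RMSD

class AtomScorer(nn.Module):
    """Embed each atom's coordinates, pool over atoms, regress a single score."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.atom_mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 1)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        per_atom = self.atom_mlp(xyz)          # (models, atoms, hidden)
        pooled = per_atom.mean(dim=1)          # order-invariant pooling over atoms
        return self.head(pooled).squeeze(-1)   # one score per candidate structure

model = AtomScorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), rmsd_labels)
    loss.backward()
    opt.step()
# At inference time, candidate structures would simply be ranked by their predicted score.
```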

“It learns entirely from atomic structure, using no other information…and it makes no assumptions about what structural features might be important,” the authors said. They didn’t even provide any basic information to the algorithm, such as that RNA is made up of four-letter chains.

As another benchmark, the team next challenged ARES to RNA-Puzzles. Kicked off in 2011, RNA-Puzzles is a community challenge for structural biologists to test their prediction algorithms against known experimental RNA structures. ARES blew the competition away.

The average resolution “has stayed stubbornly stuck” around 10 times less than that for a protein, said Weeks. ARES improved the accuracy by roughly 30 percent. It’s a seemingly small step, but a giant leap for one of biology’s most intractable problems.

An RNA Structural Code

Compared to protein structure prediction, RNA is far harder. And for now, ARES still can’t get to the level of accuracy needed for drug discovery efforts, or find new “hot spots” on RNA molecules that can tweak our biology.

But ARES is a powerful step forward in “piercing the fog” of RNA, one that’s “poised to transform RNA structure and function discovery,” said Weeks. One improvement to the algorithm could be to incorporate some experimental data to further model these intricate structures. What’s clear is that RNA seems to have a “structural code” that helps regulate gene circuits—something that ARES and its next generations may help parse.

Much of RNA has been the “dark matter” of biology. We know it’s there, but it’s difficult to visualize and even harder to study. ARES represents the next telescope into that fog. “As it becomes possible to measure, (deeply) learn, and predict the details of the tertiary RNA structure-ome, diverse new discoveries in biological mechanisms await,” said Weeks.

Image Credit: neo tam / Pixabay

Category: Transhumanism

America’s Biggest 3D Printed Building Is This New Military Barracks in Texas

August 30, 2021 - 16:00

3D printing is picking up speed as a construction technology, with 3D printed houses, schools, apartment buildings, and even Martian habitat concepts all being unveiled in the last year (not to mention Airbnbs and entire luxury communities). Now another type of structure is being added to this list: military barracks.

ICON, a construction technologies startup based in Austin, Texas, announced the project earlier this month in partnership with the Texas Military Department. At 3,800 square feet, the barracks will be the biggest 3D printed structure in North America. It’s edged out for the worldwide title by at least one other building, a 6,900-square-foot complex in Dubai used for municipal offices.

The barracks are located at the Camp Swift Training Center in Bastrop, TX, and are replacing temporary facilities that have already been used for longer than their intended lifespan. Seventy-two soldiers will stay in the building, sleeping in bunk beds, while they train for missions and prepare for deployment.

“The printed barracks will not only provide our soldiers a safe and comfortable place to stay while they train, but because they are printed in concrete, we anticipate them to last for decades,” said Colonel Zebadiah Miller, the Texas Military Department’s director of facilities.

The energy-efficient barracks are being built with ICON’s Vulcan 3D printer, the initial iteration of which was 11.5 feet tall by 33 feet wide, made up of an axis set on a track. The printer’s “ink” is a proprietary concrete mix, which it puts down in stacked layers from the ground up. The building was designed by Austin-based Logan Architecture, a firm that had previously worked with ICON on the East 17th Residences, four partially 3D printed homes that went on the market in Austin earlier this year.

Soldiers will move into the barracks this fall.

With the announcement of $207 million raised in Series B funding this week, ICON is well-positioned to launch several more projects in the coming months and years. In May the company unveiled both its new Vulcan printer—1.5 times larger and 2 times faster than the previous version—and its House Zero line of homes, optimized and designed specifically for construction via 3D printing.

TechCrunch reported last week that ICON’s revenue has grown by 400 percent every year since its 2018 launch. The startup tripled its team in the past year, hitting the benchmark of over 100 employees, and plans to double in size again within the next year.

What this all comes down to is that a lot of people and organizations are seeing the benefits of 3D printing as a construction tool, and at the rate the technology is growing, houses and barracks are just the beginning.

“ICON continues our missional work to deliver dignified, resilient shelter for social housing, disaster-relief housing, market-rate homes, and now, homes for those serving our country,” said ICON co-founder Evan Loomis. “We are scaling this technology across Texas, the US, and eventually the world. This is the beginning of a true paradigm shift in homebuilding.”

Image Credit: ICON

Category: Transhumanism