Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity Group

AI Is Helping Astronomers Search for Intelligent Alien Life—and They’ve Found 8 Strange New Signals

4 hours 1 min ago

Some 540 million years ago, diverse life forms suddenly began to emerge from the muddy ocean floors of planet Earth. This period is known as the Cambrian Explosion, and these aquatic critters are our ancient ancestors.

All complex life on Earth evolved from these underwater creatures. Scientists believe all it took was an ever-so-slight increase in ocean oxygen levels above a certain threshold.

We may now be in the midst of a Cambrian Explosion for artificial intelligence (AI). In the past few years, a burst of incredibly capable AI programs like Midjourney, DALL-E 2, and ChatGPT have showcased the rapid progress we’ve made in machine learning.

AI is now used in virtually all areas of science to help researchers with routine classification tasks. It’s also helping our team of radio astronomers broaden the search for extraterrestrial life, and results so far have been promising.

Discovering Alien Signals With AI

As scientists searching for evidence of intelligent life beyond Earth, we have built an AI system that beats classical algorithms in signal detection tasks. Our AI was trained to search through data from radio telescopes for signals that couldn’t be generated by natural astrophysical processes.

When we fed our AI a previously studied dataset, it discovered eight signals of interest the classical algorithm missed. To be clear, these signals are probably not from extraterrestrial intelligence, and are more likely rare cases of radio interference.

Nonetheless, our findings—published today in Nature Astronomy—highlight how AI techniques are sure to play a continued role in the search for extraterrestrial intelligence.

Not So Intelligent

AI algorithms do not “understand” or “think.” They do excel at pattern recognition, and have proven exceedingly useful for tasks such as classification—but they don’t have the ability to problem solve. They only do the specific tasks they were trained to do.

So although the idea of an AI detecting extraterrestrial intelligence sounds like the plot of an exciting science fiction novel, both terms are flawed: AI programs are not intelligent, and searches for extraterrestrial intelligence can’t find direct evidence of intelligence.

Instead, radio astronomers look for radio “technosignatures.” These hypothesized signals would indicate the presence of technology and, by proxy, the existence of a society with the capability to harness technology for communication.

For our research, we created an algorithm that uses AI methods to classify signals as being either radio interference, or a genuine technosignature candidate. And our algorithm is performing better than we’d hoped.

What Our AI Algorithm Does

Technosignature searches have been likened to looking for a needle in a cosmic haystack. Radio telescopes produce huge volumes of data, and in it are huge amounts of interference from sources such as phones, WiFi, and satellites.

Search algorithms need to be able to sift out real technosignatures from “false positives,” and do so quickly. Our AI classifier delivers on these requirements.

It was devised by Peter Ma, a University of Toronto student and the lead author on our paper. To create a set of training data, Peter inserted simulated signals into real data, and then used this dataset to train an AI algorithm called an autoencoder. As the autoencoder processed the data, it “learned” to identify salient features.

In a second step, these features were fed to an algorithm called a random forest classifier. This classifier creates decision trees to decide if a signal is noteworthy, or just radio interference—essentially separating the technosignature “needles” from the haystack.
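
The two-stage pipeline described above (compress each signal snippet with an autoencoder, then hand the learned features to a random forest) can be sketched roughly as follows. This is an illustrative stand-in with random placeholder data and made-up sizes, not the authors' actual code.

# Minimal sketch of the two-stage classifier described above: an autoencoder
# learns compact features from spectrogram snippets (real data with simulated
# signals injected), and a random forest then labels those features as
# interference vs. candidate technosignature. Data, shapes, and training
# settings are placeholders, not the study's actual pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256)).astype(np.float32)  # 1,000 flattened snippets
y = rng.integers(0, 2, size=1000)                    # 0 = interference, 1 = injected signal

class AutoEncoder(nn.Module):
    def __init__(self, n_in=256, n_latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(), nn.Linear(64, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_in))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
data = torch.from_numpy(X)
for _ in range(20):                      # learn to reconstruct the snippets
    recon, _ = model(data)
    loss = loss_fn(recon, data)
    opt.zero_grad(); loss.backward(); opt.step()
with torch.no_grad():                    # "salient features" = the bottleneck
    _, features = model(data)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features.numpy(), y)             # decision trees separate the "needles"
print("training accuracy:", clf.score(features.numpy(), y))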

After training our AI algorithm, we fed it more than 150 terabytes of data (480 observing hours) from the Green Bank Telescope in West Virginia. It identified 20,515 signals of interest, which we then had to manually inspect. Of these, eight signals had the characteristics of technosignatures, and couldn’t be attributed to radio interference.

Eight Signals, No Re-Detections

To try to verify these signals, we went back to the telescope to re-observe all eight. Unfortunately, we were not able to re-detect any of them in our follow-up observations.

We’ve been in similar situations before. In 2020 we detected a signal that turned out to be pernicious radio interference. While we will monitor these eight new candidates, the most likely explanation is they were unusual manifestations of radio interference: not aliens.

Sadly the issue of radio interference isn’t going anywhere. But we will be better equipped to deal with it as new technologies emerge.

Narrowing the Search

Our team recently deployed a powerful signal processor on the MeerKAT telescope in South Africa. MeerKAT uses a technique called interferometry to combine its 64 dishes to act as a single telescope. This technique is better able to pinpoint where in the sky a signal comes from, which will drastically reduce false positives from radio interference.

If astronomers do manage to detect a technosignature that can’t be explained away as interference, it would strongly suggest humans aren’t the sole creators of technology within the galaxy. This would be one of the most profound discoveries imaginable.

At the same time, if we detect nothing, that doesn’t necessarily mean we’re the only technologically-capable “intelligent” species around. A non-detection could also mean we haven’t looked for the right type of signals, or our telescopes aren’t yet sensitive enough to detect faint transmissions from distant exoplanets.

We may need to cross a sensitivity threshold before a Cambrian Explosion of discoveries can be made. Alternatively, if we really are alone, we should reflect on the unique beauty and fragility of life here on Earth.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESO/José Francisco Salgado


Maryland Wants to Be the First US State to Switch to a 4-Day Work Week

February 2, 2023 - 16:00

Last summer, the biggest four-day work week trial in the world kicked off in the UK. 3,300 people started working 80 percent of their regular hours for 100 percent of their pay. Feedback from employees and companies was overwhelmingly positive; people felt they were more productive and less stressed, and some businesses even saw their financial performance improve.

Meanwhile, a similar trial was taking place in the US and other English-speaking countries (Australia, Ireland, the UK, New Zealand, and Canada), with 903 employees across 33 companies getting a day of the week back in exchange for consistent work output. This pilot was also a resounding success, with 96.9 percent of participants voting to stick with a four-day week rather than going back to five days. Employees’ self-assessed work performance improved, as did their “satisfaction across multiple domains of life.”

The message is clear: a four-day work week works. People like it. Companies like it. Everyone’s happier, and there’s no decrease in productivity or hit to financial performance. So now that we’re all in agreement, what comes next?

The state of Maryland is the first in the US to take a step towards standardizing the four-day week. A proposed bill would give tax credits to companies that implement a 32-hour work week without reducing their employees’ pay. They’d get credits of $750,000 per year for up to two years if they have at least 30 employees who scale down to a shorter work week.

The tax credit would be used in part to help businesses cover the cost of collecting data about the trial and reporting it to the state. The state would have to pay the cost of administering the program, which could be as much as $250,000 a year.

So what’s in it for the state? It seems a bit counter-intuitive for a state government to incentivize its citizens to work less. What about growing the economy and staying competitive?

As we’ve unfortunately learned through the chaotic labor market of the last couple years, it’s hard to grow the economy when millions of people are unhappy with their jobs and voluntarily leave them. The instability and worker shortages brought by this state of affairs must be more harmful than working one less day per week—especially if that one day is making a difference in employee satisfaction.

That’s job satisfaction and overall life satisfaction. Less time behind a desk means more time doing whatever you please, be it spending time with family, exercising, or working on personal projects—and ideally, that means a happier you, one who’s more motivated to perform at work and less likely to quit in a flurry of frustration and stress.

“We have a real opportunity here to create a win-win,” said Vaughn Stewart, the Maryland state delegate who sponsored the bill in the House after learning about the global trial. “We can make a shift toward reducing working hours without harming productivity, and possibly even boosting companies’ bottom line because they not only have improved productivity but retention and recruitment.”

The Maryland Legislature will hold hearings on the bill this month. If it passes, it would be the first of its kind in the US, and would be the first official change to the work week since 1940, when the federal government changed the minimum standard from 44 hours to 40.

Stewart is cautiously optimistic, noting that he’s gotten more interest in this bill than in all the other bills he’s sponsored combined since he became a member of Maryland’s House of Delegates four years ago.

If it’s signed into law, Maryland’s four-day work week pilot would go into effect on July 1.

Image Credit: David from Pixabay


Watch This Shape-Shifting Robot Melt to Escape a Cage, Then Reform

February 1, 2023 - 16:00

Flight. Invisibility. Mind-reading. Super-strength. These powers have mostly been limited to the realms of science fiction and fantasy, though we’re starting to see robots and computers replicate some of them. Now a small robot built by an international team has a new superpower: shape-shifting. Or perhaps a more accurate name would be… state-shifting.

Described in a paper published last week in Matter, the robot can go from a solid state to a liquid state based on manipulation of the magnetic fields around it. The team developed the robot as an attempt to get the best of both worlds in terms of robotic properties and capabilities. Hard robots often can’t access certain spaces because of their inflexible bodies, while flexible robots lack strength and durability. Why not make a bot that can do it all?

Credit: Wang and Pan et al. under CC BY-SA

Videos show the robot “escaping” from a cage and extracting a ball from a model of a human stomach. The researchers say it could have all kinds of real-world applications, from performing tasks in tight spaces (like soldering a circuit board) to accessing parts of the body that are hard to reach (like the inside of the intestines) to acting as a universal screw by melting and reforming into a screw socket.

The robot is made primarily of gallium, a soft, silvery metal that’s used in electronic circuits, semiconductors, and LEDs. Its most useful feature in this case is its very low melting point: gallium melts at a cool 85.57 degrees Fahrenheit (29.76 degrees Celsius). That’s just slightly above room temperature (in a warm room, admittedly), or the outdoor temperature on a midsummer day.
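
As a quick aside (not part of the study), the melting-point figures quoted above are easy to check with a one-line unit conversion:

# Sanity check on the quoted melting point: convert 29.76 °C to Fahrenheit
# and compare with a warm room at roughly 25 °C.
def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32

gallium_melt_c = 29.76
print(f"{gallium_melt_c} C = {c_to_f(gallium_melt_c):.2f} F")        # ~85.57 F
print(f"Margin above a 25 C room: {gallium_melt_c - 25:.2f} C")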

The team sprinkled magnetic particles throughout the gallium, and these are key to the robot’s functionality.

“The magnetic particles here have two roles,” said the paper’s senior author and mechanical engineer Carmel Majidi of Carnegie Mellon University. “One is that they make the material responsive to an alternating magnetic field, so you can, through induction, heat up the material and cause the phase change. But the magnetic particles also give the robots mobility and the ability to move in response to the magnetic field.”

Gallium’s low melting point meant that exposing it to a rapidly changing magnetic field induced enough current within the metal to heat it up and melt it. In the “prison break” experiment the researchers set up, the robot escaped its cell and re-solidified into its original form on the other side. It should be noted that the bot isn’t yet able to re-assume its original form without help, though; there was a mold waiting for it outside the cell.

Despite not having quite reached Terminator status, the ease with which the robot’s state can be manipulated could give it a major advantage over existing phase-shifting materials, which tend to require heat guns or electrical currents to go from solid to liquid. The gallium-based bot is also more fluid in its liquid form than similar materials.

When they used it in a model of a human stomach to remove an object, the solid robot was able to move quickly to the object, melt down, surround the object, coalesce back into a solid, and move out of the stomach with the object. The team noted that although the robot worked well in the model, pure gallium would quickly melt inside a real human body; they’d have to add metals like bismuth and tin to raise the material’s melting point for use in biomedical applications.

Manipulated by magnetic fields, the robot removes a foreign object from a model human stomach. Credit: Wang and Pan et al. under CC BY-SA

“What we’re showing are just one-off demonstrations, proofs of concept, but much more study will be required to delve into how this could actually be used for drug delivery or for removing foreign objects,” said Majidi.

The team got some of their inspiration for the robot from sea cucumbers, which can quickly change their stiffness back and forth. They call their invention a “magnetoactive solid-liquid phase transitional machine.” Using magnetic fields, the robots were also able to jump over moats, climb walls, and support heavy weight.

The next step is for the team to search for more real-world applications for their technology, and tweak its properties accordingly. Chengfeng Pan, an engineer at the Chinese University of Hong Kong who led the study, said, “Now we’re pushing this material system in more practical ways to solve some very specific medical and engineering problems.”

Image Credit: Q. Wang et al/Matter 2023 (CC BY-SA)


AI-Powered Brain Implant Smashes Speed Record for Turning Thoughts Into Text

January 31, 2023 - 16:00

We speak at a rate of roughly 160 words every minute. That speed is incredibly difficult to achieve for speech brain implants.

Decades in the making, speech implants use tiny electrode arrays inserted into the brain to measure neural activity, with the goal of transforming thoughts into text or sound. They’re invaluable for people who lose their ability to speak due to paralysis, disease, or other injuries. But they’re also incredibly slow, slashing word count per minute nearly ten-fold. Like a slow-loading web page or audio file, the delay can get frustrating for everyday conversations.

A team led by Drs. Krishna Shenoy and Jaimie Henderson at Stanford University is closing that speed gap.

Published on the preprint server bioRxiv, their study helped a 67-year-old woman restore her ability to communicate with the outside world using brain implants at a record-breaking speed. Known as “T12,” the woman gradually lost her speech from amyotrophic lateral sclerosis (ALS), or Lou Gehrig’s disease, which progressively robs the brain’s ability to control muscles in the body. T12 could still vocalize sounds when trying to speak—but the words came out unintelligible.

With her implant, T12’s attempts at speech are now decoded in real time as text on a screen and spoken aloud with a computerized voice, including phrases like “it’s just tough,” or “I enjoy them coming.” The words came fast and furious at 62 per minute, over three times the speed of previous records.

It’s not just a need for speed. The study also drew on the largest vocabulary library yet used for implant-based speech decoding—roughly 125,000 words—in a first demonstration at that scale.

To be clear, although it was a “big breakthrough” and reached “impressive new performance benchmarks” according to experts, the study hasn’t yet been peer-reviewed and the results are limited to the one participant.

That said, the underlying technology isn’t limited to ALS. The boost in speech recognition stems from a marriage between RNNs—recurrent neural networks, a machine learning algorithm previously effective at decoding neural signals—and language models. When further tested, the setup could pave the way to enable people with severe paralysis, stroke, or locked-in syndrome to casually chat with their loved ones using just their thoughts.

We’re beginning to “approach the speed of natural conversation,” the authors said.

Loss for Words

The team is no stranger to giving people back their powers of speech.

As part of BrainGate, a pioneering global collaboration for restoring communications using brain implants, the team envisioned—and then realized—the ability to restore communications using neural signals from the brain.

In 2021, they engineered a brain-computer interface (BCI) that helped a person with spinal cord injury and paralysis type with his mind. With an array of 96 microelectrodes inserted into the motor areas of the patient’s brain, the team was able to decode brain signals for different letters as he imagined the motions for writing each character, achieving a sort of “mindtexting” with over 94 percent accuracy.

The problem? The speed was roughly 90 characters per minute at most. While a large improvement from previous setups, it was still painfully slow for daily use.

So why not tap directly into the speech centers of the brain?

Regardless of language, decoding speech is a nightmare. Small and often subconscious movements of the tongue and surrounding muscles can trigger vastly different clusters of sounds—also known as phonemes. Trying to link the brain activity of every single twitch of a facial muscle or flicker of the tongue to a sound is a herculean task.

Hacking Speech

The new study, a part of the BrainGate2 Neural Interface System trial, used a clever workaround.

The team first placed four strategically located microelectrode arrays into the outer layer of T12’s brain. Two were inserted into areas that control the facial muscles around the mouth. The other two tapped straight into the brain’s “language center,” called Broca’s area.

In theory, the placement was a genius two-in-one: it captured both what the person wanted to say, and the actual execution of speech through muscle movements.

But it was also a risky proposition: we don’t yet know whether speech is limited to just a small brain area that controls muscles around the mouth and face, or if language is encoded at a more global scale inside the brain.

Enter RNNs. A type of deep learning, the algorithm has previously translated neural signals from the motor areas of the brain into text. In a first test, the team found that it easily separated different types of facial movements for speech—say, furrowing the brows, puckering the lips, or flicking the tongue—based on neural signals alone with over 92 percent accuracy.

The RNN was then taught to suggest phonemes in real time—for example, “huh,” “ah,” and “tze.” Phonemes help distinguish one word from another; in essence, they’re the basic elements of speech.

The training took work: every day, T12 attempted to speak between 260 and 480 sentences at her own pace to teach the algorithm the particular neural activity underlying her speech patterns. Overall, the RNN was trained on nearly 11,000 sentences.

With a decoder for her mind in hand, the team linked the RNN interface with two language models. One had an especially large vocabulary of 125,000 words. The other was a smaller, 50-word library used for simple everyday sentences.
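
To make that pipeline concrete, here is a rough, illustrative sketch of the two ingredients just described: a recurrent network that turns windows of neural activity into per-time-step phoneme probabilities, and a language model that then favors plausible words. The layer sizes, the 40-phoneme output, and the tiny toy lexicon are placeholders, not details from the study.

# Illustrative sketch of the decoder described above: an RNN maps neural
# features to phoneme logits at each time step, and a (here trivially small)
# language model re-ranks candidate words.
import torch
import torch.nn as nn

N_FEATURES, N_PHONEMES = 128, 40         # neural features per time step; phoneme inventory

class PhonemeRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_FEATURES, 256, num_layers=2, batch_first=True)
        self.head = nn.Linear(256, N_PHONEMES)
    def forward(self, x):                # x: (batch, time, N_FEATURES)
        h, _ = self.rnn(x)
        return self.head(h)              # phoneme logits for every time step

def rescore(candidates, lm_logprob):
    """Pick the word whose neural score plus language-model score is highest."""
    return max(candidates, key=lambda c: c["neural_score"] + lm_logprob[c["word"]])

rnn = PhonemeRNN()
phoneme_logits = rnn(torch.randn(1, 50, N_FEATURES))       # 50 time steps of activity
print(phoneme_logits.shape)                                # torch.Size([1, 50, 40])
lm_logprob = {"tough": -2.0, "touch": -4.5, "tuft": -7.0}  # toy language model
candidates = [{"word": "touch", "neural_score": -1.0},
              {"word": "tough", "neural_score": -1.2}]
print(rescore(candidates, lm_logprob)["word"])             # -> "tough"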

After five days of attempted speaking, both language models could decode T12’s words. The system had errors: around 10 percent for the small library and nearly 24 percent for the larger one. Yet when asked to repeat sentence prompts on a screen, the system readily translated her neural activity into sentences three times faster than previous models.
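
The error figures above are most naturally read as word error rates: the word-level edit distance between the decoded sentence and the intended one, divided by the number of intended words. Whether that is exactly the metric reported is an assumption here, but the computation itself is standard:

# Word error rate: word-level edit distance (substitutions + insertions +
# deletions) divided by the number of words in the reference sentence.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[-1][-1] / len(ref)

print(word_error_rate("it is just tough", "it is just touch"))  # 0.25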

The implant worked regardless of whether she attempted to speak aloud or just mouthed the sentences silently (she preferred the latter, as it required less energy).

Analyzing T12’s neural signals, the team found that certain regions of the brain retained neural signaling patterns to encode for vowels and other phonemes. In other words, even after years of speech paralysis, the brain still maintains a “detailed articulatory code”—that is, a dictionary of phonemes embedded inside neural signals—that can be decoded using brain implants.

Speak Your Mind

The study builds upon many others that use a brain implant to restore speech, often decades after severe injuries or slowly-spreading paralysis from neurodegenerative disorders. The hardware is well known: the Blackrock microelectrode array, consisting of 64 channels to listen in on the brain’s electrical signals.

What’s different is how it operates; that is, how the software transforms noisy neural chatter into cohesive meanings or intentions. Previous models mostly relied on decoding data directly obtained from neural recordings from the brain.

Here, the team tapped into a new resource: language models, or AI algorithms similar to the autocomplete function now widely available for Gmail or texting. The technological tag-team is especially promising with the rise of GPT-3 and other emerging large language models. Excellent at generating speech patterns from simple prompts, the tech—when combined with the patient’s own neural signals—could potentially “autocomplete” their thoughts without the need for hours of training.

The prospect, while alluring, comes with a side of caution. GPT-3 and similar AI models can generate convincing speech on their own based on previous training data. For a person with paralysis who’s unable to speak, we would need guardrails as the AI generates what the person is trying to say.

The authors agree that, for now, their work is a proof of concept. While promising, it’s “not yet a complete, clinically viable system” for decoding speech. For one, they said, we need to train the decoder in less time and make it more flexible, letting it adapt to ever-changing brain activity. For another, the error rate of roughly 24 percent is far too high for everyday use—although increasing the number of implant channels could boost accuracy.

But for now, it moves us closer to the ultimate goal of “restoring rapid communications to people with paralysis who can no longer speak,” the authors said.

Image Credit: Miguel Á. Padriñán from Pixabay


New ‘Mega Ranch’ Will Grow 45 Million Pounds of Mushroom Root for Plant-Based Meat

January 30, 2023 - 16:00

Last July, New York-based startup MyForest Foods announced the opening of a vertical farm that would grow three million pounds of mycelium a year, all for plant-based bacon. Now competitor Meati Foods is blowing them out of the water with a facility that will be able to produce more than 45 million pounds of product once it’s fully scaled up. The company announced the opening of a factory it’s calling “Mega Ranch” in Thornton, Colorado (a suburb north of Denver) last week.

Meati makes a variety of plant-based imitation meat products, or “animal-free whole-food proteins,” including a classic steak, carne asada, a classic cutlet, and a crispy cutlet. The meats are made of 95 percent mushroom root, with additional ingredients including oat fiber, seasonings, fruit and vegetable juices, and lycopene (for color). With up to 17 grams of protein and 12 grams of dietary fiber per serving, the company says the meats are comparable to their animal-derived counterparts in nutritional value.

Mushroom roots are called mycelium, and they’re a different sort of root than what you typically see at the bottom of most plants and trees. Mycelium is a root-like structure of fungus made of a mass of branching, thread-like strands called hyphae. The hyphae absorb nutrients from soil or another substrate so the fungus can grow.

Companies are using mycelium as a base for all sorts of vegan materials, from packaging to leather to biomedical scaffolds. It’s a viable ingredient both because it’s easy to manipulate—the nutrients in the substrate it’s grown on can be tweaked to yield different properties, like making it stiffer or more flexible—and because it grows fast; Meati says its proprietary growth formula can turn a teaspoon of spores into the equivalent of hundreds of cows’ worth of whole-food protein in just a few days.

The mycelium is grown in stainless steel vats (similar to fermentation tanks at breweries), where it’s fed a liquid rich in sugar and nutrients that helps it grow faster than it would in the wild. Meati harvests mycelium fibers from the vats, then must assemble them in such a way that the texture resembles animal muscle.

Meati’s new Colorado plant will occupy 100,000 square feet, and will enable the company to produce tens of millions of pounds of its products by the end of this year. The products are already sold through retail and foodservice partners that include Sprouts Farmers Market, Sweetgreen, and Birdcall, and Meati is aiming to start selling at 7,000 new locations by the end of this year.

The company’s total funding to date is over $250 million. They expect to bring in tens of millions in revenue this year and hundreds of millions in 2024. Despite the Mega Ranch opening this year, they’re already scouting out a location for a “Giga Ranch” that will be able to produce hundreds of millions of pounds of product annually.

Despite the somewhat ailing state of the plant-based meat industry, Meati’s co-founder and CEO, Tyler Huggins, sees nothing but growth in his company’s future. “There is no shortage of stuff coming out, and we have no lack of demand,” he told TechCrunch. “Our pipeline is robust, and everything we produce in the next year or more is already pre-sold. It’s now about unlocking capacity to get the product out there.”

Image Credit: Meati


Technology Over the Long Run: Zoom Out to See How Dramatically the World Can Change Within a Lifetime

January 29, 2023 - 19:05

Technology can change the world in ways that are unimaginable, until they happen. Switching on an electric light would have been unimaginable for our medieval ancestors. In their childhood, our grandparents would have struggled to imagine a world connected by smartphones and the internet.

Similarly, it is hard for us to imagine the arrival of all those technologies that will fundamentally change the world we are used to.

We can remind ourselves that our own future might look very different from the world today by looking back at how rapidly technology has changed our world in the past. That’s what this article is about.

One insight I take away from this long-term perspective is how unusual our time is. Technological change was extremely slow in the past—the technologies that our ancestors got used to in their childhood were still central to their lives in their old age. In stark contrast to those days, we live in a time of extraordinarily fast technological change. For recent generations, it was common for technologies that were unimaginable in their youth to become common later in life.

The Long-Run Perspective on Technological Change

The big visualization offers a long-term perspective on the history of technology.

The timeline begins at the center of the spiral. The first use of stone tools, 3.4 million years ago, marks the beginning of this history of technology. Each turn of the spiral then represents 200,000 years of history. It took 2.4 million years—12 turns of the spiral—for our ancestors to control fire and use it for cooking.
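
The spiral’s arithmetic is easy to verify from the numbers given in the text (a small illustrative check, not part of the original visualization):

# One spiral turn represents 200,000 years; the gap between the first stone
# tools and the control of fire is given as 2.4 million years.
YEARS_PER_TURN = 200_000
years_between_tools_and_fire = 2_400_000
print(years_between_tools_and_fire / YEARS_PER_TURN)  # -> 12.0 turns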

To be able to visualize the inventions in the more recent past—the last 12,000 years—I had to unroll the spiral. I needed more space to be able to show when agriculture, writing, and the wheel were invented. During this period, technological change was faster, but it was still relatively slow: several thousand years passed between each of these three inventions.

From 1800 onwards, I stretched out the timeline even further to show the many major inventions that rapidly followed one after the other.

The long-term perspective that this chart provides makes it clear just how unusually fast technological change is in our time.

You can use this visualization to see how technology developed in particular domains. Follow, for example, the history of communication: from writing, to paper, to the printing press, to the telegraph, the telephone, the radio, all the way to the internet and smartphones.

Or follow the rapid development of human flight. In 1903, the Wright brothers took the first flight in human history (they were in the air for less than a minute), and just 66 years later, we landed on the moon. Many people saw both within their lifetimes: the first plane and the moon landing.

This large visualization also highlights the wide range of technology’s impact on our lives. It includes extraordinarily beneficial innovations, such as the vaccine that allowed humanity to eradicate smallpox, and it includes terrible innovations, like the nuclear bombs that endanger the lives of all of us.

What will the next decades bring?

The red timeline reaches up to the present and then continues in green into the future. Many children born today, even without any further increases in life expectancy, will live well into the 22nd century.

New vaccines, progress in clean, low-carbon energy, better cancer treatments—a range of future innovations could very much improve our living conditions and the environment around us. But, as I argue in a series of articles, there is one technology that could even more profoundly change our world: artificial intelligence.

One reason why artificial intelligence is such an important innovation is that intelligence is the main driver of innovation itself. This fast-paced technological change could speed up even more if it’s not only driven by humanity’s intelligence, but artificial intelligence too. If this happens, the change that is currently stretched out over the course of decades might happen within very brief time spans of just a year. Possibly even faster.

I think AI technology could have a fundamentally transformative impact on our world. In many ways it is already changing our world, as I documented in this companion article. As this technology is becoming more capable in the years and decades to come, it can give immense power to those who control it (and it poses the risk that it could escape our control entirely).

Such systems might seem hard to imagine today, but AI technology is advancing very fast. Many AI experts believe that there is a very real chance that human-level artificial intelligence will be developed within the next decades, as I documented in this article.

Technology Will Continue to Change the World—We Should All Make Sure That It Changes It for the Better

What is familiar to us today—photography, the radio, antibiotics, the internet, or the International Space Station circling our planet—was unimaginable to our ancestors just a few generations ago. If your great-great-great grandparents could spend a week with you they would be blown away by your everyday life.

What I take away from this history is that I will likely see technologies in my lifetime that appear unimaginable to me today.

In addition to this trend towards increasingly rapid innovation, there is a second long-run trend. Technology has become increasingly powerful. While our ancestors wielded stone tools, we are building globe-spanning AI systems and technologies that can edit our genes.

Because of the immense power that technology gives those who control it, there is little that is as important as the question of which technologies get developed during our lifetimes. Therefore I think it is a mistake to leave the question about the future of technology to the technologists. Which technologies are controlled by whom is one of the most important political questions of our time, because of the enormous power that these technologies convey to those who control them.

We all should strive to gain the knowledge we need to contribute to an intelligent debate about the world we want to live in. To a large part this means gaining the knowledge, and wisdom, on the question of which technologies we want.

Acknowledgements: I would like to thank my colleagues Hannah Ritchie, Bastian Herre, Natasha Ahuja, Edouard Mathieu, Daniel Bachler, Charlie Giattino, and Pablo Rosado for their helpful comments to drafts of this essay and the visualization. Thanks also to Lizka Vaintrob and Ben Clifford for a conversation that initiated this visualization.

This article was originally published on Our World in Data and has been republished here under a Creative Commons license. Read the original article.

Image Credit: Pat Kay / Unsplash

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through January 28)

28 January, 2023 - 16:00
ARTIFICIAL INTELLIGENCE

AI Has Designed Bacteria-Killing Proteins From Scratch—and They Work
Karmela Padavic-Callaghan | New Scientist
“The AI, called ProGen, works in a similar way to AIs that can generate text. ProGen learned how to generate new proteins by learning the grammar of how amino acids combine to form 280 million existing proteins. Instead of the researchers choosing a topic for the AI to write about, they could specify a group of similar proteins for it to focus on. In this case, they chose a group of proteins with antimicrobial activity.”
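
To make the “grammar of amino acids” idea concrete, here is a deliberately tiny sketch: a bigram model that counts residue-to-residue transitions in a handful of made-up sequences and then samples new ones. It is not ProGen (a large transformer trained on hundreds of millions of proteins); every sequence and name in it is hypothetical, purely to illustrate generating a protein one residue at a time.

```python
# Toy illustration only: a character-level bigram model over amino acids.
# ProGen itself is a large transformer; this sketch just shows the idea of
# generating a sequence one residue at a time from learned statistics.
import random
from collections import defaultdict

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Hypothetical training sequences standing in for a protein family of interest.
training_sequences = [
    "MKTAYIAKQR",
    "MKTAFIAKQK",
    "MKSAYIGKQR",
]

# Count residue-to-residue transitions (the "grammar" of this toy family).
counts = defaultdict(lambda: defaultdict(int))
for seq in training_sequences:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def sample_next(residue):
    """Sample the next residue given the current one; fall back to uniform."""
    options = counts.get(residue)
    if not options:
        return random.choice(AMINO_ACIDS)
    residues, weights = zip(*options.items())
    return random.choices(residues, weights=weights)[0]

def generate(length=10, start="M"):
    seq = [start]
    while len(seq) < length:
        seq.append(sample_next(seq[-1]))
    return "".join(seq)

print(generate())  # produces short sequences resembling the toy family
```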

DIGITAL MEDIA

BuzzFeed to Use ChatGPT Creator OpenAI to Help Create Quizzes and Other Content
Alexandra Bruell | The Wall Street Journal
“BuzzFeed Inc. said it would rely on ChatGPT creator OpenAI to enhance its quizzes and personalize some content for its audiences, becoming the latest digital publisher to embrace artificial intelligence. In a memo to staff sent Thursday morning, which was reviewed by The Wall Street Journal, Chief Executive Jonah Peretti said he intends for AI to play a larger role in the company’s editorial and business operations this year.”

ROBOTICS

Metal Robot Can Melt Its Way Out of Tight Spaces to Escape
Karmela Padavic-Callaghan | New Scientist
“A miniature, shape-shifting robot can liquefy itself and reform, allowing it to complete tasks in hard-to-access places and even escape cages. It could eventually be used as a hands-free soldering machine or a tool for extracting swallowed toxic items.”

FUTURE

Don’t Be Sucked in by AI’s Head-Spinning Hype Cycles
Devin Coldewey | TechCrunch
“[AI] certainly can outplay any human at chess or go, and it can predict the structure of protein chains; it can answer any question confidently (if not correctly) and it can do a remarkably good imitation of any artist, living or dead. But it is difficult to tease out which of these things is important, and to whom, and which will be remembered as briefly diverting parlor tricks in 5 or 10 years, like so many innovations we have been told are going to change the world.”

SPACE

NASA Announces Successful Test of New Propulsion Technology for Treks to Deep Space
Kevin Hurler | Gizmodo
“The rotating detonation rocket engine, or RDRE, generates thrust with detonation, in which a supersonic exothermic front accelerates to produce thrust, much the same way a shockwave travels through the atmosphere after something like TNT explodes. NASA says that this design uses less fuel and provides more thrust than current propulsion systems and that the RDRE could be used to power human landers, as well as crewed missions to the Moon, Mars, and deep space.”

ARTIFICIAL INTELLIGENCE

The Best Use for AI Eye Contact Tech Is Making Movie Stars Look Straight at the Camera
James Vincent | The Verge
“This tech comes with a bunch of interesting questions, of course. Like: is constant unbroken eye contact good or a bit creepy? Are these tools useful for people who don’t naturally like eye contact? …But forget that high-brow trash for now, because here’s the stupidest and best use case of this technology yet: editing movie scenes so actors make eye contact with the camera.”

SCIENCE

Researchers Look a Dinosaur in Its Remarkably Preserved Face
Jeanne Timmons | Ars Technica
“Borealopelta markmitchelli found its way back into the sunlight in 2017, millions of years after it had died. This armored dinosaur is so magnificently preserved that we can see what it looked like in life. Almost the entire animal—the skin, the armor that coats its skin, the spikes along its side, most of its body and feet, even its face—survived fossilization. It is, according to Dr. Donald Henderson, curator of dinosaurs at the Royal Tyrrell Museum, a one-in-a-billion find.”

TECH

Google, Not OpenAI, Has the Most to Gain From Generative AI
Mark Sullivan | Fast Company
“After spending billions on artificial intelligence R&D and acquisitions, Google finds itself ceding the AI limelight to OpenAI, an upstart that has captured the popular imagination with the public beta of its startlingly conversant chatbot, ChatGPT. Now Google reportedly fears the ChatGPT AI could reinvent search, its cornerstone business. But Google, which declared itself an ‘AI-first’ company in 2017, may yet regain its place in the sun. Its AI investments, which date back to the 2000s, may pay off, and could even power the company’s next quarter century of growth (Google turns 25 this year). Here’s why.”

BIOTECH

CRISPR Wants to Feed the World
Jennifer Doudna | Wired
“A great deal of the attention surrounding CRISPR has focused on the medical applications, and for good reason: The results are promising, and the personal stories are uplifting, offering hope to many who have suffered from long-neglected genetic diseases. In 2023, as CRISPR moves into agriculture and climate, we will have the opportunity to radically improve human health in a holistic way that can better safeguard our society and enable millions of people around the world to flourish.”

ETHICS

A Watermark for Chatbots Can Expose Text Written by an AI
Melissa Heikkilä | MIT Technology Review
“Hidden patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we’re reading are written by a human or not. These ‘watermarks’ are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.”
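
One way such a detector can work, under the common “green list” approach, is to check whether an implausibly large share of tokens falls on a pseudorandom list the generator was nudged to prefer. The sketch below is a simplified, hypothetical illustration (whitespace tokenization, a hash-seeded list, and a z-test against chance), not the scheme of any particular model or product.

```python
# Simplified sketch of statistical watermark detection: if a generator was
# nudged to prefer tokens on a pseudorandom "green list" (keyed on the
# previous token), watermarked text shows far more green tokens than the
# ~50% expected by chance. Conceptual toy only, not a production scheme.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign roughly half of all tokens to the green list,
    keyed on the previous token so the split changes at every position."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """Z-score of the observed green-token count against the 0.5 chance level."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected, std = 0.5 * n, math.sqrt(0.25 * n)
    return (greens - expected) / std

# A large positive z-score (say, above 4) would suggest the text was generated
# with the matching watermark; ordinary human text should hover near 0.
print(round(watermark_z_score("the quick brown fox jumps over the lazy dog"), 2))
```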

SCIENCE

Earth’s Inner Core: A Shifting, Spinning Mystery’s Latest Twist
Dennis Overbye | The New York Times
“Imagine Earth’s inner core—the dense center of our planet—as a heavy, metal ballerina. This iron-rich dancer is capable of pirouetting at ever-changing speeds. That core may be on the cusp of a big shift. Seismologists reported Monday in the journal Nature Geoscience that after brief but peculiar pauses, the inner core changes how it spins—relative to the motion of Earth’s surface—perhaps once every few decades. And, right now, one such reversal may be underway.”

Image Credit: Robert Linder / Unsplash

Category: Transhumanism

AstroForge’s Space Mining Tech Will Get Its First Real-World Test This Year

27 January, 2023 - 16:00

Asteroid mining has long caught the imagination of space entrepreneurs, but conventional wisdom has always been that it’s little more than a pipe dream. That may be about to change after a startup announced plans to launch two missions this year designed to validate its space mining technology.

There are estimated to be trillions of dollars worth of precious metals locked up in asteroids strewn throughout the solar system. Given growing concerns about the scarcity of key materials required for batteries and other electronics, there’s been growing interest in attempts to extract these resources.

The enormous cost of space missions and the huge technical challenges involved in mining in space have led many to dismiss the idea as unworkable. The industry has already seen one boom-and-bust cycle, with leading players like Deep Space Industries folding after investors lost their nerve.

But now, California-based startup AstroForge has taken concrete steps toward its goal of becoming the first company to mine an asteroid and bring the materials back to Earth. This year it will launch two missions, one designed to test out its in-space mineral extraction technology and another that will carry out a survey mission of a promising asteroid close to Earth.

“With a finite supply of precious metals on Earth, we have no other choice than to look to deep space to source cost-effective and sustainable materials,” CEO and co-founder Matt Gialich said in a statement.

The company, which raised $13 million in seed funding last April, is planning to target asteroids rich in platinum group metals in deep space. These materials are in major demand in many high-tech industries, but their reserves are limited and geographically concentrated. Extracting them can also be very environmentally damaging.

AstroForge is developing mineral refining technology that it hopes will allow it to extract precious metals from these asteroids and return them to Earth. A prototype will catch a lift into orbit on a spacecraft designed by OrbAstro and launched by a SpaceX Falcon 9 rocket in April. It will be pre-loaded with asteroid-like material, which it will then attempt to vaporize and sort into its different chemical constituents.

Then in October, the company will attempt an even more ambitious mission. A 220-pound spacecraft also designed by OrbAstro, called Brokkr-2, will attempt an 8-month journey to reach an asteroid orbiting the sun about 22 million miles from Earth. It will carry a host of instruments designed to assess the target asteroid in situ.

Both of these missions are precursors designed to test out systems that will be needed for AstroForge’s first proper asteroid mining mission, expected later this decade. The company plans to target asteroids between 66 and 4,920 feet in diameter and break them apart from a distance before collecting the remains.

Even if these missions are a success, there’s still a long road towards making space mining practical. According to research AstroForge recently conducted with the Colorado School of Mines, the bulk of metal-rich asteroids are found in the asteroid belt between Mars and Jupiter, which is currently a 14-year round trip.

Nonetheless, off-world mining does appear to be enjoying something of a renaissance, with dozens of space resources startups springing up in recent years. If AstroForge succeeds in proving out its technology this year, it could give this fledgling industry a major boost.

Image Credit: NASA

Category: Transhumanism

Deepfakes: Faces Created by AI Now Look More Real Than Genuine Photos

26 January, 2023 - 19:06

Even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don’t exist.

A few years ago, for example, a fake LinkedIn profile with a computer-generated profile picture made the news because it successfully connected with US officials and other influential individuals on the networking platform. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.

These deepfakes are becoming widespread in everyday culture, which means people should be more aware of how they’re being used in marketing, advertising, and social media. The images are also being used for malicious purposes, such as political propaganda, espionage, and information warfare.

Making them involves something called a deep neural network, a computer system that mimics the way the brain learns. This is “trained” by exposing it to increasingly large data sets of real faces.

In fact, two deep neural networks are set against each other, competing to produce the most realistic images. As a result, the end products are dubbed GAN images, where GAN stands for “generative adversarial networks.” The process generates novel images that are statistically indistinguishable from the training images.
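
A minimal sketch of that adversarial setup, assuming PyTorch and one-dimensional toy data rather than face images, looks like this: a generator turns random noise into samples, a discriminator scores real versus fake, and each is trained to beat the other. All layer sizes and hyperparameters here are arbitrary choices for illustration.

```python
# Minimal GAN training loop on 1-D toy data (not faces); illustrative only.
# The generator maps noise to samples, the discriminator scores real vs. fake,
# and the two networks are optimized against each other, as described above.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # "Real" data: samples drawn from a normal distribution centered at 3.
    real = torch.randn(64, 1) * 0.5 + 3.0
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Train the discriminator to tell real (label 1) from fake (label 0).
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# Generated samples should drift toward the real data's mean (around 3).
print(generator(torch.randn(5, 8)).detach().flatten())
```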

In a study published in iScience, my colleagues and I showed that a failure to distinguish these artificial faces from the real thing has implications for our online behavior. Our research suggests the fake images may erode our trust in others and profoundly change the way we communicate online.

We found that people perceived GAN faces to be even more real-looking than genuine photos of actual people’s faces. While it’s not yet clear why this is, this finding does highlight recent advances in the technology used to generate artificial images.

And we also found an interesting link to attractiveness: faces that were rated as less attractive were also rated as more real. Less attractive faces might be considered more typical, and the typical face may be used as a reference against which all faces are evaluated. Therefore, these GAN faces would look more real because they are more similar to mental templates that people have built from everyday life.

But seeing these artificial faces as authentic may also have consequences for the general levels of trust we extend to a circle of unfamiliar people—a concept known as “social trust.”

We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we saw that people were more likely to trust information conveyed by faces they had previously judged to be real, even if they were artificially generated.

It is not surprising that people put more trust in faces they believe to be real. But we found that trust was eroded once people were informed about the potential presence of artificial faces in online interactions. They then showed lower levels of trust, overall—independently of whether the faces were real or not.

This outcome could be regarded as useful in some ways, because it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually erode the very nature of how we communicate.

In general, we tend to operate on a default assumption that other people are basically truthful and trustworthy. The growth in fake profiles and other artificial online content raises the question of how much their presence and our knowledge about them can alter this “truth default” state, eventually eroding social trust.

Changing Our Defaults

The transition to a world where what’s real is indistinguishable from what’s not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.

If we are regularly questioning the truthfulness of what we experience online, it might require us to re-deploy our mental effort from the processing of the messages themselves to the processing of the messenger’s identity. In other words, the widespread use of highly realistic, yet artificial, online content could require us to think differently—in ways we hadn’t expected to.

In psychology, we use a term called “reality monitoring” for how we correctly identify whether something is coming from the external world or from within our brains. The advance of technologies that can produce fake, yet highly realistic, faces, images, and video calls means reality monitoring must be based on information other than our own judgments. It also calls for a broader discussion of whether humankind can still afford to default to truth.

It’s crucial for people to be more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or a large number of followers, and being aware of the potential for deepfake technology to be used for nefarious purposes.

The next frontier for this area should be improved algorithms for detecting fake digital faces. These could then be embedded in social media platforms to help us distinguish the real from the fake when it comes to new connections’ faces.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: The faces in this article’s banner image may look realistic, but they were generated by a computer. NVIDIA via thispersondoesnotexist.com

Category: Transhumanism

CRISPR’s Wild First Decade Only Scratches the Surface of Its Potential

25 January, 2023 - 20:11

Ten years ago, a little-known bacterial defense mechanism skyrocketed to fame as a powerful genome editor. In the decade since, CRISPR-Cas9 has spun off multiple variants, expanding into a comprehensive toolbox that can edit the genetic code of life.

Far from an ivory tower pursuit, its practical uses in research, healthcare, and agriculture came fast and furious.

You’ve seen the headlines. Regulators have green-lit clinical trials that tackle the underlying genetic mutation behind sickle cell disease. Some researchers edited immune cells to fight untreatable blood cancers in children. Others took pig-to-human organ transplants from dream to reality in an attempt to alleviate the shortage of donor organs. Recent work aims to help millions of people with high cholesterol—and potentially bring CRISPR-based gene therapy to the masses—by lowering their chances of heart disease with a single injection.

But to Dr. Jennifer Doudna, who won the Nobel Prize in 2020 for her role in developing CRISPR, we’re just scratching the surface of its potential. Together with graduate student Joy Wang, Doudna laid out a roadmap for the technology’s next decade in an article in Science.

If the 2010s were focused on establishing the CRISPR toolbox and proving its effectiveness, this decade is when the technology reaches its full potential. From CRISPR-based therapies and large-scale screens for disease diagnostics to engineering high-yield crops and nutritious foods, the technology “and its potential impact are still in their early stages,” the authors wrote.

A Decade of Highlights

We’ve spilt plenty of ink on CRISPR advances, but it pays to revisit the past to predict the future—and potentially scout out problems along the way.

One early highlight was CRISPR’s incredible ability to rapidly engineer animal models of disease. Its original form easily snips away a targeted gene in a very early embryo, which, when transplanted into a womb, can generate genetically modified mice in just a month, compared to a year using previous methods. Additional CRISPR versions, such as base editing—swapping one genetic letter for another—and prime editing—which snips the DNA without cutting both strands—further boosted the toolkit’s flexibility at engineering genetically altered organoids (think mini-brains) and animals. CRISPR rapidly established dozens of models for some of our most devastating and perplexing diseases, including various cancers, Alzheimer’s, and Duchenne muscular dystrophy—a degenerative disorder in which the muscle slowly wastes away. Dozens of CRISPR-based trials are now in the works.

CRISPR also accelerated genetic screening into the big data age. Rather than targeting one gene at a time, it’s now possible to silence, or activate, thousands of genes in parallel, forming a sort of Rosetta stone for translating genetic perturbations into biological changes. This is especially important for understanding genetic interactions, such as those in cancer or aging that we weren’t previously privy to, and gaining new ammunition for drug development.

But a crowning achievement for CRISPR was multiplexed editing. Like simultaneously tapping on multiple piano keys, this type of genetic engineering targets multiple specific DNA areas, rapidly changing a genome’s genetic makeup in one go.

The technology works in plants and animals. For eons, people have painstakingly bred crops with desirable features—be it color, size, taste, nutrition, or disease resilience. CRISPR can help select for multiple traits or even domesticate new crops in just one generation. CRISPR-generated hornless bulls, nutrient-rich tomatoes, and hyper-muscular farm animals and fish are already a reality. With the world population hitting 8 billion in 2022 and millions suffering from hunger, CRISPRed crops may offer a lifeline—that is, if people are willing to accept the technology.

The Path Forward

Where do we go from here?

To the authors, we need to further boost CRISPR’s effectiveness and build trust. This means going back to the basics to increase the tool’s editing accuracy and precision. Here, platforms to rapidly evolve Cas enzymes, the “scissor” component of the CRISPR machinery, are critical.

There have already been successes: one Cas version, for example, acts as a guardrail for the targeting component—the sgRNA “bloodhound.” In classic CRISPR, the sgRNA works alone, but in this updated version, it struggles to bind without Cas assistance. This trick helps tailor the edit to a specific DNA site and increases accuracy so the cut works as predicted.

Similar strategies can also boost precision with fewer side effects, or insert new genes into cells, such as neurons, that no longer divide. While this is already possible with prime editing, its efficiency can be 30 times lower than that of classic CRISPR.

“A main goal for prime editing in the next decade is improving efficiency without compromising editing product purity—an outcome that has the potential to turn prime editing into one of the most versatile tools for precision editing,” the authors said.

But perhaps more important is delivery, which remains a bottleneck, especially for therapeutics. Currently, CRISPR is generally used on cells outside the body that are infused back—as in the case of CAR-T—or in some cases, tethered to a viral carrier or encapsulated in fatty bubbles and injected into the body. There have been successes: in 2021, the first CRISPR-based infusion to tackle a genetic disease, transthyretin amyloidosis, showed promising results in an early clinical trial.

Yet both strategies are problematic: not many types of cells can survive the CAR-T treatment—dying when reintroduced into the body—and targeting specific tissues and organs remains mostly out of reach for injectable therapies.

A key advance for the next decade, the authors said, is to shuttle the CRISPR cargo into the targeted tissue without harm and release the gene editor at its intended spot. Each of these steps, though seemingly simple on paper, presents its own set of challenges that will require both bioengineering and innovation to overcome.

Finally, CRISPR can synergize with other technological advances, the authors said. For example, by tapping into cell imaging and machine learning, we could soon engineer even more efficient genome editors. Thanks to faster and cheaper DNA sequencing, we can then easily monitor gene-editing consequences. These data can then provide a kind of feedback mechanism with which to engineer even more powerful genome editors in a virtuous loop.

Real-World Impact

Although further expanding the CRISPR toolbox is on the agenda, the technology is sufficiently mature to impact the real world in its second decade, the authors said.

In the near future, we should see “an increased number of CRISPR-based treatments moving to later stages of clinical trials.” Looking further ahead, the technology, or its variants, could make pig-to-human organ xenotransplants routine, rather than experimental. Large-scale screens for genes that lead to aging or degenerative brain or heart diseases—our top killers today—could yield prophylactic CRISPR-based treatments. It’s no easy task: we need both knowledge of the genetics underlying multifaceted genetic diseases—that is, when multiple genes come into play—and a way to deliver the editing tools to their target. “But the potential benefits may drive innovation in these areas well beyond what is possible today,” the authors said.

Yet with greater power comes greater responsibility. CRISPR has advanced at breakneck speed, and regulatory agencies and the public are still struggling to catch up. Perhaps the most notorious example was that of the CRISPR babies, where experiments carried out against global ethical guidelines propelled an international consortium to lay down a red line for human germ-cell editing.

Similarly, genetically modified organisms (GMOs) remain a controversial topic. Although CRISPR is far more precise than previous genetic tools, it’ll be up to consumers to decide whether to welcome a new generation of human-evolved foods—both plant and animal.

These are important conversations that need global discourse as CRISPR enters its second decade. But to the authors, the future looks bright.

“Just as during the advent of CRISPR genome editing, a combination of scientific curiosity and the desire to benefit society will drive the next decade of innovation in CRISPR technology,” they said. “By continuing to explore the natural world, we will discover what cannot be imagined and put it to real-world use for the benefit of the planet.”

Image Credit: NIH

Category: Transhumanism

Electric Vehicle Batteries Could Meet Grid-Scale Storage Needs by 2030

23 January, 2023 - 16:00

Boosting the role of renewables in our electricity supply will require a massive increase in grid-scale energy storage. But new research suggests that electric vehicle batteries could meet short-term storage demands by as soon as 2030.

While solar and wind are rapidly becoming the cheapest source of electricity in many parts of the world, their intermittency is a significant problem. One potential solution is to use batteries to store energy for times when the sun doesn’t shine and the wind doesn’t blow, but building enough capacity to serve entire power grids would be enormously costly.

That’s why people have suggested making use of the huge number of batteries being installed in the ever-growing global fleet of electric vehicles. The idea is that when they’re not on the road, utilities could use these batteries to store excess energy and draw from it when demand spikes.

While there have been some early pilots, so far it has been unclear whether the idea really has legs. Now, a new economic analysis led by researchers at Leiden University in the Netherlands suggests that electric vehicle batteries could play a major role in grid-scale storage in the relatively near future.

There are two main ways that these batteries could aid the renewables transition, according to the team’s study published in Nature Communications. Firstly, so-called vehicle-to-grid technology could make it possible to do smart vehicle charging, only charging cars when power demand is low. It could also make it possible for vehicle owners to temporarily store electricity for utilities for a price.
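
As a rough illustration of the smart-charging half of that idea, the sketch below greedily picks the cheapest hours from a price forecast until a car’s overnight energy need is met. The prices, charger power, and energy figures are hypothetical, and a real vehicle-to-grid scheme would also handle discharging back to the grid.

```python
# Minimal sketch of "smart" overnight charging: pick the cheapest hours to
# deliver the required energy. Prices, power, and energy need are hypothetical.
def plan_charging(hourly_prices, energy_needed_kwh, charger_kw=7.4):
    """Return the sorted hour indices to charge in, cheapest hours first."""
    hours_needed = -(-energy_needed_kwh // charger_kw)  # ceiling division
    cheapest = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return sorted(cheapest[: int(hours_needed)])

# Example: an overnight price forecast (cents/kWh) for ten hourly slots, and a
# car that needs 30 kWh before the morning commute.
prices = [28, 24, 18, 12, 10, 9, 11, 15, 21, 26]
print(plan_charging(prices, energy_needed_kwh=30))  # -> [3, 4, 5, 6, 7]
```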

But old car batteries could also make a significant contribution. Their capacity declines over repeated charge and discharge cycles, and batteries typically become unsuitable for use in electric vehicles by the time they drop to 70 to 80 percent of their original capacity. That’s because they can no longer hold enough energy to make up for their added weight. Weight isn’t a problem for grid-scale storage, though, so these car batteries can be repurposed.

The researchers note that the lithium-ion batteries used in cars are probably only suitable for short-term storage of under four hours, but this accounts for most of the projected demand. So far though, there hasn’t been a comprehensive study of how large a contribution both current and retired electric vehicle batteries could play in the future of the grid.

To try and fill that gap, the researchers combined data on how many batteries are estimated to be produced over the coming years, how quickly batteries will degrade based on local conditions, and how electric vehicles are likely to be used in different countries—for instance, how many miles people drive in a day and how often they charge.

They found that the total available storage capacity from these two sources by 2050 was likely to be between 32 and 62 terawatt-hours. The authors note that this is significantly higher than the 3.4 to 19.2 terawatt-hours the world is predicted to need by 2050, according to the International Renewable Energy Agency and research group Storage Lab.

However, not every electric vehicle owner is likely to participate in vehicle-to-grid schemes and not all batteries will get repurposed at the end of their lives. So the researchers investigated how different participation rates would impact the ability of electric vehicle batteries to contribute to grid storage.

They found that to meet global demand by 2050, only between 12 and 43 percent of vehicle owners would need to take part in vehicle-to-grid schemes. If just half of secondhand batteries were also used for grid storage, the required participation rate would drop to as little as 10 percent. In the most optimistic scenarios, electric vehicle batteries could meet demand by 2030.
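
The logic behind those participation figures can be checked with simple back-of-envelope arithmetic. The sketch below is not the study’s model; it just shows how the required participation rate falls as repurposed second-life batteries cover part of the demand, using round numbers from the ranges quoted above.

```python
# Back-of-envelope check of the participation-rate logic (illustrative numbers
# drawn from the ranges quoted above, not the study's actual model).
def required_participation(demand_twh, v2g_capacity_twh,
                           secondlife_capacity_twh, secondlife_reuse_rate):
    """Fraction of EV owners that must join vehicle-to-grid schemes so that
    V2G plus repurposed second-life batteries cover short-term storage demand."""
    covered_by_secondlife = secondlife_capacity_twh * secondlife_reuse_rate
    remaining = max(demand_twh - covered_by_secondlife, 0.0)
    return remaining / v2g_capacity_twh

# Suppose roughly 19 TWh of demand in 2050, 45 TWh of in-vehicle capacity
# available for V2G, and 17 TWh of retired batteries (a hypothetical split
# of the 32 to 62 TWh total mentioned above).
print(round(required_participation(19.2, 45.0, 17.0, 0.0), 2))  # no reuse
print(round(required_participation(19.2, 45.0, 17.0, 0.5), 2))  # half reused
```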

Lots of factors will impact whether or not this could ever be achieved, including things like how quickly vehicle-to-grid infrastructure can be rolled out, how easy it is to convince vehicle owners to take part, and the economics of recycling car batteries at the end of their lives. The authors note that governments can and should play a role in incentivizing participation and mandating the reuse of old batteries.

But either way, the results suggest there may be a promising alternative to a costly and time-consuming rollout of dedicated grid storage. Electric vehicle owners may soon be doing their part for the environment twice over.

Image Credit: Shutterstock.com/Roman Zaiets

Category: Transhumanism