Transhumanism

Boston Dynamics Says Farewell to Its Humanoid Atlas Robot—Then Brings It Back Fully Electric

Singularity HUB - 18 hours 47 min ago

Yesterday, Boston Dynamics announced it was retiring its hydraulic Atlas robot. Atlas has long been the standard bearer of advanced humanoid robots. Over the years, the company was known as much for its research robots as it was for slick viral videos of them working out in military fatigues, forming dance mobs, and doing parkour. Fittingly, the company put together a send-off video of Atlas’s greatest hits and blunders.

But there were clues this wasn’t really the end, not least of which was the specific inclusion of the word “hydraulic” and the last line of the video, “‘Til we meet again, Atlas.” It wasn’t a long hiatus. Today, the company released hydraulic Atlas’s successor—electric Atlas.

The new Atlas is notable for several reasons. First, and most obviously, Boston Dynamics has finally done away with hydraulic actuators in favor of electric motors. To be clear, Atlas has long had an onboard battery pack—but now it’s fully electric. The advantages of going electric include reduced cost, noise, weight, and complexity. It also allows for a more polished design. From the company’s own Spot robot to a host of other humanoid robots, fully electric models are the norm these days. So, it’s about time Atlas made the switch.

Without a mess of hydraulic hoses to contend with, the new Atlas can now also contort itself in new ways. As you’ll note in the release video, the robot rises to its feet—a crucial skill for a walking robot—in a very, let’s say, special way. It folds its legs up along its torso and impossibly, for a human at least, pivots up through its waist (no hands). Once standing, Atlas swivels its head 180 degrees, then does the same thing at each hip joint and the waist. It takes a few watches to really appreciate all the weirdness there.

The takeaway is that while Atlas looks like us, it’s capable of movements we aren’t and therefore has more flexibility in how it completes future tasks.

This theme of same-but-different is evident in its head too. Instead of opting for a human-like head that risks slipping into the uncanny valley, the team chose a featureless (for now) lighted circle. In an interview with IEEE Spectrum, Boston Dynamics CEO Robert Playter said the human-like designs they tried seemed “a little bit threatening or dystopian.”

“We’re trying to project something else: a friendly place to look to gain some understanding about the intent of the robot,” he said. “The design borrows from some friendly shapes that we’d seen in the past. For example, there’s the old Pixar lamp that everybody fell in love with decades ago, and that informed some of the design for us.”

While most of these upgrades are improvements, there is one area where it’s not totally clear how well the new form will fare: strength and power.

Hydraulics are known to provide both, and Atlas pushed its hydraulics to their limits carrying heavy objects, executing backflips, and doing 180-degree, in-air twists. According to the press release and Playter’s interviews, little has been lost in this category. In fact, they say, electric Atlas is stronger than hydraulic Atlas. Still, as with all things robotics, the ultimate proof of how capable it is will likely be in video form, which we’ll eagerly await.

Despite big design updates, the company’s messaging is perhaps more notable. Atlas used to be a research robot. Now, the company intends to sell them commercially.

This isn’t terribly surprising. There are now a number of companies competing in the humanoid robots space, including Agility, 1X, Tesla, Apptronik, and Figure—which just raised $675 million at a $2.6 billion valuation. Several are making rapid progress, with a heavy focus on AI, and have kicked off real-world pilots.

Where does Boston Dynamics fit in? With Atlas, the company has been the clear leader for years. So, it’s not starting from the ground floor. Also, thanks to its Spot and Stretch robots, the company already has experience commercializing and selling advanced robots, from identifying product-market fit to dealing with logistics and servicing. But AI was, until recently, less of a focus. Now, they’re folding reinforcement learning into Spot, have begun experimenting with generative AI too, and promise more is coming.

Hyundai acquired Boston Dynamics for $1.1 billion in 2021. This may prove advantageous, as they have access to a world-class manufacturing company along with its resources and expertise producing and selling machines at scale. It’s also an opportunity to pilot Atlas in real-world situations and perfect it for future customers. Plans are already in motion to put Atlas to work at Hyundai next year.

Still, it’s worth noting that, although humanoid robots are attracting attention, getting big time investment, and being tried out in commercial contexts, there’s likely a ways to go before they reach the kind of generality some companies are touting. Playter says Boston Dynamics is going for multi-purpose, but still niche, robots in the near term.

“It definitely needs to be a multi-use case robot. I believe that because I don’t think there’s very many examples where a single repetitive task is going to warrant these complex robots,” he said. “I also think, though, that the practical matter is that you’re going to have to focus on a class of use cases, and really making them useful for the end customer.”

Humanoid robots that tidy your house and do the dishes may not be imminent, but the field is hot, and AI is bringing a degree of generality not possible a year ago. Now that Boston Dynamics has thrown its hat in the ring, things will only get more interesting from here. We’ll be keeping a close eye on YouTube to see what new tricks Atlas has up its sleeve.

Image Credit: Boston Dynamics

Category: Transhumanism

Exploding Stars Are Rare—but if One Was Close Enough, It Could Threaten Life on Earth

Singularity HUB - April 16, 2024 - 19:37

Stars like the sun are remarkably constant. They vary in brightness by only 0.1 percent over years and decades, thanks to the fusion of hydrogen into helium that powers them. This process will keep the sun shining steadily for about 5 billion more years, but when stars exhaust their nuclear fuel, their deaths can lead to pyrotechnics.

The sun will eventually die by growing large and then condensing into a type of star called a white dwarf. But stars over eight times more massive than the sun die violently in an explosion called a supernova.

Supernovae happen across the Milky Way only a few times a century, and these violent explosions are usually remote enough that people here on Earth don’t notice. For a dying star to have any effect on life on our planet, it would have to go supernova within 100 light years of Earth.

I’m an astronomer who studies cosmology and black holes. In my writing about cosmic endings, I’ve described the threat posed by stellar cataclysms such as supernovae and related phenomena such as gamma-ray bursts. Most of these cataclysms are remote, but when they occur closer to home, they can pose a threat to life on Earth.

The Death of a Massive Star

Very few stars are massive enough to die in a supernova. But when one does, it briefly rivals the brightness of billions of stars. At one supernova per 50 years, and with 100 billion galaxies in the universe, somewhere in the universe a supernova explodes every hundredth of a second.
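That universal rate is quick to sanity-check with the two figures the article gives (roughly one supernova per galaxy per 50 years, and about 100 billion galaxies):

```python
# Rough check of the universal supernova rate cited above.
GALAXIES = 100e9                  # galaxy count used in the text
YEARS_PER_SN_PER_GALAXY = 50      # ~one supernova per galaxy per 50 years
SECONDS_PER_YEAR = 3.156e7

sn_per_second = GALAXIES / YEARS_PER_SN_PER_GALAXY / SECONDS_PER_YEAR
interval_s = 1 / sn_per_second    # average gap between supernovae, universe-wide
print(f"~{sn_per_second:.0f} supernovae per second, one every {interval_s:.3f} s")
```

The result is roughly one explosion every 0.016 seconds, consistent with the “every hundredth of a second” figure above.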

The dying star emits high-energy radiation as gamma rays. Gamma rays are a form of electromagnetic radiation with wavelengths much shorter than light waves, meaning they’re invisible to the human eye. The dying star also releases a torrent of high-energy particles in the form of cosmic rays: subatomic particles moving at close to the speed of light.

Supernovae in the Milky Way are rare, but a few have been close enough to Earth that historical records discuss them. In 185 AD, a star appeared in a place where no star had previously been seen. It was probably a supernova.

Observers around the world saw a bright star suddenly appear in 1006 AD. Astronomers later matched it to a supernova 7,200 light years away. Then, in 1054 AD, Chinese astronomers recorded a star visible in the daytime sky that astronomers subsequently identified as a supernova 6,500 light years away.

Johannes Kepler, the astronomer who observed what was likely a supernova in 1604. Image Credit: Kepler-Museum in Weil der Stadt

Johannes Kepler observed the last supernova in the Milky Way in 1604, so in a statistical sense, the next one is overdue.

At 600 light years away, the red supergiant Betelgeuse in the constellation of Orion is the nearest massive star getting close to the end of its life. When it goes supernova, it will shine as bright as the full moon for those watching from Earth, without causing any damage to life on our planet.

Radiation Damage

If a star goes supernova close enough to Earth, the gamma-ray radiation could damage some of the planetary protection that allows life to thrive on Earth. There’s a time delay due to the finite speed of light. If a supernova goes off 100 light years away, it takes 100 years for us to see it.

Astronomers have found evidence of a supernova 300 light years away that exploded 2.5 million years ago. Radioactive atoms trapped in seafloor sediments are the telltale signs of this event. Gamma rays from the explosion eroded the ozone layer, which protects life on Earth from the sun’s harmful radiation. This event would have cooled the climate, leading to the extinction of some ancient species.

Safety from a supernova comes with greater distance. Gamma rays and cosmic rays spread out in all directions once emitted from a supernova, so the fraction that reach the Earth decreases with greater distance. For example, imagine two identical supernovae, with one 10 times closer to Earth than the other. Earth would receive radiation that’s about a hundred times stronger from the closer event.
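The distance scaling in that example is just the inverse-square law; a minimal sketch:

```python
# Inverse-square law: radiation flux falls off as 1/distance².
# An identical event 10x closer therefore delivers 10² = 100x the flux at Earth.
def relative_flux(distance_ratio: float) -> float:
    """Flux of the nearer of two identical events, relative to the farther one."""
    return distance_ratio ** 2

print(relative_flux(10))  # 100
```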

A supernova within 30 light years would be catastrophic, severely depleting the ozone layer, disrupting the marine food chain and likely causing mass extinction. Some astronomers guess that nearby supernovae triggered a series of mass extinctions 360 to 375 million years ago. Luckily, these events happen within 30 light years only every few hundred million years.

When Neutron Stars Collide

But supernovae aren’t the only events that emit gamma rays. Neutron star collisions cause high-energy phenomena ranging from gamma rays to gravitational waves.

Left behind after a supernova explosion, neutron stars are city-size balls of matter with the density of an atomic nucleus, about 300 trillion times denser than the sun. These collisions created much of the gold and other precious metals on Earth. The intense pressure caused by two ultradense objects colliding forces neutrons into atomic nuclei, which creates heavier elements such as gold and platinum.

A neutron star collision generates an intense burst of gamma rays. These gamma rays are concentrated into a narrow jet of radiation that packs a big punch.

If the Earth were in the line of fire of a gamma-ray burst within 10,000 light years, or 10 percent of the diameter of the galaxy, the burst would severely damage the ozone layer. It would also damage the DNA inside organisms’ cells, at a level that would kill many simple life forms like bacteria.

That sounds ominous, but neutron stars do not typically form in pairs, so there is only one collision in the Milky Way about every 10,000 years. They are 100 times rarer than supernova explosions. Across the entire universe, there is a neutron star collision every few minutes.

Gamma-ray bursts may not pose an imminent threat to life on Earth, but over very long time scales, bursts will inevitably hit the Earth. The odds of a gamma-ray burst triggering a mass extinction are 50 percent in the past 500 million years and 90 percent in the 4 billion years since there has been life on Earth.

By that math, it’s quite likely that a gamma-ray burst caused one of the five mass extinctions in the past 500 million years. Astronomers have argued that a gamma-ray burst caused the first mass extinction 440 million years ago, when 60 percent of all marine creatures disappeared.

A Recent Reminder

The most extreme astrophysical events have a long reach. Astronomers were reminded of this in October 2022, when a pulse of radiation swept through the solar system and overloaded all of the gamma-ray telescopes in space.

It was the brightest gamma-ray burst to occur since human civilization began. The radiation caused a sudden disturbance to the Earth’s ionosphere, even though the source was an explosion nearly two billion light years away. Life on Earth was unaffected, but the fact that it altered the ionosphere is sobering—a similar burst in the Milky Way would be a million times brighter.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA, ESA, Joel Kastner (RIT)


A New Photonic Computer Chip Uses Light to Slash AI Energy Costs

Singularity HUB - April 15, 2024 - 22:45

AI models are power hogs.

As the algorithms grow and become more complex, they’re increasingly taxing current computer chips. Multiple companies have designed chips tailored to AI to reduce power draw. But they’re all based on one fundamental rule—they use electricity.

This month, a team from Tsinghua University in China switched up the recipe. They built a neural network chip that uses light rather than electricity to run AI tasks at a fraction of the energy cost of NVIDIA’s H100, a state-of-the-art chip used to train and run AI models.

Called Taichi, the chip combines two types of light-based processing into its internal structure. Compared to previous optical chips, Taichi is far more accurate for relatively simple tasks such as recognizing hand-written numbers or other images. Unlike its predecessors, the chip can generate content too. It can make basic images in a style based on the Dutch artist Vincent van Gogh, for example, or classical musical numbers inspired by Johann Sebastian Bach.

Part of Taichi’s efficiency is due to its structure. The chip is made of multiple components called chiplets. Similar to the brain’s organization, each chiplet performs its own calculations in parallel, the results of which are then integrated with the others to reach a solution.

Faced with the challenging task of sorting images into 1,000 categories, Taichi was successful nearly 92 percent of the time, matching current chip performance while slashing energy consumption more than a thousand-fold.

For AI, “the trend of dealing with more advanced tasks [is] irreversible,” wrote the authors. “Taichi paves the way for large-scale photonic [light-based] computing,” leading to more flexible AI with lower energy costs.

Chip on the Shoulder

Today’s computer chips don’t mesh well with AI.

Part of the problem is structural. Processing and memory on traditional chips are physically separated. Shuttling data between them takes up enormous amounts of energy and time.

While efficient for solving relatively simple problems, the setup is incredibly power hungry when it comes to complex AI, like the large language models powering ChatGPT.

The main problem is how computer chips are built. Each calculation relies on transistors, which switch on or off to represent the 0s and 1s used in calculations. Engineers have dramatically shrunk transistors over the decades so they can cram ever more onto chips. But current chip technology is cruising towards a breaking point where we can’t go smaller.

Scientists have long sought to revamp current chips. One strategy inspired by the brain relies on “synapses”—the biological “dock” connecting neurons—that compute and store information at the same location. These brain-inspired, or neuromorphic, chips slash energy consumption and speed up calculations. But like current chips, they rely on electricity.

Another idea is to use a different computing mechanism altogether: light. “Photonic computing” is “attracting ever-growing attention,” wrote the authors. Rather than using electricity, it may be possible to hijack light particles to power AI at the speed of light.

Let There Be Light

Compared to electricity-based chips, light uses far less power and can simultaneously tackle multiple calculations. Tapping into these properties, scientists have built optical neural networks that use photons—particles of light—for AI chips, instead of electricity.

These chips can work two ways. In one, chips scatter light signals into engineered channels that eventually combine the rays to solve a problem. Called diffraction, these optical neural networks pack artificial neurons closely together and minimize energy costs. But they can’t be easily changed, meaning they can only work on a single, simple problem.

A different setup depends on another property of light called interference. Like ocean waves, light waves combine and cancel each other out. When inside micro-tunnels on a chip, they can collide to boost or inhibit each other—these interference patterns can be used for calculations. Chips based on interference can be easily reconfigured using a device called an interferometer. Problem is, they’re physically bulky and consume tons of energy.
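One common way interference is harnessed for computation is the Mach-Zehnder interferometer: two beamsplitters around tunable phase shifters form a programmable 2×2 unitary matrix acting on the light in a pair of waveguides, and meshes of these units can multiply a vector of light amplitudes by larger matrices. A hedged numerical sketch of the idea (illustrative only; this is not Taichi’s specific circuit):

```python
import numpy as np

# Toy model of interference-based photonic computing. A Mach-Zehnder
# interferometer (MZI) = beamsplitter + phase shifter + beamsplitter,
# acting as a tunable 2x2 unitary on two waveguide modes.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # ideal 50:50 beamsplitter

def mzi(theta: float, phi: float) -> np.ndarray:
    """2x2 unitary implemented by one MZI with phase settings theta and phi."""
    return BS @ np.diag([np.exp(1j * theta), 1]) @ BS @ np.diag([np.exp(1j * phi), 1])

U = mzi(0.7, 1.3)
x = np.array([1.0, 0.0])   # input light amplitudes in the two waveguides
y = U @ x                  # output amplitudes: a matrix-vector product, done by light
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: U is unitary (lossless)
```

Because each MZI is unitary, no optical power is lost in the ideal case; reprogramming the chip is just retuning the phases theta and phi.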

Then there’s the problem of accuracy. Even in the sculpted channels often used for interference experiments, light bounces and scatters, making calculations unreliable. For a single optical neural network, the errors are tolerable. But with larger optical networks and more sophisticated problems, noise rises exponentially and becomes untenable.

This is why light-based neural networks can’t be easily scaled up. So far, they’ve only been able to solve basic tasks, such as recognizing numbers or vowels.

“Magnifying the scale of existing architectures would not proportionally improve the performances,” wrote the team.

Double Trouble

The new chip, Taichi, combines the two approaches to push optical neural networks towards real-world use.

Rather than configuring a single neural network, the team used a chiplet method, which delegated different parts of a task to multiple functional blocks. Each block had its own strengths: One was set up to analyze diffraction, which could compress large amounts of data in a short period of time. Another block was embedded with interferometers to provide interference, allowing the chip to be easily reconfigured between tasks.

Compared to deep learning, Taichi took a “shallow” approach whereby the task is spread across multiple chiplets.

With standard deep learning structures, errors tend to accumulate over layers and time. This setup nips problems that come from sequential processing in the bud. When faced with a problem, Taichi distributes the workload across multiple independent clusters, making it easier to tackle larger problems with minimal errors.
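The error argument can be illustrated with a toy numerical model (my own illustration, not taken from the Taichi paper): push a signal through eight noisy stages in series, versus eight independent noisy blocks whose outputs are combined once.

```python
import numpy as np

# Toy illustration (not from the paper): sequential noisy stages compound
# errors multiplicatively, while independent parallel blocks whose outputs
# are averaged see their errors partially cancel.
rng = np.random.default_rng(0)
NOISE, STAGES, TRIALS = 0.05, 8, 20000

# Deep/sequential: the signal passes through 8 noisy stages in a row.
deep = np.prod(1 + rng.normal(0, NOISE, (TRIALS, STAGES)), axis=1)
# Shallow/distributed: 8 independent noisy blocks, combined by averaging.
shallow = np.mean(1 + rng.normal(0, NOISE, (TRIALS, STAGES)), axis=1)

deep_err = np.abs(deep - 1).mean()
shallow_err = np.abs(shallow - 1).mean()
print(f"sequential error {deep_err:.3f} vs distributed error {shallow_err:.3f}")
```

In this toy model the sequential error grows roughly with the square root of the number of stages, while the distributed error shrinks by the same factor.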

The strategy paid off.

Taichi has the computational capacity of 4,256 total artificial neurons, with nearly 14 million parameters mimicking the brain connections that encode learning and memory. When sorting images into 1,000 categories, the photonic chip was nearly 92 percent accurate, comparable to “currently popular electronic neural networks,” wrote the team.

The chip also excelled in other standard AI image-recognition tests, such as identifying hand-written characters from different alphabets.

As a final test, the team challenged the photonic AI to grasp and recreate content in the style of different artists and musicians. When trained with Bach’s repertoire, the AI eventually learned the pitch and overall style of the musician. Similarly, images from van Gogh or Edvard Munch—the artist behind the famous painting, The Scream—fed into the AI allowed it to generate images in a similar style, although many looked like a toddler’s recreation.

Optical neural networks still have much further to go. But if used broadly, they could be a more energy-efficient alternative to current AI systems. Taichi is over 100 times more energy efficient than previous iterations. But the chip still requires lasers for power and data transfer units, which are hard to condense.

Next, the team is hoping to integrate readily available mini lasers and other components into a single, cohesive photonic chip. Meanwhile, they hope Taichi will “accelerate the development of more powerful optical solutions” that could eventually lead to “a new era” of powerful and energy-efficient AI.

Image Credit: spainter_vfx / Shutterstock.com


This Week’s Awesome Tech Stories From Around the Web (Through April 13)

Singularity HUB - April 13, 2024 - 16:00
ROBOTICS

Is Robotics About to Have Its Own ChatGPT Moment?
Melissa Heikkilä | MIT Technology Review
“For decades, roboticists have more or less focused on controlling robots’ ‘bodies’—their arms, legs, levers, wheels, and the like—via purpose-driven software. But a new generation of scientists and inventors believes that the previously missing ingredient of AI can give robots the ability to learn new skills and adapt to new environments faster than ever before. This new approach, just maybe, can finally bring robots out of the factory and into our homes.”

ARTIFICIAL INTELLIGENCE

Humans Forget. AI Assistants Will Remember Everything
Boone Ashworth | Wired
“Human brains, Gruber says, are really good at story retrieval, but not great at remembering details, like specific dates, names, or faces. He has been arguing for digital AI assistants that can analyze everything you do on your devices and index all those details for later reference.”

BIOTECH

The Effort to Make a Breakthrough Cancer Therapy Cheaper
Cassandra Willyard | MIT Technology Review
“CAR-T therapies are already showing promise beyond blood cancers. Earlier this year, researchers reported stunning results in 15 patients with lupus and other autoimmune diseases. CAR-T is also being tested as a treatment for solid tumors, heart disease, aging, HIV infection, and more. As the number of people eligible for CAR-T therapies increases, so will the pressure to reduce the cost.”

ETHICS

Students Are Likely Writing Millions of Papers With AI
Amanda Hoover | Wired
“A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in 20 percent of its content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing.”

SCIENCE

Physicists Capture First-Ever Image of an Electron Crystal
Isaac Schultz | Gizmodo
“Electrons are typically seen flitting around their atoms, but a team of physicists has now imaged the particles in a very different state: nestled together in a quantum phase called a Wigner crystal, without a nucleus at their core. The phase is named after Eugene Wigner, who predicted in 1934 that electrons would crystallize in a lattice when certain interactions between them are strong enough. The recent team used high-resolution scanning tunneling microscopy to directly image the predicted crystal.”

GADGETS

Review: Humane Ai Pin
Julian Chokkattu | Wired
“Humane has potential with the Ai Pin. I like being able to access an assistant so quickly, but right now, there’s nothing here that makes me want to use it over my smartphone. Humane says this is just version 1.0 and that many of the missing features I’ve mentioned will arrive later. I’ll be happy to give it another go then.”

SPACE

The Moon Likely Turned Itself Inside Out 4.2 Billion Years Ago
Passant Rabie | Gizmodo
“A team of researchers from the University of Arizona found new evidence that supports one of the wildest formation theories for the moon, which suggests that Earth’s natural satellite may have turned itself inside out a few million years after it came to be. In a new study published Monday in Nature Geoscience, the researchers looked at subtle variations in the moon’s gravitational field to provide the first physical evidence of a sinking mineral-rich layer.”

TECH

How Tech Giants Cut Corners to Harvest Data for AI
Cade Metz, Cecilia Kang, Sheera Frenkel, Stuart A. Thompson, and Nico Grant | The New York Times
“The race to lead AI has become a desperate hunt for the digital data needed to advance the technology. To obtain that data, tech companies including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law, according to an examination by The New York Times.”

ENERGY

Artificial Intelligence’s ‘Insatiable’ Energy Needs Not Sustainable, Arm CEO Says
Peter Landers | The Wall Street Journal
“In a January report, the International Energy Agency said a request to ChatGPT requires 2.9 watt-hours of electricity on average—equivalent to turning on a 60-watt lightbulb for just under three minutes. That is nearly 10 times as much as the average Google search. The agency said power demand by the AI industry is expected to grow by at least 10 times between 2023 and 2026.”

FUTURE

Someday, Earth Will Have a Final Total Solar Eclipse
Katherine Kornei | The New York Times
“The total solar eclipse visible on Monday over parts of Mexico, the United States and Canada was a perfect confluence of the sun and the moon in the sky. But it’s also the kind of event that comes with an expiration date: At some point in the distant future, Earth will experience its last total solar eclipse. That’s because the moon is drifting away from Earth, so our nearest celestial neighbor will one day, millions or even billions of years in the future, appear too small in the sky to completely obscure the sun.”

Image Credit: Tim Foster / Unsplash


Elon Musk Doubles Down on Mars Dreams and Details What’s Next for SpaceX’s Starship

Singularity HUB - April 12, 2024 - 21:13

Elon Musk has long been open about his dreams of using SpaceX to spread humanity’s presence further into the solar system. And last weekend, he gave an updated outline of his vision for how the company’s rockets could enable the colonization of Mars.

The serial entrepreneur has been clear for a number of years that the main motivation for founding SpaceX was to make humans a multiplanetary species. For a long time, that seemed like the kind of aspirational goal one might set to inspire and motivate engineers rather than one with a realistic chance of coming to fruition.

But following the successful launch of the company’s mammoth Starship vehicle last month, the idea is beginning to look less far-fetched. And in a speech at the company’s facilities in South Texas, Musk explained how he envisions using Starship to deliver millions of tons of cargo to Mars over the next couple of decades to create a self-sustaining civilization.

“Starship is the first design of a rocket that is actually capable of making life multiplanetary,” Musk said. “No rocket before this has had the potential to extend life to another planet.”

In a slightly rambling opening to the speech, Musk explained that making humans multiplanetary could be an essential insurance policy in case anything catastrophic happens to Earth. The red planet is the most obvious choice, he said, as it’s neither too close nor too far from Earth and has many of the raw ingredients required to support a functioning settlement.

But he estimates it will require us to deliver several million tons of cargo to the surface to get that civilization up and running. Starship is central to those plans, and Musk outlined the company’s roadmap for the massive rocket over the coming years.
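For a rough sense of scale, a back-of-the-envelope flight count (both inputs are my assumptions for illustration: “several million tons” read as two million, and the 200-ton future payload figure cited later in the article):

```python
# Rough flight count implied by the cargo target. Both numbers are
# assumptions for illustration: "several million tons" taken as 2 million,
# and Starship's projected future payload of 200 tons per flight.
CARGO_TONS = 2_000_000
PAYLOAD_TONS_PER_FLIGHT = 200

flights = CARGO_TONS / PAYLOAD_TONS_PER_FLIGHT
print(f"{flights:,.0f} Mars-bound flights")  # 10,000 flights
```

Even under these generous assumptions, the target implies thousands of flights, which is why reusability and production rate dominate the roadmap below.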

Key to the vision is making the vehicle entirely reusable. That means the first hurdle is proving SpaceX can land and reuse both the Super Heavy first stage rocket and the Starship spacecraft itself. The second of those challenges will be tougher, as the vehicle must survive reentry into the atmosphere—in the most recent test, it broke up on its way back to Earth.

Musk says they plan to demonstrate the ability to land and reuse the Super Heavy booster this year, which he thinks has an 80 to 90 percent chance of success. Assuming they can get Starship to survive the extreme heat of reentry, they are also going to attempt landing the vehicle on a mock launch pad out at sea in 2024, with the aim of being able to land and reuse it by next year.

Proving the rocket works and is reusable is just the very first step in Musk’s Mars ambitions though. To achieve his goal of delivering a million people to the red planet in the next 20 years, SpaceX will have to massively ramp up its production and launch capabilities.

The company is currently building a second launch tower at its base in South Texas and is also planning to build two more at Cape Canaveral in Florida. Musk said the Texas sites would be mostly used for test launches and development work, with the Florida ones being the main hub for launches once Starship begins commercial operations.

SpaceX plans to build six Starships this year, according to Musk, but it is also building what he called a “giant factory” that will enable it to massively ramp up production of the spacecraft. The long-term goal is to produce multiple Starships a day. That’s crucial, according to Musk, because Starships initially won’t return from Mars and will instead be used as raw materials to construct structures on the surface.

The company also plans to continue development of Starship, boosting its carrying capacity from around 100 tons today to 200 tons in the future and enabling it to complete multiple launches in a day. SpaceX also hopes to demonstrate ship-to-ship refueling in orbit next year. It will be necessary to replenish the fuel used up by Starship on launch so it has a full tank as it sets off for Mars.

Those missions will depart when the orbits of Earth and Mars bring them close together, an alignment that only happens every 26 months. As such, Musk envisions entire armadas of Starships setting off together whenever these windows arrive.
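That 26-month cadence is the Earth-Mars synodic period, which falls straight out of the two planets’ orbital periods:

```python
# Earth-Mars launch windows recur on the synodic period:
# 1 / (1/T_earth - 1/T_mars), with orbital periods in days.
T_EARTH = 365.25
T_MARS = 687.0

synodic_days = 1 / (1 / T_EARTH - 1 / T_MARS)
print(f"{synodic_days:.0f} days ≈ {synodic_days / 30.44:.0f} months")
```

The formula gives about 780 days, i.e. the roughly 26-month gap between favorable alignments.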

SpaceX has done some early work on what needs to happen once Starships arrive at the red planet. They’ve identified promising landing sites and the infrastructure that will need setting up, including power generation, ice-mining facilities, propellant factories, and communication networks. But Musk admits they’ve yet to start development of any of these.

One glaring omission in the talk was any detail on who’s going to be paying for all of this. While the goal of making humankind multiplanetary is a noble one, it’s far from clear how the endeavor would make money for those who put up the funds to make it possible.

Musk estimates that the cost of each launch could eventually fall to just $2 to $3 million. And he noted that profits from the company’s Starlink satellites and Falcon 9 launch vehicle are currently paying for Starship’s development. But those revenue streams are unlikely to cover the thousands of launches a year required to make his Mars dreams a reality.

Still, the very fact that the questions these days are more about economics than technical feasibility is testament to the rapid progress SpaceX has made. The dream of becoming a multiplanetary species may not be science fiction for much longer.

Image Credit: SpaceX


This Company Is Growing Mini Livers Inside People to Fight Liver Disease

Singularity HUB - April 11, 2024 - 23:10

Growing a substitute liver inside a human body sounds like science fiction.

Yet a patient with severe liver damage just received an injection that could grow an additional “mini liver” directly inside their body. If all goes well, it’ll take up the failing liver’s job of filtering toxins from the blood.

For people with end-stage liver disease, a transplant is the only solution. But matching donor organs are hard to come by. Across the globe, two million people die from liver failure each year.

The new treatment, helmed by biotechnology company LyGenesis, offers an unusual solution. Rather than transplanting a whole new liver, the team is injecting healthy donor liver cells into lymph nodes in the patient’s upper abdomen. In a few months, it’s hoped the cells will gradually replicate and grow into a functional miniature liver.

The patient is part of a Phase 2a clinical trial, a stage that begins to gauge whether a therapy is effective. In up to 12 people with end-stage liver disease, the trial will test multiple doses to find the “Goldilocks” zone of treatment—effective with minimal side effects.

If successful, the therapy could sidestep the transplant organ shortage problem, not just for liver disease, but potentially also for kidney failure or diabetes. The math also works in favor of patients. Instead of one donor organ per recipient, healthy cells from one person could help multiple people in need of new organs.

A Living Bioreactor

Most of us don’t think about lymph nodes until we catch a cold, and they swell up painfully under the chin. These structures are dotted throughout the body. Like tiny cellular nurseries, they help immune cells proliferate to fend off invading viruses and bacteria.

They also have a dark side. Lymph nodes aid the spread of breast and other types of cancers. Because they’re highly connected to a highway of lymphatic vessels, cancer cells tunnel into them and take advantage of nutrients in the blood to grow and spread across the body.

What seems like a biological downfall may benefit regenerative medicine. If lymph nodes can support both immune cells and cancer growth, they may also be able to incubate other cell types and grow them into tissues—or even replacement organs.

The idea diverges from usual regenerative therapies, such as stem cell treatments, which aim to revive damaged tissues at the spot of injury. This is a hard ask: When organs fail, they often scar and spew out toxic chemicals that prevent engrafted cells from growing.

Lymph nodes offer a way to skip these cellular cesspools entirely.

Growing organs inside lymph nodes may sound far-fetched, but over a decade ago, LyGenesis’ chief scientific officer and co-founder, Dr. Eric Lagasse, showed it was possible in mice. In one test, his team injected liver cells directly into a lymph node inside a mouse’s belly. They found the grafted cells stayed in the “nursery,” rather than roaming the body and causing unexpected side effects.

In a mouse model of lethal liver failure, an infusion of healthy liver cells into a lymph node grew into a mini liver in just twelve weeks. The transplanted cells took over their host node, developing into the cube-like cells characteristic of a normal liver and leaving behind just a sliver of the original lymph node tissue.

The graft could support immune system growth and grew cells to shuttle bile and other digestive chemicals. It also extended the mice’s survival: without treatment, most mice died within 10 weeks of the start of the study, while most mice injected with liver cells survived past 30 weeks.

A similar strategy worked in dogs and pigs with damaged livers. Injecting donor cells into lymph nodes formed mini livers in less than two months in pigs. Under the microscope, the newborn structures resembled the liver’s intricate architecture, including “highways” for bile to easily flow along instead of accumulating, which causes even more damage and scarring.

The body has over 500 lymph nodes. Injecting liver cells into lymph nodes elsewhere in the body also grew mini livers, but they weren’t as effective.

“It’s all about location, location, location,” said Lagasse at the time.

A Daring Trial

With prior experience guiding their clinical trial, LyGenesis dosed a first patient in late March.

The team used a technique called endoscopic ultrasound to direct the cells into the designated lymph node. In the procedure, a thin, flexible tube with a small ultrasound device is inserted through the mouth into the digestive tract. The ultrasound generates an image of the surrounding tissue and helps guide the tube to the target lymph node for injection.

The procedure may sound difficult, but compared to a liver transplant, it’s minimally invasive. In an interview with Nature, Dr. Michael Hufford, CEO of LyGenesis, said the patient is recovering well and has already been discharged from the clinic.

The company aims to enroll all 12 patients by mid-2025 to test the therapy’s safety and efficacy.

Many questions remain. The transplanted cells could grow into mini livers of different sizes, based on chemical signals from the body. Although not a problem in mice and pigs, could they potentially overgrow in humans? Meanwhile, patients receiving the treatment will need to take a hefty dose of medications to suppress their immune systems. How these will interact with the transplants is also unknown.

Another question is dosage. Lymph nodes are plentiful. The trial will inject liver cells into up to five lymph nodes to see if multiple mini livers can grow and function without side effects.

If successful, the therapy could have a wider reach.

In diabetic mice, seeding lymph nodes with pancreatic cellular clusters restored their blood sugar levels. A similar strategy could combat Type 1 diabetes in humans. The company is also looking into whether the technology can revive kidney function or even combat aging.

But for now, Hufford is focused on helping millions of people with liver damage. “This therapy will potentially be a remarkable regenerative medicine milestone by helping patients with ESLD [end-stage liver disease] grow new functional ectopic livers in their own body,” he said.

Image Credit: A solution with liver cells in suspension / LyGenesis

Category: Transhumanismus

Harvard’s New Programmable Liquid Shifts Its Properties on Demand

Singularity HUB - April 11, 2024 - 00:37

We’re surrounded by ingenious substances: a menu of metal alloys that can wrap up leftovers or skin rockets, paints in any color imaginable, and ever-morphing digital displays. Virtually all of these exploit the natural properties of the underlying materials.

But an emerging class of materials is more versatile, even programmable.

Known as metamaterials, these substances are meticulously engineered such that their structural makeup—as opposed to their composition—determines their properties. Some metamaterials might make long-distance wireless power transfer practical; others could bring “invisibility cloaks” or futuristic materials that respond to brainwaves.

But most examples are solid metamaterials—a Harvard team wondered if they could make a metafluid. As it turns out, yes, absolutely. The team recently described their results in Nature.

“Unlike solid metamaterials, metafluids have the unique ability to flow and adapt to the shape of their container,” Katia Bertoldi, a professor in applied mechanics at Harvard and senior author of the paper, said in a press release. “Our goal was to create a metafluid that not only possesses these remarkable attributes but also provides a platform for programmable viscosity, compressibility, and optical properties.”

The team’s metafluid is made up of hundreds of thousands of tiny, stretchy spheres—each between 50 and 500 microns across—suspended in oil. The spheres change shape depending on the pressure of the surrounding oil. At higher pressures, they deform, one hemisphere collapsing inward into a kind of half-moon shape. They then resume their original spherical shape when the pressure is relieved.

The metafluid’s properties—such as viscosity or opacity—change depending on which of these shapes its constituent spheres assume. The fluid’s properties can be fine-tuned based on how many spheres are in the liquid and how big or thick they are.

Greater pressure causes the spheres to collapse. When the pressure is relieved, they resume their spherical shape. Credit: Adel Djellouli/Harvard SEAS

As a proof of concept, the team filled a hydraulic robotic gripper with their metafluid. Robots usually have to be programmed to sense objects and adjust grip strength. The team showed the gripper could automatically adapt to a blueberry, a glass, and an egg without additional sensing or programming required. The pressure of each object “programmed” the liquid to adjust, allowing the gripper to pick up all three, undamaged, with ease.

The team also showed the metafluid could switch from opaque, when its constituents were spherical, to more transparent, when they collapsed. The latter shape, the researchers said, functions like a lens focusing light, while the former scatters light.

The metafluid obscures the Harvard logo then becomes more transparent as the capsules collapse. Credit: Adel Djellouli/Harvard SEAS

Also of note, the metafluid behaves like a Newtonian fluid when its components are spherical, meaning its viscosity only changes with shifts in temperature. When they collapse, however, it becomes a non-Newtonian fluid, where its viscosity changes depending on the shear forces present. The greater the shear force—that is, parallel forces pushing in opposite directions—the more liquid the metafluid becomes.
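To make that distinction concrete, here is a minimal sketch contrasting the two behaviors using textbook viscosity models; the parameter values are illustrative placeholders, not measurements of the Harvard metafluid:

```python
def newtonian_viscosity(shear_rate, mu=1.0):
    """Newtonian fluid: viscosity is a constant, independent of shear rate."""
    return mu

def power_law_viscosity(shear_rate, k=1.0, n=0.5):
    """Shear-thinning power-law fluid (n < 1): apparent viscosity
    falls as the shear rate rises, so the fluid flows more easily
    under stress."""
    return k * shear_rate ** (n - 1)

for rate in (1.0, 10.0, 100.0):
    print(f"shear {rate:>5}: newtonian={newtonian_viscosity(rate):.3f}, "
          f"shear-thinning={power_law_viscosity(rate):.3f}")
```

With n = 1 the power-law model reduces to the Newtonian case, loosely mirroring how the metafluid switches regimes when its capsules collapse.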

Next, the team will investigate additional properties—such as how their creation’s acoustics and thermodynamics change with pressure—and look into commercialization. Making the elastic spheres themselves is fairly straightforward, and they think metafluids like theirs might be useful in robots, as “intelligent” shock absorbers, or in color-changing e-inks.

“The application space for these scalable, easy-to-produce metafluids is huge,” said Bertoldi.

Of course, the team’s creation is still in the research phase. There are plenty of hoops yet to clear before it shows up in products we all might enjoy. Still, the work adds to a growing list of metamaterials—and shows the promise of going from solid to liquid.

Image Credit: Adel Djellouli/Harvard SEAS

Category: Transhumanismus

3 Body Problem: Is the Universe Really a ‘Dark Forest’ Full of Hostile Aliens in Hiding?

Singularity HUB - April 9, 2024 - 20:17

We have no good reason to believe that aliens have ever contacted Earth. Sure, there are conspiracy theories, and some rather strange reports about harm to cattle, but nothing credible. Physicist Enrico Fermi found this odd. His formulation of the puzzle, proposed in the 1950s and now known as the Fermi Paradox, is still key to the search for extraterrestrial intelligence (SETI) and to messaging extraterrestrial intelligence (METI) by sending signals into space.

The Earth is about 4.5 billion years old, and life is at least 3.5 billion years old. The paradox states that, given the scale of the universe, favorable conditions for life are likely to have occurred many, many times. So where is everyone? We have good reasons to believe that there must be life out there, but nobody has come to call.
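The “many, many times” intuition is usually quantified with the Drake equation, a chain of multiplied probabilities. A back-of-envelope sketch follows; every parameter value is an illustrative guess, not a figure from this article:

```python
# Drake equation: N = R* x fp x ne x fl x fi x fc x L
# All parameter values below are illustrative guesses for demonstration only.
R_star = 1.5   # new stars formed per year in the Milky Way
f_p = 0.9      # fraction of stars with planets
n_e = 0.3      # habitable planets per planetary system
f_l = 0.1      # fraction of habitable planets where life arises
f_i = 0.1      # fraction of those that evolve intelligence
f_c = 0.1      # fraction of those that emit detectable signals
L = 10_000     # years a civilization keeps signaling

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilizations in the galaxy: {N:.2f}")
```

Plausible-sounding inputs can yield anything from far below one to many thousands, which is exactly why the silence is a paradox rather than a settled fact.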

This is an issue that the character Ye Wenjie wrestles with in the first episode of Netflix’s 3 Body Problem. Working at a radio observatory, she does finally receive a message from a member of an alien civilization—telling her they are a pacifist and urging her not to respond to the message or Earth will be attacked.

The series will ultimately offer a detailed, elegant solution to the Fermi Paradox, but we will have to wait until the second season.

Or you can read the second book in Cixin Liu’s series, The Dark Forest. Without spoilers, the explanation set out in the books runs as follows: “The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound.”

Ultimately, everybody is hiding from everyone else. Differential rates of technological progress make an ongoing balance of power impossible, leaving the most rapidly progressing civilizations in a position to wipe out anyone else.

In this ever-threatening environment, those who play the survival game best are the ones who survive longest. We have joined a game that has been going on since long before our arrival, and the strategy everyone has learned is to hide. Nobody who knows the game will be foolish enough to contact anyone—or to respond to a message.

Liu has depicted what he calls “the worst of all possible universes,” continuing a trend within Chinese science fiction. He is not saying that our universe is an actual dark forest, with one survival strategy of silence and predation prevailing everywhere, but that such a universe is possible and interesting.

Liu’s dark forest theory is also sufficiently plausible to have reinforced a trend in the scientific discussion in the west—away from worries about mutual incomprehensibility, and towards concerns about direct threat.

We can see its potential influence in the protocol for what to do on first contact that was proposed in 2020 by the prominent astrobiologists Kelly Smith and John Traphagan. “First, do nothing,” they conclude, because doing something could lead to disaster.

In the case of alien contact, Earth should be notified using pre-established signaling rather than anything improvised, they argue. And we should avoid doing anything that might disclose information about who we are. Defensive behavior would show our familiarity with conflict, so that would not be a good idea. Returning messages would give away the location of Earth—also a bad idea.

Again, Smith and Traphagan’s point is not that the dark forest theory is correct. Benevolent aliens really could be out there. The point is simply that first contact would carry a high, civilization-level risk.

This differs from the assumptions of much Soviet-era Russian literature about space, which suggested that advanced civilizations would necessarily have progressed beyond conflict and would therefore share a comradely attitude. This no longer seems to be regarded as a plausible guide to protocols for contact.

Misinterpreting Darwin

The interesting thing is that the dark forest theory is almost certainly wrong. Or at least, it is wrong in our universe. It sets up a scenario in which there is a Darwinian process of natural selection, a competition for survival.

Charles Darwin’s account of competition for survival is evidence-based. By contrast, we have absolutely no evidence about alien behavior, or about competition within or between other civilizations. This makes for entertaining guesswork rather than good science, even if we accept the idea that natural selection could operate at group level, at the level of civilizations.

Even if you were to assume the universe did operate in accordance with Darwinian evolution, the argument is questionable. No actual forest is like the dark one. They are noisy places where co-evolution occurs.

Creatures evolve together, in mutual interdependence, and not alone. Parasites depend upon hosts, flowers depend upon birds for pollination. Every creature in a forest depends upon insects. Mutual connection does lead to encounters which are nasty, brutish and short, but it also takes other forms. That is how forests in our world work.

Interestingly, Liu acknowledges this interdependence as a counterpoint to the dark forest theory. The viewer, and the reader, are told repeatedly that “in nature, nothing exists alone”—a quote from Rachel Carson’s Silent Spring (1962). This is a text which tells us that bugs can be our friends and not our enemies.

There are many galaxies out there, and potentially plenty of life. Image Credit: X-ray: NASA/CXC/SAO

In Liu’s story, this is used to explain why some humans immediately go over to the side of the aliens, and why the urge to make contact is so strong, in spite of all the risks. Ye Wenjie ultimately replies to the alien warning.

The Carson allusions do not reinstate the old Russian idea that aliens will be advanced and therefore comradely. But they do help to paint a more varied and realistic picture than the dark forest theory.

For this reason, the dark forest solution to the Fermi Paradox is unconvincing. The fact that we do not hear anyone is just as likely to indicate that they are too far off, or we are listening in all the wrong ways, or else that there is no forest and nothing else to be heard.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESO/A. Ghizzi Panizza (www.albertoghizzipanizza.com)

Category: Transhumanismus

Your Brain Breaks Its Own DNA to Form Memories That Can Last a Lifetime

Singularity HUB - April 8, 2024 - 21:55

Some memories last a lifetime. The awe of seeing a full solar eclipse. The first smile you shared with your partner. The glimpse of a beloved pet who just passed away in their sleep.

Other memories, not so much. Few of us remember what we had for lunch a week ago. Why do some memories last, while others fade away?

Surprisingly, the answer may be broken DNA and inflammation in the brain. On the surface, these processes sound utterly detrimental to brain function. Broken DNA strands are usually associated with cancer, and inflammation is linked to aging.

But a new study in mice suggests that breaking and repairing DNA in neurons paves the way for long-lasting memories.

We form memories when electrical signals zap through neurons in the hippocampus, a seahorse-shaped region deep inside the brain. The electrical pulses wire groups of neurons together into networks that encode memories. The signals only capture brief snippets of a treasured experience, yet some can be replayed over and over for decades (although they do gradually decay like a broken record).

Scientists have long thought that, as in the artificial neural networks powering most of today’s AI, the rewiring of the brain’s connections happens fast and is prone to change. But the new study found a subset of neurons that alter their connections to encode long-lasting memories.

To do this, strangely, the neurons recruit proteins that normally fend off bacteria and cause inflammation.

“Inflammation of brain neurons is usually considered to be a bad thing, since it can lead to neurological problems such as Alzheimer’s and Parkinson’s disease,” said study author Dr. Jelena Radulovic at Albert Einstein College of Medicine in a press release. “But our findings suggest that inflammation in certain neurons in the brain’s hippocampal region is essential for making long-lasting memories.”

Should I Stay or Should I Go?

We all have a mental scrapbook for our lives. When playing a memory—the whens, wheres, whos, and whats—our minds transport us through time to relive the experience.

The hippocampus is at the heart of this ability. In the 1950s, a man known as H.M. had his hippocampus removed to treat epilepsy. After the surgery, he retained old memories, but could no longer form new ones, suggesting that the brain region is a hotspot for encoding memories.

But what does DNA have to do with the hippocampus or memory?

It comes down to how brain cells are wired. Neurons connect with each other through little bumps called synapses. Like docks between two opposing shores, synapses pump out chemicals to transmit messages from one neuron to another. Depending on the signals, synapses can form a strong connection to their neighboring neurons, or they can dial down communications.

This ability to rewire the brain is called synaptic plasticity. Scientists have long thought it’s the basis of memory. When learning something new, electrical signals flow through neurons triggering a cascade of molecules. These stimulate genes that restructure the synapse to either bump up or decrease their connection with neighbors. In the hippocampus, this “dial” can rapidly change overall neural network wiring to record new memories.

Synaptic plasticity comes at a cost. Synapses are made up of a collection of proteins produced from DNA inside cells. With new learning, electrical signals from neurons cause temporary snips to DNA inside neurons.

DNA damage isn’t always detrimental. It’s been associated with memory formation since 2021, when a study found that breaks in our genetic material are widespread in the brain and, surprisingly, linked to better memory in mice. After learning a task, mice had more DNA breaks in multiple types of brain cells, hinting that the temporary damage may be part of the brain’s learning and memory process.

But the results were only for brief memories. Do similar mechanisms also drive long-term ones?

“What enables brief experiences, encoded over just seconds, to be replayed again and again during a lifetime remains a mystery,” Drs. Benjamin Kelvington and Ted Abel at the Iowa Neuroscience Institute, who were not involved in the work, wrote in Nature.

The Memory Omelet

To find an answer, the team used a standard method for assessing memory. They hosted mice in different chambers: Some were comfortable; others gave the critters a tiny electrical zap to the paws, just enough that they disliked the habitat. The mice rapidly learned to prefer the comfortable room.

The team then compared gene expression from mice with a recent memory—roughly four days after the test—to those nearly a month after the stay.

Surprisingly, genes involved in inflammation flared up in addition to those normally associated with synaptic plasticity. Digging deeper, the team found a protein called TLR9. Usually known as part of the body’s first line of defense against dangerous bacteria, TLR9 boosts the body’s immune response against DNA fragments from invading bacteria. Here, however, the gene encoding it became highly active in neurons inside the hippocampus, especially those with persistent DNA breaks lasting for days.

What does it do? In one test, the team deleted the gene encoding TLR9 in the hippocampus. When challenged with the chamber test, these mice struggled to remember the “dangerous” chamber in a long-term memory test compared to peers with the gene intact.

Interestingly, the team found that TLR9 could sense DNA breakage. Deleting the gene prevented mouse cells from recognizing DNA breaks, causing not just loss of long-term memory, but also overall genomic instability in their neurons.

“One of the most important contributions of this study is the insight into the connection between DNA damage…and the persistent cellular changes associated with long-term memory,” wrote Kelvington and Abel.

Memory Mystery

How long-term memories persist remains a mystery. Immune responses are likely just one aspect.

In 2021, the same team found that net-like structures around neurons are crucial for long-term memory. The new study pinpointed TLR9 as a protein that helps form these structures, providing a molecular mechanism between different brain components that support lasting memories.

The results suggest “we are using our own DNA as a signaling system,” Radulovic told Nature, so that we can “retain information over a long time.”

Lots of questions remain. Does DNA damage predispose certain neurons to the formation of memory-encoding networks? And perhaps more pressing, inflammation is often associated with neurodegenerative disorders, such as Alzheimer’s disease. TLR9, which helped the mice remember dangerous chambers in this study, was previously involved in triggering dementia when expressed in microglia, the brain’s immune cells.

“How is it that, in neurons, activation of TLR9 is crucial for memory formation, whereas, in microglia, it produces neurodegeneration—the antithesis of memory?” asked Kelvington and Abel. “What separates detrimental DNA damage and inflammation from that which is essential for memory?”

Image Credit: geralt / Pixabay

Category: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through April 6)

Singularity HUB - April 6, 2024 - 16:00
COMPUTING

To Build a Better AI Supercomputer, Let There Be Light
Will Knight | Wired
“Lightmatter wants to directly connect hundreds of thousands or even millions of GPUs—those silicon chips that are crucial to AI training—using optical links. Reducing the conversion bottleneck should allow data to move between chips at much higher speeds than is possible today, potentially enabling distributed AI supercomputers of extraordinary scale.”

ROBOTICS

Apple Has Been Secretly Building Home Robots That Could End up as a New Product Line, Report Says
Aaron Mok | Business Insider
“Apple is in the early stages of looking into making home robots, a move that appears to be an effort to create its ‘next big thing’ after it killed its self-driving car project earlier this year, sources familiar with the matter told Bloomberg. Engineers are looking into developing a robot that could follow users around their houses, Bloomberg reported. They’re also exploring a tabletop at-home device that uses robotics to rotate the display, a more advanced project than the mobile robot.”

SPACE

A Tantalizing ‘Hint’ That Astronomers Got Dark Energy All Wrong
Dennis Overbye | The New York Times
“On Thursday, astronomers who are conducting what they describe as the biggest and most precise survey yet of the history of the universe announced that they might have discovered a major flaw in their understanding of dark energy, the mysterious force that is speeding up the expansion of the cosmos. Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away.”

COMPUTING

How ASML Took Over the Chipmaking Chessboard
Mat Honan and James O’Donnell | MIT Technology Review
“When asked what he thought might eventually cause Moore’s Law to finally stall out, van den Brink rejected the premise entirely. ‘There’s no reason to believe this will stop. You won’t get the answer from me where it will end,’ he said. ‘It will end when we’re running out of ideas where the value we create with all this will not balance with the cost it will take. Then it will end. And not by the lack of ideas.'”

TRANSPORTATION

The Very First Jet Suit Grand Prix Takes Off in Dubai
Mike Hanlon | New Atlas
“A new sport kicked away this month when the first ever jet-suit race was held in Dubai. Each racer wore an array of seven 130-hp jet engines (two on each arm and three in the backpack for a total 1,050 hp) that are controlled by hand-throttles. After that, the pilots use the three thrust vectors to gain lift, move forward and try to stay above ground level while negotiating the course…faster than anyone else.”

ROBOTICS

Toyota’s Bubble-ized Humanoid Grasps With Its Whole Body
Evan Ackerman | IEEE Spectrum
“Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as is pointed out in the video above, using just our hands outstretched in front of us to lift things is not how humans do it, because using other parts of our bodies to provide extra support makes lifting easier.”

FUTURE

‘A Brief History of the Future’ Offers a Hopeful Antidote to Cynical Tech Takes
Devin Coldewey | TechCrunch
“The future, he said, isn’t just what a Silicon Valley publicist tells you, or what ‘Big Dystopia’ warns you of, or even what a TechCrunch writer predicts. In the six-episode series, he talks with dozens of individuals, companies and communities about how they’re working to improve and secure a future they may never see. From mushroom leather to ocean cleanup to death doulas, Wallach finds people who see the same scary future we do but are choosing to do something about it, even if that thing seems hopelessly small or naïve.”

TECH

This AI Startup Wants You to Talk to Houses, Cars, and Factories
Steven Levy | Wired
“We’ve all been astonished at how chatbots seem to understand the world. But what if they were truly connected to the real world? What if the dataset behind the chat interface was physical reality itself, captured in real time by interpreting the input of billions of sensors sprinkled around the globe? That’s the idea behind Archetype AI, an ambitious startup launching today. As cofounder and CEO Ivan Poupyrev puts it, ‘Think of ChatGPT, but for physical reality.'”

FUTURE

How One Tech Skeptic Decided AI Might Benefit the Middle Class
Steve Lohr | The New York Times
“David Autor seems an unlikely AI optimist. The labor economist at the Massachusetts Institute of Technology is best known for his in-depth studies showing how much technology and trade have eroded the incomes of millions of American workers over the years. But Mr. Autor is now making the case that the new wave of technology—generative artificial intelligence, which can produce hyper-realistic images and video and convincingly imitate humans’ voices and writing—could reverse that trend.”

Image Credit: Harole Ethan / Unsplash

Category: Transhumanismus

Life’s Origins: How Fissures in Hot Rocks May Have Kickstarted Biochemistry

Singularity HUB - April 5, 2024 - 21:17

How did the building blocks of life originate?

The question has long vexed scientists. Early Earth was dotted with pools of water rich in chemicals—a primordial soup. Somehow, biomolecules supporting life emerged from these mixtures, setting the stage for the appearance of the first cells.

Life was kickstarted when two components formed. One was a molecular carrier—like, for example, DNA—to pass along and remix genetic blueprints. The other component was made up of proteins, the workhorses and structural elements of the body.

Both biomolecules are highly complex. In humans, DNA has four different chemical “letters,” called nucleotides, whereas proteins are made of 20 types of amino acids. The components have distinct structures, and their creation requires slightly different chemistries. The final products need to be in large enough amounts to string them together into DNA or proteins.

Scientists can purify the components in the lab using additives. But that raises the question: How did it happen on early Earth?

The answer, suggests Dr. Christof Mast, a researcher at Ludwig Maximilians University of Munich, may be cracks in rocks, like those found in the volcanoes and geothermal systems that were abundant on early Earth. It’s possible that temperature differences along the cracks naturally separate and concentrate biomolecule components, providing a passive system for purifying biomolecules.

Inspired by geology, the team developed heat flow chambers roughly the size of a bank card, each containing minuscule fractures with a temperature gradient. When given a mixture of amino acids or nucleotides—a “prebiotic mix”—the components readily separated.

Adding more chambers further concentrated the chemicals, even those that were similar in structure. The network of fractures also enabled amino acids to bond, the first step towards creating a functional protein.

“Systems of interconnected thin fractures and cracks…are thought to be ubiquitous in volcanic and geothermal environments,” wrote the team. By enriching the prebiotic chemicals, such systems could have “provided a steady driving force for a natural origins-of-life laboratory.”

Brewing Life

Around four billion years ago, Earth was a hostile environment, pummeled by meteorites and rife with volcanic eruptions. Yet somehow among the chaos, chemistry generated the first amino acids, nucleotides, fatty lipids, and other building blocks that support life.

Which chemical processes contributed to these molecules is up for debate. When each came along is also a conundrum. Like a “chicken or egg” problem, DNA and RNA direct the creation of proteins in cells—but both genetic carriers also require proteins to replicate.

One theory suggests sulfidic anions, molecules that were abundant in early Earth’s lakes and rivers, could be the link. Generated in volcanic eruptions, once dissolved into pools of water they can speed up chemical reactions that convert prebiotic molecules into RNA. Dubbed the “RNA world” hypothesis, the idea suggests that RNA was the first biomolecule to grace Earth because it can both carry genetic information and speed up some chemical reactions.

Another idea is meteor impacts on early Earth generated nucleotides, lipids, and amino acids simultaneously, through a process that includes two abundant chemicals—one from meteors and another from Earth—and a dash of UV light.

But there’s one problem: Each set of building blocks requires a different chemical reaction. Depending on slight differences in structure or chemistry, it’s possible one geographic location might have skewed towards one type of prebiotic molecule over another.

How? The new study, published in Nature, offers an answer.

Tunnel Networks

Lab experiments mimicking early Earth usually start with well-defined ingredients that have already been purified. Scientists also clean up intermediate side-products, especially for multiple chemical reaction steps.

The process often results in “vanishingly small concentrations of the desired product,” or its creation can even be completely inhibited, wrote the team. The reactions also require multiple spatially separated chambers, which hardly resembles Earth’s natural environment.

The new study took inspiration from geology. Early Earth had complex networks of water-filled cracks found in a variety of rocks in volcanoes and geothermal systems. The cracks, generated by overheating rocks, formed natural “straws” that could potentially filter a complex mix of molecules using a heat gradient.

Each molecule favors a preferred temperature based on its size and electrical charge. When exposed to different temperatures, it naturally moves towards its ideal pick. Called thermophoresis, the process separates a soup of ingredients into multiple distinct layers in one step.

The team mimicked a single thin rock fracture using a heat flow chamber. Roughly the size of a bank card, the chamber had tiny cracks 170 micrometers across, about the width of a human hair. To create a temperature gradient, one side of the chamber was heated to 104 degrees Fahrenheit and the other end chilled to 77 degrees Fahrenheit.

In a first test, the team added a mix of prebiotic compounds that included amino acids and DNA nucleotides into the chamber. After 18 hours, the components separated into layers like tiramisu. For example, glycine—the smallest of amino acids—became concentrated towards the top, whereas other amino acids with higher thermophoretic strength stuck to the bottom. Similarly, DNA letters and other life-sustaining chemicals also separated in the cracks, with some enriched by up to 45 percent.

Although promising, the system didn’t resemble early Earth, which had highly interconnected cracks varying in size. To better mimic natural conditions, the team next strung up three chambers, with the first branching into two others. This was roughly 23 times more efficient at enriching prebiotic chemicals than a single chamber.

Using a computer simulation, the team then modeled the behavior of a 20-by-20 interlinked chamber system, using a realistic flow rate of prebiotic chemicals. The chambers further enriched the brew, with glycine enriched more than 2,000-fold relative to other amino acids.
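The compounding effect of linked chambers can be illustrated with a toy calculation. The per-chamber selectivity numbers below are hypothetical stand-ins, not the study’s measured values; the point is only that a modest per-pass bias multiplies quickly across a network of cracks.

```python
# Toy model: per-chamber thermophoretic enrichment compounds across a
# chain of linked heat-flow chambers. All selectivity factors here are
# hypothetical illustrations, not values from the Nature paper.

def enrich(concentrations, factors):
    """Apply one chamber pass: scale each species by its retention
    factor, then renormalize so the mixture still sums to 1."""
    scaled = {s: c * factors[s] for s, c in concentrations.items()}
    total = sum(scaled.values())
    return {s: c / total for s, c in scaled.items()}

# Start from an even prebiotic mix of three species.
mix = {"glycine": 1 / 3, "alanine": 1 / 3, "TMP": 1 / 3}

# Hypothetical per-chamber selectivity: glycine is retained least,
# TMP most (stronger thermophoresis holds it near the cold wall).
selectivity = {"glycine": 0.5, "alanine": 1.0, "TMP": 2.0}

for chamber in range(5):  # five chambers in series
    mix = enrich(mix, selectivity)

# After a few passes, the species with the highest per-chamber factor
# dominates the mixture - small biases compound exponentially.
print({s: round(c, 4) for s, c in mix.items()})
```

Even with only a twofold per-chamber bias, five chambers in series leave the mixture overwhelmingly dominated by one species, which is the intuition behind a 20-by-20 network enriching glycine thousands of times over.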

Chemical Reactions

Cleaner ingredients are a great start for the formation of complex molecules. But many chemical reactions require additional chemicals, which also need to be enriched. Here, the team zeroed in on a reaction stitching two glycine molecules together.

At the heart is trimetaphosphate (TMP), which helps guide the reaction. TMP is especially interesting for prebiotic chemistry, but it was scarce on early Earth, the team explained, which “makes its selective enrichment critical.” A single chamber increased TMP levels when mixed with other chemicals.

In a computer simulation, a TMP and glycine mix increased the yield of the final product—two glycines stitched together—by five orders of magnitude.

“These results show that otherwise challenging prebiotic reactions are massively boosted” with heat flows that selectively enrich chemicals in different regions, wrote the team.

In all, they tested over 50 prebiotic molecules and found the fractures readily separated them. Because each crack can have a different mix of molecules, it could explain the rise of multiple life-sustaining building blocks.

Still, how life’s building blocks came together to form organisms remains mysterious. Heat flows and rock fissures are likely just one piece of the puzzle. The ultimate test will be to see if, and how, these purified prebiotics link up to form a cell.

Image Credit: Christof B. Mast

Category: Transhumanism

Quantum Computers Take a Major Step With Error Correction Breakthrough

Singularity HUB - April 4, 2024 - 23:26

For quantum computers to go from research curiosities to practically useful devices, researchers need to get their errors under control. New research from Microsoft and Quantinuum has now taken a major step in that direction.

Today’s quantum computers are stuck firmly in the “noisy intermediate-scale quantum” (NISQ) era. While companies have had some success stringing large numbers of qubits together, those qubits are highly susceptible to noise, which can quickly degrade their quantum states. This makes it impossible to carry out computations with enough steps to be practically useful.

While some have claimed that these noisy devices could still be put to practical use, the consensus is that quantum error correction schemes will be vital for the full potential of the technology to be realized. But error correction is difficult in quantum computers because reading the quantum state of a qubit causes it to collapse.

Researchers have devised ways to get around this using error correction codes that spread each bit of quantum information across multiple physical qubits to create what is known as a logical qubit. This provides redundancy and makes it possible to detect and correct errors in the physical qubits without impacting the information in the logical qubit.
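The payoff of redundancy has a simple classical analogue: a repetition code with majority voting. The sketch below is an illustration of the principle only—real quantum codes must infer errors from syndrome measurements rather than reading the qubits directly—but it shows why an encoded bit fails far less often than any single noisy bit.

```python
import random

# Classical analogue of a logical qubit: encode one bit as three
# copies, then decode by majority vote. If each copy flips with
# probability p, the decoded bit fails only when two or more copies
# flip - roughly 3*p^2, a big improvement when p is small.

def flip(bit, p):
    """Flip a bit with probability p (a simple noise model)."""
    return bit ^ (random.random() < p)

def logical_error_rate(p, trials=100_000):
    errors = 0
    for _ in range(trials):
        copies = [flip(0, p) for _ in range(3)]  # encode 0 three times
        decoded = int(sum(copies) >= 2)          # majority vote
        errors += decoded != 0                   # decoded wrong?
    return errors / trials

p = 0.01  # physical error rate per bit
# Expect roughly 3 * p**2 = 3e-4, about a 30x suppression here.
print(logical_error_rate(p))
```

Quantum error correction codes achieve the same kind of suppression with far more machinery, which is why the 800-fold improvement reported here required careful syndrome extraction rather than simple copying (cloning quantum states outright is impossible).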

The challenge is that, until recently, it was assumed it would take roughly 1,000 physical qubits to create each logical qubit. Today’s largest quantum processors only have around that many qubits, suggesting that creating enough logical qubits for meaningful computations was still a distant goal.

That changed last year when researchers from Harvard and startup QuEra showed they could generate 48 logical qubits from just 280 physical ones. And now the collaboration between Microsoft and Quantinuum has gone a step further by showing that they can not only create logical qubits but can actually use them to suppress error rates by a factor of 800 and carry out more than 14,000 experimental routines without a single error.

“What we did here gives me goosebumps,” Microsoft’s Krysta Svore told New Scientist. “We have shown that error correction is repeatable, it is working, and it is reliable.”

The researchers were working with Quantinuum’s H2 quantum processor, which relies on trapped-ion technology and is relatively small at just 32 qubits. But by applying error correction codes developed by Microsoft, they were able to generate four logical qubits that only experienced an error every 100,000 runs.

One of the biggest achievements, the Microsoft team notes in a blog post, was the fact that they were able to diagnose and correct errors without destroying the logical qubits. This is thanks to an approach known as “active syndrome extraction” which is able to read information about the nature of the noise impacting qubits, rather than their state, Svore told IEEE Spectrum.

However, the error correction scheme had a shelf life. When the researchers carried out multiple operations on a logical qubit, followed by error correction, they found that by the second round the error rates were only half of those found in the physical qubits and by the third round there was no statistically significant impact.

And impressive as the results are, the Microsoft team points out in their blog post that creating truly powerful quantum computers will require logical qubits that make errors only once every 100 million operations.

Regardless, the result marks a massive jump in capabilities for error correction, which Quantinuum claimed in a press release represents the beginning of a new era in quantum computing. While that might be jumping the gun slightly, it certainly suggests that people’s timelines for when we will achieve fault-tolerant quantum computing may need to be updated.

Image Credit: Quantinuum H2 quantum computer / Quantinuum


Environmental DNA Is Everywhere. Scientists Are Gathering It All.

Singularity HUB - April 2, 2024 - 21:18

In the late 1980s, at a federal research facility in Pensacola, Florida, Tamar Barkay used mud in a way that would prove revolutionary, though she could never have imagined it at the time: a crude version of a technique that is now shaking up many scientific fields. Barkay had collected several samples of mud—one from an inland reservoir, another from a brackish bayou, and a third from a low-lying saltwater swamp. She put these sediment samples in glass bottles in the lab, and then added mercury, creating what amounted to toxic sludge.

At the time, Barkay worked for the Environmental Protection Agency and she wanted to know how microorganisms in mud interact with mercury, an industrial pollutant, which required an understanding of all the organisms in a given environment—not just the tiny portion that could be successfully grown in petri dishes in the lab. But the underlying question was so basic that it remains one of those fundamental driving queries across biology. As Barkay, who is now retired, put it in a recent interview from Boulder, Colorado: “Who is there?” And, just as important, she added: “What are they doing there?”

Such questions are still relevant today, asked by ecologists, public health officials, conservation biologists, forensic practitioners, and those studying evolution and ancient environments—and they drive shoe-leather epidemiologists and biologists to far-flung corners of the world.

The 1987 paper Barkay and her colleagues published in the Journal of Microbiological Methods outlined a method, “Direct Environmental DNA Extraction,” that would allow researchers to take a census. It was a practical tool, albeit a rather messy one, for detecting who was out there. Barkay used it for the rest of her career.

Today, the study gets cited as an early glimpse of eDNA, or environmental DNA, a relatively inexpensive, widespread, potentially automated way to observe the diversity and distribution of life. Unlike previous techniques, which could identify DNA from, say, a single organism, the method also collects the swirling cloud of other genetic material that surrounds it. In recent years, the field has grown significantly. “It’s got its own journal,” said Eske Willerslev, an evolutionary geneticist at the University of Copenhagen. “It’s got its own society, scientific society. It has become an established field.”

“We’re all flaky, right? There’s bits of cellular debris sloughing off all the time.”

eDNA serves as a surveillance tool, offering researchers a means of detecting the seemingly undetectable. By sampling eDNA, or mixtures of genetic material—that is, fragments of DNA, the blueprint of life—in water, soil, ice cores, cotton swabs, or practically any environment imaginable, even thin air, it is now possible to search for a specific organism or assemble a snapshot of all the organisms in a given place. Instead of setting up a camera to see who crosses the beach at night, eDNA pulls that information out of footprints in the sand. “We’re all flaky, right?” said Robert Hanner, a biologist at the University of Guelph in Canada. “There’s bits of cellular debris sloughing off all the time.”

As a method for confirming the presence of something, eDNA isn’t failproof. For instance, the organism detected in eDNA might not actually live in the location where the sample was collected; Hanner gave the example of a passing bird, a heron, that ate a salamander and then pooped out some of its DNA, which could be one reason signals of the amphibian turn up in areas where it has never been physically found.

Still, eDNA has the ability to help sleuth out genetic traces, some of which slough off in the environment, offering a thrilling—and potentially chilling—way to collect information about organisms, including humans, as they go about their everyday business.

The conceptual basis for eDNA—pronounced EE-DEE-EN-AY, not ED-NUH—dates back a hundred years, before the advent of so-called molecular biology, and it is often attributed to Edmond Locard, a French criminologist working in the early 20th century. In a series of papers published in 1929, Locard proposed a principle: Every contact leaves a trace. In essence, eDNA brings Locard’s principle to the 21st century.

For the first several decades, the field that became eDNA—Barkay’s work in the 1980s included—focused largely on microbial life. Looking back at its evolution, eDNA appeared slow to claw its way out of the proverbial mud.

It wasn’t until 2003 that the method turned up a vanished ecosystem. Led by Willerslev, the 2003 study pulled ancient DNA from less than a teaspoon of sediment, demonstrating for the first time the feasibility of detecting larger organisms with the technique, including plants and woolly mammoths. In the same study, sediment collected in a New Zealand cave (which notably had not been frozen) revealed an extinct bird: the moa. What is perhaps most remarkable is that these applications for studying ancient DNA stemmed from a prodigious amount of dung dropped on the ground hundreds of thousands of years ago.

Willerslev had first come up with the idea a few years earlier while contemplating a more recent pile of dung: In between his master’s degree and Ph.D. in Copenhagen, he found himself at loose ends, struggling to obtain bones, skeletal remains, or other physical specimens to study. But one autumn, he gazed out the window at “a dog taking a crap on the street,” he recalled. The scene prompted him to think about the DNA in feces, and how it washed away with rain, leaving no visible trace. But Willerslev wondered, “‘Could it be that the DNA could survive?’ That’s what I then set up to try to find out.”

The paper demonstrated the remarkable persistence of DNA, which, he said, survives in the environment for much longer than previous estimates suggested. Willerslev has since analyzed eDNA in frozen tundra in modern-day Greenland dating back 2 million years, and he is working on samples from Angkor Wat, the enormous temple complex in Cambodia believed to have been built in the 12th century. “It should be the worst DNA preservation you can imagine,” he said. “I mean, it’s hot and humid.”

But, he said, “we can get DNA out.”

eDNA has the ability to help sleuth out genetic traces, offering a thrilling—and potentially chilling—way to collect information about organisms as they go about their everyday business.

Willerslev is now hardly alone in seeing a potential tool with seemingly limitless applications—especially now as advances enable researchers to sequence and analyze larger quantities of genetic information. “It’s an open window for many, many things,” he said, “and much more than I can think of, I’m sure.” It was not just ancient mammoths; eDNA could reveal present-day organisms hiding in our midst.

Scientists use eDNA to track creatures of all shapes and sizes, be it a single species, such as tiny bits of invasive algae, eels in Loch Ness, or a sightless sand-dwelling mole that hasn’t been seen in nearly 90 years; researchers sample entire communities, say, by looking at the eDNA found on wildflower blossoms or the eDNA blowing in the wind as a proxy for all the visiting birds and bees and other animal pollinators.

The next evolutionary leap forward in eDNA’s history took shape around the search for organisms currently living in Earth’s aquatic environments. In 2008, a headline appeared: “Water retains DNA memory of hidden species.” It came not from a supermarket tabloid but from the respected trade publication Chemistry World, describing work by French researcher Pierre Taberlet and his colleagues. The group sought out brown-and-green bullfrogs, which can weigh more than 2 pounds and, because they mow down everything in their path, are considered an invasive species in western Europe. Finding bullfrogs usually involved skilled herpetologists scanning shorelines with binoculars who then returned after sunset to listen for their calls. The 2008 paper suggested an easier way—a survey requiring far fewer personnel.

“You could get DNA from that species directly out of the water,” said Philip Thomsen, a biologist at Aarhus University (who was not involved in the study). “And that really kickstarted the field of environmental DNA.”

Frogs can be hard to detect, and they are not, of course, the only species that eludes more traditional, boots-on-the-ground detection. Thomsen began work on another organism that notoriously confounds measurement: fish. Counting fish is sometimes said to vaguely resemble counting trees—except they’re free-roaming, in dark places, and fish counters are doing their tally while blindfolded. Environmental DNA dropped the blindfold. One review of published literature on the technology—though it came with caveats, including imperfect and imprecise detections or details on abundance—found that eDNA studies on freshwater and marine fish and amphibians outnumbered terrestrial counterparts 7:1.

In 2011, Thomsen, then a Ph.D. candidate in Willerslev’s lab, published a paper demonstrating that the method could detect rare and threatened species, such as those in low abundance in Europe, including amphibians, mammals like the otter, crustaceans, and dragonflies. “We showed that only, like, a shot glass of water really was enough to detect these organisms,” he told Undark. It was clear: The method had direct applications in conservation biology for the detection and monitoring of species.

In 2012, the journal Molecular Ecology published a special issue on eDNA, and Taberlet and several colleagues outlined a working definition of eDNA as any DNA isolated from environmental samples. The issue described two similar but slightly different approaches. One answers a yes or no question: Is the bullfrog (or whatever) present or not? It does so by scanning a metaphoric barcode: short sequences of DNA that are particular to a species or family. Primers target these sequences, and the checkout scanner is a common technique called quantitative real-time polymerase chain reaction, or qPCR.

Scientists use eDNA to track creatures of all shapes and sizes, be it tiny bits of invasive algae, eels in Loch Ness, or a sightless sand-dwelling mole that hasn’t been seen in nearly 90 years.

Another approach, commonly known as DNA metabarcoding, essentially spits out a list of organisms present in a given sample. “You sort of ask the question, what is here?” Thomsen said. “And then you get all of the known things, but you also get some surprises, right? Because there were some species that you didn’t know were actually present.”
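In code, the two approaches amount to two different queries against a reference library of barcodes. The sketch below is a bare-bones illustration with made-up toy sequences; real barcodes span hundreds of bases, and real pipelines tolerate sequencing errors and use curated reference databases.

```python
# Toy reference library of species barcodes. The sequences and the
# species assignments are invented for illustration only.
REFERENCE = {
    "ATGCGT": "bullfrog",
    "TTAGGC": "otter",
    "CCGTAA": "dragonfly",
}

def detect(sample_reads, barcode):
    """Targeted, qPCR-style question: is this one barcode present?"""
    return any(barcode in read for read in sample_reads)

def metabarcode(sample_reads):
    """Metabarcoding-style question: who is here? Returns every
    species whose barcode appears anywhere in the sample."""
    found = set()
    for read in sample_reads:
        for barcode, species in REFERENCE.items():
            if barcode in read:
                found.add(species)
    return sorted(found)

reads = ["GGATGCGTTC", "ACCGTAAGT", "GGGGGGGG"]
print(detect(reads, "ATGCGT"))  # True: the bullfrog barcode is present
print(metabarcode(reads))       # ['bullfrog', 'dragonfly']
```

The targeted query is cheap but only ever answers the question you thought to ask; the metabarcoding query returns the whole list, including the surprises.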

One aims to find the needle in a haystack; the other attempts to reveal the whole haystack. eDNA differs from more traditional sampling techniques in which organisms, like fish, are caught, manipulated, stressed, and sometimes killed. The data obtained are objective, standardized, and unbiased.

“eDNA, one way or the other, is going to stay as one of the important methodologies in biological sciences,” said Mehrdad Hajibabaei, a molecular biologist at University of Guelph, who pioneered the metabarcoding approach, and who traced fish some 9,800 feet under the Labrador Sea. “Every day I see something bubbling up that didn’t occur to me.”

In recent years, the field of eDNA has expanded. The method’s sensitivity allows researchers to sample previously out-of-reach environments, for example, capturing eDNA from the air—an approach that highlights eDNA’s promises and its potential pitfalls. Airborne eDNA appears to circulate on a global dust belt, suggesting its abundance and omnipresence, and it can be filtered and analyzed to monitor plants and terrestrial animals. But eDNA blowing in the wind can lead to inadvertent contamination.

In 2019, Thomsen, for instance, left two bottles of ultra-pure water out in the open—one in a grassland, and the other near a marine harbor. After a few hours, the water contained detectable eDNA associated with birds and herring, suggesting that traces of non-resident species had settled into the samples; the organisms obviously did not inhabit the bottles. “So it must come from the air,” Thomsen told Undark. The results suggest a two-fold problem: For one, trace evidence can move around: two organisms that come into contact can tote around each other’s DNA. And just because certain DNA is present doesn’t mean the species is actually there.

Moreover, there’s also no guarantee that the presence of eDNA indicates that a species is alive, and field surveys are still needed, he said, to understand a species’ breeding success, its health, or the status of its habitat. So far, then, eDNA does not necessarily replace physical observations or collections. In another study, in which Thomsen’s group collected eDNA on flowers to look for pollinating birds, more than half of the eDNA reported in the paper came from humans, contamination that potentially muddied the results and made it harder to detect the pollinators in question.

Similarly, in May 2023, a University of Florida team that previously studied sea turtles by the eDNA traces left as they crawl along the beach published a paper that turned up human DNA. The samples were intact enough to detect key mutations that might someday be used to identify individual people, suggesting that the biological surveillance also raised unanswered questions about ethical testing on humans and informed consent. If eDNA served as a seine net, then it indiscriminately swept up information about biodiversity and inevitably ended up with, as the UF team’s paper put it, “human genetic by-catch.”

While the privacy issues around footprints in the sand, so far, appear to exist mostly in the realm of the hypothetical, the use of eDNA in wildlife-related litigation is not only possible but already a reality. It’s also being used in criminal investigations: In 2021, for instance, a group of Chinese researchers reported that eDNA collected off a suspected murderer’s pants had, contrary to his claims, revealed that he’d likely been to the muddy canal where a dead body had been found.

The concerns about off-target eDNA, in terms of accuracy and its reach into human medicine and forensics, highlight another, much broader, shortcoming. As Hanner at the University of Guelph described the problem: “Our regulatory frameworks and policy tend to lag at least a decade or more behind the science.”

“Every day I see something bubbling up that didn’t occur to me.”

Today, there are countless potential regulatory applications for water quality monitoring, evaluating environmental impact (from offshore wind farms and oil and gas drilling to more run-of-the-mill strip mall development), species management, and enforcement of the Endangered Species Act. In a civil court case filed in 2021, the US Fish and Wildlife Service evaluated whether an imperiled fish existed in a particular watershed, using eDNA and more traditional sampling, and found that it did not. The courts said the agency’s lack of protections for that watershed was justified. The issue does not seem to be whether eDNA stood up in court; it did. “But you really can’t say that something does not exist in an environment,” said Hajibabaei.

He recently highlighted the issue of validation: eDNA infers a result but needs more established criteria for confirming that those results are actually true (that an organism is actually present or absent, or in a certain quantity). In a series of special meetings, scientists worked to address these issues of standardization, which he said include protocols, chain of custody, and criteria for data generation and analysis. In a review of eDNA studies, Hajibabaei and his colleagues found that the field is saturated with one-offs, or proof-of-concept studies attempting to show that eDNA analyses work. Research remains overwhelmingly siloed in academia.

As such, practitioners hoping to use eDNA in applied contexts sometimes ask for the moon. Does the species exist in a certain location? For instance, Hajibabaei said, someone recently asked him if he could totally refute the presence of a parasite, proving that it had not appeared in an aquaculture farm. “And I say, ‘Look, there is no way that I can say that is 100 percent.’”

Even with a rigorous analytic framework, he said, the issues with false negatives and false positives are particularly difficult to resolve without doing one of the things eDNA obviates: more traditional collection and manual inspection. Despite the limitations, a handful of companies are already starting to commercialize the technique. Future applications could help a company confirm whether the bridge it is building will harm any locally endangered animals, an aquaculture outfit determine if the waters where it farms its fish are infested with sea lice, or a landowner learn whether new plantings are attracting a wider range of native bees.

The problem is rather fundamental given eDNA’s reputation as an indirect way of detecting the undetectable—or as a workaround in contexts when it’s simply not possible to dip a net and catch all the organisms in the sea.

“It is very hard to validate some of these scenarios,” Hajibabaei said. “And that’s basically the nature of the beast.”

eDNA opens up a lot of possibilities, answering a question originally posed by Barkay (and no doubt many others): “Who is there?” But increasingly it’s providing hints that get at the “What are they doing there?” question, too. Elizabeth Clare, a professor of biology at York University in Toronto, studies biodiversity. She said she has observed bats roosting in one spot during the day, but, by collecting airborne eDNA, she could also infer where bats socialize at night. In another study, domesticated dog eDNA turned up in red fox scat. The two canids did not appear to be interbreeding, but researchers did wonder if their closeness had led to confusion, or cross-contamination, before ultimately settling on another explanation: Foxes apparently ate dog poop.

So while eDNA does not inherently reveal animal behavior, by some accounts the field is making strides towards providing clues as to what an organism might be doing, and how it’s interacting with other species, in a given environment—gleaning information about health without directly observing behavior.

Take another possibility: large-scale biomonitoring. Indeed, for the last three years, more people than ever before have participated in a bold experiment that is already up and running: the collection of environmental samples from public sewers to track viral Covid-19 particles and other organisms that infect humans. Technically, wastewater sampling involves a related approach called eRNA, because some viruses only have genetic information stored in the form of RNA, rather than DNA. Still, the same principles apply. (Studies also suggest RNA, which determines which proteins an organism is expressing, could be used to assess ecosystem health; organisms that are healthy may express entirely different proteins compared to those that are stressed.) In addition to monitoring the prevalence of diseases, wastewater surveillance demonstrates how an existing infrastructure designed to do one thing—sewers were designed to collect waste—could be fashioned into a powerful tool for studying something else, like detecting pathogens.

Clare has a habit of doing just that. “I personally am one of those people who tends to use tools—not the way they were intended,” she said. Clare was among the researchers who noticed a gap in the research: There was a lot less eDNA work done on terrestrial organisms. So, she began working with what might be called a natural filter: leeches, which suck blood from mammals. “It’s a lot easier to collect 1,000 leeches than it is to find the animals. But they have blood-meals inside them and the blood carries the DNA of the animals they interacted with,” she said. “It’s like having a bunch of field assistants out surveying for you.” Then, one of her students had the same thought about dung beetles, which are even easier to collect.

Clare is now spearheading a new application for another continuous monitoring system—leveraging existing air-quality monitors that measure pollutants, such as fine particulate matter, while also simultaneously vacuuming eDNA out of the sky. In late 2023, she only had a small sample set, but had already found that, as a byproduct of routine air quality monitoring, these preexisting tools doubled as filters for the material she is after. It was, more or less, a regulated, transcontinental network collecting samples in a very consistent way over long periods of time. “You could then use it to build up time series and high-resolution data on entire continents,” she said.

In the UK alone, Clare said, there are an estimated 150 different sites sucking in a known quantity of air, every week, all year long, which amounts to some 8,000 measurements a year. Clare and her co-authors recently analyzed a tiny subset of these—17 measurements from two locations—and were able to identify more than 180 different taxonomic groups: more than 80 different kinds of plants and fungi, 26 different species of mammal, 34 different species of birds, plus at least 35 kinds of insects.

Certainly, other long-term ecological research sites exist. The US has a network of such facilities. But their scope does not include a globally distributed infrastructure that measures biodiversity constantly—from the passage of migrating birds overhead to the expansion and contraction of species’ ranges with climate change. Arguably, eDNA will likely complement, rather than supplant, the distributed network of people who record real-time, high-resolution, spatiotemporal observations on websites such as eBird or iNaturalist. Like a fuzzy image of an entirely new galaxy coming into view, the current resolution remains low.

“It’s sort of a generalized collection system, which is pretty much unheard of in biodiversity science,” said Clare. She was referring to the capacity to pull eDNA signals out of thin air, but the sentiment spoke to the method as a whole: “It’s not perfect,” she said, “but there’s nothing else that really does that.”

This article was originally published on Undark. Read the original article.

Image Credit: Undark + DALL-E


This Robot Predicts When You’ll Smile—Then Grins Back Right on Cue

Singularity HUB - April 1, 2024 - 23:19

Comedy clubs are my favorite weekend outings. Rally some friends, grab a few drinks, and when a joke lands for us all—there’s a magical moment when our eyes meet, and we share a cheeky grin.

Smiling can turn strangers into the dearest of friends. It spurs meet-cute Hollywood plots, repairs broken relationships, and is inextricably linked to fuzzy, warm feelings of joy.

At least for people. Robots’ attempts at genuine smiles often fall into the uncanny valley—close enough to resemble a human, but causing a touch of unease. Logically, you know what they’re trying to do. But gut feelings tell you something’s not right.

It may be because of timing. Robots are trained to mimic the facial expression of a smile. But they don’t know when to turn the grin on. When humans connect, we genuinely smile in tandem without any conscious planning. Robots take time to analyze a person’s facial expressions before reproducing a grin. To a human, even milliseconds of delay raises the hair on the back of the neck—like a horror movie, something feels manipulative and wrong.

Last week, a team at Columbia University showed off an algorithm that teaches robots to share a smile with their human operators. The AI analyzes slight facial changes to predict its operators’ expressions about 800 milliseconds before they happen—just enough time for the robot to grin back.

The team trained a soft robotic humanoid face called Emo to anticipate and match the expressions of its human companion. With a silicone face tinted in blue, Emo looks like a 60s science fiction alien. But it readily grinned along with its human partner on the same “emotional” wavelength.

Humanoid robots are often clunky and stilted when communicating with humans, wrote Dr. Rachael Jack at the University of Glasgow, who was not involved in the study. ChatGPT and other large language algorithms can already make an AI’s speech sound human, but non-verbal communications are hard to replicate.

Programming social skills—at least for facial expression—into physical robots is a first step toward helping “social robots to join the human social world,” she wrote.

Under the Hood

From robotaxis to robo-servers that bring you food and drinks, autonomous robots are increasingly entering our lives.

In London, New York, Munich, and Seoul, autonomous robots zip through chaotic airports offering customer assistance—checking in, finding a gate, or recovering lost luggage. In Singapore, several seven-foot-tall robots with 360-degree vision roam an airport flagging potential security problems. During the pandemic, robot dogs enforced social distancing.

But robots can do more. For dangerous jobs—such as cleaning the wreckage of destroyed houses or bridges—they could pioneer rescue efforts and increase safety for first responders. With an increasingly aging global population, they could help nurses to support the elderly.

Current humanoid robots are cartoonishly adorable. But the main ingredient for robots to enter our world is trust. As scientists build robots with increasingly human-like faces, we want their expressions to match our expectations. It’s not just about mimicking a facial expression. A genuine shared “yeah I know” smile over a cringe-worthy joke forms a bond.

Non-verbal communications—expressions, hand gestures, body postures—are tools we use to express ourselves. With ChatGPT and other generative AI, machines can already “communicate in video and verbally,” said study author Dr. Hod Lipson to Science.

But when it comes to the real world—where a glance, a wink, and smile can make all the difference—it’s “a channel that’s missing right now,” said Lipson. “Smiling at the wrong time could backfire. [If even a few milliseconds too late], it feels like you’re pandering maybe.”

Say Cheese

To get robots into non-verbal action, the team focused on one aspect—a shared smile. Previous studies have pre-programmed robots to mimic a smile, but because the mimicry isn't spontaneous, there's a slight but noticeable delay that makes the grin look fake.

“There’s a lot of things that go into non-verbal communication” that are hard to quantify, said Lipson. “The reason we need to say ‘cheese’ when we take a photo is because smiling on demand is actually pretty hard.”

The new study focused on timing.

The team engineered an algorithm that anticipates a person’s smile and makes a human-like animatronic face grin in tandem. Called Emo, the robotic face has 26 gears—think artificial muscles—enveloped in a stretchy silicone “skin.” Each gear is attached to the main robotic “skeleton” with magnets to move its eyebrows, eyes, mouth, and neck. Emo’s eyes have built-in cameras to record its environment and control its eyeball movements and blinking motions.

By itself, Emo can track its own facial expressions. The goal of the new study was to help it interpret others' emotions. The team used a trick any introverted teenager might know: They asked Emo to look in the mirror to learn how to control its gears and form a perfect facial expression, such as a smile. The robot gradually learned to match its expressions with motor commands—say, "lift the cheeks." The team then removed any programming that could stretch the face too far and injure the robot's silicone skin.

“Turns out…[making] a robot face that can smile was incredibly challenging from a mechanical point of view. It’s harder than making a robotic hand,” said Lipson. “We’re very good at spotting inauthentic smiles. So we’re very sensitive to that.”

To counteract the uncanny valley, the team trained Emo to predict facial movements using videos of humans laughing, surprised, frowning, crying, and making other expressions. Emotions are universal: When you smile, the corners of your mouth curl into a crescent moon. When you cry, the brows furrow together.

The AI analyzed facial movements of each scene frame-by-frame. By measuring distances between the eyes, mouth, and other “facial landmarks,” it found telltale signs that correspond to a particular emotion—for example, an uptick of the corner of your mouth suggests a hint of a smile, whereas a downward motion may descend into a frown.

Once trained, the AI took less than a second to recognize these facial landmarks. Driving Emo, it could anticipate a smile during a live interaction quickly enough for the robot to grin along with its human partner.
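The core idea—track landmark distances frame by frame, then extrapolate their trend a fraction of a second into the future—can be sketched in a few lines of Python. This is an illustrative toy, not the researchers' code: the landmark names, frame rate, and threshold below are invented for the example.

```python
# Toy sketch of smile anticipation from facial landmarks.
# Idea: track the distance between mouth-corner landmarks over recent
# frames, fit a linear trend, and extrapolate ~800 ms ahead.
# All specifics here (landmark names, fps, threshold) are illustrative.

def mouth_width(landmarks):
    """Euclidean distance between left and right mouth corners."""
    (lx, ly), (rx, ry) = landmarks["mouth_left"], landmarks["mouth_right"]
    return ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5

def predict_smile(widths, fps=30, horizon_s=0.8, threshold=1.10):
    """Extrapolate mouth width `horizon_s` seconds ahead via a linear fit.

    Returns True if the predicted width exceeds `threshold` times the
    width at the start of the window -- a crude 'smile incoming' signal.
    """
    n = len(widths)
    t = [i / fps for i in range(n)]                 # timestamps in seconds
    # least-squares slope and intercept of width vs. time
    mt, mw = sum(t) / n, sum(widths) / n
    slope = sum((ti - mt) * (wi - mw) for ti, wi in zip(t, widths)) / \
            sum((ti - mt) ** 2 for ti in t)
    intercept = mw - slope * mt
    predicted = intercept + slope * (t[-1] + horizon_s)
    return predicted > threshold * widths[0]

# Synthetic half-second window: mouth corners slowly spreading apart.
frames = [{"mouth_left": (40.0, 80.0), "mouth_right": (60.0 + i * 0.4, 80.0)}
          for i in range(15)]
widths = [mouth_width(f) for f in frames]
print(predict_smile(widths))  # an upward trend this steep flags a smile
```

The real system uses a model trained on video of human expressions rather than a linear fit, but the sketch illustrates why extrapolating a trend—rather than reacting to a finished expression—buys the robot its roughly 800-millisecond head start.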

To be clear, the AI doesn’t “feel.” Rather, it behaves as a human would when chuckling to a funny stand-up with a genuine-seeming smile.

Facial expressions aren't the only cues we notice when interacting with people. Subtle head shakes, nods, raised eyebrows, or hand gestures all make a mark. Across cultures, "ums," "ahhs," and "likes"—or their equivalents—are integrated into everyday interactions. For now, Emo is like a baby who has learned how to smile. It doesn't yet understand other contexts.

“There’s a lot more to go,” said Lipson. We’re just scratching the surface of non-verbal communications for AI. But “if you think engaging with ChatGPT is interesting, just wait until these things become physical, and all bets are off.”

Image Credit: Yuhang Hu, Columbia Engineering via YouTube

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through March 30)

Singularity HUB - March 30, 2024 - 16:00
COMPUTING

The Best Qubits for Quantum Computing Might Just Be Atoms
Philip Ball | Quanta
“In the search for the most scalable hardware to use for quantum computers, qubits made of individual atoms are having a breakout moment. …’We believe we can pack tens or even hundreds of thousands in a centimeter-scale device,’ [Mark Saffman, a physicist at the University of Wisconsin] said.”

ARTIFICIAL INTELLIGENCE

AI Chatbots Are Improving at an Even Faster Rate Than Computer Chips
Chris Stokel-Walker | New Scientist
“Besiroglu and his colleagues analyzed the performance of 231 LLMs developed between 2012 and 2023 and found that, on average, the computing power required for subsequent versions of an LLM to hit a given benchmark halved every eight months. That is far faster than Moore’s law, a computing rule of thumb coined in 1965 that suggests the number of transistors on a chip, a measure of performance, doubles every 18 to 24 months.”

FUTURE

How AI Could Explode the Economy
Dylan Matthews | Vox
“Imagine everything humans have achieved since the days when we lived in caves: wheels, writing, bronze and iron smelting, pyramids and the Great Wall, ocean-traversing ships, mechanical reaping, railroads, telegraphy, electricity, photography, film, recorded music, laundry machines, television, the internet, cellphones. Now imagine accomplishing 10 times all that—in just a quarter century. This is a very, very, very strange world we’re contemplating. It’s strange enough that it’s fair to wonder whether it’s even possible.”

DIGITAL MEDIA

What’s Next for Generative Video
Will Douglas Heaven | MIT Technology Review
“The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long. Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. …As we continue to get to grips with what’s ahead—good and bad—here are four things to think about.”

SENSORS

Salt-Sized Sensors Mimic the Brain
Gwendolyn Rak | IEEE Spectrum
“To gain a better understanding of the brain, why not draw inspiration from it? At least, that’s what researchers at Brown University did, by building a wireless communications system that mimics the brain using an array of tiny silicon sensors, each the size of a grain of sand. The researchers hope that the technology could one day be used in implantable brain-machine interfaces to read brain activity.”

ROBOTICS

Understanding Humanoid Robots
Brian Heater | TechCrunch
“A lot of smart people have faith in the form factor and plenty of others remain skeptical. One thing I’m confident saying, however, is that whether or not future factories will be populated with humanoid robots on a meaningful scale, all of this work will amount to something. Even the most skeptical roboticists I’ve spoken to on the subject have pointed to the NASA model, where the race to land humans on the moon led to the invention of products we use on Earth to this day.”

INTERNET

Blazing Bits Transmitted 4.5 Million Times Faster Than Broadband
Michael Franco | New Atlas
“An international research team has sent an astounding amount of data at a nearly incomprehensible speed. It’s the fastest data transmission ever using a single optical fiber and shows just how speedy the process can get using current materials.”

COMPUTING

How We’ll Reach a 1 Trillion Transistor GPU
Mark Liu and HS Philip Wong | IEEE Spectrum
“We forecast that within a decade a multichiplet GPU will have more than 1 trillion transistors. We’ll need to link all these chiplets together in a 3D stack, but fortunately, industry has been able to rapidly scale down the pitch of vertical interconnects, increasing the density of connections. And there is plenty of room for more. We see no reason why the interconnect density can’t grow by an order of magnitude, and even beyond.”

SPACE

Astronomers Watch in Real Time as Epic Supernova Potentially Births a Black Hole
Isaac Schultz | Gizmodo
“‘Calculations of the circumstellar material emitted in the explosion, as well as this material’s density and mass before and after the supernova, create a discrepancy, which makes it very likely that the missing mass ended up in a black hole that was formed in the aftermath of the explosion—something that’s usually very hard to determine,’ said study co-author Ido Irani, a researcher at the Weizmann Institute.”

ARTIFICIAL INTELLIGENCE

Large Language Models’ Emergent Abilities Are a Mirage
Stephen Ornes | Wired
“[In some tasks measured by the BIG-bench project, LLM] performance remained near zero for a while, then performance jumped. Other studies found similar leaps in ability. The authors described this as ‘breakthrough’ behavior; other researchers have likened it to a phase transition in physics, like when liquid water freezes into ice. …[But] a new paper by a trio of researchers at Stanford University posits that the sudden appearance of these abilities is just a consequence of the way researchers measure the LLM’s performance. The abilities, they argue, are neither unpredictable nor sudden.”

Image Credit: Aedrian / Unsplash

Category: Transhumanism

A New Treatment Rejuvenates Aging Immune Systems in Elderly Mice

Singularity HUB - March 29, 2024 - 19:32

Our immune system is like a well-trained brigade.

Each unit has a unique specialty. Some cells directly kill invading foes; others release protein “markers” to attract immune cell types to a target. Together, they’re a formidable force that fights off biological threats—both pathogens from outside the body and cancer or senescent “zombie” cells from within.

With age, the camaraderie breaks down. Some units flare up, causing chronic inflammation that wreaks havoc in the brain and body. These cells increase the risk of dementia and heart disease and gradually sap muscle. Other units that battle novel pathogens—such as a new strain of flu—slowly dwindle, making it harder to ward off infections.

All these cells come from a single source: a type of stem cell in bone marrow.

This week, in a study published in Nature, scientists say they restored the balance between the units in aged mice, reverting their immune systems back to a youthful state. Using an antibody, the team targeted a subpopulation of stem cells that eventually develops into the immune cells underlying chronic inflammation. The antibodies latched onto targets and rallied other immune cells to wipe them out.

In elderly mice, the one-shot treatment reinvigorated their immune systems. When challenged with a vaccine, the mice generated a stronger immune response than non-treated peers and readily fought off later viral infections.

Rejuvenating the immune system isn’t just about tackling pathogens. An aged immune system increases the risk of common age-related medical problems, such as dementia, stroke, and heart attacks.

“Eliminating the underlying drivers of aging is central to preventing several age-related diseases,” wrote stem cell scientists Drs. Yasar Arfat Kasu and Robert Signer at the University of California, San Diego, who were not involved in the study. The intervention “could thus have an outsized impact on enhancing immunity, reducing the incidence and severity of chronic inflammatory diseases and preventing blood disorders.”

Stem Cell Succession

All blood cells arise from a single source: hematopoietic stem cells, or blood stem cells, that reside in bone marrow.

Some of these stem cells eventually become “fighter” white blood cells, including killer T cells that—true to their name—directly destroy cancerous cells and infections. Others become B cells that pump out antibodies to tag invaders for elimination. This unit of the immune system is dubbed “adaptive” because it can tackle new intruders the body has never seen.

Still more blood stem cells transform into myriad other immune cell types—including those that literally eat their foes. These cells form the innate immune unit, which is present at birth and serves as the first line of defense throughout our lifetime.

Unlike their adaptive comrades, which more precisely target invaders, the innate unit uses a “burn it all” strategy to fight off infections by increasing local inflammation. It’s a double-edged sword. While useful in youth, with age the unit becomes dominant, causing chronic inflammation that gradually damages the body.

The reason for this can be found in the immune system’s stem cell origins.

Blood stem cells come in multiple types. Some produce both immune units equally; others are biased towards the innate unit. With age, the latter gradually take over, increasing chronic inflammation while lowering protection against new pathogens. This is, in part, why elderly people are advised to get new flu shots, and why they were first in line for vaccination against Covid-19.

The new study describes a practical approach to rebalancing the aged immune system. Using an antibody-based therapy, the scientists directly obliterated the population of stem cells that lead to chronic inflammation.

Blood Bath

Like most cells, blood stem cells have a unique fingerprint—a set of proteins that dot their surfaces. A subset of the cells, dubbed my-HSCs, are more likely to produce cells in the innate immune system, which triggers chronic inflammation with age.

By mining multiple gene expression datasets from blood stem cells, the team found three protein markers they could use to identify and target my-HSCs in aged mice. They then engineered an antibody to mark the cells for elimination.

Just a week after infusing it into elderly mice, the antibody had reduced the number of my-HSCs in their bone marrow without harming other blood stem cells. A genetic screen confirmed the mice's immune profile was more like that of young mice.

The one-shot treatment lasted “strikingly” long, wrote Kasu and Signer. A single injection reduced the troublesome stem cells for at least two months—roughly a twelfth of a mouse’s lifespan. With my-HSCs no longer dominant, healthy blood stem cells gained ground inside the bone marrow. For at least four months, the treated mice produced more cells in the adaptive immune unit than their similarly aged peers, while having less overall inflammation.

As an ultimate test, the team challenged elderly mice with a difficult virus. To beat the infection, multiple components of the adaptive immune system had to rev up and work in concert.

Some elderly mice received a vaccine and the antibody treatment. Others only received the vaccine. Those treated with the antibody mounted a larger protective immune response. When given a dose of the virus, their immune systems rapidly recruited adaptive immune cells, and fought off the infection—whereas those receiving only the vaccine struggled.

Restoring Balance

The study shows that not all blood stem cells are alike. Eliminating those that cause inflammation directly changes the biological “age” of the entire immune system, allowing it to better tackle damaging changes in the body and fight off infections.

Like a leaking garbage can, innate immune cells can dump inflammatory molecules into their neighborhood. By cleaning up the source, the antibody could have also changed the environment the cells live in, so they are better able to thrive during aging.

Additionally, the immune system is an “eye in the sky” for monitoring cancer. Reviving immune function could restore the surveillance systems needed to eliminate cancer cells. The antibody treatment here could potentially tag-team with CAR T therapy or classic anti-cancer therapies, such as chemotherapy, as a one-two punch against the disease.

But it isn’t coming to clinics soon. Without unexpected setbacks or regulatory hiccups, the team estimates three to five years before testing in people. As a next step, they’re looking to expand the therapy to tackle other disorders related to a malfunctioning immune system.

Image Credit: Volker Brinkmann

Category: Transhumanism

These Plants Could Mine Valuable Metals From the Soil With Their Roots

Singularity HUB - March 28, 2024 - 20:26

The renewable energy transition will require a huge amount of materials, and there are fears we may soon face shortages of some critical metals. US government researchers think we could rope in plants to mine for these metals with their roots.

Green technologies like solar power and electric vehicles are being adopted at an unprecedented rate, but this is also straining the supply chains that support them. One area of particular concern includes the metals required to build batteries, wind turbines, and other advanced electronics that are powering the energy transition.

Current production rates of many of these minerals—such as lithium, cobalt, and nickel—may not be able to sustain projected growth. Some of these metals are also sourced from countries whose mining operations raise serious human rights or geopolitical concerns.

To diversify supplies, the government research agency ARPA-E is offering $10 million in funding to explore “phytomining,” in which certain species of plants are used to extract valuable metals from the soil through their roots. The project is focusing on nickel first, a critical battery metal, but in theory, it could be expanded to other minerals.

“In order to accomplish the goals laid out by President Biden to meet our clean energy targets, and support our economy and national security, it’s going to take [an] all-hands-on-deck approach and innovative solutions,” ARPA-E director Evelyn Wang said in a press release.

“By exploring phytomining to extract nickel as the first target critical material, ARPA-E aims to achieve a cost-competitive and low-carbon footprint extraction approach needed to support the energy transition.”

The concept of phytomining has been around for a while and relies on a class of plants known as “hyperaccumulators.” These species can absorb a large amount of metal through their roots and store it in their tissues. Phytomining involves growing these plants in soils with high levels of metals, harvesting and burning the plants, and then extracting the metals from the ash.

The ARPA-E project, known as Plant HYperaccumulators TO MIne Nickel-Enriched Soils (PHYTOMINES), is focusing on nickel because there are already many hyperaccumulators known to absorb the metal. But finding, or creating, species able to economically mine the metal in North America will still be a significant challenge.

One of the primary goals of the project is to optimize the amount of nickel these plants can take in. This could involve breeding or genetically modifying plants to enhance these traits or altering the microbiome of either the plants or the surrounding soil to boost absorption.

The agency also wants to gain a better understanding of the environmental and economic factors that could determine the viability of the approach, such as the impact of soil mineral composition, the land ownership status of promising sites, and the lifetime costs of a phytomining operation.

But while the idea is still at a nebulous stage, there is considerable potential.

“In soil that contains roughly 5 percent nickel—that is pretty contaminated—you’re going to get an ash that’s about 25 to 50 percent nickel after you burn it down,” Dave McNear, a biogeochemist at the University of Kentucky, told Wired.

“In comparison, where you mine it from the ground, from rock, that has about .02 percent nickel. So you are several orders of magnitude greater in enrichment, and it has far less impurities.”
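McNear's figures are easy to sanity-check. A quick calculation comparing the quoted ash grade to the quoted ore grade shows the scale of enrichment he's describing:

```python
# Back-of-the-envelope check of the enrichment figures quoted above.
ash_grade_low, ash_grade_high = 0.25, 0.50   # nickel fraction in plant ash
ore_grade = 0.0002                           # 0.02% nickel in mined rock

low = ash_grade_low / ore_grade
high = ash_grade_high / ore_grade
print(f"{low:.0f}x to {high:.0f}x more concentrated than ore")
# Roughly three orders of magnitude, consistent with McNear's claim.
```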

Phytomining would also be much less environmentally damaging than traditional mining, and it could help remediate soil polluted with metals so they can be farmed more conventionally. While the focus is currently on nickel, the approach could be extended to other valuable metals too.

The main challenge will be finding a fast-growing plant suited to American climates. “The problem has historically been that they’re not often very productive plants,” Patrick Brown, a plant scientist at the University of California, Davis, told Wired. “And the challenge is you have to have high concentrations of nickel and high biomass to achieve a meaningful, economically viable outcome.”

Still, if researchers can square that circle, the approach could be a promising way to boost supplies of the critical minerals needed to support the transition to a greener economy.

Image Credit: Nickel hyperaccumulator Alyssum argenteum / David Stang via Wikimedia Commons

Category: Transhumanism

Now We Can See the Magnetic Maelstrom Around Our Galaxy’s Supermassive Black Hole

Singularity HUB - March 28, 2024 - 00:10

Black holes are known for ferocious gravitational fields. Anything wandering too close, even light, will be swallowed up. But other forces may be at play too.

In 2021, astronomers used the Event Horizon Telescope (EHT) to make a polarized image of the enormous black hole at the center of the galaxy M87. The image showed an organized swirl of magnetic fields threading the matter orbiting the object. M87*, as the black hole is known, is nearly 1,000 times bigger than our own galaxy’s central black hole, Sagittarius A* (Sgr A*), and is dining on the equivalent of a few suns per year. With its comparatively modest size and appetite—Sgr A* is basically fasting at the moment—scientists wondered whether our galaxy’s black hole would have strong magnetic fields too.

Now, we know.

In the first polarized image of Sgr A*, released alongside two papers published today (here and here), EHT scientists say the black hole has strong magnetic fields akin to those seen in M87*. The image depicts a fiery whirlpool (the disc of material falling into Sgr A*) circling the drain (the black hole’s shadow) with magnetic field lines woven throughout.

In contrast to unpolarized light, polarized light is oriented in only one direction. Like a pair of quality sunglasses, magnetized regions in space polarize light too. These polarized images of the two black holes therefore map out their magnetic fields.

And surprisingly, they’re similar.

Side-by-side polarized images of supermassive black holes M87* and Sagittarius A*. Image Credit: EHT Collaboration

“With a sample of two black holes—with very different masses and very different host galaxies—it’s important to determine what they agree and disagree on,” Mariafelicia De Laurentis, EHT deputy project scientist and professor at the University of Naples Federico II, said in a press release. “Since both are pointing us toward strong magnetic fields, it suggests that this may be a universal and perhaps fundamental feature of these kinds of systems.”

Making the image was no simple task. Compared to M87*, whose disc is larger and moves relatively slowly, imaging Sgr A* is like trying to photograph a cosmic toddler—its material is always in motion, reaching nearly the speed of light. The scientists had to use new tools in addition to those that yielded the polarized image of M87* and weren’t even sure the image would be possible.

Such technical feats take enormous teams of scientists organized across the globe. The first three pages of each new paper are dedicated to authors and affiliations. In addition, the EHT itself spans the world. Astronomers stitch observations made by eight telescopes into a virtual Earth-sized telescope capable of resolving objects the apparent size of a donut on the moon as viewed from the surface of our planet.
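The donut comparison holds up to small-angle arithmetic. Assuming a donut roughly 10 centimeters across (my assumption—the article doesn't give a size) and the EHT's ~1.3 mm observing wavelength, both the target and the diffraction limit of an Earth-sized aperture land in the tens of microarcseconds:

```python
import math

# Small-angle sanity check of the 'donut on the moon' claim.
RAD_TO_UAS = math.degrees(1) * 3600 * 1e6   # radians -> microarcseconds

donut_m = 0.10          # ~10 cm donut (assumed size, not from the article)
moon_dist_m = 3.844e8   # mean Earth-moon distance

wavelength_m = 1.3e-3   # EHT observes at ~1.3 mm (230 GHz)
earth_diam_m = 1.274e7  # effective aperture of an Earth-sized array

donut_uas = (donut_m / moon_dist_m) * RAD_TO_UAS
limit_uas = 1.22 * (wavelength_m / earth_diam_m) * RAD_TO_UAS  # Rayleigh criterion

print(f"donut: ~{donut_uas:.0f} microarcseconds")
print(f"diffraction limit: ~{limit_uas:.0f} microarcseconds")
```

Both come out in the tens of microarcseconds (the donut at roughly 50, the diffraction limit at roughly 25), so the comparison is the right order of magnitude.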

The EHT team plans to make more observations—the next round for Sgr A* begins next month—and add telescopes on Earth and space to increase the quality and breadth of the images. One outstanding question is whether Sgr A* has a jet of material shooting out from its poles like M87* does. The ability to make movies of the black hole later this decade—which should be spectacular—could resolve the mystery.

“We expect strong and ordered magnetic fields to be directly linked to the launching of jets as we observed for M87*,” Sara Issaoun, research co-leader and a fellow at Harvard & Smithsonian’s Center for Astrophysics, told Space.com. “Since Sgr A*, with no observed jet, seems to have a very similar geometry, perhaps there is also a jet lurking in Sgr A* waiting to be observed, which would be super exciting!”

The discovery of a jet, added to strong magnetic fields, would mean these features may be common to supermassive black holes across the spectrum. Learning more about their features and behavior can help scientists piece together a better picture of how galaxies, including the Milky Way, evolve over eons in tandem with the black holes at their hearts.

Image Credit: EHT Collaboration

Category: Transhumanism

Human Artificial Chromosomes Could Ferry Tons More DNA Cargo Into Cells

Singularity HUB - March 26, 2024 - 22:48

The human genetic blueprint is deceptively simple. Our genes are tightly wound into 46 X-shaped structures called chromosomes. Crafted by evolution, they carry DNA and replicate when cells divide, ensuring the stability of our genome over generations.

In 1997, a study torpedoed evolution’s playbook. For the first time, a team created an artificial human chromosome using genetic engineering. When delivered into a human cell in a petri dish, the artificial chromosome behaved much like its natural counterparts. It replicated as cells divided, leading to human cells with 47 chromosomes.

Rest assured, the goal wasn’t to artificially evolve our species. Rather, artificial chromosomes can be used to carry large chunks of human genetic material or gene editing tools into cells. Compared to current delivery systems—virus carriers or nanoparticles—artificial chromosomes can incorporate far more synthetic DNA.

In theory, they could be designed to ferry therapeutic genes into people with genetic disorders or add protective ones against cancer.

Yet despite over two decades of research, the technology has yet to enter the mainstream. One challenge is that the short DNA segments linking up to form the chromosomes stick together once inside cells, making it difficult to predict how the genes will behave.

This month, a new study from the University of Pennsylvania changed the 25-year-old recipe and built a new generation of artificial chromosomes. Compared to their predecessors, the new chromosomes are easier to engineer and use longer DNA segments that don’t clump once inside cells. They’re also larger carriers, in theory able to shuttle genetic material roughly the size of the largest yeast chromosome into human cells.

“Essentially, we did a complete overhaul of the old approach to HAC [human artificial chromosome] design and delivery,” study author Dr. Ben Black said in a press release.

“The work is likely to reinvigorate efforts to engineer artificial chromosomes in both animals and plants,” wrote the University of Georgia’s Dr. R. Kelly Dawe, who was not involved in the study.

Shape of You

Since 1997, artificial genomes have become an established biotechnology. They’ve been used to rewrite DNA in bacteria, yeast, and plants, resulting in cells that can synthesize life-saving medications or eat plastic. They could also help scientists better understand the functions of the mysterious DNA sequences littered throughout our genome.

The technology also brought about the first synthetic organisms. In late 2023, scientists revealed yeast cells with half their genes replaced by artificial DNA—the team hopes to eventually customize every single chromosome. Earlier this year, another study reworked parts of a plant’s chromosome, further pushing the boundaries of synthetic organisms.

And by tinkering with the structures of chromosomes—for example, chopping off suspected useless regions—we can better understand how they normally function, potentially leading to treatments for diseases.

The goal of building human artificial chromosomes isn’t to engineer synthetic human cells. Rather, the work is meant to advance gene therapy. Current methods for carrying therapeutic genes or gene editing tools into cells rely on viruses or nanoparticles. But these carriers have limited cargo capacity.

If current delivery vehicles are like sailboats, artificial human chromosomes are like cargo ships, with the capacity to carry a far larger and wider range of genes.

The problem? They’re hard to build. Unlike bacterial chromosomes, which are circular, our chromosomes are shaped like an “X.” At the center of each is a protein hub called the centromere that allows the chromosome to separate and replicate when a cell divides.

In a way, the centromere is like a button that keeps fraying pieces of fabric—the arms of the chromosome—intact. Earlier efforts to build human artificial chromosomes focused on these structures, using DNA sequences that could recruit anchoring proteins inside human cells. However, these DNA sequences rapidly grabbed onto themselves like double-sided tape, ending in clumps that made it difficult for cells to access the added genes.

One reason could be that the synthetic DNA sequences were too short, making the mini-chromosome components unreliable. The new study tested the idea by engineering a far larger human chromosome assembly than before.

Eight Is the Lucky Number

Rather than an X-shaped chromosome, the team designed their human artificial chromosome as a circle, which is compatible with replication in yeast. The circle packed a hefty 760,000 DNA letter pairs—roughly 1/200 the size of an entire human chromosome.

Inside the circle were genetic instructions to make a sturdier centromere—the “button” that keeps the chromosome structure intact and can make it replicate. Once expressed inside a yeast cell, the button recruited the yeast’s molecular machinery to build a healthy human artificial chromosome.

In its initial circular form in yeast cells, the synthetic human chromosome could then be directly passed into human cells through a process called cell fusion. Scientists removed the “wrappers” around yeast cells with chemical treatments, allowing the cells’ components—including the artificial chromosome—to merge directly into human cells inside petri dishes.

Like benevolent extraterrestrials, the added synthetic chromosomes happily integrated into their human host cells. Rather than clumping into noxious debris, the circles doubled into a figure-eight shape, with the centromere holding the circles together. The artificial chromosomes happily co-existed with native X-shaped ones, without changing their normal functions.

For gene therapy, it’s essential that any added genes remain inside the body even as cells divide. This perk is especially important for fast-dividing cells, like cancer cells, which can rapidly adapt to therapies. If a synthetic chromosome is packed with known cancer-suppressing genes, it could keep cancers and other diseases in check throughout generations of cells.

The artificial human chromosomes passed the test. They recruited proteins from the human host cells to help them spread as the cells divided, thus conserving the artificial genes over generations.

A Revival

Much has changed since the first human artificial chromosomes.

Gene editing tools, such as CRISPR, have made it easier to rewrite our genetic blueprint. Delivery mechanisms that target specific organs or tissues are on the rise. But synthetic chromosomes may be regaining some of the spotlight.

Unlike viral carriers, the most commonly used delivery vehicles for gene therapies and gene editors, artificial chromosomes can’t tunnel into our genome and disrupt normal gene expression—making them potentially far safer.

The technology has vulnerabilities, though. The engineered chromosomes are still often lost when cells divide. Synthetic genes placed near the centromere—the “button” of the chromosome—may also disrupt the artificial chromosome’s ability to replicate and separate when cells divide.

But to Dawe, the study has larger implications than human cells alone. The principles of re-engineering centromeres shown in this study could be used for yeast and potentially be “applicable across kingdoms” of living organisms.

The method could help scientists better model human diseases or produce drugs and vaccines. More broadly, “It may soon be possible to include artificial chromosomes as a part of an expanding toolkit to address global challenges related to health care, livestock, and the production of food and fiber,” he wrote.

Image Credit: Warren Umoh / Unsplash


‘Dark Stars’: Dark Matter May Form Exploding Stars—Finding Them Could Help Reveal What It’s Made Of

Singularity HUB - 26 March 2024 - 01:08

Dark matter is a ghostly substance that astronomers have failed to detect for decades, yet which we know has an enormous influence on normal matter in the universe, such as stars and galaxies. Through the massive gravitational pull it exerts on galaxies, it spins them up, gives them an extra push along their orbits, or even rips them apart.

Like a cosmic carnival mirror, it also bends the light from distant objects to create distorted or multiple images, a process which is called gravitational lensing.

And recent research suggests it may create even more drama than this, by producing stars that explode.

For all the havoc it wreaks on galaxies, not much is known about whether dark matter can interact with itself, other than through gravity. If it experiences other forces, they must be very weak, otherwise they would have been measured.

One dark matter candidate, a hypothetical class of weakly interacting massive particles (or WIMPs), has been studied intensely, so far with no observational evidence.

Recently, other types of particles, also weakly interacting but extremely light, have become the focus of attention. These particles, called axions, were first proposed in the late 1970s to solve a problem in particle physics (the strong CP problem), but they may also fit the bill for dark matter.

Unlike WIMPs, which cannot “stick” together to form small objects, axions can. Because they are so light, a huge number of axions would be needed to account for all the dark matter, which means they would have to be crammed together. But because they are a type of subatomic particle known as a boson, they don’t mind sharing the same quantum state.

In fact, calculations show axions could be packed so closely that they start behaving strangely—collectively acting like a wave—according to the rules of quantum mechanics, the theory which governs the microworld of atoms and particles. This state is called a Bose-Einstein condensate, and it may, unexpectedly, allow axions to form “stars” of their own.

This would happen when the wave moves on its own, forming what physicists call a “soliton,” which is a localized lump of energy that can move without being distorted or dispersed. This is often seen on Earth in vortexes and whirlpools, or the bubble rings that dolphins enjoy underwater.
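In the axion (or “fuzzy”) dark matter literature, the condensate-and-soliton behavior described above is usually modeled with the Schrödinger–Poisson system (a standard formalism, assumed here rather than spelled out in the article), in which the condensate wave function $\psi$ sources its own gravitational potential $\Phi$:

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + m\,\Phi\,\psi,
\qquad
\nabla^{2}\Phi = 4\pi G\, m\,|\psi|^{2}
```

A soliton is a stationary, self-bound solution $\psi(\mathbf{r}, t) = e^{-iEt/\hbar}\,\varphi(\mathbf{r})$ of these equations, which is why it can hold its shape without dispersing.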

The new study provides calculations showing that such solitons would grow in size, becoming a star similar in size to, or larger than, a normal star. Eventually, though, they become unstable and explode.

The energy released from one such explosion (dubbed a “bosenova”) would rival that of a supernova (an exploding normal star). Given that dark matter far outweighs the visible matter in the universe, this would surely leave a sign in our observations of the sky. We have yet to find such scars, but the new study gives us something to look for.

An Observational Test

The researchers behind the study say that the surrounding gas, made of normal matter, would absorb this extra energy from the explosion and emit some of it back. Since most of this gas is made of hydrogen, we know this light should be in radio frequencies.
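The article doesn’t name the emission, but the canonical radio signal from neutral hydrogen is the 21 cm hyperfine line; a quick frequency check (using standard physical constants) shows why it falls in the radio band that telescopes like the SKA cover:

```python
# Frequency of the 21 cm hydrogen line: f = c / wavelength
c = 2.998e8          # speed of light, m/s
wavelength = 0.2110  # hydrogen hyperfine line, meters (approximate)

freq_hz = c / wavelength
print(f"{freq_hz / 1e9:.2f} GHz")  # ~1.42 GHz, squarely in the radio band
```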

Excitingly, future observations with the Square Kilometer Array radio telescope may be able to pick it up.

Artist’s impression of the SKA telescope. Image Credit: Wikipedia, CC BY-SA

So, while the fireworks from dark star explosions may be hidden from our view, we might be able to find their aftermath in the visible matter. What’s great about this is that such a discovery would help us work out what dark matter is actually made of—in this case, most likely axions.

What if observations do not detect the predicted signal? That probably won’t rule out this theory completely, as other “axion-like” particles are still possible. A failure of detection may indicate, however, that the masses of these particles are very different, or that they do not couple with radiation as strongly as we thought.

In fact, this has happened before. Originally, it was thought that axions would couple so strongly that they would be able to cool the gas inside stars. But since models of star cooling showed stars were just fine without this mechanism, the axion coupling strength had to be lower than originally assumed.

Of course, there is no guarantee that dark matter is made of axions. WIMPs are still contenders in this race, and there are others too.

Incidentally, some studies suggest that WIMP-like dark matter may also form “dark stars.” In this case, the stars would still be normal (made of hydrogen and helium), with dark matter just powering them.

These WIMP-powered dark stars are predicted to be supermassive and to live only for a short time in the early universe. But they could be observed by the James Webb Space Telescope. A recent study has claimed three such discoveries, although the jury is still out on whether that’s really the case.

Nevertheless, the excitement about axions is growing, and there are many plans to detect them. For example, axions are expected to convert into photons when they pass through a magnetic field, so searches look for photons of a specific energy coming from stars with strong magnetic fields, such as neutron stars, or even the sun.
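The axion-to-photon conversion mentioned above comes from the standard axion–photon interaction term (the textbook form, assumed here rather than taken from the article), which links the axion field $a$ to the electric and magnetic fields:

```latex
\mathcal{L}_{a\gamma}
  = -\frac{g_{a\gamma\gamma}}{4}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}
  = g_{a\gamma\gamma}\, a\, \mathbf{E}\cdot\mathbf{B}
```

Because the term involves $\mathbf{B}$, an axion traversing a strong external magnetic field can convert into a photon, which is why magnetized stars are prime search targets.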

On the theoretical front, there are efforts to refine the predictions for what the universe would look like with different types of dark matter. For example, axions may be distinguished from WIMPs by the way they bend the light through gravitational lensing.

With better observations and theory, we are hoping that the mystery of dark matter will soon be unlocked.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: ESA/Webb, NASA & CSA, A. Martel
