Singularity HUB

News and Insights on Technology, Science, and the Future from Singularity Group

Habitat for Humanity Is Using 3D Printing to Build Affordable Houses

January 24, 2022 - 16:00

Over the last year home prices have skyrocketed (along with prices of almost everything else), leaving millions of people unable to afford to move or to change their housing situation. Mortgage lender Freddie Mac estimated last year that the US has a housing supply shortage of 3.8 million homes. It’s partly Covid-related; construction has slowed due to labor shortages, high raw material costs, and supply chain issues—but the problem predates the pandemic, as demand for homes was already outpacing supply in 2019.

Middle- and low-income families have been hit hardest by the housing shortage. In an effort to assist those in need, Habitat for Humanity launched an initiative last year to incorporate 3D printing into its construction process to cut costs. The first home was completed last month, and a family moved in just before Christmas.

Located in Williamsburg, Virginia, the house has three bedrooms and two bathrooms, and the 3D printed portion of it—that is, the exterior walls—took just 28 hours to build. “Build” in this case meant a 3D printer moved along a track in the shape of the house, putting down one layer after another of a cement mixture.

According to Janet Green, the CEO of Habitat for Humanity Peninsula and Greater Williamsburg, the house cost 15 percent less to build than wood-frame houses, and construction took just three months from start to finish (that includes putting in windows, hooking up electricity and plumbing, installing appliances, etc.).

Habitat for Humanity uses volunteer labor and some donated items to keep its construction costs low; the average building cost of its homes is around $110,000. The houses are sold at no profit with a zero-interest mortgage. Buyers must make 45 to 80 percent of the median income of the area their home is located in, have excellent credit, and put in 300 volunteer hours, whether by taking part in their own home’s construction or working at a Habitat for Humanity resale store. The Williamsburg home was purchased by a single mother with one son.

The home’s 3D printing portion was done by Alquist 3D, which aims to lower the cost of housing and infrastructure in economically distressed and under-served communities. Alquist uses a gantry-style 3D printer made by COBOD, a Danish company that also partnered with GE to 3D print the bases of 650-foot-tall wind turbines. Alquist uses a Raspberry Pi-based monitoring system in its homes to track environmental data and enable smart building applications, like maximizing energy efficiency or monitoring security.
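
The article doesn’t describe Alquist’s monitoring system in any detail, so the sensor fields, comfort band, and logging format below are assumptions; this is just a minimal sketch of what a Raspberry Pi-based environmental logger for a smart home might do:

```python
import json
import random
import time

def read_environment():
    """Stand-in for real sensor reads (e.g. a temperature/humidity sensor
    on the Pi's GPIO pins); the values here are simulated, and the sensor
    setup is an assumption, not Alquist's actual design."""
    return {
        "timestamp": time.time(),
        "temperature_c": round(random.uniform(18.0, 24.0), 1),
        "humidity_pct": round(random.uniform(35.0, 55.0), 1),
    }

def check_efficiency(reading, target_c=21.0, band_c=2.0):
    """Flag readings outside a comfort band so heating/cooling can be
    adjusted -- one simple 'energy efficiency' application."""
    return abs(reading["temperature_c"] - target_c) <= band_c

reading = read_environment()
record = json.dumps(reading)  # a real system might publish this to a server
print(check_efficiency({"temperature_c": 21.5}))  # True: within the band
```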

The interior of the Williamsburg 3D printed house. Image Credit: Habitat for Humanity Peninsula and Greater Williamsburg/Consociate Media

Here’s the coolest part: Alquist also installs a 3D printer in the kitchen of every home it builds, and homeowners can use the printer to make replaceable parts for their houses, from doorknobs to light switch covers. What would you choose, a built-in 3D printer or a built-in flatscreen TV? It’s a tough call, but I’d probably go with the printer just for the novelty, if nothing else (though most of us would probably use the TV a lot more, let’s be honest).

The Williamsburg home was Habitat for Humanity’s first to involve 3D printing, but it won’t be the last. Use of the technology for home construction is becoming more and more popular, with homes or entire 3D printed communities going up in California, New York, Texas, Mexico, and elsewhere.

“Our goal is to help solve the housing crisis in America,” said Alquist 3D founder Zack Mannheimer. “This is just the beginning. There will be many more.” A second house in Tempe, Arizona is nearly done, and a family is expected to move in in February.

Image Credit: Habitat for Humanity Peninsula and Greater Williamsburg/Consociate Media

Category: Transhumanism

The Metaverse Is Money and Crypto Is King—Why You’ll Be on a Blockchain When You’re Virtual-World Hopping

January 23, 2022 - 16:00

You may think the metaverse will be a bunch of interconnected virtual spaces—the world wide web but accessed through virtual reality. This is largely correct, but there is also a fundamental but slightly more cryptic side to the metaverse that will set it apart from today’s internet: the blockchain.

In the beginning, Web 1.0 was the information superhighway of connected computers and servers that you could search, explore and inhabit, usually through a centralized company’s platform—for example, AOL, Yahoo, Microsoft, and Google. Around the turn of the millennium, Web 2.0 came to be characterized by social networking sites, blogging, and the monetization of user data for advertising by the centralized gatekeepers to “free” social media platforms, including Facebook, SnapChat, Twitter, and TikTok.

Web 3.0 will be the foundation for the metaverse. It will consist of blockchain-enabled decentralized applications that support an economy of user-owned crypto assets and data.

Blockchain? Decentralized? Crypto assets? As researchers who study social media and media technology, we can explain the technology that will make the metaverse possible.

Owning Bits

Blockchain is a technology that permanently records transactions, typically in a decentralized and public database called a ledger. Bitcoin is the most well-known blockchain-based cryptocurrency. Every time you buy some bitcoin, for example, that transaction gets recorded to the Bitcoin blockchain, which means the record is distributed to thousands of individual computers around the world.

This decentralized recording system is very difficult to fool or control. Public blockchains, like Bitcoin and Ethereum, are also transparent—all transactions are available for anyone on the internet to see, in contrast to traditional banking books.
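
The tamper-resistance described above comes from linking each record to the one before it by cryptographic hash: changing any old entry breaks every later link. A minimal sketch of that idea (real blockchains like Bitcoin add proof-of-work, peer-to-peer networking, and much more):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain, transaction):
    """Append a transaction, linking it to the previous block by hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "tx": transaction}
    block["hash"] = block_hash({"prev": prev, "tx": transaction})
    chain.append(block)

def verify(chain):
    """Tampering with any earlier block invalidates its stored hash."""
    for i, block in enumerate(chain):
        expect_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expect_prev:
            return False
        if block["hash"] != block_hash({"prev": block["prev"], "tx": block["tx"]}):
            return False
    return True

chain = []
append(chain, {"from": "alice", "to": "bob", "amount": 5})
append(chain, {"from": "bob", "to": "carol", "amount": 2})
print(verify(chain))            # True
chain[0]["tx"]["amount"] = 500  # try to rewrite history
print(verify(chain))            # False: the tampering is detectable
```

In a public blockchain, thousands of computers each hold a copy of this chain and run the same verification, which is why no single party can quietly rewrite the record.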

Ethereum is a blockchain like Bitcoin, but Ethereum is also programmable through smart contracts, which are essentially blockchain-based software routines that run automatically when some condition is met. For example, you could use a smart contract on the blockchain to establish your ownership of a digital object, such as a piece of art or music, to which no one else can claim ownership on the blockchain—even if they save a copy to their computer. Digital objects that can be owned—currencies, securities, artwork—are crypto assets.

Items like artwork and music on a blockchain are nonfungible tokens (NFTs). Nonfungible means they are unique and not replaceable, the opposite of fungible items like currency—any dollar is worth the same as, and can be swapped with, any other dollar.

Importantly, you could use a smart contract that says you are willing to sell your piece of digital art for $1 million in ether, the currency of the Ethereum blockchain. When I click “agree,” the artwork and the ether automatically transfer ownership between us on the blockchain. There is no need for a bank or third-party escrow, and if either of us were to dispute this transaction—for example, if you claimed that I only paid $999,000—the other could easily point to the public record in the distributed ledger.
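
The all-or-nothing logic of that swap can be sketched as ordinary code. A real smart contract runs on-chain (for Ethereum, typically written in Solidity), so this toy class only mirrors the behavior, and the names and amounts are illustrative:

```python
class TinySwapContract:
    """Toy 'smart contract': swaps an NFT for currency atomically.
    Both conditions are checked before any state changes, so the trade
    either happens in full or not at all -- no escrow needed."""

    def __init__(self, nft_owners, balances):
        self.nft_owners = nft_owners  # token_id -> owner
        self.balances = balances      # account -> balance

    def swap(self, token_id, seller, buyer, price):
        if self.nft_owners.get(token_id) != seller:
            raise ValueError("seller does not own this token")
        if self.balances.get(buyer, 0) < price:
            raise ValueError("buyer cannot cover the price")
        # All checks passed: transfer both sides of the trade.
        self.balances[buyer] -= price
        self.balances[seller] = self.balances.get(seller, 0) + price
        self.nft_owners[token_id] = buyer

contract = TinySwapContract({"art-1": "you"}, {"me": 1_000_000, "you": 0})
contract.swap("art-1", seller="you", buyer="me", price=1_000_000)
print(contract.nft_owners["art-1"], contract.balances["you"])  # me 1000000
```

On a real blockchain, both the contract code and every state change it makes are recorded on the public ledger, which is what lets either party point to the record in a dispute.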

What does this blockchain crypto asset stuff have to do with the metaverse? Everything! To start, the blockchain allows you to own digital goods in a virtual world. You won’t just own that NFT in the real world, you’ll own it in the virtual world, too.

In addition, the metaverse isn’t being built by any one group or company. Different groups will build different virtual worlds, and in the future these worlds will be interoperable—forming the metaverse. As people move between virtual worlds—say from Decentraland’s virtual environments to Microsoft’s—they’ll want to bring their stuff with them. If two virtual worlds are interoperable, the blockchain will authenticate proof of ownership of your digital goods in both virtual worlds. Essentially, as long as you are able to access your crypto wallet within a virtual world, you will be able to access your crypto stuff.

Don’t Forget Your Wallet

So what will you keep in your crypto wallet? You will obviously want to carry cryptocurrencies in the metaverse. Your crypto wallet will also hold your metaverse-only digital goods, such as your avatars, avatar clothing, avatar animations, virtual decorations, and weapons.

What will people do with their crypto wallets? Among other things, shop. Just as you likely do on the web now, you will be able to purchase traditional digital goods like music, movies, games, and apps. You’ll also be able to buy physical-world items in the metaverse, and you’ll be able to view and “hold” 3D models of what you are shopping for, which could help you make more informed decisions.

Also, just like you can use ye old leather wallet to carry your ID, crypto wallets will be linkable to real-world identities, which could help facilitate transactions that require legal verification, such as buying a real-world car or home. Because your ID will be linked to your wallet, you won’t need to remember login information for all the websites and virtual worlds that you visit—just connect your wallet with a click and you are logged in. ID-associated wallets will also be useful for controlling access to age-restricted areas in the metaverse.
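
Wallet login is typically a challenge-response exchange: the site issues a one-time nonce, the wallet signs it, and no password ever changes hands. Real wallets sign with an ECDSA key pair; in this sketch an HMAC over a shared secret stands in for the public-key signature so the example stays self-contained:

```python
import hashlib
import hmac
import secrets

def sign(wallet_key, challenge):
    """The wallet signs the site's one-time challenge. Real crypto
    wallets use ECDSA over a private/public key pair; HMAC over a
    shared secret is a simplified stand-in here."""
    return hmac.new(wallet_key, challenge, hashlib.sha256).hexdigest()

def login(wallet_key):
    challenge = secrets.token_bytes(16)      # site issues a fresh nonce
    signature = sign(wallet_key, challenge)  # wallet signs it -- no password
    # Site verifies the signature against the wallet's registered key.
    return hmac.compare_digest(signature, sign(wallet_key, challenge))

print(login(b"my-wallet-key"))  # True
```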

Your crypto wallet could also be linked to your contacts list, which would allow you to bring your social network information from one virtual world to another. “Join me for a pool party in FILL IN THE BLANK-world!”

At some point in the future, wallets could also be associated with reputation scores that determine the permissions you have to broadcast in public places and interact with people outside of your social network. If you act like a toxic misinformation-spreading troll, you may damage your reputation and potentially have your sphere of influence reduced by the system. This could create an incentive for people to behave well in the metaverse, but platform developers will have to prioritize these systems.

Big Business

Lastly, if the metaverse is money, then companies will certainly want to play too. The decentralized nature of blockchain will potentially reduce the need for gatekeepers in financial transactions, but companies will still have many opportunities to generate revenue, possibly even more than in current economies. Companies like Meta will provide large platforms where people will work, play, and congregate.

Major brands are also getting into the NFT mix, including Dolce & Gabbana, Coca-Cola, Adidas, and Nike. In the future, when you buy a physical world item from a company, you might also gain ownership of a linked NFT in the metaverse.

For example, when you buy that coveted name-brand outfit to wear to the real-world dance club, you might also become the owner of the crypto version of the outfit that your avatar can wear to the virtual Ariana Grande concert. And just as you could sell the physical outfit secondhand, you could also sell the NFT version for someone else’s avatar to wear.

These are a few of the many ways that metaverse business models will likely overlap with the physical world. Such examples will get more complex as augmented reality technologies increasingly come into play, further merging aspects of the metaverse and physical world. Although the metaverse proper isn’t here yet, technological foundations like blockchain and crypto assets are steadily being developed, setting the stage for a seemingly ubiquitous virtual future that is coming soon to a ‘verse near you.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Duncan Rawlinson / Duncan.co / Flickr, CC BY-NC


This Week’s Awesome Tech Stories From Around the Web (Through January 22)

January 22, 2022 - 16:00
ROBOTICS

Now You Can Rent a Robotic Worker—for Less Than Paying a Human
Will Knight | Wired
“Last year, to meet rising demand amid a shortage of workers, Polar hired its first robot employee. The robot arm performs a simple, repetitive job: lifting a piece of metal into a press, which then bends the metal into a new shape. And like a person, the robot worker gets paid for the hours it works. Jose Figueroa, who manages Polar’s production line, says the robot, which is leased from a company called Formic, costs the equivalent of $8 per hour, compared with a minimum wage of $15 per hour for a human employee.”

BIOTECH

Going Bald? Lab-Grown Hair Cells Could Be On the Way
Antonio Regalado | MIT Technology Review
“We’re born with all the hair follicles we’ll ever have—but aging, cancer, testosterone, bad genetic luck, even covid-19 can kill the stem cells inside them that make hair. Once these stem cells are gone, so is your hair. [Ernesto] Lujan says his company can convert any cell directly into a hair stem cell by changing the patterns of genes active in it.”

AUTOMATION

Autonomous Battery-Powered Rail Cars Could Steal Shipments From Truckers
Tim de Chant | Ars Technica
“Parallel Systems isn’t just taking an existing freight train and swapping its diesel-electric locomotive for a battery version. Instead, it’s taking the traction motors and distributing them to every car on the train. It’s how many electric passenger trains operate, but it’s a system that has been slow to migrate to the freight world. Parallel Systems is going a step further, though. Each of its rail vehicles consists of a battery pack, electric motors, four wheels, and a package of sensors that allow it to operate autonomously.”

BIOTECH

A $3 Billion Bet on Finding the Fountain of Youth
Staff | The Economist
“Though preparations for the launch of what must surely be a candidate for the title of ‘Best financed startup in history’ have been rumoured for months, the firm formally announced itself, and its modus operandi, on January 19th. And, even at $3bn, its proposed product might be thought cheap at the price. For the alchemy its founders, Rick Klausner, Hans Bishop and Yuri Milner, hope one day to offer the world is an elixir of life.”

COMPUTING

Intel Selects Ohio for ‘Largest Silicon Manufacturing Location on the Planet’
Jon Porter | The Verge
“After helping to establish Silicon Valley, Gelsinger said the new site could become ‘the Silicon Heartland.’ Intel plans to invest up to $100 billion in the site over the next decade, as well as around $100 million in partnership with Ohio universities, colleges, and the US National Science Foundation to foster new talent.”

SPACE

Machine to Melt Moon Rocks and Derive Metals May Launch in 2024
Eric Berger | Ars Technica
“…a Houston-based company says there is value in the gray, dusty regolith spread across the entire lunar surface. The firm, Lunar Resources, is developing technology to extract iron, aluminum, magnesium, and silicon from the Moon’s regolith. These materials, in turn, would be used to manufacture goods on the Moon.”

ENTREPRENEURSHIP

‘It’s All Just Wild’: Tech Startups Reach a New Peak Froth
Erin Griffith | The New York Times
“How crazy is the money sloshing around in start-up land right now? It’s so crazy that more than 900 tech start-ups are each worth more than $1 billion. In 2015, 80 seemed like a lot. …Investors and founders have adopted a seize-the-day mentality, believing the pandemic created a once-in-a-lifetime opportunity to shake things up. ‘The basic fabric of the world is up for grabs,’ [entrepreneur Phil Libin] said, calling this time ‘the changiest the world has ever been.’”

SCIENCE FICTION

What Happens If a Space Elevator Breaks
Rhett Allain | Wired
“In the first episode of Foundation, some people decide to set off explosives that separate the space elevator’s top station from the rest of the cable. The cable falls to the surface of the planet and does some real damage down there. What would a falling space elevator cable look like in real life? It’s not that simple to model, but we can make a rough guess.”

Image Credit: Pawel Czerwinski / Unsplash


Quantum Computing in Silicon Breaks a Crucial Threshold for the First Time

January 21, 2022 - 16:00

Quantum computers made from the same raw materials as standard computer chips hold obvious promise, but so far they’ve struggled with high error rates. That seems set to change after new research showed silicon qubits are now accurate enough to run a popular error-correcting code.

The quantum computers that garner all the headlines today tend to be made using superconducting qubits, such as those from Google and IBM, or trapped ions, such as those from IonQ and Honeywell. But despite their impressive feats, they take up entire rooms and have to be painstakingly handcrafted by some of the world’s brightest minds.

That’s why others are keen to piggyback on the miniaturization and fabrication breakthroughs we’ve made with conventional computer chips by building quantum processors out of silicon. Research has been going on in this area for years, and it’s unsurprisingly the route that Intel is taking in the quantum race. But despite progress, silicon qubits have been plagued by high error rates that have limited their usefulness.

The delicate nature of quantum states means that errors are a problem for all of these technologies, and error-correction schemes will be required for any of them to reach significant scale. But these schemes will only work if the error rates can be kept sufficiently low; essentially, you need to be able to correct errors faster than they appear.

The most promising family of error-correction schemes today are known as “surface codes” and they require operations on, or between, qubits to operate with a fidelity above 99 percent. That has long eluded silicon qubits, but in the latest issue of Nature three separate groups report breaking this crucial threshold.
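
The threshold behavior can be illustrated with the standard rough scaling rule for surface codes: the logical error rate falls (or grows) as a power of the ratio between the physical error rate and the threshold, with the exponent set by the code distance. The ~1 percent threshold matches the 99 percent fidelity figure above, but the formula and constants here are a textbook approximation, not numbers from the papers:

```python
def logical_error_rate(p, p_th=0.01, d=5):
    """Rough surface-code scaling: logical error ~ (p / p_th)^((d+1)/2).
    Below threshold (p < p_th), a larger code distance d suppresses
    errors exponentially; above it, adding qubits makes things worse."""
    return (p / p_th) ** ((d + 1) // 2)

# At 0.5% physical error (below the ~1% threshold), bigger codes help:
print(logical_error_rate(0.005, d=11) < logical_error_rate(0.005, d=5))  # True

# At 2% physical error (above threshold), bigger codes hurt:
print(logical_error_rate(0.02, d=11) > logical_error_rate(0.02, d=5))    # True
```

This is why crossing 99 percent fidelity matters: it is the point at which adding more qubits to the code starts paying off instead of compounding the problem.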

The first two papers from researchers at RIKEN in Japan and QuTech, a collaboration between Delft University of Technology and the Netherlands Organization for Applied Scientific Research, use quantum dots for qubits. These are tiny traps made out of semiconductors that house a single electron. Information can be encoded into the qubits by manipulating the electrons’ spin, a fundamental property of elementary particles.

The key to both groups’ breakthroughs lay primarily in careful engineering of the qubits and control systems. But the QuTech group also used a diagnostic tool developed by researchers at Sandia National Laboratories to debug and fine-tune their system, while the RIKEN team discovered that upping the speed of operations boosted fidelity.

A third group from the University of New South Wales took a slightly different approach, using phosphorus atoms embedded into a silicon lattice as their qubits. These atoms can hold their quantum state for extremely long times compared to most other qubits, but the tradeoff is that it’s hard to get them to interact. The group’s solution was to entangle two of these phosphorus atoms with an electron, which enables them to talk to each other.

All three groups were able to achieve fidelities above 99 percent for both single qubit and two-qubit operations, which crosses the error-correction threshold. They even managed to carry out some basic proof-of-principle calculations using their systems. Nonetheless, they are still a long way from making a fault-tolerant quantum processor out of silicon.

Achieving high-fidelity qubit operations is only one of the requirements for effective error correction. The other is having a large number of spare qubits that can be dedicated to this task, while the remaining ones focus on whatever problem the processor has been set.

As an accompanying analysis in Nature notes, adding more qubits to these systems is certain to complicate things, and maintaining the same fidelities in larger systems will be tough. Finding ways to connect qubits across large systems will also be a challenge.

However, the promise of being able to build compact quantum computers using the same tried-and-true technology as existing computers suggests these are problems worth trying to solve.

Image Credit: UNSW/Tony Melov


Scientists Are Sequencing the Genome of Every Complex Species on Earth

January 20, 2022 - 16:00

The Earth Biogenome Project, a global consortium that aims to sequence the genomes of all complex life on Earth (some 1.8 million described species) in 10 years, is ramping up.

The project’s origins, aims, and progress are detailed in two multi-authored papers published this week. Once complete, it will forever change the way biological research is done.

Specifically, researchers will no longer be limited to a few “model species” and will be able to mine the DNA sequence database of any organism that shows interesting characteristics. This new information will help us understand how complex life evolved, how it functions, and how biodiversity can be protected.

The project was first proposed in 2016, and I was privileged to speak at its launch in London in 2018. It is currently in the process of moving from its startup phase to full-scale production.

The aim of phase one is to sequence one genome from every taxonomic family on Earth, some 9,400 of them. By the end of 2022, one-third of these species should be done. Phase two will see the sequencing of a representative from all 180,000 genera, and phase three will mark the completion of all the species.

The Importance of Weird Species

The grand aim of the Earth Biogenome Project is to sequence the genomes of all 1.8 million described species of complex life on Earth. This includes all plants, animals, fungi, and single-celled organisms with true nuclei (that is, all “eukaryotes”).

While model organisms like mice, rock cress, fruit flies, and nematodes have been tremendously important in our understanding of gene functions, it’s a huge advantage to be able to study other species that may work a bit differently.

Many important biological principles came from studying obscure organisms. For instance, genes were famously discovered by Gregor Mendel in peas, and the rules that govern them were discovered in red bread mold.

DNA was discovered first in salmon sperm, and our knowledge of some systems that keep it secure came from research on tardigrades. Chromosomes were first seen in mealworms and sex chromosomes in a beetle (sex chromosome action and evolution have also been explored in fish and platypus). And telomeres, which cap the ends of chromosomes, were discovered in pond scum.

Answering Biological Questions and Protecting Biodiversity

Comparing closely and distantly related species provides tremendous power to discover what genes do and how they are regulated. For instance, in another PNAS paper, coincidentally also published this week, my University of Canberra colleagues and I discovered Australian dragon lizards regulate sex by the chromosome neighborhood of a sex gene, rather than the DNA sequence itself.

Scientists also use species comparisons to trace genes and regulatory systems back to their evolutionary origins, which can reveal astonishing conservation of gene function across nearly a billion years. For instance, the same genes are involved in retinal development in humans and in fruit fly photoreceptors. And the BRCA1 gene that is mutated in breast cancer is responsible for repairing DNA breaks in plants and animals.

The genome of animals is also far more conserved than has been supposed. For instance, several colleagues and I recently demonstrated that animal chromosomes are 684 million years old.

It will be exciting, too, to explore the “dark matter” of the genome, and reveal how DNA sequences that don’t encode proteins can still play a role in genome function and evolution.

Another important aim of the Earth Biogenome Project is conservation genomics. This field uses DNA sequencing to identify threatened species, which includes about 28 percent of the world’s complex organisms, helping us monitor their genetic health and advise on management.

No Longer an Impossible Task

Until recently, sequencing large genomes took years and many millions of dollars. But there have been tremendous technical advances that now make it possible to sequence and assemble large genomes for a few thousand dollars. The entire Earth Biogenome Project will cost less in today’s dollars than the Human Genome Project, which cost about US$3 billion in total.
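
A back-of-the-envelope check of that comparison, taking a per-genome cost at the low end of “a few thousand dollars” (the $1,000 figure is an assumption for illustration; actual costs vary by species and genome size):

```python
# Rough total for sequencing every described eukaryote species.
species = 1_800_000
cost_per_genome = 1_000           # assumed low-end per-genome cost, USD
human_genome_project = 3_000_000_000  # ~US$3 billion, as cited above

total = species * cost_per_genome
print(total)                          # 1800000000
print(total < human_genome_project)   # True: under the HGP's total cost
```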

In the past, researchers would have to identify the order of the four bases chemically on millions of tiny DNA fragments, then paste the entire sequence together again. Today they can register different bases based on their physical properties, or by binding each of the four bases to a different dye. New sequencing methods can scan long molecules of DNA that are tethered in tiny tubes, or squeezed through tiny holes in a membrane.

Why Sequence Everything?

But why not save time and money by sequencing just key representative species?

Well, the whole point of the Earth Biogenome Project is to exploit the variation between species to make comparisons, and also to capture remarkable innovations in outliers.

There is also the fear of missing out. For instance, if we sequence only 69,999 of the 70,000 species of nematode, we might miss the one that could divulge the secrets of how nematodes can cause diseases in animals and plants.

There are currently 44 affiliated institutions in 22 countries working on the Earth Biogenome Project. There are also 49 affiliated projects, including enormous projects such as the California Conservation Genomics Project, the Bird 10,000 Genomes Project, and the UK’s Darwin Tree of Life Project, as well as many projects on particular groups such as bats and butterflies.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: paulbr75


The World’s Biggest Vertical Farm Yet Is Going Up in Pennsylvania

January 19, 2022 - 17:41

Four and a half years ago I visited a vertical farming research facility in the Netherlands to learn the basics about growing food indoors without sun or soil. Since then, new vertical farms have sprung up from Asia to Europe and the US, with the technology’s uber-local and eco-friendly aspects taking on increased significance as urgency grows around the climate crisis. A new facility announced this week will be the largest vertical farm yet, and will sprout local greens for the most populous market in the US: New York City.

The company behind the project is Brooklyn-based Upward Farms, which was founded in 2013 and currently sells greens from its two existing farms in Manhattan and Brooklyn Whole Foods stores. The new farm will be built in Luzerne County, Pennsylvania, which is 135 miles west of Manhattan and 115 miles north of Philadelphia.

At 250,000 square feet (that’s about 6 acres) the Pennsylvania facility will dwarf the company’s existing farms, as well as most vertical farms in other parts of the world; Europe’s biggest one is less than a third of this size at 73,000 square feet.

As a quick refresher, vertical farms use LED lights to recreate the process of photosynthesis; red and blue wavelengths of light interact with the chlorophyll in plants to help form glucose and cellulose, the structural material in cell walls. The elements of sunlight that plants don’t use as efficiently, like heat, can be reduced or removed entirely, making for a quicker progression from seed to harvest.

Most vertical farms are hydroponic (plant roots sit in shallow troughs of nutrient-rich water) or aeroponic (roots dangle in the air and are periodically misted). But Upward Farms uses aquaponics to fertilize its crops.

Besides microgreens, Upward Farms raises fish: mercury-free, antibiotic-free, hormone-free hybrid striped bass, in tanks that are separate from the trays of greens. Manure from the fish is cultivated with soil-building microbes that convert nutrients from the fish into organic fertilizer for the plants. This makes for a soil microbiome that’s more dense, fertile, and productive than that of most indoor farms, according to the company (and they sell the fish to consumers, too!).

One of Upward Farms’ hybrid striped bass. Image Credit: Upward Farms

Upward Farms claims its yields are twice the industry average thanks to its ecological farming method, which keeps the microbial cell count in soil much higher than it would be with chemical fertilizers. “There’s a communication layer that’s been built in by millions of years of evolution between plants and microbes,” said Jason Green, Upward Farms’ CEO and cofounder. “Plants can say, ‘Hey, I’m stressed in this way, my environment is imperfect in this way, can you help me?’ and plants recruit microbes to their service.”

Consumers in the West have gotten used to having access to almost any fruit or vegetable we want at any time of year. Out-of-season produce costs us just a little more, but the logistics involved in shipping fresh foods hundreds or thousands of miles are no small endeavor. From keeping fruits or veggies cold and pest-free to making sure they arrive unblemished to using chemicals to keep them fresh, this process strains both the environment and the food products.

The hope with vertical farms is that they’ll flip this paradigm on its head by growing food near major markets, minimizing or eliminating the complicated supply chain issues that drive up costs and make for outsized environmental footprints. “With this new facility, we’ll be able to reach some of the most populous areas of the US, and nearly 100 million Americans, within a single day of distribution versus the week it can take to receive products from the west coast,” Green said.

After raising $121 million in Series B funding last June, Upward will build its new facility in Pennsylvania this year, and hopes to start selling produce grown there in early 2023. The company also plans to expand to additional areas of the US in coming years.

Image Credit: Upward Farms


New Virus-Like Particles Can Deliver CRISPR to Any Cell in the Body

January 18, 2022 - 16:00

Gene therapy is a lot like landing a Mars rover.

Hear me out. The cargo—a rover or gene editing tools—is stuffed inside a highly technical protective ship and shot into a vast, complex space targeting its destination, be it Mars or human organs. The cargo is then released, and upon landing, begins its work. For Perseverance, it’s to help seek signs of ancient life; for gene editors, it’s to redesign life.

One critical difference? Unlike a Mars mission’s “seven minutes of terror,” during which the entry, descent, and landing occur too fast for human operators to interfere, gene therapy delivery is completely blind. Once inside the body, the entire flight sequence rests solely on the design of the carrier “spaceship.”

In other words, for gene therapy to work efficiently, smarter carriers are imperative.

This month, a team at Harvard led by Dr. David Liu launched a new generation of molecular carriers inspired by viruses. Dubbed engineered virus-like particles (eVLPs), these bubble-like carriers can deliver CRISPR and base editing components to a myriad of organs with minimal side effects.

Compared to previous generations, the new and improved eVLPs are more efficient at landing on target, releasing their cargo, and editing cells. As a proof of concept, the system restored vision in a mouse model of genetic blindness, disabled a gene associated with high cholesterol levels, and fixed a malfunctioning gene inside the brain. Even more impressive, it’s a plug-and-play system: by altering the targeting component, it’s in theory possible for the bubbles to land anywhere in the body. It’s like easily rejiggering a Mars-targeting spaceship for Jupiter or beyond.

“There’s so much need for a better way to deliver proteins into various tissues in animals and patients,” said Liu. “We’re hopeful that these eVLPs might be useful not just for the delivery of base editors, but also other therapeutically relevant proteins.”

“Overall, Liu and colleagues have developed an exciting new advance for the therapeutic delivery of gene editors,” said Dr. Sekar Kathiresan, co-founder and CEO of Verve Therapeutics, who was not involved in the study.

The Delivery Problem

We already have families of efficient gene editors. But carriers have been lacking.

Take base editing. A CRISPR variant, the technology took gene editing by storm due to its precision. Similar to the original CRISPR, the tool has two components: a guide RNA to hunt down the target gene and a reworked Cas protein that swaps out individual genetic letters. Unlike Cas9, the CRISPR “scissors,” base editing doesn’t break the DNA backbone, causing fewer errors. It’s the ultimate genetic “search and replace,” with the potential to treat hundreds of genetic disorders.

The problem is getting the tools inside cells. So far, viruses have been the go-to carrier, due to their inherent ability to infect cells. Here, scientists kneecap a virus’s ability to cause disease, instead hijacking its biology to carry DNA that encodes for the editing components. Once inside the cell, the added genetic code is transcribed into proteins, allowing cells to make their own gene editing tools.

It’s not optimal. Viruses, though efficient, can cause the cells to go into overdrive, producing far more gene editing tools than needed. This stresses the cell’s resources and leads to side effects. There’s also the chance of viruses tunneling and integrating into the genome itself, damaging genetic integrity and potentially leading to cancer.

So why not tap into a virus’s best attributes and nix the worst?

Enter eVLPs

eVLPs are like their namesakes: they mimic viral particles’ efficiency at infecting cells but cut out the dangerous part, the viral DNA. Picture a multi-layered pin cushion, but with an empty cavity to hold cargo.

Unlike viruses, these bubbles don’t carry any viral DNA and can’t cause infections, potentially making them far safer than viral carriers. The downside? They’re traditionally terrible at carrying cargo to its target. It’s akin to a spaceship with faulty homing machinery that veers off course and crashes into the wrong planet. They’re also not great at releasing the cargo even at the target site, trapping CRISPR machinery inside and rendering the whole gene-editing fix moot.

In the new study, Liu’s team started by analyzing those pain points. By limiting the proteins inside the eVLPs that act as the carrier’s “safety belts,” they found, it’s easier for the cargo—the base editor proteins—to release. How they packed the cargo inside the particle bubbles also made a difference. The balance between the two—seat belts to protein passengers—seems to be key to protecting the cargo while allowing it to quickly bail out when needed. And finally, dotting the outer shell of the spaceship with specific proteins helps it navigate towards its designated organ.

In other words, the team figured out the rules of the game. “Now that we know some of the key eVLP bottlenecks and how we can address them, even if we had to develop a new eVLP for an unusual type of protein cargo, we could probably do so much more efficiently,” said Liu.

The result is a carrier that can pack 16 times more cargo and deliver up to a 26-fold increase in gene editing efficacy. It’s a “fourth-generation” carrier, said the authors.

Navigating Biospace

After first testing their new molecular spaceship in cultured cells in the lab, the team moved on to treating genetic disorders. They targeted three different biological “planets”—the eye, liver, and brain—showcasing the flexibility of the new carrier.

In mice with an inherited form of blindness, for example, the carrier was loaded with the appropriate gene editing tools and injected into a layer of tissue inside the eye. In just five weeks, the single injection rebooted retinal function to a point that—based on previous studies from the same lab—can restore the mice’s ability to see.

In another experiment, the team focused on a gene that often leads to brain disorders. Because of the blood-brain barrier, a tough boundary separating the brain from blood and other tissues, the brain is a notoriously difficult organ to access. With the new eVLP spaceship, the gene editors smoothly sailed through the barrier. Once inside brain cells, the tools successfully edited the faulty gene roughly 50 percent of the time.

As an additional proof of concept, the new carriers homed in on the livers of mice with cholesterol problems. One injection amped up the mice’s ability to produce a protective molecule that thwarts heart disease.

Better Safe Than Sorry

Gene editing has always been haunted by the ghost of off-target effects. Viral delivery, for example, carries those risks because the viruses persist for a long time, potentially overwhelming cells.

Not so for the new eVLPs. Because they’re completely engineered, they carry zero viral DNA and are safer. They’re also highly programmable—just a few changes to the targeting proteins can shift them towards another docking location in the body.

For the next step, the team is engineering better “seat belt” proteins inside the carriers for different molecules—either gene editors or therapeutic proteins such as insulin or cancer immunotherapies. They’re also further unpacking what makes eVLPs tick, aiming for next-generation carriers that can explore every nook and cranny of our bodies’ complex universe.

Image Credit: nobeastsofierce / Shutterstock.com

Category: Transhumanism

Sensor-Packed ‘Electronic Skin’ Controls Robots With Haptic Feedback

17 January, 2022 - 16:00

Being able to beam yourself into a robotic body has all kinds of applications, from the practical to the fanciful. Existing interfaces that could make this possible tend to be bulky, but a wireless electronic skin made by Chinese researchers promises far more natural control.

While intelligent robots may one day be able to match humans’ dexterity and adaptability, they still struggle to carry out many of the tasks we’d like them to be able to do. In the meantime, many believe that creating ways for humans to teleoperate robotic bodies could be a useful halfway house.

The approach could be particularly useful for scenarios that are hazardous for humans yet still beyond the capabilities of autonomous robots: bomb disposal or radioactive waste cleanup, for instance, or, more topically, medical professionals treating highly infectious patients.

While remote-controlled robots already exist, being able to control them through natural body movements could make the experience far more intuitive. It could also be crucial for developing practical robotic exoskeletons and better prosthetics, and even make it possible to create immersive entertainment experiences where users take control of a robotic body.

While solutions exist for translating human movement into signals for robots, they typically involve cumbersome equipment the user has to wear or complicated computer vision systems.

Now, a team of researchers from China has created a flexible electronic skin packed with sensors, wireless transmitters, and tiny vibrating magnets that can provide haptic feedback to the user. By attaching these patches to various parts of the body like the hand, forearm, or knee, the system can record the user’s movements and transmit them to robotic devices.

The research, described in a paper published in Science Advances, builds on rapid advances in flexible electronics in recent years, but its major contribution is packing many components into a compact, powerful, and user-friendly package.

The system’s sensors rely on piezoresistive materials, whose electrical resistance changes when subjected to mechanical stress. This allows them to act as bending sensors, so when the patches are attached to a user’s joint the change in resistance corresponds to the angle at which it is bent.

These sensors are connected to a central microcontroller via wiggly copper wires that wave up and down in a snake-like fashion. This zigzag pattern allows the wires to easily expand when stretched or bent, preventing them from breaking under stress. The voltage signals from the sensors are then processed and transmitted via Bluetooth, either directly to a nearby robotic device or a computer, which can then pass them on via a local network or the internet.
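
As a rough sketch of that pipeline, the resistance-to-angle conversion and packet framing might look like the following. The divider circuit, calibration constants, and packet format here are assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of the sensing pipeline described above: a
# piezoresistive patch's resistance is read through a voltage divider,
# converted to a bend angle with a simple linear calibration, then packed
# into a small payload for Bluetooth transmission. All constants are
# illustrative assumptions.

import struct

V_SUPPLY = 3.3        # supply voltage across the divider (assumed)
R_FIXED = 10_000.0    # fixed divider resistor in ohms (assumed)

def voltage_to_resistance(v_out: float) -> float:
    """Recover sensor resistance from a voltage-divider reading."""
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def resistance_to_angle(r: float, r_flat: float = 25_000.0,
                        ohms_per_degree: float = 150.0) -> float:
    """Linear calibration: resistance changes as the joint bends."""
    return (r - r_flat) / ohms_per_degree

def pack_sample(sensor_id: int, angle_deg: float) -> bytes:
    """Pack one reading for transmission: one id byte plus a float angle."""
    return struct.pack("<Bf", sensor_id, angle_deg)

reading = voltage_to_resistance(1.0)   # 1.0 V measured at the divider
angle = resistance_to_angle(reading)
packet = pack_sample(3, angle)
```

In a real system the calibration would be fitted per patch and per joint rather than assumed linear.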

Crucially, the researchers have also built in a feedback system. The same piezoresistive sensors can be attached to parts of the robotic device, for instance on the fingertips where they can act as pressure sensors.

Signals from these sensors are transmitted to the electronic skin, where they are used to control tiny magnets that vibrate at different frequencies depending on how much pressure was applied. The researchers showed that humans controlling a robotic hand could use the feedback to distinguish between cubes of rubber with varying levels of hardness.
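
A minimal sketch of that pressure-to-frequency mapping follows, with an assumed linear response and an illustrative frequency band; the actual actuator parameters in the paper may differ.

```python
# Illustrative sketch of the haptic feedback mapping described above:
# pressure measured at a robotic fingertip is turned into a vibration
# frequency for the magnet actuators on the wearer's skin. The frequency
# range and pressure limits are assumptions for illustration only.

def pressure_to_frequency(pressure_kpa: float,
                          p_min: float = 0.0, p_max: float = 100.0,
                          f_min: float = 50.0, f_max: float = 200.0) -> float:
    """Map pressure linearly onto the actuator's vibration band (Hz)."""
    p = min(max(pressure_kpa, p_min), p_max)   # clamp to the sensed range
    return f_min + (f_max - f_min) * (p - p_min) / (p_max - p_min)

# A harder rubber cube pushes back with more pressure, so it vibrates faster:
soft = pressure_to_frequency(20.0)
hard = pressure_to_frequency(80.0)
```

This is how the users in the experiment could tell rubber cubes of different hardness apart: harder cubes produce higher-frequency vibration at the skin.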

Importantly, the response time for the feedback signals was as low as 4 microseconds while operating directly over Bluetooth and just 350 microseconds operating over a local Wi-Fi network, which is below the 550 microseconds it takes for humans to react to tactile stimuli. Transmitting the signals over the internet led to considerably longer response times, though—between 30 and 50 milliseconds.

Nonetheless, the researchers showed that by combining different configurations of patches with visual feedback from VR goggles, human users could control a remote-controlled car with their fingers, use a robotic arm to carry out a COVID swab test, and even get a basic humanoid robot to walk, squat, clean a room, and help nurse a patient.

The patches are powered by an onboard lithium-ion battery that provides enough juice for all of its haptic feedback devices to operate continuously at full power for more than an hour. In standby mode it can last for nearly two weeks, and the device’s copper wires can even act as an antenna to wirelessly recharge the battery.

Inevitably, the system will still require considerable finessing before it can be used in real-world settings. But its impressive capabilities and neat design suggest that unobtrusive flexible sensors that could let us remotely control robots might not be too far away.

Image Credit: geralt / Pixabay

Category: Transhumanism

How Will the Universe End? Scientists Seek an Answer in the Biggest Galaxy Map Yet

16 January, 2022 - 20:30

This week, astrophysicists presented the biggest map of the universe yet.

Having nailed down the position of 7.5 million galaxies, the map is larger and more detailed than all its predecessors combined. And it’s nowhere near complete. Using the ultra-precise Dark Energy Spectroscopic Instrument (DESI), the team is adding the coordinates of a million galaxies a month with plans to run through 2026. The final atlas will cover a third of the sky and include 35 million galaxies up to 10 billion light years away.

Of course, this particular map won’t have much practical value for space explorers. Even at the speed of light, it’d take us tens of thousands to millions of years to reach our closest galactic neighbors. Absent a convenient network of intergalactic wormholes, we’re likely stuck in our home galaxy for the foreseeable future. But the map has another purpose.

“This project has a specific scientific goal: to measure very precisely the accelerating expansion of the universe,” Lawrence Berkeley National Laboratory’s Julien Guy told Wired. By measuring the expansion over time, scientists hope to shine a light on dark energy—the mysterious force that seems to be blowing the universe apart—and predict the ultimate fate of the cosmos.

Cosmological Cartography

To locate galaxies, DESI uses a collection of 5,000 fiber-optic cables positioned by robotic motors to within 10 microns, less than the thickness of a human hair. This precise positioning allows the instrument to sop up the photons of 5,000 distant galaxies at a time, record their spectra in detail, and determine how much the light has been stretched into the redder bits of the spectrum during its journey to Earth. This “redshift” is caused by the expansion of the universe and indicates how far away a galaxy is—the redder the light, the more distant the galaxy—thus adding a third dimension to galaxy maps.

DESI’s 3D “CT scan” of the universe. Each colored point represents a galaxy. Under the force of gravity, these galaxies huddle together in a “cosmic web” of clusters, filaments, and voids. Image Credit: D. Schlegel/Berkeley Lab using data from DESI

Whereas prior efforts like the Sloan Digital Sky Survey were slow and tedious—with scientists manually drilling holes and repositioning sensors—DESI is quick and automated, to the point of boring its operators on any given shift. But those shifts are prodigious, each adding some 100,000 galaxies to the map.

The scale is huge. Individual galaxies, each with hundreds of billions of stars, are reduced to points of light flowing in enormous filaments, clusters, and voids. “[These are] the biggest structures in the universe. But within them, you find an imprint of the very early universe, and the history of its expansion since then,” Guy said in a statement.

It’s by comparing the universe’s initial conditions just after the Big Bang to its expansion ever since that the team hopes to tease out a better understanding of how dark energy has changed over time.

A Profound Mystery 

In the 1990s, studies led by Lawrence Berkeley National Laboratory’s Saul Perlmutter and Australian National University’s Brian Schmidt attempted to measure the expansion of the universe. It had been assumed that the universe’s matter—including stars, planets, dust, gas, and dark matter—would act like a brake on its expansion. Like a ball tossed into the air, gravity’s pull would slow the universe down.

If you can measure the universe’s rate of expansion, you can predict its future trajectory. Will it grind to a halt under the force of its own matter and reverse course, imploding in a big crunch? Will it expand forever, eventually tearing itself apart? Or will it approach equilibrium, where the rate of expansion nears zero?

The teams gathered the light from supernovae with known luminosity—these are called standard candles in astrophysics—to measure the expansion rate. Their results were surprising, to put it mildly. Instead of slowing, they found expansion was accelerating over time. Some gargantuan force was counteracting gravity, and scientists didn’t have the faintest clue what it was.
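
The standard-candle logic can be made concrete with the distance-modulus formula; the magnitudes below are illustrative numbers, not data from those surveys.

```python
# Sketch of how a "standard candle" yields distance: because a Type Ia
# supernova's intrinsic brightness (absolute magnitude, roughly M = -19.3)
# is known, how dim it appears (apparent magnitude m) fixes its distance
# via the distance modulus m - M = 5*log10(d_parsecs) - 5.

def distance_parsecs(m_apparent: float, m_absolute: float) -> float:
    """Invert the distance modulus: d = 10 ** ((m - M + 5) / 5) parsecs."""
    return 10 ** ((m_apparent - m_absolute + 5.0) / 5.0)

# A supernova observed at apparent magnitude 19.0:
d_pc = distance_parsecs(19.0, -19.3)
d_mly = d_pc * 3.26156 / 1e6   # parsecs to millions of light years
```

Comparing distances obtained this way against redshifts is what revealed the acceleration: the supernovae were dimmer, hence farther, than a decelerating universe would allow.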

Cosmologist Michael Turner dubbed this force “dark energy” and has called it “the most profound mystery in all of science.” Now, the race is on to better understand dark energy by putting together a more precise history of the universe’s evolution.

How the Universe Ends

If expansion continues, the universe will never truly end. Over unimaginable eons, each one orders of magnitude longer than the current age of the universe, expansion will pull galaxies apart, snuff out stars, and tear matter into its elementary constituents. The end state of the universe would be a chilly and everlasting dark age.

But scientists don’t fully understand dark energy or know the fate of the universe with certainty. Which is why observations from projects like DESI are crucial. By mapping the large structure of the universe over time, scientists hope to chart how the rate of expansion—and perhaps the dark energy driving it—has changed and how it might in the future.

DESI isn’t the only mapping project out there. Other projects, like those that will be conducted by the European Space Agency’s Euclid spacecraft and NASA’s Nancy Grace Roman Telescope, will complement DESI’s findings by looking deeper into the universe, and cataloging even earlier galaxies from when it was just a few billion years old. Scientists are excited to mine this data hoard to further refine the universe’s origin story.

“In five years, we hope that we will find a deviation from this model of cosmology that will give us a hint of what really happens,” Guy told New Scientist. “Because today we are a little bit stuck in a simple model that describes perfectly well the data [we have], but doesn’t give us any new information.”

Image Credit: D. Schlegel/Berkeley Lab (using data from DESI)

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through January 15)

15 January, 2022 - 16:00
BIOTECH

In a First, Man Receives a Heart From a Genetically Altered Pig
Roni Caryn Rabin | The New York Times
“A 57-year-old man with life-threatening heart disease has received a heart from a genetically modified pig, a groundbreaking procedure that offers hope to hundreds of thousands of patients with failing organs. It is the first successful transplant of a pig’s heart into a human being.”

VIRTUAL REALITY

Second Life’s Creator Is Back to Build a ‘Metaverse That Doesn’t Harm People’
Mark Sullivan | Fast Company
“As Second Life positions itself as an alternative to a metaverse dominated by big tech, founder Philip Rosedale is returning as an advisor. …In his advisory role at Linden, Rosedale will focus on product development, with the aim of shaping Second Life’s version of the future metaverse.”

CRYPTOCURRENCY

Jack Dorsey’s Block Is Working to Decentralize Bitcoin Mining
Jon Porter | The Verge
“Block, the payment company formerly known as Square, is working on building an ‘open Bitcoin mining system,’ its CEO Jack Dorsey has announced. In a thread, Block’s general manager for hardware Thomas Templeton outlined the company’s goals for the system, which is for it to be easily available, reliable, performant, and relatively power efficient compared to its hashrate. The overall aim is to make mining more decentralized, in turn making the overall Bitcoin network more resilient.”

ENVIRONMENT

The Radical Intervention That Might Save the ‘Doomsday’ Glacier
James Temple | MIT Technology Review
“Even if the world immediately halted the greenhouse-gas emissions driving climate change and warming the waters beneath the ice shelf, that wouldn’t do anything to thicken and restabilize the Thwaites’s critical buttress, says John Moore, a glaciologist and professor at the Arctic Centre at the University of Lapland in Finland. ‘So the only way of preventing the collapse … is to physically stabilize the ice sheets,’ he says. That will require what is variously described as active conservation, radical adaptation, or glacier geoengineering.”

ETHICS

First Transplant of a Genetically Altered Pig Heart Into a Person Sparks Ethics Questions
Megan Molteni | Stat
“The groundbreaking procedure raises hopes that animal organs might one day be routinely used for human transplants, which would shorten waiting lists—where thousands of seriously ill people languish and die every year. But it’s also raising a few eyebrows and a lot of questions from bioethicists. ‘There’s still relatively little known about how safe this is to try in humans, so I’m viewing this with a little apprehension,’ said Arthur Caplan, the founding director of New York University School of Medicine’s Division of Medical Ethics.”

TRANSPORTATION

Is Norway the Future of Cars?
Shira Ovide | The New York Times
“Last year, Norway reached a milestone. Only about 8 percent of new cars sold in the country ran purely on conventional gasoline or diesel fuel. Two-thirds of new cars sold were electric, and most of the rest were electric-and-gasoline hybrids. …electric car enthusiasts are stunned by the speed at which the internal combustion engine has become an endangered species in Norway.”

SPACE

All Hail the Ariane 5 Rocket, Which Doubled the Webb Telescope’s Lifetime
Eric Berger | Ars Technica
“NASA’s Mission Systems Engineer for the Webb telescope, Mike Menzel, said the agency had completed its analysis of how much ‘extra’ fuel remained on board the telescope. Roughly speaking, Menzel said, Webb has enough propellant on board for 20 years of life. This is twice the conservative pre-launch estimate for Webb’s lifetime of a decade, and it largely comes down to the performance of the European Ariane 5 rocket that launched Webb on a precise trajectory on Christmas Day.”

CULTURE

In Belle, the Internet Unlocks Our Best Selves

Cecilia D’Anastasio | Wired
“‘When I look at other directors dealing with the theme of the internet, it tends to be kind of negative, like a dystopia,’ says Hosoda. ‘But I always look at the internet as something for the young generation to explore and create new worlds in. And I still, to this day, have that take on the internet. So it’s always been optimistic.’ Watching Belle, it’s easy to become absorbed in that optimism. It’s visually stunning, with both its rural landscapes and a digital megalopolis packed tight with a breathtaking number of pixels.”

TECHNOLOGY

The Subversive Genius of Extremely Slow Email
Ian Bogost | The Atlantic
“Dmitry Minkovsky has been working on [slow email app] Pony over the past three years, with the goal of recovering some of the magic that online life had lost for him. …I used to find such projects appealing for their subversiveness: as art objects that make problems visible rather than proposing viable solutions to them. But now it’s clear that the internet needs design innovations—and brake mechanisms—to reduce its noxious impact. Our suffering arises, in part, from the speed and volume of our social interactions online. Maybe we can build our way toward fewer of them.”

Image Credit: Pawel Czerwinski / Unsplash

Category: Transhumanism

This Autonomous Delivery Robot Has External Airbags in Case It Hits a Person

14 January, 2022 - 16:00

Autonomous delivery was already on multiple companies’ research and development agenda before the pandemic, but when people stopped wanting to leave their homes it took on a whole new level of urgency (and potential for profit). Besides the fact that the pandemic doesn’t seem to be subsiding—note the continuous parade of new Greek-letter variants—our habits have been altered in a lasting way, with more people shopping online and getting groceries and other items delivered to their homes.

This week Nuro, a robotics company based in Mountain View, California unveiled what it hopes will be a big player in last-mile delivery. The company’s third-generation autonomous delivery vehicle has some impressive features, and some clever ones—like external airbags that deploy if the vehicle hits a pedestrian (which hopefully won’t happen too often, if ever).

Despite being about 20 percent narrower than the average sedan, the delivery bot has 27 cubic feet of cargo space inside; for comparison’s sake, the tiny Smart ForTwo has 12.4 cubic feet of cargo space, while the Tesla Model S has 26. It can carry up to 500 pounds and travel at speeds of up to 45 miles per hour.

Image Credit: Nuro

Nuro has committed to minimizing its environmental footprint—the delivery bot runs on batteries, and according to the press release, the company will use 100 percent renewable electricity from wind farms in Texas to power the fleet (though it’s unclear how they’ll do this, as Texas is pretty far from northern California, and that’s where the vehicles will initially be operating; Nuro likely buys credits that go towards expanding wind energy in Texas).

Nuro’s first delivery bot was unveiled in 2018, followed by a second iteration in 2019. The company recently partnered with 7-Eleven to do autonomous deliveries in its hometown (Mountain View) using this second iteration, called the R2—though in the initial phase of the service, deliveries will be made by autonomous Priuses.

The newest version of the bot is equipped with sensors that can tell the difference between a pile of leaves and an animal, as well as how many pedestrians are standing at a crosswalk in dense fog. Nuro says the vehicle “was designed to feel like a friendly member of the community.” This sounds a tad dystopian—it is, after all, an autonomous robot on wheels—but the intention is in the right place. Customers will retrieve their orders and interact with the bot using a large exterior touchscreen.

Whether an optimal future is one where any product we desire can be delivered to our door within hours or minutes is a debate all its own, but it seems that’s the direction we’re heading in. Nuro will have plenty of competition in the last-mile delivery market, potentially including an Amazon system that releases multiple small wheeled robots from a large truck (Amazon patented the concept last year, but there’s been no further word about whether they’re planning to trial it). Nuro is building a manufacturing facility and test track in Nevada, and is currently in the pre-production phase.

Image Credit: Nuro

Category: Transhumanism

New Research: Memories May Be Stored in the Connections Between Brain Cells

13 January, 2022 - 16:00

All memory storage devices, from your brain to the RAM in your computer, store information by changing their physical qualities. Over 130 years ago, pioneering neuroscientist Santiago Ramón y Cajal first suggested that the brain stores information by rearranging the connections, or synapses, between neurons.

Since then, neuroscientists have attempted to understand the physical changes associated with memory formation. But visualizing and mapping synapses is challenging to do. For one, synapses are very small and tightly packed together. They’re roughly 10 billion times smaller than the smallest object a standard clinical MRI can visualize. Furthermore, there are approximately one billion synapses in the mouse brains researchers often use to study brain function, and they’re all the same opaque to translucent color as the tissue surrounding them.

A new imaging technique my colleagues and I developed, however, has allowed us to map synapses during memory formation. We found that the process of forming new memories changes how brain cells are connected to one another. While some areas of the brain create more connections, others lose them.

Mapping New Memories in Fish

Previously, researchers focused on recording the electrical signals produced by neurons. While these studies have confirmed that neurons change their response to particular stimuli after a memory is formed, they couldn’t pinpoint what drives those changes.

To study how the brain physically changes when it forms a new memory, we created 3D maps of the synapses of zebrafish before and after memory formation. We chose zebrafish as our test subjects because they are large enough to have brains that function like those of people, but small and transparent enough to offer a window into the living brain.

Zebrafish are particularly fitting models for neuroscience research. Zhuowei Du and Don B. Arnold, CC BY-NC-ND

To induce a new memory in the fish, we used a type of learning process called classical conditioning. This involves exposing an animal to two different types of stimuli simultaneously: a neutral one that doesn’t provoke a reaction and an unpleasant one that the animal tries to avoid. When these two stimuli are paired together enough times, the animal responds to the neutral stimulus as if it were the unpleasant stimulus, indicating that it has made an associative memory tying these stimuli together.

As the unpleasant stimulus, we gently heated the fish’s head with an infrared laser. When the fish flicked its tail, we took that as an indication that it wanted to escape. When the fish was then exposed to the neutral stimulus, a light turning on, tail flicking indicated that it was recalling what happened when it previously encountered the unpleasant stimulus.
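
The pairing procedure described above follows the textbook logic of classical conditioning, which can be sketched with the classic Rescorla-Wagner learning rule; the learning rate and response threshold below are arbitrary illustrative values, not parameters from our study.

```python
# Toy sketch of classical conditioning via the Rescorla-Wagner rule: each
# trial pairing the neutral cue (light) with the aversive stimulus (heat)
# moves the association strength toward its maximum, so after enough
# pairings the cue alone predicts the aversive event.

def run_conditioning(trials: int, learning_rate: float = 0.3) -> float:
    """Return cue-outcome association strength after paired trials."""
    v = 0.0
    for _ in range(trials):
        v += learning_rate * (1.0 - v)   # prediction error drives learning
    return v

def tail_flick(association: float, threshold: float = 0.5) -> bool:
    """The fish responds to the light alone once the association is strong."""
    return association > threshold

before = tail_flick(run_conditioning(0))    # no pairings: no response
after = tail_flick(run_conditioning(10))    # repeated pairings: response
```

The diminishing increments in the rule mirror a familiar empirical fact: early pairings teach the most, and later ones add progressively less.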

Pavlov’s dog is the most well-known example of classical conditioning, in which a dog salivates in response to a ringing bell because it has formed an associative memory between the bell and food. Lili Chin/Flickr, CC BY-NC-ND

To create the maps, we genetically engineered zebrafish with neurons that produce fluorescent proteins that bind to synapses and make them visible. We then imaged the synapses with a custom-built microscope that uses a much lower dose of laser light than standard devices that also use fluorescence to generate images. Because our microscope caused less damage to the neurons, we were able to image the synapses without losing their structure and function.

When we compared the 3D synapse maps before and after memory formation, we found that neurons in one brain region, the anterolateral dorsal pallium, developed new synapses, while neurons predominantly in a second region, the anteromedial dorsal pallium, lost them. This meant that some neurons were forming new connections while others were severing theirs. Previous experiments have suggested that the dorsal pallium of fish may be analogous to the amygdala of mammals, where fear memories are stored.

Surprisingly, changes in the strength of existing connections between neurons that occurred with memory formation were small and indistinguishable from changes in control fish that did not form new memories. This meant that forming an associative memory involves synapse formation and loss, but not necessarily changes in the strength of existing synapses, as previously thought.

Could Removing Synapses Remove Memories?

Our new method of observing brain cell function could open the door not just to a deeper understanding of how memory actually works, but also to potential avenues for treatment of neuropsychiatric conditions like PTSD and addiction.

Associative memories tend to be much stronger than other types of memories, such as conscious memories about what you had for lunch yesterday. Associative memories induced by classical conditioning, moreover, are thought to be analogous to traumatic memories that cause PTSD. Otherwise harmless stimuli similar to what someone experienced at the time of the trauma can trigger recall of painful memories. For instance, a bright light or a loud noise could bring back memories of combat. Our study reveals the role that synaptic connections may play in memory, and could explain why associative memories can last longer and be remembered more vividly than other types of memories.

Currently the most common treatment for PTSD, exposure therapy, involves repeatedly exposing the patient to a harmless but triggering stimulus in order to suppress recall of the traumatic event. In theory, this indirectly remodels the synapses of the brain to make the memory less painful. Although there has been some success with exposure therapy, patients are prone to relapse. This suggests that the underlying memory causing the traumatic response has not been eliminated.

It’s still unknown whether synapse generation and loss actually drive memory formation. My laboratory has developed technology that can quickly and precisely remove synapses without damaging neurons. We plan to use similar methods to remove synapses in zebrafish or mice to see whether this alters associative memories.

It might be possible to physically erase the associative memories that underlie devastating conditions like PTSD and addiction with these methods. Before such a treatment can even be contemplated, however, the synaptic changes encoding associative memories need to be more precisely defined. And there are obviously serious ethical and technical hurdles that would need to be addressed. Nevertheless, it’s tempting to imagine a distant future in which synaptic surgery could remove bad memories.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: geralt / Pixabay

Category: Transhumanism

Watch a Jet Suit Pilot Deliver Supplies in a Mountain Warfare Rescue

12 Leden, 2022 - 17:53

In 2020 a jet suit pilot flew up a mountain to test whether it would make sense for emergency responders in wilderness areas to add jet suits to their toolkit. Last year the same suit was used by the British Royal Marines to board a ship in a staged “visit, board, search, and seizure” (military speak for getting on a ship whose captain or crew don’t want you there, like trying to capture an enemy ship or intercept terrorists or pirates). Most recently, a trial combining military and search-and-rescue missions used the jet suit as part of a NATO Mountain Warfare Rescue simulation.

In the Slovenian mountains a pilot strapped on the jet suit made by Gravity Industries, glided smoothly and rapidly uphill along a hiking path, and delivered blood plasma to a waiting group. The group included medics and an “injured” soldier who had been rescued from a deep gorge next to the trail.

The jet suit is powered by five gas turbine engines that together generate 318 pounds of thrust. A pilot can travel up to three miles on a single fuel-up, reaching speeds up to 50 miles per hour. The suit’s creator and Gravity Industries’ founder is Richard Browning, a former Royal Marine. After setting a Guinness World Record for the fastest speed flown in a body-controlled jet engine-powered suit in 2019, Browning started looking for practical and humanitarian applications for his invention.

In rescue situations like the NATO exercise, a jet suit pilot wouldn’t be replacing emergency personnel, but rather trying to reach an injured person as fast as possible. It’s also a quick way to deliver medical supplies or equipment, though whatever’s being delivered would have to be fairly light, since every pound of cargo eats into the engines’ limited thrust.
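To see why deliveries must stay light, a back-of-envelope calculation helps. The 318 pounds of thrust is the figure stated above; the suit and pilot weights below are assumed round numbers for illustration, not figures from Gravity Industries.

```python
# Rough payload margin for a jet suit in hover.
# Only TOTAL_THRUST_LB comes from the article; the other
# two figures are assumptions for the sake of the sketch.

TOTAL_THRUST_LB = 318   # combined thrust of the five gas turbines (stated)
SUIT_WEIGHT_LB = 60     # assumed weight of suit plus fuel
PILOT_WEIGHT_LB = 180   # assumed pilot weight


def payload_margin(thrust_lb, suit_lb, pilot_lb):
    """Thrust left over for cargo once the suit and pilot are lifted."""
    return thrust_lb - suit_lb - pilot_lb


margin = payload_margin(TOTAL_THRUST_LB, SUIT_WEIGHT_LB, PILOT_WEIGHT_LB)
print(f"Payload margin: {margin} lb")
```

Under these assumptions only a few dozen pounds remain for cargo, and in practice the usable margin is smaller still, since a pilot needs excess thrust to climb and maneuver rather than merely hover.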

Admittedly, integrating jet suits into teams of medics and first responders wouldn’t be the easiest thing. The suits are expensive, for starters—$400,000 as of last year. Browning claims it’s relatively easy to learn to operate a suit. “It’s a bit like riding a bicycle or skiing or one of those things where it’s just about you thinking about where you want to go and your body intuitively going there,” he said, adding, “We’ve had people learn to do this in four or five goes—with each go just lasting around 90 seconds.” It’s likely there would be a learning curve nonetheless (and it could be either really fun or sort of dangerous, depending on the learner and the setting).

Meanwhile, Gravity Industries is coming up with other uses for its product, like having a pilot race a Porsche Taycan through an uphill obstacle course. This can’t exactly be called practical or humanitarian, but it’s definitely entertaining. Would-be pilots curious to experience jet suit flight can register for Flight Experience or Flight Training packages, or can even commission their own jet suits.

It may be a while before we see widespread use of these things, but it doesn’t seem like a bad idea for emergency rescue teams to have a jet suit or two on hand.

Image Credit: Gravity Industries

Kategorie: Transhumanismus

Breakthrough Shot of Engineered Immune Cells Helps Heal Heart Damage

11 Leden, 2022 - 16:00

It’s now possible to transform immune cells directly into “super soldiers” inside the body.

I’m talking about CAR-T, the immunotherapy that revolutionized blood cancer treatment. Here, a person’s immune T cells are removed from the body, genetically enhanced to better target cancer cells, and infused back into the body. By hijacking and amping up the body’s own immune response, CAR-T therapy can readily battle cancers that were impervious to other treatments.

The problem? It’s expensive, time-consuming, and tailored to each person. It’s why one treatment course can cost nearly half a million dollars. It’s also partly why the powerful treatment hasn’t yet conquered other diseases that can benefit from super-powered T cells.

Until now. In a breakthrough, a study led by Dr. Jonathan A. Epstein at Penn Medicine directly transformed T cells inside the bodies of mice. The team used mRNA protected by nano-bubbles of fat that dock onto T cells. Once absorbed, the cells readily translated the mRNA into proteins—dotting the outside of the T cells—which acted as homing beacons to find and destroy the heart cells responsible for scarring. In mice with heart failure, the treatment reduced heart damage and improved pumping function.

The work “provides a strong rationale for the broadening of immunotherapies into disease areas with unmet needs,” said Torahito Gao and Dr. Yvonne Chen at the University of California, Los Angeles, who were not involved in the study.

What’s especially jaw-dropping is the transformation process. Since all it took was mRNA, in theory the therapy could be injected into anyone. If the idea sounds familiar, you’re right—it’s how Covid-19 mRNA vaccines work. So long to the multi-step hazards and costs of lab-engineered CAR-T; hello to a one-shot wonder that’s potentially safer, easier, and scalable to millions with heart disease.

“The most notable advancement is the ability to engineer T cells for a specific clinical application without having to take them out of the patient’s body,” said Epstein.

The Classic CAR-T Recipe

T cells are vicious. They circulate the body in the bloodstream, hunting for signs of intrusion—cancers, viruses, bacteria—and rallying other immune troops to knock out invaders. But immune attacks are a cat and mouse game, and intruders can eventually mutate to evade surveillance. Under constant threat, T cells gradually break down, depleting in number and strength.

Enter CAR-Ts, or T cells genetically enhanced with chimeric antigen receptor (CAR). Here, T cells are first extracted from the body—this is relatively easy since they live in our blood. Using genetic engineering, including CRISPR, the cells get a dose of new genes, CARs, which act as bloodhounds to hunt down the enemy. Some even get “brake” genes snipped out, turning T cells into formidable enemies that kill cancers on sight.

The CAR-Ts are then expanded in number to millions inside Petri dishes, and subsequently infused back into the patient. They’re a hardy bunch: the cells can persist for months to years, ready for battle anytime.

That’s great for cancer, but not so much for heart disease, explained Epstein. Unlike cancers, the cells that trigger heart scarring only briefly activate after an injury. These cells, dubbed fibroblasts, aren’t pure evil. Normally they pump out a goo-like molecule that helps build up scaffolding around heart cells. They’re also critical for wound healing. But after injury they get over-hyped, refusing to quiet down, and this in turn causes massive scarring in the heart—a condition called fibrosis, which contributes to eventual heart failure.

This Jekyll-and-Hyde nature of heart fibroblasts is exactly why normal CAR-T therapy likely wouldn’t work. Angry CAR-Ts that roam around the body for months targeting fibroblasts are a dangerous overkill.

CAR-T on Demand

Enter mRNA therapy. Unlike classic CAR-T, mRNA doesn’t touch a T cell’s genes. It just urges the cell to temporarily pump out some proteins.

If those proteins mimic the Covid-19 virus’s spike protein, you get a vaccine. If they mimic signature proteins on an activated fibroblast ready to do some damage, you get a solution for heart failure.

But what’s the target? Back in 2019, Epstein’s lab screened gene expression data from patients with or without heart disease and nailed down one protein: fibroblast activation protein (FAP). Healthy fibroblasts produce FAP at a low level, but once activated, the quantity of the protein shoots up. Like lighting a bonfire in pitch-black woods, FAP now serves as a perfect target for CAR-Ts to hunt down. Indeed, FAP-targeting CAR-T cells, once reinfused into mice, annihilated these fibroblasts, resulting in less scarring and better heart function.

Similar to classic CAR-T therapy, the original solution resulted in battle-eager cells that stuck around. In this study, the team took a page from the Covid-19 mRNA vaccine recipe book, aiming for a transient T cell powerup. It starts with a heavily modified mRNA that encodes the bloodhound component for FAP. The chemical modifications help stabilize the mRNA, explained the authors.

Next is a new delivery shuttle for the payload. Here, the team used lipid nanoparticles, tiny fatty spaceships that surround mRNA and carry it to its destination. To program the correct “GPS coordinates”—that is, where the spaceships should go to dock—the team decorated the outside of the capsule with protein “hooks” that specifically grab onto T cells.

The team then injected the contraptions directly into the bloodstreams of mice. As the fatty spaceships found and docked onto T cells, they fused with the cells’ bubbly membranes, releasing mRNA into their interior. The T cells adopted this new code as their own, faithfully translating it into CAR proteins—and voilà, the cells were transformed into FAP CAR-T cells.

It’s not just a physical makeover. Heart failure is marked by increasing scar tissue, causing the organ to balloon in size and reducing its pumping function. The mRNA treatment reversed all these symptoms in mice with heart fibrosis just one week after a single injection. In some mice the reversal was so stark that their heart chambers became indistinguishable from those of healthy counterparts.

Tap and Run

Unlike classic CAR-Ts, the cells here are like the Hulk. They’re powerful, but they only stick around for a bit before reversing back to their original form.

mRNA molecules are highly unstable. This is why the newly converted FAP CAR-Ts become undetectable within a week rather than months or years. For heart fibrosis, this is great news: the super-powered cells can come in right when needed and quickly chill out to avoid side effects. Here, the researchers didn’t notice any toxic effects from the treatment.

This isn’t the first study to engineer CAR-Ts inside the body. An earlier study used a virus to deliver genetic elements to help T cells in mice battle leukemia. However, the new solution with mRNA is likely safer, as it circumvents potential problems from the virus or the need for chemotherapy. It removes “a source of considerable toxicity that is typically associated with CAR-T cell therapies,” said Gao and Chen.

Lots more questions need answering before the treatment goes into human trials. High on the list is long-term toxicity. FAP-based therapies have triggered side effects in previous studies unrelated to heart fibrosis. Another is the optimal time for treatment—should it be directly after injury, or is there a larger window?

Regardless, the study represents a new era for an already game-changing therapy. The idea “is ground-breaking because it’s a whole new way of thinking about a therapeutic application, redirecting the T cell to control other aberrant cells. Obviously that makes huge sense in cancer, but that’s just the start of things,” said Dr. Jeffrey Molkentin at Cincinnati Children’s Hospital, who was not involved in the study. “It’s precedent-setting.”

Image Credit: Abhishek135

Kategorie: Transhumanismus

These Will Be the Earliest Use Cases for Quantum Computers

10 Leden, 2022 - 16:00

Quantum computing is expected to revolutionize a broad swathe of industries. But as the technology edges closer to commercialization, what will the earliest use cases be?

Quantum computing is still a long way from going mainstream. The industry had some significant breakthroughs in 2021 though, not least IBM’s unveiling of the first processor to cross the 100-qubit mark. But the technology is still experimental, and has yet to demonstrate its usefulness for solving real-world problems.

That milestone might not be so far off, though. Most quantum computing companies are aiming to produce fault-tolerant devices by 2030, which many see as the inflection point that will usher in the era of practical quantum computing.

Quantum computers will not be general-purpose machines, though. They will be able to perform some calculations that are completely intractable for current computers and dramatically speed up others. But many of the things they excel at are niche problems, and they will not replace conventional computers for the vast majority of tasks.

That means the ability to benefit from this revolution will be highly uneven, which prompted analysts at McKinsey to investigate who the early winners could be in a new report. They identified the pharmaceutical, chemical, automotive, and financial industries as those with the most promising near-term use cases.

The authors take care to point out that making predictions about quantum computing is hard because many fundamental questions remain unanswered; for instance, the relative importance of the quantity and quality of qubits or whether there can be practical uses for early devices before they achieve fault tolerance.

It’s also important to note that there are currently fewer than 100 quantum algorithms that exhibit a quantum speed-up, the extent of which can vary considerably. That means the first and foremost question for business leaders is whether a quantum solution even exists for their problem.

But for some industries the benefits look clearer than others. For drug makers, the technology holds the promise of streamlining the industry’s long and incredibly expensive research and development process; the average drug takes 10 years and $2 billion to develop.

Quantum simulations could predict how proteins fold and tease out the properties of small molecules that could help produce new treatments. Once promising candidates have been found, quantum computers could also help optimize critical attributes like absorption and solubility.

Beyond research and development, quantum computers could also help companies optimize the clinical trials used to validate new drugs, for instance by helping identify and group participants or selecting trial sites.

Quantum simulation could also prove a powerful tool in the chemical industry, according to the report. Today’s chemists use computer-aided design tools that rely on approximations of molecular behavior and properties, but enabling full quantum mechanical simulations of molecules will dramatically expand their capabilities.

This could cut out the many rounds of trial-and-error lab experiments normally required to develop new products, instead relying on simulations to do the heavy lifting, with limited lab-based validation to confirm the results.

Quantum computers could also help to optimize the formulations used in all kinds of products—from detergents to paints—by modeling the complex molecular-level processes that govern their action.

For both the pharmaceutical and chemical industries, it’s not just the design of new products that could be impacted. Quantum computers could also help improve their production processes by helping researchers better understand the reaction mechanisms used to create drugs and chemicals, design new catalysts, or fine-tune conditions to optimize yields.

In the automotive industry, the technology could significantly boost prototyping and testing capabilities. Better simulation of everything from aerodynamic properties to thermodynamic behavior will reduce the cost of prototyping and lead to better designs. It could even make virtual testing possible, reducing the number of test vehicles required.

As carmakers look for greener ways to fuel their vehicles, quantum simulations could also contribute to finding new materials and better designs for hydrogen fuel cells and batteries. But the biggest impact could be on the day-to-day logistics involved in running a major automotive company.

Supply chain disruptions cost the industry about $15 billion a year, but quantum computers could simulate and optimize the sprawling global networks companies rely on to significantly reduce these headaches. They could also help fine-tune assembly line schedules to reduce inefficiencies and even optimize the movements of multi-robot teams as they put cars together.

Quantum computing’s impact on the financial industry will take longer to be felt, according to the report’s authors, but with the huge sums at stake it’s worth taking seriously. The technology could prove invaluable in modeling the behavior of large and complex portfolios to come up with better investment strategies. Similar approaches could also help optimize loan portfolios to reduce risk, which could allow lenders to lower interest rates or free up capital.

How much of this comes to pass depends heavily on the future trajectory of quantum technology. Despite significant progress, there are still many unknowns, and plenty of scope for timelines to slip. Nonetheless, the potential of this new technology is starting to come into focus, and it seems that business leaders in those industries most susceptible to disruption would do well to start making plans.

Image Credit: Pete Linforth from Pixabay

Kategorie: Transhumanismus

How Could the Big Bang Arise From Nothing?

9 Leden, 2022 - 16:00

READER QUESTION: My understanding is that nothing comes from nothing. For something to exist, there must be material or a component available, and for them to be available, there must be something else available. Now my question: Where did the material come from that created the Big Bang, and what happened in the first instance to create that material? Peter, 80, Australia.

“The last star will slowly cool and fade away. With its passing, the universe will become once more a void, without light or life or meaning.” So warned the physicist Brian Cox in the recent BBC series Universe. The fading of that last star will only be the beginning of an infinitely long, dark epoch. All matter will eventually be consumed by monstrous black holes, which in their turn will evaporate away into the dimmest glimmers of light. Space will expand ever outwards until even that dim light becomes too spread out to interact. Activity will cease.

Or will it? Strangely enough, some cosmologists believe a previous, cold dark empty universe like the one which lies in our far future could have been the source of our very own Big Bang.

The First Matter

But before we get to that, let’s take a look at how “material”—physical matter—first came about. If we are aiming to explain the origins of stable matter made of atoms or molecules, there was certainly none of that around at the Big Bang—nor for hundreds of thousands of years afterwards. We do in fact have a pretty detailed understanding of how the first atoms formed out of simpler particles once conditions cooled down enough for complex matter to be stable, and how these atoms were later fused into heavier elements inside stars. But that understanding doesn’t address the question of whether something came from nothing.

So let’s think further back. The first long-lived matter particles of any kind were protons and neutrons, which together make up the atomic nucleus. These came into existence around one ten-thousandth of a second after the Big Bang. Before that point, there was really no material in any familiar sense of the word. But physics lets us keep on tracing the timeline backwards—to physical processes which predate any stable matter.

This takes us to the so-called “grand unified epoch.” By now, we are well into the realm of speculative physics, as we can’t produce enough energy in our experiments to probe the sort of processes that were going on at the time. But a plausible hypothesis is that the physical world was made up of a soup of short-lived elementary particles, including quarks, the building blocks of protons and neutrons. There was both matter and “antimatter” in roughly equal quantities: each type of matter particle, such as the quark, has an antimatter “mirror image” companion, which is near identical to itself, differing only in one aspect. However, matter and antimatter annihilate in a flash of energy when they meet, meaning these particles were constantly created and destroyed.

But how did these particles come to exist in the first place? Quantum field theory tells us that even a vacuum, supposedly corresponding to empty spacetime, is full of physical activity in the form of energy fluctuations. These fluctuations can cause particles to pop into existence, only to disappear shortly after. This may sound like a mathematical quirk rather than real physics, but such particles have been spotted in countless experiments.

The spacetime vacuum state is seething with particles constantly being created and destroyed, apparently “out of nothing.” But perhaps all this really tells us is that the quantum vacuum is (despite its name) a something rather than a nothing. The philosopher David Albert has memorably criticized accounts of the Big Bang which promise to get something from nothing in this way.

Suppose we ask: where did spacetime itself arise from? Then we can go on turning the clock yet further back, into the truly ancient “Planck epoch”—a period so early in the universe’s history that our best theories of physics break down. This era occurred only one ten-millionth of a trillionth of a trillionth of a trillionth of a second after the Big Bang. At this point, space and time themselves became subject to quantum fluctuations. Physicists ordinarily work separately with quantum mechanics, which rules the microworld of particles, and with general relativity, which applies on large, cosmic scales. But to truly understand the Planck epoch, we need a complete theory of quantum gravity, merging the two.
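That tongue-twister of a number corresponds to the Planck time, the scale at which quantum gravity takes over. Writing it out (using the standard textbook definition, which the article itself doesn’t spell out) shows the fractions line up:

```latex
t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\,\text{s},
\qquad
\underbrace{10^{-7}}_{\text{ten-millionth}} \times
\underbrace{10^{-12} \cdot 10^{-12} \cdot 10^{-12}}_{\text{trillionth of a trillionth of a trillionth}}
= 10^{-43}\,\text{s}.
```

So “one ten-millionth of a trillionth of a trillionth of a trillionth of a second” lands at the same order of magnitude as $t_P$, which is why physicists mark the Planck epoch there.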

We still don’t have a perfect theory of quantum gravity, but there are attempts, like string theory and loop quantum gravity. In these attempts, ordinary space and time are typically seen as emergent, like the waves on the surface of a deep ocean. What we experience as space and time are the product of quantum processes operating at a deeper, microscopic level—processes that don’t make much sense to us as creatures rooted in the macroscopic world.

In the Planck epoch, our ordinary understanding of space and time breaks down, so we can’t any longer rely on our ordinary understanding of cause and effect either. Despite this, all candidate theories of quantum gravity describe something physical that was going on in the Planck epoch—some quantum precursor of ordinary space and time. But where did that come from?

Even if causality no longer applies in any ordinary fashion, it might still be possible to explain one component of the Planck-epoch universe in terms of another. Unfortunately, by now even our best physics fails completely to provide answers. Until we make further progress towards a “theory of everything,” we won’t be able to give any definitive answer. The most we can say with confidence at this stage is that physics has so far found no confirmed instances of something arising from nothing.

Cycles From Almost Nothing

To truly answer the question of how something could arise from nothing, we would need to explain the quantum state of the entire universe at the beginning of the Planck epoch. All attempts to do this remain highly speculative. Some of them appeal to supernatural forces like a designer. But other candidate explanations remain within the realm of physics—such as a multiverse, which contains an infinite number of parallel universes, or cyclical models in which the universe is born and reborn again.

The 2020 Nobel Prize-winning physicist Roger Penrose has proposed one intriguing but controversial model for a cyclical universe dubbed “conformal cyclic cosmology.” Penrose was inspired by an interesting mathematical connection between a very hot, dense, small state of the universe—as it was at the Big Bang—and an extremely cold, empty, expanded state of the universe—as it will be in the far future. His radical theory to explain this correspondence is that those states become mathematically identical when taken to their limits. Paradoxical though it might seem, a total absence of matter might have managed to give rise to all the matter we see around us in our universe.

In this view, the Big Bang arises from an almost nothing. That’s what’s left over when all the matter in a universe has been consumed into black holes, which have in turn boiled away into photons—lost in a void. The whole universe thus arises from something that, viewed from another physical perspective, is as close as one can get to nothing at all. But that nothing is still a kind of something. It is still a physical universe, however empty.

How can the very same state be a cold, empty universe from one perspective and a hot dense universe from another? The answer lies in a complex mathematical procedure called “conformal rescaling,” a geometrical transformation which in effect alters the size of an object but leaves its shape unchanged.
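In the standard general-relativity notation (a textbook sketch; the article doesn’t write out the math), a conformal rescaling replaces the spacetime metric $g_{ab}$ with a rescaled metric:

```latex
\tilde{g}_{ab} = \Omega^2 \, g_{ab}, \qquad \Omega > 0,
```

where $\Omega$ is a positive function that can vary from point to point. Because the two metrics differ only by an overall positive factor at each point, every angle computed from $\tilde{g}$ equals the corresponding angle computed from $g$—the “shapes” agree—while distances are stretched or shrunk by $\Omega$, which is exactly the sense in which size changes but shape does not.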

Penrose showed how the cold empty state and the hot dense state could be related by such rescaling so that they match with respect to the shapes of their spacetimes—although not to their sizes. It is, admittedly, difficult to grasp how two objects can be identical in this way when they have different sizes—but Penrose argues that size as a concept ceases to make sense in such extreme physical environments.

In conformal cyclic cosmology, the direction of explanation goes from old and cold to young and hot: the hot dense state exists because of the cold empty state. But this “because” is not the familiar one—of a cause followed in time by its effect. It is not only size that ceases to be relevant in these extreme states: time does too. The cold empty state and the hot dense state are in effect located on different timelines. The cold empty state would continue on forever from the perspective of an observer in its own temporal geometry, but the hot dense state it gives rise to effectively inhabits a new timeline all its own.

It may help to understand the hot dense state as produced from the cold empty state in some non-causal way. Perhaps we should say that the hot dense state emerges from, or is grounded in, or realized by the cold, empty state. These are distinctively metaphysical ideas which have been explored by philosophers of science extensively, especially in the context of quantum gravity where ordinary cause and effect seem to break down. At the limits of our knowledge, physics and philosophy become hard to disentangle.

Experimental Evidence?

Conformal cyclic cosmology offers some detailed, albeit speculative, answers to the question of where our Big Bang came from. But even if Penrose’s vision is vindicated by the future progress of cosmology, we might think that we still wouldn’t have answered a deeper philosophical question—a question about where physical reality itself came from. How did the whole system of cycles come about? Then we finally end up with the pure question of why there is something rather than nothing—one of the biggest questions of metaphysics.

But our focus here is on explanations which remain within the realm of physics. There are three broad answers to the deeper question of how the cycles began. It could have no physical explanation at all. Or there could be endlessly repeating cycles, each a universe in its own right, with the initial quantum state of each universe explained by some feature of the universe before it. Or there could be one single cycle, and one single repeating universe, with the beginning of that cycle explained by some feature of its own end. The latter two approaches avoid the need for any uncaused events—and this gives them a distinctive appeal. Nothing would be left unexplained by physics.

Penrose envisages a sequence of endless new cycles for reasons partly linked to his own preferred interpretation of quantum theory. In quantum mechanics, a physical system exists in a superposition of many different states at the same time, and only “picks one” randomly, when we measure it. For Penrose, each cycle involves random quantum events turning out a different way—meaning each cycle will differ from those before and after it. This is actually good news for experimental physicists, because it might allow us to glimpse the old universe that gave rise to ours through faint traces, or anomalies, in the leftover radiation from the Big Bang seen by the Planck satellite.

Penrose and his collaborators believe they may have spotted these traces already, attributing patterns in the Planck data to radiation from supermassive black holes in the previous universe. However, their claimed observations have been challenged by other physicists, and the jury is still out.

Endless new cycles are key to Penrose’s own vision. But there is a natural way to convert conformal cyclic cosmology from a multi-cycle to a one-cycle form. Then physical reality consists in a single cycling around through the Big Bang to a maximally empty state in the far future—and then around again to the very same Big Bang, giving rise to the very same universe all over again.

This latter possibility is consistent with another interpretation of quantum mechanics, dubbed the many-worlds interpretation. The many-worlds interpretation tells us that each time we measure a system that is in superposition, this measurement doesn’t randomly select a state. Instead, the measurement result we see is just one possibility—the one that plays out in our own universe. The other measurement results all play out in other universes in a multiverse, effectively cut off from our own. So no matter how small the chance of something occurring, if it has a non-zero chance then it occurs in some quantum parallel world. There are people just like you out there in other worlds who have won the lottery, or have been swept up into the clouds by a freak typhoon, or have spontaneously ignited, or have done all three simultaneously.

Some people believe such parallel universes may also be observable in cosmological data, as imprints caused by another universe colliding with ours.

Many-worlds quantum theory gives a new twist on conformal cyclic cosmology, though not one that Penrose agrees with. Our Big Bang might be the rebirth of one single quantum multiverse, containing infinitely many different universes all occurring together. Everything possible happens—then it happens again and again and again.

An Ancient Myth

For a philosopher of science, Penrose’s vision is fascinating. It opens up new possibilities for explaining the Big Bang, taking our explanations beyond ordinary cause and effect. It is therefore a great test case for exploring the different ways physics can explain our world. It deserves more attention from philosophers.

For a lover of myth, Penrose’s vision is beautiful. In Penrose’s preferred multi-cycle form, it promises endless new worlds born from the ashes of their ancestors. In its one-cycle form, it is a striking modern re-invocation of the ancient idea of the ouroboros, or world-serpent. In Norse mythology, the serpent Jörmungandr is a child of Loki, a clever trickster, and the giant Angrboda. Jörmungandr consumes its own tail, and the circle created sustains the balance of the world. But the ouroboros myth has been documented all over the world—including as far back as ancient Egypt.

The ouroboros of the one cyclic universe is majestic indeed. It contains within its belly our own universe, as well as every one of the weird and wonderful alternative possible universes allowed by quantum physics—and at the point where its head meets its tail, it is completely empty yet also coursing with energy at temperatures of a hundred thousand million billion trillion degrees Celsius. Even Loki, the shapeshifter, would be impressed.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA

Category: Transhumanism

How a Handful of Prehistoric Geniuses Launched Humanity’s Technological Revolution

7 January, 2022 - 16:00

For the first few million years of human evolution, technologies changed slowly. Some three million years ago, our ancestors were making chipped stone flakes and crude choppers. Two million years ago, hand-axes. A million years ago, primitive humans sometimes used fire, but with difficulty. Then, 500,000 years ago, technological change accelerated, as spearpoints, firemaking, axes, beads, and bows appeared.

This technological revolution wasn’t the work of one people. Innovations arose in different groups—modern Homo sapiens, primitive sapiens, possibly even Neanderthals—and then spread. Many key inventions were unique: one-offs. Instead of being invented by different people independently, they were discovered once, then shared. That implies a few clever people created many of history’s big inventions.

And not all of them were modern humans.

The Tip of the Spear

500,000 years ago in southern Africa, primitive Homo sapiens first bound stone blades to wooden spears, creating the spearpoint. Spearpoints were revolutionary as weaponry, and as the first “composite tools”—combining components.

The spearpoint spread, appearing 300,000 years ago in East Africa and the Mideast, then 250,000 years ago in Europe, wielded by Neanderthals. That pattern suggests the spearpoint was gradually passed on from one people to another, all the way from Africa to Europe.

Catching Fire

400,000 years ago hints of fire, including charcoal and burnt bones, became common in Europe, the Mideast, and Africa. It happened at roughly the same time everywhere—rather than randomly in disconnected places—suggesting invention, then rapid spread. Fire’s utility is obvious, and keeping a fire going is easy. Starting a fire is harder, however, and was probably the main barrier. If so, widespread use of fire likely marked the invention of the fire-drill—a stick spun against another piece of wood to create friction, a tool still used today by hunter-gatherers.

Curiously, the oldest evidence for regular fire use comes from Europe, then inhabited by Neanderthals. Did Neanderthals master fire first? Why not? Their brains were as big as ours; they used them for something, and living through Europe’s ice-age winters, Neanderthals needed fire more than African Homo sapiens.

The Axe

270,000 years ago in central Africa, hand-axes began to disappear, replaced by a new technology, the core-axe. Core-axes looked like small, fat hand-axes, but were radically different tools. Microscopic scratches show core-axes were bound to wooden handles, making a true, hafted axe. Axes quickly spread through Africa, then were carried by modern humans into the Arabian peninsula, Australia, and ultimately Europe.

Ornamentation

The oldest beads are 140,000 years old and come from Morocco. They were made by piercing snail shells then stringing them on a cord. At the time, archaic Homo sapiens inhabited North Africa, so their makers weren’t modern humans.

Beads then appeared in Europe 115,000-120,000 years ago, worn by Neanderthals, and were finally adopted by modern humans in southern Africa 70,000 years ago.

Bow and Arrow

The oldest arrowheads appeared in southern Africa over 70,000 years ago, likely made by the ancestors of the Bushmen, who’ve lived there for 200,000 years. Bows then spread to modern humans in East Africa, to south Asia 48,000 years ago, on to Europe 40,000 years ago, and finally to Alaska and the Americas, 12,000 years ago.

Neanderthals never adopted bows, but the timing of the bow’s spread means it was likely used by Homo sapiens against them.

Trading Technology

It’s not impossible that people invented similar technologies in different parts of the world at roughly the same time, and in some cases, this must have happened. But the simplest explanation for the archaeological data we have is that instead of reinventing technologies, many advances were made just once, then spread widely. Occam’s razor favors it: fewer independent inventions means fewer assumptions.

But how did technology spread? It’s unlikely individual prehistoric people travelled long distances through lands held by hostile tribes (although there were obviously major migrations over generations), so African humans probably didn’t meet Neanderthals in Europe, or vice versa. Instead, technology and ideas diffused—transferred from one band and tribe to the next, and the next, in a vast chain linking modern Homo sapiens in southern Africa to archaic humans in North and East Africa, and Neanderthals in Europe.

Conflict could have driven exchange, with people stealing or capturing tools and weapons. Native Americans, for example, got horses by capturing them from the Spanish. But it’s likely that people often simply traded technologies because it was safer and easier. Even today, modern hunter-gatherers, who lack money, still trade—Hadzabe hunters exchange honey for iron arrowheads made by neighboring tribes, for example.

Archaeology shows such trade is ancient. Ostrich eggshell beads from South Africa, up to 30,000 years old, have been found over 300 kilometers from where they were made. 200,000-300,000 years ago, archaic Homo sapiens in East Africa used tools made from obsidian sourced 50-150 kilometers away, further than modern hunter-gatherers typically travel.

Last, we shouldn’t overlook human generosity; some exchanges may simply have been gifts. Human history and prehistory were doubtless full of conflict, but then as now, tribes may have had peaceful interactions—treaties, marriages, friendships—and may simply have gifted technology to their neighbors.

Stone Age Geniuses

The pattern seen here—single origin, then spread of innovations—has another remarkable implication. Progress may have been highly dependent on single individuals, rather than being the inevitable outcome of larger cultural forces.

Consider the bow. It’s so useful that its invention seems both obvious and inevitable. But if it really was obvious, we’d see bows invented repeatedly in different parts of the world. Yet Native Americans didn’t invent the bow—neither did Australian Aborigines, nor people in Europe and Asia.

Instead, it seems one clever Bushman invented the bow, and then everyone else adopted it. That hunter’s invention would change the course of human history for thousands of years to come, determining the fates of peoples and empires.

The prehistoric pattern resembles what we’ve seen in historic times. Some innovations were developed repeatedly—farming, civilization, calendars, pyramids, mathematics, writing, and beer were invented independently around the world, for example. Certain inventions may be obvious enough to emerge in a predictable fashion in response to people’s needs.

But many key innovations—the wheel, gunpowder, the printing press, stirrups, the compass—seem to have been invented just once, before becoming widespread.

And likewise, a handful of individuals—Steve Jobs, Thomas Edison, Nikola Tesla, the Wright Brothers, James Watt, Archimedes—played outsized roles in driving our technological evolution, suggesting that a few highly creative people shaped much of it.

That suggests the odds of hitting on a major technological innovation are low. Perhaps it wasn’t inevitable that fire, spearpoints, axes, beads, or bows would be discovered when they were.

Then, as now, one person could literally change the course of history, with nothing more than an idea.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Clovis points from the Rummells-Maske Site, 13CD15, Cedar County, Iowa. Wikimedia Commons


Next Week You’ll Be Able to Eat ‘Chicken’ Made From Plants at KFC

6 January, 2022 - 16:00

It’s a new year, and for millions of people that brings resolutions like exercising more and eating healthier. Ice cream? Gone. Potato chips? In the trash. Burgers? Try a salad instead. Fried chicken? Well, hold on—is it really chicken?

In Kentucky Fried Chicken restaurants across the US, customers will soon have the option for the answer to be no. The fast food giant is looking to cash in on healthy eating resolutions with a well-timed release of a plant-based substitute for its traditional (read: real) chicken. KFC partnered with Beyond Meat, which has been steadily expanding its repertoire since its founding in 2009. After getting started with burgers, the company now makes sausage and chicken too.

Beyond Meat has been developing plant-based chicken nuggets for a while now, and more recently started focusing on a plant-based substitute that mimics the taste and texture of whole muscle chicken (like a chicken breast or thigh). Nuggets are easier to replicate with plant-based ingredients since their meat is ground up and doesn’t have as specific a texture, but a plant-based chicken breast is more complex.

“Plant-based chicken breast” sounds like (and is) an oxymoron. Unlike cultured chicken, which is also on the rise, plant-based chicken isn’t really chicken at all; it’s soy protein mixed with various ingredients that get its texture and taste close-ish to that of real chicken. Beyond Meat is unsurprisingly keeping proprietary details of its formula quiet, but it’s likely the company’s chicken is made with a process similar to that used by competitor Impossible Foods for its burgers.

Impossible Foods made its plant-based burgers taste and feel like real meat by adding a protein from soybeans called leghemoglobin. Leghemoglobin is chemically bound to heme, an iron-containing non-protein molecule that gives red meat its color. By teasing out this key ingredient and figuring out how to derive it from plants, the company made a unique product that more closely approximated the mouthfeel of real beef.

This isn’t the first time KFC has partnered with Beyond Meat. The two companies tested plant-based chicken at an Atlanta restaurant in August 2019, selling out their limited supply in half a day. The current plan is to make Beyond Fried Chicken available for a limited time, but Kevin Hochman, CEO of KFC, is expecting the trial to go similarly well. “We expect it’ll sell out,” he said. “Based on the speed of that sell-out and customer reaction, that’ll determine what our plans will be next. But our intent isn’t to be one and done.”

KFC is owned by Yum! Brands, which also owns Pizza Hut and Taco Bell. There were around 4,000 KFC restaurants in the US at the end of 2020, but only 18 percent of KFC’s global sales are domestic; the chain sells far more in China than it does anywhere else. The plant-based fried chicken is only rolling out in the US market for now, though, following a growing trend to meet consumers’ interest in both eating healthier and reducing their environmental footprint.

Burger King was first to jump on the plant-based bandwagon with its Impossible Whopper. McDonald’s followed last year with its McPlant burger. Yum! Brands plans to launch more meat-free products, including a plant-based carne asada at Taco Bell. “We do think that ultimately this idea of more and more plant-based protein being consumed is a fait accompli,” said Hochman. “It’s going to happen, it’s really about when.”

You can try Beyond Fried Chicken at KFC stores nationwide starting January 10. $6.99 will buy you a six-piece order, with some price variation by location. I’m not much of a chicken nugget person, but heck, I’m even curious to try these things.

Image Credit: KFC


A Chinese Company Says It Will Be Selling Driverless Cars by 2024

5 January, 2022 - 18:03

As recently as 2015, auto industry insiders predicted fully self-driving cars would be on the road by 2020. As you know, that didn’t come to pass, and two years later we’re still waiting for the day we can kick back, put our feet up, and watch the scenery go by as sleek autonomous cars deliver us to our destinations.

Consumers in China may be a little closer to this vision than the rest of us. This week at the Consumer Electronics Show in Las Vegas, two companies announced development of a car with Level 4 autonomy, with plans to put the vehicle on the Chinese market in 2024.

Mobileye is an Israeli subsidiary of chipmaker Intel (who knew?) that develops self-driving cars and advanced driver-assistance systems. This week at CES the company announced a new chip called EyeQ Ultra, part of its system-on-a-chip line, saying the chip will be able to do 176 trillion operations per second and is purpose-built for autonomous driving.

Geely, meanwhile, is a carmaker based in Hangzhou, China. Founded in 1997, the company’s full name is Zhejiang Geely Holding Group; it’s the largest private automaker in China, and reportedly sold over 1.3 million cars in 2020. Among Geely’s holdings is Swedish carmaker Volvo, as well as an electric vehicle brand called Zeekr that was launched in March of 2021.

The new self-driving car will be a collaboration between Geely and Mobileye, and will be produced under the Zeekr brand. To be clear, the car still won’t quite approach the put-your-feet-up driverless vision. There are five levels of automation in driving, with Level 5 being full autonomy, in which the vehicle can drive itself anywhere (around cities, on highways, on rural roads, etc.) in any conditions (rain, sun, fog, etc.) without human intervention. The Zeekr car will supposedly be Level 4, which means it will be able to operate without a safety driver under certain conditions (namely, good weather), and will still have a steering wheel.
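The scale in question is the SAE driving-automation taxonomy (formally six levels, 0 through 5). A minimal sketch of the classification described above, with illustrative enum names and a hypothetical helper function rather than anything from a standard API:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, paraphrased."""
    NO_AUTOMATION = 0   # human does all the driving
    ASSISTANCE = 1      # steering or speed assist, but not both
    PARTIAL = 2         # steering and speed assist; human supervises
    CONDITIONAL = 3     # system drives; human must take over on request
    HIGH = 4            # no human needed within a defined domain (e.g. good weather)
    FULL = 5            # no human needed anywhere, in any conditions

def needs_human_fallback(level: SAELevel) -> bool:
    """Levels below 4 still require an attentive human driver."""
    return level < SAELevel.HIGH
```

Under this scheme the planned Zeekr car would sit at `SAELevel.HIGH`: no human fallback required, but only inside its approved operating conditions.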

It’s likely the car will be able to drive itself on highways, when all it needs to do is stay in its lane and maintain the speed limit without hitting any other cars. Whether it will be able to handle city driving—what with the pedestrians, stopped cars, cyclists, delivery trucks, and numerous other obstacles involved—will be another matter.

Though we learn to drive at the ripe young age of 16, we don’t realize the incredible amount of brainpower, accumulated experience, and reflexes that go into safely operating our vehicles. The human brain is a lot for a software system to live up to, and how well it does so can mean life or death in this case.

The car’s “brain” will use six of Mobileye’s EyeQ 5 chips, and will navigate with the help of the company’s Road Experience Management system, which maps roads in 3D using data drawn from all cars on the road with its hardware. The car will be built on Geely’s open-source Sustainable Experience Architecture, which includes redundancies for braking, steering, and power.

Geely and Mobileye say their self-driving car will be ready for consumers in China by 2024. Progress in autonomous driving has been notoriously slow, and two years is not a lot of time to make the advancements needed for this sort of technology to deploy safely. Perhaps the companies have a couple of surprises up their sleeves?

Meanwhile, Geely also just announced a partnership with Alphabet subsidiary Waymo in which the Chinese carmaker will build electric self-driving vehicles to be used for ride hailing in the US market. There’s no shortage of plans around driverless cars, then—but how (and when) they’ll shake out is another matter; stay tuned.

Image Credit: Geely


These 2021 Biotech Breakthroughs Will Shape the Future of Health and Medicine

4 January, 2022 - 16:00

It’s that time of year again! With 2021 behind us, we’re going down memory lane to highlight biotech innovations that shaped the year—with impact that will likely reverberate for many years to come. Covid-19 dominated the news, but science didn’t stand still.

Take gene editing. CRISPR spun off variations with breathtaking speed, expanding into a hefty toolbox packed with powerhouse gene editors far more efficient, reliable, and safe than their predecessors. CRISPRoff, for example, hijacks epigenetic processes to reversibly turn genes on and off—all without actually snipping or damaging the gene itself. Prime editing, the nip-tuck of DNA editing that nicks rather than fully cuts DNA, received an upgrade to precisely edit up to 10,000 DNA letters in a variety of cells. Twin prime editing can rework entire genes. These powered-up CRISPR tools now make it possible to tackle previously untouchable genetic disorders.

Yet we’re still only scratching the surface of gene editing. Peeking into the CRISPR family tree, scientists found a vast universe of alternative CRISPR-like systems to further explore. AI is now helping identify new CRISPR proteins—and their kill switch. Other ideas jumped ship from CRISPR altogether, tapping into another powerful bacterial system to edit millions of DNA sequences without breaking a single DNA strand. Without doubt, the gene editing toolbox will keep expanding.

In other news, quantum mechanics hooked up with neuroscience to speed up AI. AI is now designing its own hardware chips at Google in an efficient full circle. Hopping into our own brains, in a stunning proof-of-concept, AI-powered brain implants were able to fight depression, with ongoing work to treat chronic pain and translate the brain’s electrical signals from thought to text. In the medical world, a fierce debate on an Alzheimer’s treatment sparked a new round of alluring ideas to tackle and tame our long-time mind-eating foe.

There’s a ton more. But here are the top three advances that’ll keep reshaping biotech far past 2021, with some runners-up.

mRNA Vaccines

I know, I know. We’re all tired of hearing about Covid-19 and vaccines. Yet their remarkable ability to fight a completely novel infectious virus is “nothing short of miraculous.” It also showcased the power of a decades-old technology that previously languished in labs, with a platform that’s far faster, simpler, and more adaptable than any previous vaccine technology. Because mRNA vaccines no longer rely on physical target proteins from a virus—rather, just the genetic code for those proteins—designing a vaccine requires just a laptop and some ingenuity. “The era of the digital vaccine is here,” wrote a team from GlaxoSmithKline.
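The “laptop and some ingenuity” point can be made concrete with a toy sketch: since an mRNA vaccine needs only the genetic code for a target protein, a designer can reverse-translate a protein sequence into mRNA codons in software. The codon table here is a tiny illustrative subset, and real vaccine design also involves codon optimization, untranslated regions, and chemical modifications:

```python
# One valid human codon per amino acid (illustrative subset, single-letter codes).
CODONS = {"M": "AUG", "S": "AGC", "P": "CCC", "K": "AAG"}
STOP = "UGA"

def peptide_to_mrna(peptide: str) -> str:
    """Toy reverse translation: peptide -> mRNA coding sequence plus stop codon."""
    return "".join(CODONS[aa] for aa in peptide) + STOP

print(peptide_to_mrna("MSPK"))  # AUGAGCCCCAAGUGA
```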

To enthusiasts, mRNA vaccines could transform current treatments for a wealth of diseases, and the field is exploding. Moderna, for example, launched an HIV vaccine human trial in August to begin assessing its safety, tackling a virus that’s escaped classic vaccine tactics for four decades. Along with the National Institutes of Health (NIH), the company also published data on an HIV vaccine candidate that lowered the chance of infection by nearly 80 percent in monkeys, with all subjects developing antibodies against 12 tested strains of HIV. It’s no small feat—Env, the HIV target, is formidable due to its complexity, and it’s coated in a sugar armor that masks vaccine target points. The mRNA vaccine offers new hope.

Viruses aside, mRNA vaccines also represent a new solution to autoimmune or neurodegenerative diseases. BioNTech, Pfizer’s partner in developing Covid-19 vaccines, is applying the technology to tackle multiple sclerosis (MS). In MS, the immune system gradually strips away the insulation on nerve fibers, causing irreversible damage. Initial results in mice are positive, with the approach “highly flexible, fast, and cost efficient,” while potentially being personalized to each patient.

Further down the pipeline are mRNA vaccines that tackle cancer or those that deal with antibiotic resistance. Whether the tech can solve some of our toughest diseases remains to be seen, but the field is on a roll.

In Vivo Gene Therapy

CRISPR’s long been touted as a tool that can radically transform gene therapy. Earlier studies used the gene editing tool to bolster immune T-cells, transforming them into super soldiers that enhance their fight against blood cancers (CAR-T therapy). The tool also scored successes in battling anemia and other symptoms in patients with blood disorders. The downside was that cells needed to be gene-edited outside the body and infused back into the bloodstream. This year elevated CRISPR to the ultimate goal: directly editing genes inside the body, opening the door to curing hundreds of disorders resulting from faulty genetic code.

In a breakthrough, one trial from University College London edited a mutated gene in the liver that eventually leads to heart and nerve damage. Unlike previous attempts, here the CRISPR machinery was delivered into the bloodstream with a single infusion to switch the gene off, sharply decreasing the production of the mutant protein in six patients. Another trial snipped a dysfunctional gene that causes blindness. By directly injecting the treatment into the retina, volunteers were able to better sense light.

Both are edge cases. For the liver trial, CRISPR was delivered using lipid nanoparticles—little fatty space ships—that have an affinity for the liver, with more transient gene-editing effects. And unlike the retina, most of our body’s tissues aren’t immediately accessible to a simple injection. But as proofs of concept, the trials finally bring CRISPR into a vast world of gene-editing possibilities inside the body. Along with advances in delivery, CRISPR—and its many upgrades—is set to treat the untreatable.

An Unprecedented Look Into Human Development

The first few hours and days of a human embryo’s development are a black box—one we need to crack. Understanding early pregnancy is key to limiting birth defects and pregnancy loss, and improving assistive reproduction technologies.

The problem? Early embryos are hard to come by, and carry significant ethical and legal challenges. This year, several studies circumvented these problems, instead transforming skin cells into blastocysts, cellular structures that resemble the very first stage of a human embryo.

Torpedoing the usual “sperm meets egg” narrative, the studies engineered the “first complete model of the human embryo” using embryonic stem cells and skin cells—no reproductive cells needed. Bathed in a nutritious liquid, the cells developed into blastocysts, containing the cell types that give rise to all the lineages that build our bodies. The artificial embryos are genetically similar to natural ones, stirring up debate on how long they should be allowed to develop. The nightmare scenario? Imagine a mini-brain growing inside an embryo made out of skin cells!

For now that’s technically impossible, but the ethical quandary has stirred up concern at the International Society for Stem Cell Research (ISSCR), which governs research related to human stem cells and embryos. Yet surprisingly, this year it relaxed the 14-day rule for culturing embryos, giving permission to push embryo research past two weeks. With relaxed guidelines, upcoming studies could reveal what happens to a human embryo after it implants into the uterus, and during gastrulation—when genetic cues lay out the body’s overall patterning and set the stage for organ development.

It’s a decision mired in controversy, but provides an unprecedented opportunity to revise IVF and, for the first time, examine the first stages of human development. It’s also bound to raise ethical quandaries: what if the embryos—natural or artificial—begin developing neurons that fire, or heart cells that pulse? As artificial blastocysts increasingly embody their biological counterparts, one thing is clear: with great power comes great responsibility.

Runners Up

AI predicting proteins: DeepMind and the University of Washington both engineered AI that can solve the structure of a protein based purely on its genetic code. It’s a “once-in-a-generation” advance, a “breakthrough of the year,” and a tool that’ll change structural biology forever. Updates to the original AI can now also predict protein complexes—that is, how one protein unit interacts with another—and even their function. AI is also beginning to solve the structure of RNA, the messenger that bridges DNA to proteins. From synthetic biology to drug development, the impact is yet to come.

AI-designed drugs: It’s been a long time in the making, but the hype is now real. This year, Alphabet, Google’s parent company, launched a new venture called Isomorphic Labs to tackle a new world of drug development using AI. Powerful algorithms are making it increasingly easy to screen drug candidates from millions of chemicals. And the first AI-discovered drug is now going into clinical trials, in a safety test for a lung disease that irreversibly degrades the organ’s function. It’s a significant milestone, and the trial may pave the way for the first AI-discovered, human-tested drug that treats diseases.

In another year of living with Covid-19, it’s clear that the pandemic can’t hold science down. I can’t wait to share the good, the weird, and (holds breath) more “breakthroughs of a generation” biotech stories in 2022.

Image Credit: Schäferle / Pixabay
