Singularity HUB

News and Insights on Technology, Science, and the Future from Singularity University

An ‘Uncrashable’ Car? Luminar Says Its Lidar Can Get There

23 June, 2021 - 16:00

As a recent New York Times article highlighted, self-driving cars are taking longer to come to market than many experts initially predicted. Automated vehicles where riders can sit back, relax, and be delivered to their destinations without having to watch the road are continuously relegated to the “not-too-distant future.”

There’s not just debate over when this driverless future will arrive; there’s also a lack of consensus on how we’ll get there, that is, on which technologies are the most efficient, safe, and scalable route from human-driven to computer-driven (Tesla is the main outlier in this debate). The big players are lidar, cameras, ultrasonic sensors, and radar. Last week, one lidar maker showcased new technology that it believes will tip the scales.

California-based Luminar has built a lidar it calls Iris. Iris not only has a longer range than existing systems, it’s also more compact; gone are the days of a big, bulky setup that all but takes over the car. Perhaps most importantly, the company is aiming to manufacture and sell Iris at a price point well below the industry standard.

Lidar scans a vehicle’s surroundings by sending out pulses of light at or near the visible spectrum, illuminating targets and then analyzing the returning reflections to create high-resolution 3D ‘maps.’ Advances in laser technology and computing speed over the last decade or so have made lidar a more viable technology for widespread use.
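At its core, the ranging step is simple time-of-flight arithmetic: a pulse travels out and back, so the target distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is illustrative, not any vendor's API):

```python
# Lidar ranging boiled down: distance is half the pulse's round-trip
# time times the speed of light.

C = 299_792_458  # speed of light in a vacuum, m/s

def range_from_echo(round_trip_seconds):
    """Distance to a target, given the pulse's round-trip travel time."""
    return C * round_trip_seconds / 2

# An echo arriving ~1.67 microseconds after the pulse left implies a
# target roughly 250 meters away, at the edge of Iris's claimed range.
print(round(range_from_echo(1.668e-6), 1))  # ≈ 250.0 m
```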

Earlier iterations used spinning mirrors to direct the light beams, but that made for clunky systems with lots of moving parts. In 2016 Quanergy unveiled the first solid-state lidar, the S3, steered with a chip containing a million tiny antennas. With a range of 200 meters, the company planned to sell the S3 for $250 (at least three of these would be required to give the vehicle full visibility, putting the cost at $750).

Iris uses light with a wavelength of 1,550 nanometers (905 nanometers is the industry standard). The longer wavelength yields increased visibility, allowing the map to incorporate objects other systems might miss, whether because they’re small, don’t reflect light well, or are too far away.

Luminar says Iris can detect and classify objects up to 250 meters away, or 500 meters for larger objects, and can detect the speed of moving objects in 3D (like a car changing lanes or a pedestrian stepping into the street). Rather than multiple lasers working in concert, Iris has just one laser and an accompanying receiver, with two-axis scanning mirrors giving the lidar a 120-degree by 30-degree field of view.

Luminar CEO Austin Russell estimates Iris will initially be priced at around $1,000, and over time brought down to $500. Just two years ago, Wired reported that industry leader Velodyne’s lidar cost “about $75,000.” Since then, though, Velodyne has also begun work on a solid-state lidar it aims to price below $500.

Luminar plans to integrate Iris into robotaxis and self-driving trucks through a design it’s calling Blade, a sleek gold-colored strip encircling the vehicle and containing all its sensors.

One of the most vocal detractors of lidar has been Elon Musk, who called the technology “a fool’s errand” and said anyone relying on it was “doomed.” A May sighting of a Tesla Model Y outfitted with Luminar lidar caused some speculation about whether Musk was reversing course, but as one analyst pointed out, it’s more likely that Tesla is using lidar to test and validate its own self-driving system, which relies primarily on cameras.

With or without Tesla as a customer, though, Luminar seems to be doing fine: the company made headlines last year when it secured a contract with Volvo, saying the Swedish automaker’s cars would reach Level 3 autonomy in 2022. As reported by The Verge, Luminar also has deals with Audi, Toyota Research Institute, Daimler, and Chinese automaker SAIC, among others.

A lot of the discussion around self-driving cars focuses on the supposed safety improvements the technology will herald. Humans, the story goes, are negligent and at times even reckless, the cause of more than 33,000 fatal crashes and 36,000 deaths per year in the US alone. But put these huge-sounding numbers in context and you could actually argue that humans are very good at driving: there’s about one death from motor vehicle crashes per 100 million miles traveled.
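That per-mile rate is easy to sanity-check. The annual mileage figure below is an assumption (roughly 3.2 trillion vehicle-miles traveled per year in the US, a ballpark pre-pandemic number, not from the article):

```python
# Rough sanity check of the "one death per 100 million miles" claim.
# Both inputs are round numbers; the annual mileage is an assumption.

deaths_per_year = 36_000
vehicle_miles_per_year = 3.2e12  # ~3.2 trillion miles (assumed)

rate = deaths_per_year / (vehicle_miles_per_year / 100_000_000)
print(rate)  # 1.125 -> about one death per 100 million miles
```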

Nevertheless, no fatalities is better than some, and that’s what Luminar wants (well, Luminar and anyone who’s ever driven or ridden in a car). Russell told The Verge that the company is “moving towards the vision of zero collisions, building the uncrashable car.” It will be a while yet before we can determine who’s the better driver, but incremental advances like Iris seem to indicate that computers are (slowly) catching up to us.

Image Credit: Luminar

Category: Transhumanism

The Four Stages of Intelligent Matter That Will Bring Us Iron Man’s ‘Endgame’ Nanosuit

22 June, 2021 - 16:00

Imagine clothing that can warm or cool you, depending on how you’re feeling. Or artificial skin that responds to touch and temperature and automatically wicks away moisture. Or cyborg hands controlled with DNA motors that can adjust based on signals from the outside world.

Welcome to the era of intelligent matter—an unconventional AI computing idea directly woven into the fabric of synthetic matter. Powered by brain-based computing, these materials can form the skins of soft robots or microswarms of drug-delivering nanobots, all while conserving power as they learn and adapt.

Sound like sci-fi? It gets weirder. The key that’ll guide us towards intelligent matter, said Dr. W.H.P. Pernice at the University of Münster and colleagues, is a “brain” distributed across the material’s “body,” a structure far more alien than that of our own minds.

Picture a heated blanket. Rather than being run by a single controller, it’ll have computing circuits sprinkled all over. This computing network can then tap into a type of brain-like process called “neuromorphic computing.” This technological fairy dust transforms a boring blanket into one that learns what temperature you like and at what times of day, predicting your preferences as a new season rolls around.

Oh yeah, and if made from nano-sized building blocks, it could also reshuffle its internal structure to store your info with a built-in memory.

“The long-term goal is de-centralized neuromorphic computing,” said Pernice. Taking inspiration from nature, we can then begin to engineer matter that’s powered by brain-like hardware, running AI across the entire material.

In other words: Iron Man’s Endgame nanosuit? Here we come.

Why Intelligent Matter?

From rockets that could send us to Mars to a plain cotton T-shirt, we’ve done a pretty good job using materials we either developed or harvested. But that’s all they are—passive matter.

In contrast, nature is rich with intelligent matter. Take human skin. It’s waterproof, only selectively allows some molecules in, and protects us from pressure, friction, and most bacteria and viruses. It can also heal itself after a scratch or rip, and it senses outside temperature to cool us down when it gets too hot.

While our skin doesn’t “think” in the traditional sense, it can shuttle information to the brain in a blink. Then the magic happens. With over 100 billion neurons, the brain can run massively parallel computations in its circuits, while consuming only about 20 watts—not too different from the 13-inch MacBook Pro I’m currently typing on. Why can’t a material do the same?

The problem is that our current computing architecture struggles to support brain-like computing because of energy costs and time lags.

Enter neuromorphic computing. It’s an idea that hijacks the brain’s ability to process data simultaneously with minimal energy. To get there, scientists are redesigning computer chips from the ground up. For example, instead of today’s chips that divorce computing modules from memory modules, these chips process information and store it at the same location. It might seem weird, but it’s what our brains do when learning and storing new information. This arrangement slashes the need for wires between memory and computation modules, essentially teleporting information rather than sending it down a traffic-jammed cable.

The end result is massively parallel computing at a very low energy cost.
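As a caricature of the compute-in-memory idea, picture the “weights” living in a crossbar array, with each output formed right where its weights are stored, so no weight traffic ever crosses a memory bus (in real neuromorphic hardware this happens via physics, e.g. Ohm’s and Kirchhoff’s laws). This is an illustrative toy, not any real chip’s API:

```python
# Toy sketch of compute-in-memory: the "synaptic weights" live in a
# crossbar, and each multiply-accumulate happens at the storage site,
# so nothing shuttles between separate memory and compute modules.

class Crossbar:
    def __init__(self, weights):
        # weights[i][j]: strength linking input line j to output line i
        self.weights = weights

    def forward(self, inputs):
        # Each output is a weighted sum of the inputs, computed
        # "where the weights are stored."
        return [sum(w * x for w, x in zip(row, inputs))
                for row in self.weights]

xbar = Crossbar([[0.5, -1.0, 2.0],
                 [1.0,  0.0, 0.5]])
print(xbar.forward([1.0, 2.0, 3.0]))  # [4.5, 2.5]
```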

The Road to Intelligent Matter

In Pernice and his colleagues’ opinion, there are four stages that can get us to intelligent matter.

The first is structural—basically your run-of-the-mill matter that can be complex but can’t change its properties. Think 3D printed frames of a lung or other organs. Intricate, but not adaptable.

Next is responsive matter. This can shift its makeup in response to the environment. Similar to an octopus changing its skin color to hide from predators, these materials can change their shape, color, or stiffness. One example is a 3D printed sunflower embedded with sensors that blossoms or closes depending on heat, force, and light. Another is responsive soft material that can stretch and plug into biological systems, such as an artificial muscle made of silicone that can repeatedly stretch and lift over 13 pounds when heated. While it’s a neat trick, responsive matter doesn’t adapt and can only follow its pre-programmed fate.

Higher up the intelligence food chain are adaptive materials. These have a built-in network to process information, temporarily store it, and adjust behavior from that feedback. One example is micro-swarms of tiny robots that move in a coordinated way, similar to schools of fish or flocks of birds. But because their behavior is also pre-programmed, they can’t learn from or remember their environment.

Finally, there’s intelligent material, which can learn and memorize.

“[It] is able to interact with its environment, learn from the input it receives, and self-regulates its action,” the team wrote.

It starts with four components. The first is a sensor, which captures information from both the outside world and the material’s internal state—think of a temperature sensor on your skin. Next is an actuator, basically something that changes the property of the material. For example, making your skin sweat more as the temperature goes up. The third is a memory unit that can store information long-term and save it as knowledge for the future. Finally, the last is a network—Bluetooth, wireless, or whatnot—that connects each component, similar to nerves in our brains.

“The close interplay between all four functional elements is essential for processing information, which is generated during the entire process of interaction between matter and the environment, to enable learning,” the team said.
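The interplay of the four elements can be sketched as a simple feedback loop. Everything below (the “smart blanket” framing, the class and method names, and the learning rule) is a hypothetical toy for illustration, not anything from the paper:

```python
# Illustrative toy wiring the four elements together: a sensor reads the
# environment, a memory stores a learned preference, an actuator acts on
# the difference, and a step() loop plays the role of the network.

class IntelligentPatch:
    def __init__(self, target=22.0, rate=0.5):
        self.memory = target   # long-term stored preference (deg C)
        self.rate = rate       # how strongly the actuator responds

    def sense(self, env_temp):
        # Sensor: capture the environment's state.
        return env_temp

    def actuate(self, env_temp):
        # Actuator: push the material toward the remembered preference
        # (positive output = heat, negative = cool).
        return self.rate * (self.memory - env_temp)

    def learn(self, user_adjustment):
        # Memory update: nudge the stored preference from feedback.
        self.memory += 0.1 * user_adjustment

    def step(self, env_temp, user_adjustment=0.0):
        # Network: the loop tying sensing, learning, and action together.
        reading = self.sense(env_temp)
        self.learn(user_adjustment)
        return self.actuate(reading)

patch = IntelligentPatch()
print(patch.step(18.0))  # 2.0 -> a cold room triggers heating
```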


Here’s where neuromorphic computing comes in.

“Living organisms, in particular, can be considered as unconventional computing systems,” the authors said. “Programmable and highly interconnected networks are particularly well suited to carrying out these tasks…”

The brain runs on neurons and synapses—the junctions that connect individual neurons into networks. Scientists have tapped into a wide variety of materials to engineer artificial components of the brain connected into networks. Google’s tensor processing unit and IBM’s TrueNorth are both famous examples; they allow computation and memory to occur in the same place, making them especially powerful for running AI algorithms.

But the next step, said the authors, is to distribute these mini brains inside a material while adding sensors and actuators, essentially forming a circuit that mimics the entire human nervous system. For the matter to respond quickly, we may need to tap into other technologies.

One idea is to use light. Chips based on optical neural networks can compute at the speed of light. Another is to build materials that can reflect on their own decisions, with neural networks that listen and learn. Add in matter that can physically change its form based on input—like water turning to ice—and we may have a library of intelligent matter that could transform multiple industries, especially autonomous nanobots and life-like prosthetics.

“A wide variety of technological applications of intelligent matter can be foreseen,” the authors said.

Image Credit: ktsdesign /

Category: Transhumanism

Why Flying Cars Could Be Here Within the Decade

21 June, 2021 - 16:00

Flying cars are almost a byword for the misplaced optimism of technologists, but recent news suggests their future may be on slightly firmer footing. The industry has seen a major influx of capital and big automakers seem to be piling in.

What actually constitutes a flying car has changed many times over the decades since the cartoon The Jetsons introduced the idea to the popular imagination. Today’s incarnation is known more formally as an electric vertical takeoff and landing (eVTOL) aircraft.

As the name suggests, the vehicles run on battery power rather than aviation fuel, and they’re able to take off and land like a helicopter. Designs vary from what are essentially gigantic multi-rotor drones to small fixed-wing aircraft with rotors that can tilt up or down, allowing them to hover or fly horizontally (like an airplane).

Aerospace companies and startups have been working on the idea for a number of years, but recent news suggests it might be coming closer to fruition. Last Monday, major automakers Hyundai and GM said they are developing vehicles of their own and are bullish about the prospects of this new mode of transport.

And the week prior, British flying car maker Vertical Aerospace announced plans to go public in a deal that values the company at $2.2 billion. Vertical Aerospace also said it had received $4 billion worth of preorders, including from American Airlines and Virgin Atlantic.

The deal was the latest installment in a flood of capital into the sector, with competitors Joby Aviation, Archer Aviation, and Lilium all recently announcing deals to go public too. Also joining them is Blade Urban Mobility, which currently operates heliports but plans to accommodate flying cars when they become available.

When exactly that will be is still uncertain, but there seems to be growing consensus that the second half of this decade might be a realistic prospect. Vertical is aiming to start deliveries by 2024. And the other startups, which already have impressive prototypes, are on similar timelines.

Hyundai’s global chief operating officer, José Muñoz, told attendees at Reuters’ Car of the Future conference that the company is targeting a 2025 rollout of an air taxi service, while GM’s vice president of global innovation, Pamela Fletcher, went with a more cautious 2030 target. They’re not the only automakers getting in on the act, with Toyota, Daimler, and China’s Geely all developing vehicles alone or in partnership with startups.

Regulators also seem to be increasingly open to the idea.

In January, the Federal Aviation Administration (FAA) announced it expects to certify the first eVTOLs later this year and have regulations around their operation in place by 2023. And last month the European Union Aviation Safety Agency said it expected air taxi services to be running by 2024 or 2025.

While it seems fairly settled that the earliest flying cars will be taxis rather than private vehicles, a major outstanding question is the extent to which they will be automated.

The majority of prototypes currently rely on a human to pilot them. But earlier this month Larry Page’s air taxi startup Kitty Hawk announced it would buy drone maker 3D Robotics as it seeks to shift to a fully autonomous setup. The FAA recently created a new committee to draft a regulatory path for beyond-visual-line-of-sight (BVLOS) autonomous drone flights. This would likely be a first step along the path to allowing unmanned passenger aircraft.

What seems more certain is that there will be winners and losers in the recent rush to corner the air mobility market. As Chris Bryant points out in Bloomberg, these companies still face a host of technological, regulatory, and social hurdles, and the huge amounts of money flooding into the sector may be hard to justify.

Regardless of which companies make it out the other side, it’s looking increasingly likely that air taxis will be a significant new player in urban transport by the end of the decade.

Image Credit: Joby Aviation

Category: Transhumanism

Each of These Microscopic Glass Beads Stores an Image Encoded on a Strand of DNA

20 June, 2021 - 16:00

Increasingly, civilization’s information is stored digitally, and that storage is abundant and growing. We don’t bother deleting those seven high-definition videos of the ceiling or 20 blurry photos of a table corner taken by our kid. There’s plenty of room on a smartphone or in the cloud, and we count on both increasing every year.

As we fluidly copy information from device to device, this situation seems durable. But that’s not necessarily true.

The amount of data we create is increasing rapidly. And if we (apocalyptically) lost the ability to produce digital storage devices—hard drives or magnetic tape, for example—our civilization’s collective digital record would begin to sprout holes within years. In decades, it’d become all but unreadable. Digital storage isn’t like books or stone tablets. It has a shorter expiration date. And, although we take storage for granted, it’s still expensive and energy hungry.

Which is why researchers are looking for new ways to archive information. And DNA, life’s very own “hard drive,” may be one solution. DNA offers incredibly dense data storage, and under the right conditions, it can keep information intact for millennia.

In recent years, scientists have advanced DNA data storage. They’ve shown how we can encode individual books, photographs, and even GIFs in DNA and then retrieve them. But there hasn’t been a scalable way to organize and retrieve large collections of DNA files. Until now, that is.

In a new Nature Materials paper, a team from MIT and Harvard’s Broad Institute describe a DNA-based storage system that allows them to search for and pull individual files—in this case images encoded in DNA. It’s a bit like thumbing through your file cabinet, reading the paper tabs to identify a folder, and then pulling the deed to your car from it. Only, obviously, the details are a bit more complicated.

“We need new solutions for storing these massive amounts of data that the world is accumulating, especially the archival data,” said Mark Bathe, an MIT professor of biological engineering and senior author of the paper. “DNA is a thousandfold denser than even flash memory, and another property that’s interesting is that once you make the DNA polymer, it doesn’t consume any energy. You can write the DNA and then store it forever.”

How to Organize a DNA Storage System

How does one encode an image in a strand of DNA, anyway? It’s a fairly simple matter of translation.

Each pixel of a digital image is encoded in bits, represented as 1s and 0s. To convert an image into DNA, scientists map these bits to DNA’s four base molecules, or nucleotides: adenine, cytosine, guanine, and thymine, usually referred to in shorthand by the letters A, C, G, and T. The DNA bases A and G, for example, could represent 1, and C and T could represent 0.

Next, researchers string together (or synthesize) a chain of DNA bases representing each and every bit of information in the original file. To retrieve the image, researchers reverse the process, reading the sequence of DNA bases (or sequencing it) and translating the data back into bits.
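Under the A/G-for-1, C/T-for-0 mapping described above, the translation step is just a lookup in each direction. A minimal sketch; real encoding schemes pack more bits per base and add error correction, so treat this as the idea, not the method used in the paper:

```python
# Bit-to-base translation sketch: A/G stand for 1, C/T for 0
# (one bit per nucleotide, one arbitrary base chosen when writing).

BIT_TO_BASE = {"1": "A", "0": "C"}   # one fixed choice per bit value
BASE_TO_BIT = {"A": "1", "G": "1", "C": "0", "T": "0"}

def encode(bits):
    """Translate a bit string into a DNA sequence (synthesis step)."""
    return "".join(BIT_TO_BASE[b] for b in bits)

def decode(seq):
    """Translate a DNA sequence back into bits (sequencing step)."""
    return "".join(BASE_TO_BIT[base] for base in seq)

bits = "10110001"
strand = encode(bits)
print(strand)                      # ACAACCCA
assert decode(strand) == bits      # round trip recovers the file
```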

The standard retrieval process has a few drawbacks, however.

Researchers use a technique called a polymerase chain reaction (PCR) to pull files. Each strand of DNA includes an identifying sequence that matches a short sequence of nucleotides called a PCR primer. When the primer is added to the DNA solution, it bonds with matching DNA strands—the ones we want to read—and only those sequences are amplified (that is, copied for sequencing). The problem? Primers can interact with off-target sequences. Worse, the process uses enzymes that chew up all the DNA.

“You’re kind of burning the haystack to find the needle, because all the other DNA is not getting amplified and you’re basically throwing it away,” said Bathe.

The microscopic glass spheres pictured here are DNA “files.” Each contains an image, encoded in DNA, and is coated in DNA tags describing the image within. Image Credit: Courtesy of the researchers (via MIT News)

To get around this, the Broad Institute team encapsulated the DNA strands in microscopic (6-micron) glass beads. They affixed short, single-stranded DNA labels to the surface of each bead. Like file names, the labels describe the bead’s contents. A tiger image might be labeled “orange,” “cat,” “wild.” A house cat might be labeled “orange,” “cat,” “domestic.” With just four labels per bead, you could uniquely label 10^20 DNA files.

The team can retrieve specific files by adding complementary nucleotide sequences, or primers, corresponding to an individual file’s label. The primers contain fluorescent molecules, and when they link up with a complementary strand—that is, the searched-for label—they form a double helix and glow. Machines separate out the glowing beads, which are opened and the DNA inside sequenced. The rest of the DNA files remain untouched, left in peace to guard their information.
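The lookup logic amounts to complementary base-pairing: a fluorescent probe hybridizes with a bead label exactly when the label is the probe’s Watson-Crick complement. A hedged sketch of that matching step (the file names and label sequences below are made up for illustration):

```python
# Retrieval by hybridization, caricatured: a probe "finds" a bead when
# the bead carries the probe's complementary sequence.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq):
    """Watson-Crick complement of a single-stranded sequence."""
    return "".join(COMPLEMENT[base] for base in seq)

beads = {
    "tiger.jpg": {"ATTC", "GGCA"},     # label strands on each bead
    "housecat.jpg": {"ATTC", "TTAG"},
}

def retrieve(probe):
    """Return files whose bead carries the probe's complement."""
    target = complement(probe)
    return sorted(f for f, labels in beads.items() if target in labels)

print(retrieve("TAAG"))  # ['housecat.jpg', 'tiger.jpg']
```

The untouched beads stay in the pool, which is the point: unlike PCR, nothing else gets amplified or destroyed.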

The best part of the method is its scalability. You could, in theory, have a huge DNA library stored in a test tube—Bathe notes a coffee mug of DNA could store all the world’s data—but without an easy way to search and retrieve the exact file you’re looking for, it’s worthless. With this method, everything can be retrieved.

George Church, a Harvard professor of genetics and well-known figure in the field of synthetic biology, called it a “giant leap” for the field.

“The rapid progress in writing, copying, reading, and low-energy archival data storage in DNA form has left poorly explored opportunities for precise retrieval of data files from huge…databases,” he said. “The new study spectacularly addresses this using a completely independent outer layer of DNA and leveraging different properties of DNA (hybridization rather than sequencing), and moreover, using existing instruments and chemistries.”

This Isn’t Coming For Your Computer

To be clear, all DNA data storage, including the work outlined in this study, remains firmly in the research phase. Don’t expect DNA hard drives for your laptop anytime soon.

Synthesizing DNA is still extremely expensive. It’d cost something like $1 trillion to write a petabyte of data in DNA. To match magnetic tape, a common method of archival data storage, Bathe estimates synthesis costs would have to fall six orders of magnitude. Also, this isn’t the speediest technique (to put it mildly).

The cost of DNA synthesis will fall—the technology is being advanced in other areas as well—and with more work, the speed will improve. But the latter may be beside the point. That is, if we’re mainly concerned with backing up essential data for the long term with minimal energy requirements and no need to regularly access it, then speed is less important than fidelity, data density, and durability.

DNA already stores the living world’s information; now, it seems, it can do the same for all things digital too.

Image Credit: Courtesy of the researchers (via MIT News).

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through June 19)

19 June, 2021 - 16:00

Kill the 5-Day Work Week
Joe Pinsker | The Atlantic
“People who work a four-day week generally report that they’re healthier, happier, and less crunched for time; their employers report that they’re more efficient and more focused. These companies’ success points to a tantalizing possibility: that the conventional approach to work and productivity is fundamentally misguided.”


Flying Car Makers Want to Make ‘Uber Meets Tesla in the Air’
Cade Metz and Erin Griffith | The New York Times
“‘Our dream is to free the world from traffic,’ said Sebastian Thrun, [an] engineer at the heart of this movement. That dream, most experts agree, is a long way from reality. But the idea is gathering steam. Dozens of companies are now building these aircraft, and three recently agreed to go public in deals that value them as high as $6 billion.”


Mathematicians Prove 2D Version of Quantum Gravity Really Works
Charlie Wood | Quanta
“In three towering papers, a team of mathematicians has worked out the details of Liouville quantum field theory, a two-dimensional model of quantum gravity. …The three papers forge a bridge between the pristine world of mathematics and the messy reality of physics—and they do so by breaking new ground in the mathematical field of probability theory. …‘This is a masterpiece in mathematical physics,’ said Xin Sun, a mathematician at the University of Pennsylvania.”


We Asked Giant-Robot Experts to Critique Video Game Mecha
Pearse Anderson | Wired
“It’s all fun and games, but how often do you think about the long-term safety, maintenance, and unintended side effects of giant robots? If these mechs were real, a lot would change—and a lot could go wrong.”


The Father of the Web Is Selling the Source Code as an NFT
Josie Fischels | NPR
“Ever thought about what it would be like to own the World Wide Web? Now you sort of can—well, a digital representation of its source code anyway. …The work includes the original archive of dated and time-stamped files from 1990 and 1991, containing 9,555 lines of source code and original HTML documents that taught the earliest web users how to use the application.”


A Guide to Living at a Black Hole
Paul M. Sutter | Ars Technica
“While your typical space traveler might look for a home around a calm G-type star, some celestial citizens are brave enough to take up refuge around one of these monsters. It’s not an easy life, that’s for sure, but being neighbors with a black hole does mean you’ll almost certainly learn more about the fundamental nature of reality than anybody else.”


Aliens, Science, and Speculation in the Wake of ‘Oumuamua
Matthew Bothwell | Aeon
“Speculation might well be the creative engine of science, but it’s only when flights of imagination are followed up by intellectually honest, rigorous critique that we have a chance of learning more about our world. Many good ideas started off as wild speculation, but so did countless bad ones. …The critical thing, and the key to the scientific process, is the ability to sift the good ideas from the bad.”

Image Credit: Dan Asaki / Unsplash

Category: Transhumanism

Is It Time to Give Up on Consciousness as ‘the Ghost in the Machine’?

17 June, 2021 - 16:00

As individuals, we feel that we know what consciousness is because we experience it daily. It’s that intimate sense of personal awareness we carry around with us, and the accompanying feeling of ownership and control over our thoughts, emotions, and memories.

But science has not yet reached a consensus on the nature of consciousness, which has important implications for our belief in free will and our approach to the study of the human mind.

Beliefs about consciousness can be roughly divided into two camps. There are those who believe consciousness is like a ghost in the machinery of our brains, meriting special attention and study in its own right. And there are those, like us, who challenge this, pointing out that what we call consciousness is just another output generated backstage by our efficient neural machinery.

Over the past 30 years, neuroscientific research has been gradually moving away from the first camp. Using research from cognitive neuropsychology and hypnosis, our recent paper argues in favor of the latter position, even though this seems to undermine the compelling sense of authorship we have over our consciousness.

And we argue this isn’t simply a topic of mere academic interest. Giving up on the ghost of consciousness to focus scientific endeavor on the machinery of our brains could be an essential step we need to take to better understand the human mind.

Is Consciousness Special?

Our experience of consciousness places us firmly in the driver’s seat, with a sense that we’re in control of our psychological world. But seen from an objective perspective, it’s not at all clear that this is how consciousness functions, and there’s still much debate about the fundamental nature of consciousness itself.

One reason for this is that many of us, including scientists, have adopted a dualist position on the nature of consciousness. Dualism is a philosophical view that draws a distinction between the mind and the body. Even though consciousness is generated by the brain—a part of the body—dualism claims that the mind is distinct from our physical features, and that consciousness cannot be understood through the study of the physical brain alone.

It’s easy to see why we believe this to be the case. While every other process in the human body ticks and pulses away without our oversight, there is something uniquely transcendental about our experience of consciousness. It’s no surprise that we’ve treated consciousness as something special, distinct from the automatic systems that keep us breathing and digesting.

But a growing body of evidence from the field of cognitive neuroscience, which studies the biological processes underpinning cognition, challenges this view. Such studies draw attention to the fact that many psychological functions are generated and carried out entirely outside of our subjective awareness, by a range of fast, efficient non-conscious brain systems.

Consider, for example, how effortlessly we regain consciousness each morning after losing it the night before, or how, with no deliberate effort, we instantly recognize and understand shapes, colors, patterns, and faces we encounter.

Consider that we don’t actually experience how our perceptions are created, how our thoughts and sentences are produced, how we recall our memories or how we control our muscles to walk and our tongues to talk. Simply put, we don’t generate or control our thoughts, feelings, or actions; we just seem to become aware of them.

Becoming Aware

The way we simply become aware of thoughts, feelings, and the world around us suggests that our consciousness is generated and controlled backstage, by brain systems that we remain unaware of.

Our recent paper argues that consciousness involves no separate independent psychological process distinct from the brain itself, just as there’s no additional function to digestion that exists separately from the physical workings of the gut.

While it’s clear that both the experience and content of consciousness are real, we argue that, from the standpoint of scientific explanation, they are epiphenomenal: secondary phenomena based on the machinations of the physical brain itself. In other words, our subjective experience of consciousness is real, but the functions of control and ownership we attribute to that experience are not.

Future Study of the Brain

Our position is neither obvious nor intuitive. But we contend that continuing to place consciousness in the driver’s seat, above and beyond the physical workings of the brain, and attributing cognitive functions to it, risks sowing confusion and delaying a better understanding of human psychology and behavior.

To better align psychology with the rest of the natural sciences, and to be consistent with how we understand and study processes like digestion and respiration, we favor a perspective change. We should redirect our efforts to studying the non-conscious brain, and not the functions previously attributed to consciousness.

This doesn’t of course exclude psychological investigation into the nature, origins, and distribution of the belief in consciousness. But it does mean refocusing academic efforts on what happens beneath our awareness, where we argue the real neuro-psychological processes take place.

Our proposal feels personally and emotionally unsatisfying, but we believe it provides a future framework for the investigation of the human mind—one that looks at the brain’s physical machinery rather than the ghost that we’ve traditionally called consciousness.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: chenspec / Pixabay

Kategorie: Transhumanismus

These Mice Were Born From Sperm That Spent Almost 6 Years in Space

16 Červen, 2021 - 16:00

As inconceivable as it still sounds, the wheels have been set in motion for humans to one day reach and colonize Mars. There’s already a detailed design for the first Martian city, SpaceX is building its first offshore spaceport to one day launch Starships to Mars (among other missions), and NASA recently flew a helicopter on Mars. But there are a few big pieces of the puzzle still missing, including the answer to one crucial question: after we get humans to Mars, how will we ensure that our species is able to continue there? In other words, we don’t yet know whether humans can reproduce in space.

A study by Japanese researchers just brought us a small step closer to answering some of these questions. The scientists sent freeze-dried mouse sperm to the International Space Station (ISS), where it stayed for varying lengths of time—from nine months up to five years and ten months—before being brought back to Earth and used to impregnate female mice. The team published their results last week in the journal Science Advances.

Space Station Sperm

Sending sperm to the ISS seems like a bizarre idea. What could this tell us about humans’ potential to reproduce in space, especially since the sperm essentially took an extended field trip then came back, rather than being turned into babies off-Earth?

The team mainly wanted to study the impact of space radiation on DNA and fertility. Radiation can cause damage to somatic cells as well as germline cells, and in space there’s more potential for harm from solar particle events, where the sun sends out high-charge, high-energy particles. These are called HZE particles, and they’re potent enough to break the interwoven strands of DNA’s double helix. Galactic cosmic rays coming from outside the solar system can cause harm, too.

A species can certainly survive with some genetic mutations, but as more of them accumulate and are passed on to new generations, and the DNA of those generations undergoes additional mutations, it’s not long before you’re dealing with an intractable set of problems—or, as the team put it in their paper, “If radiation were continuously irradiated into the body of a species and several mutations accumulated in germ cells over a long period of time, then the species would become a different species.”

By storing the sperm on the ISS, then, the team was able to observe its behavior and its resilience to space radiation, namely whether its DNA was damaged.

What Happened

Out of 66 male mice, the scientists took sperm samples from the 12 that were healthiest and had the most genetic diversity. They divided the sperm into six different boxes, sending three to space and keeping three on Earth as a control group. The boxes sent to space were brought back nine months later, two years and nine months later, and five years and ten months later, respectively (this final timespan, by the way, is the longest that samples have ever been held on the ISS for biological research).

The team tested the repatriated sperm and the embryos created with them for things like abnormal chromosome segregation, cytoplasmic damage, cell number of blastocysts, and apoptosis rate. They found that the time the sperm spent in space didn’t cause damage to their DNA.

The space sperm and the control group sperm were both used to impregnate female mice via IVF, and all the mouse pups were born healthy, with no significant differences between them. Not only that, the team bred an additional generation to check the health of those pups, too, and found no abnormalities. “The space radiation did not affect sperm DNA or fertility after preservation on ISS, and many genetically normal offspring were obtained without reducing the success rate compared to the ground-preserved control,” they wrote.

What We Know, What We Don’t

While these results are promising, they’re really just the beginning of our research on reproducing in space, and there are a few important caveats.

For starters, the International Space Station is only about 250 miles from Earth, and it’s partially shielded from space radiation by Earth’s magnetic field. There are a lot more harmful particles flying around as you get farther away, and radiation on Mars is a major concern for human health in general, before even considering its impact on reproduction.

In addition, while five years and ten months is a long time in terms of biological research done in space, it’s not long relative to an average human lifespan; who’s to say what will happen to germline cells inside human bodies when they’re in space for 10 or 20 years, or when they’re born in space? And radiation isn’t the only wild card up there—there’s also microgravity (as if being pregnant and giving birth with gravity’s help wasn’t already hard enough).

One piece of good news? The study found that freeze-drying sperm actually gives the sperm’s nucleus a higher tolerance for radiation as compared to fresh sperm. This could be relevant for sending samples of germline cells to space and preserving or utilizing them there, perhaps to ensure high genetic diversity for future colonies. A project called the Lunar Ark aims to store DNA on the moon as a “modern global insurance policy.”

Since we’re still decades away from human reproduction actually happening off Earth, what’s next in terms of relevant research? The team notes that NASA is planning to launch a multi-purpose outpost to orbit the moon as part of its Artemis program, and they’re hoping to perform similar research with freeze-dried sperm there to study the effects of radiation deeper in space. “These discoveries are essential and important for mankind to progress into the space age,” they wrote.

Image Credit: Teruhiko Wakayama/University of Yamanashi

Kategorie: Transhumanismus

A Google AI Designed a Computer Chip as Well as a Human Engineer—But Much Faster

15 Červen, 2021 - 16:00

AI has finally come full circle.

A new suite of algorithms by Google Brain can now design computer chips—those specifically tailored for running AI software—that vastly outperform those designed by human experts. And the system works in just a few hours, dramatically slashing the weeks- or months-long process that normally gums up digital innovation.

At the heart of these robotic chip designers is a type of machine learning called deep reinforcement learning. This family of algorithms, loosely based on the human brain’s workings, has triumphed over its biological neural inspirations in games such as Chess, Go, and nearly the entire Atari catalog.

But game play was just these AI agents’ kindergarten training. More recently, they’ve grown to tackle new drugs for Covid-19, solve one of biology’s grandest challenges, and reveal secrets of the human brain.

In the new study, deep reinforcement learning is flexing its muscles in the real world once again, this time by crafting the hardware that allows it to run more efficiently. The team cleverly adopted elements of game play into the chip design challenge, resulting in designs that were utterly “strange and alien” to human designers, but nevertheless worked beautifully.

It’s not just theory. A number of the AI’s chip design elements were incorporated into Google’s tensor processing unit (TPU), the company’s AI accelerator chip, which was designed to help AI algorithms run more quickly and efficiently.

“That was our vision with this work,” said study author Anna Goldie. “Now that machine learning has become so capable (and that’s all thanks to advancements in hardware and systems), can we use AI to design better systems to run the AI algorithms of the future?”

The Science and Art of Chip Design

I don’t generally think about the microchips in my phone, laptop, and a gazillion other devices spread across my home. But they’re the bedrock—the hardware “brain”—that controls these beloved devices.

Often no larger than a fingernail, microchips are exquisite feats of engineering that pack tens of millions of components to optimize computations. In everyday terms, a badly-designed chip means slow loading times and the spinning wheel of death—something no one wants.

The crux of chip design is a process called “floorplanning,” said Dr. Andrew Kahng, at the University of California, San Diego, who was not involved in this study. Similar to arranging your furniture after moving into a new space, chip floorplanning involves shifting the location of different memory and logic components on a chip so as to optimize processing speed and power efficiency.

It’s a horribly difficult task. Each chip contains millions of logic gates, which are used for computation. Scattered alongside these are thousands of memory blocks, called macro blocks, which save data. These two main components are then interlinked through tens of miles of wiring so the chip performs as optimally as possible—in terms of speed, heat generation, and energy consumption.

“Given this staggering complexity, the chip-design process itself is another miracle—in which the efforts of engineers, aided by specialized software tools, keep the complexity in check,” explained Kahng. Often, floorplanning takes weeks or even months of painstaking trial and error by human experts.

Yet even with six decades of study, the process is still a mixture of science and art. “So far, the floorplanning task, in particular, has defied all attempts at automation,” said Kahng. One estimate puts the number of different configurations for just the placement of “memory” macro blocks at about 10^2,500, vastly more than the number of stars in the universe.

Game Play to the Rescue

Given this complexity, it seems crazy to try automating the process. But Google Brain did just that, with a clever twist.

If you think of macro blocks and other components as chess pieces, then chip design becomes a sort of game, similar to those previously mastered by deep reinforcement learning. The agent’s task is to sequentially place macro blocks, one by one, onto a chip in an optimized manner to win the game. Of course, any naïve AI agent would struggle. As background learning, the team trained their agent with over 10,000 chip floorplans. With that library of knowledge, the agent could then explore various alternatives.

During the design, it worked with a type of “trial-and-error” process that’s similar to how we learn. At any stage of developing the floorplan, the AI agent assesses how it’s doing using a learned strategy, and decides on the most optimal way to move forward—that is, where to place the next component.

“It starts out with a blank canvas, and places each component of the chip, one at a time, onto the canvas. At the very end it gets a score—a reward—based on how well it did,” explained Goldie. The feedback is then used to update the entire artificial neural network, which forms the basis of the AI agent, and get it ready for another go-around.

The score is carefully crafted to follow the constraints of chip design, which aren’t always the same. Each chip is its own game. A chip deployed in a data center, for example, will need to optimize power consumption, while a chip for self-driving cars should care more about latency so it can rapidly detect any potential dangers.
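The episode structure described in the paragraphs above (sequential placement onto a blank canvas, with a single reward at the end) can be sketched in a few lines. Everything below, including the grid size, the net list, and the use of plain random search in place of a learned neural policy, is invented for illustration and is not Google’s actual method:

```python
import random

GRID = 4                                  # 4x4 canvas (hypothetical)
N_BLOCKS = 4                              # number of macro blocks to place
NETS = [(0, 1), (1, 2), (0, 3), (2, 3)]   # hypothetical wires between blocks

def score(placement):
    """Terminal reward: negative total Manhattan wirelength."""
    return -sum(abs(placement[a][0] - placement[b][0]) +
                abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def run_episode(rng):
    """One 'game': place blocks one at a time; the reward arrives only at the end."""
    free = [(x, y) for x in range(GRID) for y in range(GRID)]
    placement = []
    for _ in range(N_BLOCKS):
        cell = rng.choice(free)           # a learned policy would choose here
        free.remove(cell)
        placement.append(cell)
    return placement, score(placement)

rng = random.Random(0)
best = max((run_episode(rng) for _ in range(2000)), key=lambda ep: ep[1])
print(best)
```

In the real system, the terminal score feeds back into a neural network that improves its placement choices over many episodes, rather than relying on blind sampling as this toy does.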

The Bio-Chip

Using this approach, the team didn’t just find a single chip design solution. Their AI agent was able to adapt and generalize, needing just six extra hours of computation to identify optimized solutions for any specific needs.

“Making our algorithm generalize across these different contexts was a much bigger challenge than just having an algorithm that would work for one specific chip,” said Goldie.

It’s a sort of “one-shot” mode of learning, said Kahng, in that it can produce floorplans “superior to those developed by human experts for existing chips.” A main throughline seemed to be that the AI agent laid down macro blocks in decreasing order of size. But what stood out was just how alien the designs were. The placements were “rounded and organic,” a massive departure from conventional chip designs with angular edges and sharp corners.

Human designers thought “there was no way that this is going to be high quality. They almost didn’t want to evaluate them,” said Goldie.

But the team pushed the project from theory to practice. In January, Google integrated some AI-designed elements into their next-generation AI processors. While specifics are being kept under wraps, the solutions were intriguing enough for millions of copies to be physically manufactured.

The team plans to release its code for the broader community to further optimize—and understand—the machine’s brain for chip design. What seems like magic today could provide insights into even better floorplan designs, extending the gradually-slowing (or dying) Moore’s Law to further bolster our computational hardware. Even tiny improvements in speed or power consumption in computing could make a massive difference.

“We can…expect the semiconductor industry to redouble its interest in replicating the authors’ work, and to pursue a host of similar applications throughout the chip-design process,” said Kahng.

“The level of the impact that [a new generation of chips] can have on the carbon footprint of machine learning, given it’s deployed in all sorts of different data centers, is really valuable. Even one day earlier, it makes a big difference,” said Goldie.

Image Credit: Laura Ockel / Unsplash

Kategorie: Transhumanismus

Scientists in Spain Just Got a Step Closer to Building a Practical Quantum Repeater

14 Červen, 2021 - 16:00

A quantum internet could play a key role in tying together many of the most promising applications for quantum technologies. The main impetus for quantum communication networks today is security, because a feature of messages encoded in quantum states is that reading them changes their content, alerting the receiver to any eavesdropping.

But being able to share quantum states over large distances has other promising applications as well. For a start, it could allow quantum computers in different locations to share data, creating distributed quantum computers much more powerful than their individual components.

It could also make it possible to create instantaneous links between large networks of quantum sensors or atomic clocks, to measure phenomena like gravitational waves with unprecedented resolution or provide ultra-precise timekeeping.

The big problem at the moment is that creating quantum connections over long distances is tough. Now though, researchers from the Institute of Photonic Sciences in Spain have demonstrated a new kind of quantum memory that could help build repeaters that can greatly extend the range of quantum networks.

Repeaters are a standard piece of telecoms equipment used in conventional communications networks. Because electrical and optical signals gradually dissipate as they pass through cables, repeaters that read the signal and retransmit it are added at regular intervals to ensure the signals don’t lose strength or fidelity.

But the same features that make quantum communication so secure also preclude the use of traditional repeaters, because reading the signal will scramble it. So instead, quantum repeaters will have to rely on entanglement—the phenomenon Einstein dubbed “spooky action at a distance.”

When two particles interact, their quantum states can become entangled so that even when separated by great distances, measuring one will tell you the state of the other. Combined with a classical signal, this can be used to transfer quantum information over long distances, and if you can chain multiple entanglements together that would serve as an effective way of boosting a quantum signal without having to read and retransmit it.

The challenge is to hold all these delicate quantum states together long enough to get your message across, which is where the memory comes in. If each repeater is able to store an entanglement for a short while, it gives you time to link all stages of the network together.

Last year, Chinese researchers linked together quantum memories 30 miles apart, but they used a gas of rubidium atoms held in place with lasers. The setup had a variety of issues that made it incompatible with standard telecoms networks.

The new setup, reported in Nature, has a number of characteristics that bring it far closer to a practical quantum repeater. For a start, they’ve swapped out the gas cloud for a solid crystal embedded with rare earth elements. The signals can also be transmitted over standard telecoms lines, the entanglement is reliable and measurable, and the memories can hold multiple entanglements at once.

The system works by generating a pair of entangled photons and firing one at the memory and one down an optical fiber. This happens at two identical nodes at either end of the optical fiber. Halfway down the connecting cable is a beamsplitter, which mixes the two messenger photons together in such a way that the information about which node they came from is erased.

These photons are detected as they exit the beamsplitter, and because it’s impossible to tell which node they came from, it’s impossible to tell which memory the photon they were entangled with is in. Under the strange rules of quantum mechanics, this uncertainty means the stored photon is in both memories simultaneously, which results in an entanglement between the two nodes.
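The heralding step can be illustrated with an idealized entanglement-swapping sketch in NumPy (a textbook toy model, not a simulation of the actual experiment): two Bell pairs are created, the two messenger photons are projected onto a joint Bell state (the role played here by the beamsplitter and detectors), and the two photons that never interacted come out entangled:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>) / sqrt(2)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Two independent Bell pairs: (A, B) at one node, (C, D) at the other.
# Axis order of the 4-qubit state tensor: (A, B, C, D).
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)

# Project the messenger photons B and C onto |Phi+>, an idealized
# stand-in for the beamsplitter + detection that erases which-node info.
bell_bc = phi_plus.reshape(2, 2)
ad = np.einsum('abcd,bc->ad', state, bell_bc)

prob = np.sum(np.abs(ad) ** 2)            # this outcome occurs with probability 1/4
ad_state = (ad / np.sqrt(prob)).reshape(4)

# Photons A and D, which never met, are now themselves in |Phi+>.
print(np.allclose(ad_state, phi_plus))    # True
print(round(prob, 3))                     # 0.25
```

A repeater chain applies this swap segment by segment, extending entanglement link by link without ever reading the signal itself.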

In the initial experiments, the two nodes were only separated by 165 feet of optical fiber, though the crystal memories can hold their state for 25 microseconds, which should be enough time to entangle memories up to 3 miles away.
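That 3-mile figure follows from a back-of-envelope calculation, assuming light in optical fiber travels at roughly two-thirds of its vacuum speed (refractive index of about 1.5):

```python
c = 299_792_458              # m/s, speed of light in vacuum
v_fiber = 2 * c / 3          # ~2e8 m/s in glass fiber (assumed n ~ 1.5)
storage_time = 25e-6         # seconds the crystal memory holds its state

distance_miles = v_fiber * storage_time / 1609.34   # meters per mile
print(round(distance_miles, 1))  # 3.1
```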

That’s still not very far, and would require a considerable number of complex nodes to propagate a signal any significant distance. But the researchers highlight several potential improvements that could boost storage times, which could increase the distance that could be covered.

While it’s likely to take considerable extra finessing before the technology is ready for real-world demonstrations, this is a big step towards practical devices that will make up a core part of the future quantum internet.

Image Credit: sakkmesterke

Kategorie: Transhumanismus

Why It Took 20 Years to ‘Finish’ the Human Genome—and Why There’s Still More to Do

13 Červen, 2021 - 16:00

The release of the draft human genome sequence in 2001 was a seismic moment in our understanding of the human genome, and paved the way for advances in our understanding of the genomic basis of human biology and disease.

But sections were left unsequenced, and some sequence information was incorrect. Now, two decades later, we have a much more complete version, published as a preprint (which is yet to undergo peer review) by an international consortium of researchers.

Technological limitations meant the original draft human genome sequence covered just the “euchromatic” portion of the genome—the 92% of our genome where most genes are found, and which is most active in making gene products such as RNA and proteins.

The newly updated sequence fills in most of the remaining gaps, providing the full 3.055 billion base pairs (“letters”) of our DNA code in its entirety. This data has been made publicly available, in the hope other researchers will use it to further their research.

Why Did It Take 20 Years?

Much of the newly sequenced material is the “heterochromatic” part of the genome, which is more “tightly packed” than the euchromatic genome and contains many highly repetitive sequences that are very challenging to read accurately.

These regions were once thought not to contain any important genetic information but they are now known to contain genes that are involved in fundamentally important processes such as the formation of organs during embryonic development. Among the 200 million newly sequenced base pairs are an estimated 115 genes predicted to be involved in producing proteins.

Two key factors made the completion of the human genome possible:

1. Choosing a very special cell type

The newly published genome sequence was created using human cells derived from a very rare type of tissue called a complete hydatidiform mole, which occurs when a fertilized egg loses all the genetic material contributed to it by the mother.

Most cells contain two copies of each chromosome, one from each parent, with each parent’s chromosome contributing a different DNA sequence. A cell from a complete hydatidiform mole has two copies of the father’s chromosomes only, and the genetic sequence of each pair of chromosomes is identical. This makes the full genome sequence much easier to piece together.

2. Advances in sequencing technology

After decades of glacial progress, the Human Genome Project achieved its 2001 breakthrough by pioneering a method called “shotgun sequencing,” which involved breaking the genome into very small fragments of about 200 base pairs, cloning them inside bacteria, deciphering their sequences, and then piecing them back together like a giant jigsaw.

This was the main reason the original draft covered only the euchromatic regions of the genome—only these regions could be reliably sequenced using this method.
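The jigsaw analogy can be made concrete with a toy greedy assembler. The reads below are made up for illustration, and real assembly pipelines must additionally handle sequencing errors, coverage gaps, and exactly the highly repetitive sequences that kept heterochromatin out of reach for this method:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of `a` that is a prefix of `b`."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(frags):
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = list(frags)
    while len(frags) > 1:
        best = (0, 0, 1)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        if n == 0:
            break  # no overlaps left; real assemblers handle gaps and repeats
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

reads = ["GATTACAGGT", "CAGGTTCCAA", "TCCAATGCGA"]
print(greedy_assemble(reads))  # GATTACAGGTTCCAATGCGA
```

Note how identical repeats would break this: if two distant regions share a long repeated sequence, the greedy merge cannot tell which copy a read belongs to, which is why the heterochromatic regions needed much longer reads.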

The latest sequence was deduced using two complementary new DNA-sequencing technologies. One was developed by PacBio, and allows longer DNA fragments to be sequenced with very high accuracy. The second, developed by Oxford Nanopore, produces ultra-long stretches of continuous DNA sequence. These new technologies allow the jigsaw pieces to be thousands or even millions of base pairs long, making them easier to assemble.

The new information has the potential to advance our understanding of human biology including how chromosomes function and maintain their structure. It is also going to improve our understanding of genetic conditions such as Down syndrome that have an underlying chromosomal abnormality.

Is the Genome Now Completely Sequenced?

Well, no. An obvious omission is the Y chromosome, because the complete hydatidiform mole cells used to compile this sequence contained two identical copies of the X chromosome. However, this work is underway and the researchers anticipate their method can also accurately sequence the Y chromosome, despite it having highly repetitive sequences.

Even though sequencing the (almost) complete genome of a human cell is an extremely impressive landmark, it is just one of several crucial steps towards fully understanding humans’ genetic diversity.

The next job will be to study the genomes of diverse populations (the complete hydatidiform mole cells were European). Once the new technology has matured sufficiently to be used routinely to sequence many different human genomes, from different populations, it will be better positioned to make a more significant impact on our understanding of human history, biology, and health.

Both care and technological development will be needed to ensure this research reflects the full diversity of the human genome, so that discoveries aren’t limited to specific populations in ways that exacerbate health disparities.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Arek Socha / Pixabay

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through June 12)

12 Červen, 2021 - 16:00

These Creepy Fake Humans Represent a New Age in AI
Karen Hao | MIT Technology Review
“[The simulated humans] are synthetic data designed to feed the growing appetite of deep-learning algorithms. Firms like Datagen offer a compelling alternative to the expensive and time-consuming process of gathering real-world data. They will make it for you: how you want it, when you want—and relatively cheaply.”


For $2,700, You Too Can Have Your Very Own Robotic Dog
Victoria Song | Gizmodo
“You’re probably familiar with Spot, Boston Dynamics’ highly advanced, nightmare-inducing robot dog. And while it went on sale last year, few of us have an extra $74,500 lying around to buy one. However, Chinese firm Unitree Robotics has a similar quadruped bot that’s not only a fraction of the size, but it also starts at a mere $2,700. For an advanced robot dog, that’s actually pretty dang affordable.”


Terran R Rocket From Relativity Space Will Be Completely 3D Printed, Completely Reusable
Evan Ackerman | IEEE Spectrum
“This week, Relativity Space is announcing the Terran R, a 65 meter tall entirely 3D-printed two stage launch vehicle capable of delivering 20,000 kg into low Earth orbit and then returning all of its bits and pieces safely back to the ground to be launched all over again. Relativity Space’s special sauce is that they 3D print as close to absolutely everything as they possibly can, reducing the part count of their rockets by several orders of magnitude.”


Wake Forest Teams Win a NASA Prize for 3D Printing Human Liver Tissue
A. Tarantola | Engadget
“‘I cannot overstate what an impressive accomplishment this is. When NASA started this challenge in 2016, we weren’t sure there would be a winner,’ Jim Reuter, NASA associate administrator for space technology, said in a recent press statement. ‘It will be exceptional to hear about the first artificial organ transplant one day and think this novel NASA challenge might have played a small role in making it happen.’”


How Risky Is It to Send Jeff Bezos to Space?
Eric Niiler | Wired
“The rich-guy space race between Bezos and Branson (SpaceX’s Elon Musk is the odd man out for now) may convince other well-heeled space tourists who want assurances that a rocket ride is both fun and safe. But experts note that space travel is always risky, even when spacecraft have undergone years of testing. Blue Origin’s flight will be its first launch with human passengers; previous flights have only carried a mannequin. For Virgin Galactic, it will be only the second time the rocket plane has carried people.”


OpenAI Claims to Have Mitigated Bias and Toxicity in GPT-3
Kyle Wiggers | VentureBeat
“In a study published today, OpenAI, the lab best known for its research on large language models, claims it’s discovered a way to improve the ‘behavior’ of language models with respect to ethical, moral, and societal values. The approach, OpenAI says, can give developers the tools to dictate the tone and personality of a model depending on the prompt that the model’s given.”


Neuroscientists Have Discovered a Phenomenon That They Can’t Explain
Ed Yong | The Atlantic
“Put it this way: The neurons that represented the smell of an apple in May and those that represented the same smell in June were as different from each other as those that represent the smells of apples and grass at any one time. …’Scientists are meant to know what’s going on, but in this particular case, we are deeply confused. We expect it to take many years to iron out,’ [said neuroscientist Carl Schoonover].”


Global Banking Regulators Call for Toughest Rules for Cryptocurrencies
Kalyeena Makortoff | The Guardian
“The Basel Committee on Banking Supervision, which consists of regulators from the world’s leading financial centres, is proposing a ‘new conservative prudential treatment’ for crypto-assets that would force banks to put aside enough capital to cover 100% of potential losses. That would be the highest capital requirement of any asset, illustrating that cryptocurrencies and related investments are seen as far more risky and volatile than conventional stocks or bonds.”


DNA Jumps Between Species. Nobody Knows How Often.
Christie Wilcox | Quanta
“Recent studies of a range of animals—other fish, reptiles, birds and mammals—point to a similar conclusion: The lateral inheritance of DNA, once thought to be exclusive to microbes, occurs on branches throughout the tree of life.”


Italy’s Failed Digital Democracy Dream Is a Warning
Michele Barbero | Wired UK
“Aside from the Five Star’s shortcomings and latest woes, however, citizens’ direct participation in party politics by means of digital tools is likely to pick up pace in the near future. ‘We are going to see more and more the use of the internet to delegate powers to party members,’ says D’Alimonte: ‘The internet is changing the functioning of democracy, we are just at the beginning.’”

Image Credit: baikang yuan / Unsplash

Kategorie: Transhumanismus

Waymo Self-Driving Trucks Will Soon Start Moving Freight Across Texas

11 Červen, 2021 - 16:00

Last month, self-driving technology company TuSimple shipped a truckload of watermelons across the state of Texas ten hours faster than normal. They did this by using their automated driving system for over 900 miles of the journey. The test drive was considered a success, and marked the beginning of a partnership between TuSimple and produce distributor Giumarra. This is one of the first such partnerships announced, but TuSimple may soon have some competition from another big player in the driverless vehicles game: Alphabet Inc. subsidiary Waymo.

Yesterday, Waymo announced a partnership with transportation logistics company JB Hunt to move cargo in automated trucks in Texas. The first route they’ll drive is between Houston and Fort Worth, which Waymo claims is “one of the most highly utilized freight corridors in the country.”

Houston to Fort Worth on I-45

At around 260 miles long, much of the route is a straight shot on Interstate 45. The trucks will have human safety drivers on board who will likely take over some of the city driving portions, but the goal is to use the automated system as much as possible. A software technician will be on board as well, which makes sense given software will be doing the bulk of the driving.

Waymo has been testing driverless trucks in Texas since last August, when it established a hub in Dallas from which to deploy its fleet of 18-wheelers, complete with cameras, lidar, and on-board computers. It’s no coincidence that the Lone Star State is getting so much driverless action; its mild climate and vast highway network make for a lot of space to drive and a lot of time throughout the year to do so—and a 2017 bill allows vehicles to operate without a driver there.

There are five levels of automation in driving, six if you count Level 0, where there’s no automation at all and a human driver is fully in control at all times. At Level 5—full autonomy—the vehicle can drive itself anywhere (around cities, on highways, on rural roads, etc.) in any conditions (rain, sun, fog, etc.) without human intervention.
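As a quick reference, the six SAE J3016 automation levels can be captured in a small lookup table (the descriptions are paraphrased here, not the standard’s official wording):

```python
# Paraphrased summary of the SAE J3016 driving automation levels.
SAE_LEVELS = {
    0: "No automation: human drives at all times",
    1: "Driver assistance: steering OR speed assisted",
    2: "Partial automation: steering AND speed assisted; human monitors",
    3: "Conditional automation: system drives; human must take over on request",
    4: "High automation: no human needed within a defined domain (e.g. good weather)",
    5: "Full automation: drives anywhere, in any conditions",
}

def describe(level):
    return f"Level {level}: {SAE_LEVELS[level]}"

print(describe(4))  # the level attributed to Waymo Driver in this article
```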

Waymo’s technology, called Waymo Driver (“the world’s most experienced driver,” if the company has its way) is considered Level 4, which means it could operate without a safety driver under certain conditions (namely, good weather). “This will be one of the first opportunities for JB Hunt to receive data and feedback on customer freight moved with a Class 8 tractor operating at this level of autonomy,” said Craig Harper, JB Hunt’s chief sustainability officer and executive vice president.

Despite looming concerns over job losses due to automation in trucking, proponents say that not only will self-driving technology help fill an ongoing shortage of drivers, it will improve safety on roads and decrease food waste, since shipments will arrive to their final destinations faster. JB Hunt, Harper said, believes “there will be a need for highly skilled, professional drivers for many years to come,” but autonomous technologies will improve efficiency and safety.

Whether it’s TuSimple, Waymo, or other players left to enter the field (or in this case, the state of Texas), it seems we’ll soon be finding out if that’s the case.

Image Credit: Waymo

Kategorie: Transhumanismus

This Drone Bus Will Carry 40 Passengers Between Cities for the Price of a Train Ticket

10 Červen, 2021 - 16:00

Multiple companies are working on new aerial modes of transportation, be they taxis that fly, drones that drive, or cars that drive and fly. What most of these vehicles have in common is that they’re intended for just a few people to ride in at once, like airborne Ubers. But a New York-based startup is thinking bigger, quite literally: Kelekona is developing an electric vertical takeoff and landing (eVTOL) vehicle that will be able to transport 40 people at once.

The aircraft’s design is sleek and futuristic, with a flat shape not unlike a UFO. But for all the apparent flair of the design, Kelekona’s founder, Braeden Kelekona, actually has practicality on the brain; he told Digital Trends, “We have a really small airspace in New York. It never made sense to us to create a small aircraft that was only able to carry up to six people. You have to have the kind of mass transit we rely on here in the city. It makes sense to try to move as many people as possible in one aircraft, so that we’re not hogging airspace.”

He’s got a point. There’s a lot more space in the sky than on the ground, obviously, but flight paths need to be carefully planned and contained within specific areas, particularly in and near big cities. If flying taxis became affordable enough for people to use them the way we use Uber and Lyft today, there would quickly be all sorts of issues with traffic and congestion, both in the sky and with takeoff and landing space on the ground. So why not take a scaled approach from the beginning?

Speaking of affordability, Kelekona says that’s a priority, too. It may play out differently, especially in the technology’s early stages, but the intention is for tickets on the drone bus to cost the same as a train ticket for an equivalent distance. The first route, from Manhattan to the Hamptons, will reportedly have a 30-minute flight time and an $85 ticket price.

Other intended routes include Los Angeles to San Francisco, New York City to Washington DC, and London to Paris—all in an hour, which is comparable to the time it takes for a regular flight right now. One of the differences, ideally, will be that the eVTOLs will be able to land and take off closer to city centers, given that they won’t require long runways.

For this same reason, the company also envisions a streamlined approach to connecting warehouses; its aircraft will be able to carry 12 to 24 shipping containers, or a 10,000-pound cargo payload.

Moving that much weight, plus the weight of the aircraft itself, will demand a lot of battery power. The aircraft’s body will be made of 3D printed composite and aluminum and equipped with eight thrust vectoring fans with propellers whose pitch can change for the different stages of flight: vertical takeoff, forward flight, and landing. All of this will be built around a giant modular battery pack.

“Instead of building an interesting airframe and then trying to figure out how to put the battery into that aircraft, we started with the battery first and put things on top of it,” Kelekona said. The battery pack will have 3.6 megawatt hours of capacity, and will be built for easy swapping out with new iterations as battery technology continues to improve. The aircraft’s energy requirements will likely be the biggest challenge Kelekona faces in its design, production, and launch; at present, the aircraft is still in the computer simulation phase.

A British company called GKN Aerospace is developing a similar concept. Announced in February, Skybus would fit 30 to 50 passengers and is intended for “mass transit over extremely congested routes.” Though it's also designed for vertical takeoff and landing, the aircraft has large wings on either side; these would make it more challenging to find adaptable space in urban areas.

Kelekona plans to start with cargo-only routes, with passenger routes planned for 2024, pending approval by the FAA.

Image Credit: Kelekona

Kategorie: Transhumanismus

NASA Is Returning to Venus, Where It’s 470°C. Will We Find Life When We Get There?

9 Červen, 2021 - 16:00

NASA has selected two missions, dubbed DAVINCI+ and VERITAS, to study the “lost habitable” world of Venus. Each mission will receive approximately $500 million for development and both are expected to launch between 2028 and 2030.

It had long been thought there was no life on Venus, due to its extremely high temperatures. But late last year, scientists studying the planet’s atmosphere announced the surprising (and somewhat controversial) discovery of phosphine. On Earth, this chemical is produced primarily by living organisms.

The news sparked renewed interest in Earth’s “twin,” prompting NASA to plan state-of-the-art missions to look more closely at the planetary environment of Venus—which could hint at life-bearing conditions.

Conditions for Life

Ever since the Hubble Space Telescope revealed the sheer number of nearby galaxies, astronomers have become obsessed with searching for exoplanets in other star systems, particularly ones that appear habitable.

But there are certain criteria for a planet to be considered habitable. It must have a suitable temperature, atmospheric pressure similar to Earth’s, and available water.

In this regard, Venus probably wouldn’t have attracted much attention if it were outside our solar system. Its skies are filled with thick clouds of sulphuric acid (which is dangerous for humans), the land is a desolate backdrop of extinct volcanoes, and 90 percent of the surface is covered in red-hot lava flows.

Despite this, NASA will search the planet for environmental conditions that may have once supported life. In particular, any evidence that Venus may have once had an ocean would change all our existing models of the planet.

And interestingly, conditions on Venus are far less harsh at a height of about 50 kilometers above the surface. In fact, the pressure at these higher altitudes eases so much that conditions become much more Earth-like, with breathable air and balmy temperatures.

If life (in the form of microbes) does exist on Venus, this is probably where it would be found.

The DAVINCI+ Probe

NASA’s DAVINCI+ (Deep Atmosphere Venus Investigation of Noble gases, Chemistry, and Imaging) mission has several science goals, relating to:

Atmospheric origin and evolution

It will aim to understand the origins of Venus’s atmosphere, focusing on how it first formed, how it evolved, and how (and why) it is different from the atmospheres of Earth and Mars.

Atmospheric composition and surface interaction

This will involve understanding the history of water on Venus and the chemical processes at work in its lower atmosphere. It will also try to determine whether Venus ever had an ocean. Since life on Earth started in our oceans, this would become the starting point in any search for life.

Surface properties

These findings could shed light on how Venus and Earth began similarly and then diverged in their evolution. This aspect of the mission will provide insights into geographically complex tessera regions on Venus (which have highly deformed terrain), and will investigate their origins and tectonic, volcanic, and weathering history.

The DAVINCI+ spacecraft, upon arrival at Venus, will drop a spherical probe full of sensitive instruments through the planet’s atmosphere. During its descent, the probe will sample the air, constantly measuring the atmosphere as it falls and returning the measurements back to the orbiting spacecraft.

The probe will carry a mass spectrometer, which can measure the mass of different molecules in a sample. This will be used to detect any noble gases or other trace gases in Venus’s atmosphere.

In-flight sensors will also help measure the dynamics of the atmosphere, and a camera will take high-contrast images during the probe’s descent. Only four spacecraft have ever returned images from the surface of Venus, and the last such photo was taken in 1982.


Meanwhile, the VERITAS (Venus Emissivity, Radio Science, InSAR, Topography, and Spectroscopy) mission will map surface features to determine the planet’s geologic history and further understand why it developed so differently to Earth.

Historical geology provides important information about ancient changes in climate, volcanic eruptions and earthquakes. This data can be used to anticipate the possible size and frequency of future events.

The mission will also seek to understand the internal geodynamics that shaped the planet. In other words, we may be able to build a picture of Venus’s continental plate movements and compare it with Earth’s.

In parallel with DAVINCI+, VERITAS will take planet-wide, high-resolution topographic images of Venus’s surface, mapping surface features including mountains and valleys.

At the same time, the Venus Emissivity Mapper (VEM) instrument on board the orbiting VERITAS spacecraft will map emissions of gas from the surface, with such accuracy that it will be able to detect near-surface water vapor. Its sensors are so powerful they will be able to see through the thick clouds of sulphuric acid.

Key Insight Into Conditions on Venus

The most exciting thing about these two missions is the orbit-to-surface probe. In the 1980s, four landers made it to the surface of Venus, but the crushing conditions meant none could operate for long; the longest-lived survived only around two hours. The pressure there is 93 bar, which is the same as being 900 meters below sea level on Earth.
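That depth comparison checks out with basic hydrostatics. A rough back-of-the-envelope calculation, assuming a typical seawater density of about 1,025 kg/m³:

```python
# Rough sanity check of the comparison: hydrostatic pressure at 900 m of
# seawater, plus 1 atm at the surface. Assumes density ~1025 kg/m^3.
rho = 1025.0        # kg/m^3, typical seawater density
g = 9.81            # m/s^2, gravitational acceleration
depth = 900.0       # m
p_pascal = rho * g * depth + 101_325   # hydrostatic term + surface pressure
p_bar = p_pascal / 100_000             # 1 bar = 100,000 Pa
print(f"{p_bar:.0f} bar")  # ~92 bar, close to Venus's ~93 bar surface pressure
```

A few percent either way depending on the density assumed, but the 900-meter figure is in the right ballpark.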

Then there’s the lava. Many lava flows on Venus stretch for several hundred kilometers. And this lava’s mobility may be enhanced by the planet’s average surface temperature of about 470°C.

Meanwhile, “shield” volcanoes on Venus are an impressive 700 kilometers wide at the base, but only about 5.5 kilometers high on average. The largest shield volcano on Earth, Mauna Loa in Hawaii, is only 120 kilometers wide at the base.

The information obtained from DAVINCI+ and VERITAS will provide crucial insight into not only how Venus formed, but how any rocky, life-giving planet forms. Ideally, this will equip us with valuable markers to look for when searching for habitable worlds outside our solar system.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA/JPL

Kategorie: Transhumanismus

Scientists Used CRISPR to Engineer a New ‘Superbug’ That’s Invincible to All Viruses

8 Červen, 2021 - 16:00

Can we reprogram existing life at will?

To synthetic biologists, the answer is yes. The central code for biology is simple. DNA letters, in groups of three, are translated into amino acids—Lego blocks that make proteins. Proteins build our bodies, regulate our metabolism, and allow us to function as living beings. Designing custom proteins often means you can redesign small aspects of life—for example, getting a bacteria to pump out life-saving drugs like insulin.

All life on Earth follows this rule: a combination of 64 DNA triplet codes, or “codons,” are translated into 20 amino acids.

But wait. The math doesn’t add up. Why wouldn’t 64 dedicated codons make 64 amino acids? The reason is redundancy. Life evolved so that multiple codons often make the same amino acid.
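The arithmetic is easy to verify: three positions, four possible letters each. A quick sketch, using serine's six codons from the standard genetic code (written here as DNA triplets):

```python
from itertools import product

# Three DNA letters drawn from a four-letter alphabet: 4^3 = 64 codons.
codons = {"".join(triplet) for triplet in product("ACGT", repeat=3)}
print(len(codons))  # 64

# Redundancy in action: serine alone is encoded by six different codons
# in the standard genetic code, yet they all map to one amino acid.
serine_codons = {"TCT", "TCC", "TCA", "TCG", "AGT", "AGC"}
print(len(serine_codons))  # 6
```

With 64 codons covering 20 amino acids plus stop signals, most amino acids get more than one spelling.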

So what if we tap into those redundant “extra” codons of all living beings, and instead insert our own code?

A team at the University of Cambridge recently did just that. In a technological tour de force, they used CRISPR to replace over 18,000 codons with synonyms, freeing those codons up to encode synthetic amino acids that don’t exist anywhere in the natural world. The result is a bacteria that’s virtually resistant to all viral infections—because it lacks the normal protein “door handles” that viruses need to infect the cell.

But that’s just the beginning of engineering life’s superpowers. Until now, scientists have only been able to slip one designer amino acid into a living organism. The new work opens the door to hacking multiple existing codons at once, reassigning at least three of them to synthetic amino acids at the same time. And when it’s 3 out of 20, that’s enough to fundamentally rewrite life as it exists on Earth.

We’ve long thought that “liberating a subset of…codons for reassignment could improve the robustness and versatility of genetic-code expansion technology,” wrote Drs. Delilah Jewel and Abhishek Chatterjee at Boston College, who were not involved in the study. “This work elegantly transforms that dream into a reality.”

Hacking the DNA Code

Our genetic code underlies life, inheritance, and evolution. But it only works with the help of proteins.

The program for translating genes, written in DNA’s four letters, into the actual building blocks of life relies on a full cellular decryption factory.

Think of DNA’s letters—A, T, C, and G—as a secret code, written on a long slip of crinkled paper wrapped around a spool. Groups of three “letters,” or codons, are the crux—they encode which amino acid a cell makes. A messenger molecule (mRNA), a spy of sorts, stealthily copies the DNA message and sneaks back into the cellular world, shuttling the message to the cell’s protein factory—a sort of central intelligence organization.

There, the factory recruits multiple “translators” to decipher the genetic code into amino acids, aptly named tRNAs. The letters are grouped in threes, and each translator tRNA physically drags its associated amino acid to the protein factory, one by one, so that the factory eventually makes a chain that wraps into a 3D protein.

But like any robust code, nature has programmed redundancy into its DNA-to-protein translation process. For example, the DNA codes TCG, TCA, AGC, and AGT all encode for a single amino acid, serine. While it works in biology, the authors wondered: what if we tap into that code, hijack it, and redirect some of life’s directions using synthetic amino acids?

Hijacking the Natural Code

The new study sees nature’s redundancy as a way to introduce new capabilities into cells.

For us, one question was “could you reduce the number of codons that are used to encode a particular amino acid, and thereby create codons that are free to create other monomers [amino acids]?” asked lead author Dr. Jason Chin.

For example, if TCG is for serine, why not free up the others—TCA, AGC, and AGT— for something else?

It’s a great idea in theory, but a truly daunting task in practice. It means that the team has to go into a cell and replace every single codon they want to reprogram. A few years back, the same group showed that it’s possible in E. coli, a favorite bug of labs and the pharmaceutical industry. At that time, the team made an astronomical leap in synthetic biology by synthesizing the entire E. coli genome from scratch. During the process, they also played around with the natural genome, simplifying it by replacing some amino acid codons with their synonyms—say, removing TCGs and replacing them with AGCs. Even with the modifications, the bacteria were able to thrive and reproduce easily.

It’s like taking a very long book and figuring out which words to replace with synonyms without changing the meaning of sentences—so that the edits don’t physically hurt the bacteria’s survival. One trick, for example, was to delete a protein dubbed “release factor 1,” which makes it easier to reprogram the UAG codon with a brand new amino acid. Previous work showed that this can assign new building blocks to natural codons that are truly “blank”—that is, they don’t encode anything naturally anyway.

A Synthetic Creature

Chin’s team took this much further.

The team cooked up a method called REXER (replicon excision for enhanced genome engineering through programmed recombination)—yeah, scientists are all about the backronyms—which includes the wunderkind gene editing tool CRISPR-Cas9. With CRISPR, they precisely snipped out large parts of the E. coli genome and replaced them with stretches of DNA made entirely from scratch inside a test tube, swapping out more than 18,000 occurrences of ‘extra’ codons that encode serine for their synonyms.

Because the trick only targeted redundant protein code, the cells were able to go about their normal business—including making serine—but now with multiple natural codons free. It’s like replacing “hi” with “oy,” making “hi” now free to be assigned a completely different meaning.
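The substitution itself is conceptually simple. Here's a toy sketch of synonymous recoding (an illustration of the idea, not the study's actual pipeline): walk a sequence codon by codon and swap every TCG for AGC, leaving the encoded protein unchanged.

```python
# Toy illustration of synonymous recoding (not the study's actual pipeline):
# walk a DNA sequence in-frame and swap every TCG for AGC. Both encode
# serine, so the protein is unchanged, but TCG is now free for reassignment.
def recode(seq: str, old: str = "TCG", new: str = "AGC") -> str:
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join(new if codon == old else codon for codon in codons)

before = "ATGTCGGCATCGTAA"   # Met-Ser-Ala-Ser-Stop
after = recode(before)
print(after)  # ATGAGCGCAAGCTAA -- same protein, no TCG codons left
```

Doing this once is trivial; doing it 18,000 times across a living genome without killing the cell is the hard part.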

The team next did some housecleaning. Without harming the cells, they removed the natural translators—the tRNAs—that normally read the now-defunct codons, and introduced new synthetic versions of tRNAs to read the new codons. The engineered bacteria were then evolved inside a test tube to grow more rapidly.

The results were spectacular. The superpowered strain, Syn61.Δ3(ev5), is basically a bacterial X-Men that grows rapidly and is resistant to a cocktail of different viruses that normally infect bacteria.

“Because all of biology uses the same genetic code, the same 64 codons and the same 20 amino acids, that means viruses also use the same code…they use the cell’s machinery to build the viral proteins to reproduce the virus,” explained Chin. Now that the bacteria cell can no longer read nature’s standard genetic code, the virus can no longer tap into the bacterial machinery to reproduce—meaning the engineered cells are now resistant to being hijacked by almost any viral invader.

“These bacteria may be turned into renewable and programmable factories that produce a wide range of new molecules with novel properties, which could have benefits for biotechnology and medicine, including making new drugs, such as new antibiotics,” said Chin.

Viral infection aside, the study rewrites what’s possible for synthetic biology.

“This will enable countless applications,” said Jewel and Chatterjee, such as completely artificial biopolymers, that is, materials compatible with biology that could change entire disciplines such as medicine or brain-machine interfaces. Here, the team was able to string up a chain of artificial amino acid building blocks to make a type of molecule that forms the basis of some drugs, such as those for cancer or antibiotics.

But perhaps the most exciting prospect is the ability to dramatically rewrite existing life. Similar to bacteria, we—and all life in the biosphere—operate on the same biological code. The study now shows it’s possible to get past the hurdle of only 20 amino acids making up the building blocks of life by tapping into our natural biological processes.

Next up, the team is looking to potentially further reprogram our natural biological code to encode even more synthetic protein building blocks into bacterial cells. They’ll also move towards other cells—mammalian, for example, to see if it’s possible to compress our genetic code.

Image Credit: nadya_il from Pixabay

Kategorie: Transhumanismus

How Long Can We Live? New Research Says the Human Lifespan Tops Out at 150

7 Červen, 2021 - 16:00

Even with a healthy diet, plenty of exercise, a lucky draw in the genetic lottery, and the best medicine known to man, your natural lifespan has a hard limit of 150 years, say researchers. But understanding why could help us break through that ceiling.

In the last decade, breakthroughs in our understanding of the aging process and promising early results from age reversal experiments in animals have dragged longevity research out of the academic backwaters and firmly into the mainstream. Alongside this renaissance, there’s been a significant influx of private capital into companies trying to turn these findings into therapies.

Many of the key mechanisms that are suspected to underpin aging have been identified, including the shortening of DNA snippets called telomeres, which are involved in cell replication, the spread of senescent “zombie cells” that damage surrounding tissue, and epigenetic changes to our genes thanks to environmental factors like diet, pollution, and stress. But a comprehensive understanding of how these all interact to slowly wear us down has been lacking.

Now, researchers from a Singapore-based biotech company called Gero think they’ve found the key to our steady decline and its inevitable end point. As we age, they say, our body’s ability to recover from shocks and stressors weakens in a gradual and predictable way, eventually petering out completely somewhere between 120 and 150 years of age.

They reached this conclusion by analyzing blood cell counts from nearly half a million people from the US, UK, and Russia, publishing their results in Nature Communications. These values change over the short term in response to things like disease, and longer-term changes are a known biomarker of aging.

But the researchers also made a novel discovery. As people get older, their bodies take longer to get these values back to baseline after a disruption. The team did a similar analysis on daily step counts from a smaller group using wearable sensors, and they found exactly the same pattern.

Importantly, this loss in resilience was seen in even the healthiest individuals with no chronic health problems, suggesting it is independent of the diseases of aging that normally finish us off. When the group extrapolated their model forwards, they found the body eventually completely loses the ability to return to equilibrium, which puts an upper limit on normal lifespans.
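To illustrate the logic of that extrapolation (with made-up numbers and a deliberately simple linear model, not Gero's actual analysis): fit a trend to a declining resilience measure, then solve for the age at which it would reach zero.

```python
import numpy as np

# Toy illustration of the extrapolation logic (not Gero's actual model):
# fit a line to a "resilience" measure that declines with age, then solve
# for the age at which it would hit zero. The values below are made up.
ages = np.array([40, 50, 60, 70, 80])
resilience = np.array([0.72, 0.63, 0.55, 0.46, 0.38])  # hypothetical data

slope, intercept = np.polyfit(ages, resilience, 1)  # least-squares line
zero_crossing = -intercept / slope
print(f"Resilience extrapolates to zero at ~{zero_crossing:.0f} years")
```

With these toy values the zero crossing lands at roughly 124 years, inside the 120-to-150-year window the researchers report.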

“This work, in my opinion, is a conceptual breakthrough because it determines and separates the roles of fundamental factors in human longevity—the aging, defined as progressive loss of resilience, and age-related diseases, as “executors of death” following the loss of resilience,” co-author Andrei Gudkov, from Roswell Park Comprehensive Cancer Center, said in a press release.

“It explains why even most effective prevention and treatment of age-related diseases could only improve the average but not the maximal lifespan unless true anti-aging therapies have been developed.”

The authors point out in the paper that their study provides no answers as to what the mechanism for this loss of resilience might be. But they say it suggests therapies aimed at treating specific chronic diseases are unlikely to extend lifespans beyond the ceiling they have identified.

Instead the focus should be on identifying and tackling the source, or sources, of this loss of resilience, which they hope their new measure will make easier. Boosting the body’s resilience wouldn’t only help us live longer, it would also help us recover better from disease longer into life, which could help extend something even more important than lifespan: healthspan.

The team speculates that recent research on telomere shortening that makes similar lifespan estimates suggests this could be a promising avenue for this search. But whether any of the aging mechanisms we’ve already identified play a role in this loss of resilience and how easy it will be to arrest them is currently unclear. What’s certain is that this new study has at least given longevity researchers a target to aim at.

Image Credit: Stijn te Strake on Unsplash  

Kategorie: Transhumanismus

Google and Harvard Unveil the Largest High-Resolution Map of the Brain Yet

6 Červen, 2021 - 16:00

Last Tuesday, teams from Google and Harvard published an intricate map of every cell and connection in a cubic millimeter of the human brain.

The mapped region encompasses the various layers and cell types of the cerebral cortex, a region of brain tissue associated with higher-level cognition, such as thinking, planning, and language. According to Google, it’s the largest brain map at this level of detail to date, and it’s freely available to scientists (and the rest of us) online. (Really. Go here. Take a stroll.)

“The human brain is an immensely complex network of brain cells which is responsible for all human behavior, but until now, we haven’t been able to completely map these connections within even a small region of the brain,” said Dr. Alexander Shapson-Coe, a postdoctoral fellow at Harvard’s Lichtman Lab and lead author of a preprint paper about the work.

To make the map, the teams sliced donated tissue into 5,300 sections, each 30 nanometers thick, and imaged them with a scanning electron microscope at a resolution of 4 nanometers. The resulting 225 million images were computationally aligned and stitched into a 3D digital representation of the region. Machine learning algorithms segmented cells and classified synapses, axons, dendrites, cells, and other structures, and humans checked their work.

Last year, Google and the Janelia Research Campus of the Howard Hughes Medical Institute made headlines when they similarly mapped a portion of a fruit fly brain. That map, at the time the largest yet, covered some 25,000 neurons and 20 million synapses. In addition to targeting the human brain, itself of note, the new map includes tens of thousands of neurons and 130 million synapses. It takes up 1.4 petabytes of disk space.

By comparison, over three decades’ worth of satellite images of Earth by NASA’s Landsat program require 1.3 petabytes of storage. Collections of brain images on the smallest scales are like “a world in a grain of sand,” the Allen Institute’s Clay Reid told Nature, quoting William Blake in reference to an earlier map of the mouse brain.

All that, however, is but a millionth of the human brain. Which is to say, a similarly detailed map of the entire thing is yet years away. Still, the work shows how fast the field is moving. A map of this scale and detail would have been unimaginable a few decades ago.
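The scale gap is easy to put in numbers. A back-of-the-envelope estimate (assuming storage scales linearly with mapped volume, which glosses over compression and whatever else a real pipeline would do):

```python
# Back-of-the-envelope scaling: if one cubic millimeter takes 1.4 petabytes,
# a whole human brain -- roughly a million times the volume, per the
# article's "a millionth" figure -- lands in the zettabyte range.
sample_pb = 1.4           # petabytes for the ~1 mm^3 sample
brain_scale = 1_000_000   # whole brain / sample volume, approximately
total_pb = sample_pb * brain_scale
print(f"~{total_pb / 1e6:.1f} zettabytes")  # ~1.4 ZB (1 ZB = 1e6 PB)
```

For comparison, that single brain would dwarf the Landsat archive mentioned above by a factor of about a million.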

How to Map a Brain

The study of the brain’s cellular circuitry is known as connectomics.

Obtaining the human connectome, or the wiring diagram of a whole brain, is a moonshot akin to the human genome. And like the human genome, at first, it seemed an impossible feat.

The only complete connectomes are for simple creatures: the nematode worm (C. elegans) and the larva of a sea creature called C. intestinalis. There’s a very good reason for that. Until recently, the mapping process was time-consuming and costly.

Researchers mapping C. elegans in the 1980s used a film camera attached to an electron microscope to image slices of the worm, then reconstructed the neurons and synaptic connections by hand, like a maddeningly difficult three-dimensional puzzle. C. elegans has only 302 neurons and roughly 7,000 synapses, but the rough draft of its connectome took 15 years, and a final draft took another 20. Clearly, this approach wouldn’t scale.

What’s changed? In short, automation.

These days the images themselves are, of course, digital. A process known as focused ion beam milling shaves down each slice of tissue a few nanometers at a time. After one layer is vaporized, an electron microscope images the newly exposed layer. The imaged layer is then shaved away by the ion beam and the next one imaged, until all that’s left of the slice of tissue is a nanometer-resolution digital copy. It’s a far cry from the days of Kodachrome.

But maybe the most dramatic improvement is what happens after scientists complete that pile of images.

Instead of assembling them by hand, algorithms take over. Their first job is ordering the imaged slices. Then they do something impossible until the last decade. They line up the images just so, tracing the path of cells and synapses between them and thus building a 3D model. Humans still proofread the results, but they don’t do the hardest bit anymore. (Even the proofreading can be refined. Renowned neuroscientist and connectomics proponent Sebastian Seung, for example, created a game called Eyewire, where thousands of volunteers review structures.)

“It’s truly beautiful to look at,” Harvard’s Jeff Lichtman, whose lab collaborated with Google on the new map, told Nature in 2019. The programs can trace out neurons faster than the team can churn out image data, he said. “We’re not able to keep up with them. That’s a great place to be.”

But Why…?

In a 2010 TED talk, Seung told the audience you are your connectome. Reconstruct the connections and you reconstruct the mind itself: memories, experience, and personality.

But connectomics has not been without controversy over the years.

Not everyone believes mapping the connectome at this level of detail is necessary for a deep understanding of the brain. And, especially in the field’s earlier, more artisanal past, researchers worried the scale of resources required simply wouldn’t yield comparably valuable (or timely) results.

“I don’t need to know the precise details of the wiring of each cell and each synapse in each of those brains,” neuroscientist Anthony Movshon said in 2019. “What I need to know, instead, is the organizational principles that wire them together.” These, Movshon believes, can likely be inferred from observations at lower resolutions.

Also, a static snapshot of the brain’s physical connections doesn’t necessarily explain how those connections are used in practice.

“A connectome is necessary, but not sufficient,” some scientists have said over the years. Indeed, it may be in the combination of brain maps—including functional, higher-level maps that track signals flowing through neural networks in response to stimuli—that the brain’s inner workings will be illuminated in the sharpest detail.

Still, the C. elegans connectome has proven to be a foundational building block for neuroscience over the years. And the growing speed of mapping is beginning to suggest goals that once seemed impractical may actually be within reach in the coming decades.

Are We There Yet?

Seung has said that when he first started out he estimated it’d take a million years for a person to manually trace all the connections in a cubic millimeter of human cortex. The whole brain, he further inferred, would take on the order of a trillion years.

That’s why automation and algorithms have been so crucial to the field.

Janelia’s Gerry Rubin told Stat he and his team have overseen a 1,000-fold increase in mapping speed since they began work on the fruit fly connectome in 2008. The full connectome—the first part of which was completed last year—may arrive in 2022.

Other groups are working on other animals, like octopuses, saying comparing how different forms of intelligence are wired up may prove particularly rich ground for discovery.

The full connectome of a mouse, a project already underway, may follow the fruit fly by the end of the decade. Rubin estimates going from mouse to human would need another million-fold jump in mapping speed. But he points to the trillion-fold increase in DNA sequencing speed since 1973 to show such dramatic technical improvements aren’t unprecedented.

The genome may be an apt comparison in another way too. Even after sequencing the first human genome, it’s taken many years to scale genomics to the point we can more fully realize its potential. Perhaps the same will be true of connectomics.

Even as the technology opens new doors, it may take time to understand and make use of all it has to offer.

“I believe people were impatient about what [connectomes] would provide,” Joshua Vogelstein, cofounder of the Open Connectome Project, told The Verge last year. “The amount of time between a good technology being seeded, and doing actual science using that technology is often approximately 15 years. Now it’s 15 years later and we can start doing science.”

Proponents hope brain maps will yield new insights into how the brain works—from thinking to emotion and memory—and how to better diagnose and treat brain disorders.

“This advance opens up the possibility of comparing networks of healthy and diseased brains, to identify the network changes that are thought to cause mental illnesses and other neurological disorders,” Shapson-Coe said.

Others, Google among them no doubt, hope to glean insights that could lead to more efficient computing (the brain is astonishing in this respect) and more powerful artificial intelligence.

There’s no telling exactly what scientists will find as, neuron by synapse, they map the inner workings of our minds—but it seems all but certain great discoveries await.

Update (6/9/2021): Added quotes about the significance of the work from Alexander Shapson-Coe, a postdoctoral fellow at Harvard’s Lichtman Institute and lead author of a paper describing the study.

Image Credit: Google / Harvard

Category: Transhumanism

This Week’s Awesome Tech Stories From Around the Web (Through June 5)

June 5, 2021 - 16:00

China’s Gigantic Multi-Modal AI Is No One-Trick Pony
A. Tarantola | Engadget
“When OpenAI’s GPT-3 model made its debut in May of 2020, its performance was widely considered to be the literal state of the art. …But oh what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.”


United Airlines Wants to Bring Back Supersonic Air Travel
Lauren Hirsch | The New York Times
“…United Airlines said it was ordering 15 jets that can travel faster than the speed of sound from Boom Supersonic, a start-up in Denver. …Boom, which has raised $270 million from venture capital firms and other investors, said it planned to introduce aircraft in 2025 and start flight tests in 2026. It expects the plane, which it calls the Overture, to carry passengers before the end of the decade.”


SpaceX Signs ‘Blockbuster Deal’ to Send Space Tourists to the ISS
Amanda Kooser | CNET
“On Wednesday, space tourism company Axiom Space announced a ‘blockbuster deal’ with SpaceX that will send private crews to the ISS through 2023. Axiom and SpaceX already had a deal in place for a Dragon spacecraft flight with three private citizens and former NASA astronaut Michael López-Alegría in early 2022. The new agreement expands the scope to a total of four flights.”


Why Electric Cars Will Take Over Sooner Than You Think
Justin Rowlatt | BBC News
“This isn’t a fad, this isn’t greenwashing. Yes, the fact many governments around the world are setting targets to ban the sale of petrol and diesel vehicles gives impetus to the process. But what makes the end of the internal combustion engine inevitable is a technological revolution. And technological revolutions tend to happen very quickly.”


Have Autonomous Robots Started Killing in War?
James Vincent | The Verge
“…over the past week, a number of publications tentatively declared, based on a UN report from the Libyan civil war, that killer robots may have hunted down humans autonomously for the first time. As one headline put it: ‘The Age of Autonomous Killer Robots May Already Be Here.’ But is it? As you might guess, it’s a hard question to answer.”


Chart: Behind the Three-Decade Collapse of Lithium-Ion Battery Costs
Rahul Rao | IEEE Spectrum
“Between 1991 and 2018, the average price of the batteries that power mobile phones, fuel electric cars, and underpin green energy storage fell more than thirtyfold, according to work by Micah Ziegler and Jessika Trancik at the Massachusetts Institute of Technology. …Batteries today, the researchers say, have mass-production scales and energy densities unthinkable 30 years ago.”


The UK Has a Plan for a New ‘Pandemic Radar’ System
Maryn McKenna | Wired
“‘What we really need is a broadly distributed, high-fidelity, always-on surveillance system…’ says Samuel V. Scarpino, an assistant professor at Northeastern University who directs its Emergent Epidemics Lab. ‘This is not something that can be built easily. But we have a narrow window right now, where basically the whole planet knows that we need to solve this.’”


Vilnius, Lithuania Built a ‘Portal’ to Another City To Help Keep People Connected
Kim Lyons | The Verge
“They really went all-in on the idea and the design; it looks quite a bit like something out of the erstwhile sci-fi movie/show Stargate. …The portals both have large screens and cameras that broadcast live images between the two cities—a kind of digital bridge, according to its creators—meant to encourage people to ‘rethink the meaning of unity,’ Go Vilnius said in a press release. Aw.”


Amazon Devices Will Soon Automatically Share Your Internet With Neighbors
Dan Goodin | Ars Technica
“Amazon’s experimental wireless mesh networking turns users into guinea pigs. …By default, a variety of Amazon devices will enroll in the system come June 8. And since only a tiny fraction of people take the time to change default settings, that means millions of people will be co-opted into the program whether they know anything about it or not.”

Image Credit: Praewthida K / Unsplash

Category: Transhumanism

When Will the First Baby Be Born in Space?

June 4, 2021 - 16:00

When the first baby is born off-Earth, it will be a milestone as momentous as humanity’s first steps out of Africa. Such a birth would mark the beginning of a multi-planet civilization for the human species.

For the first half-century of the space age, only governments launched satellites and people into Earth orbit. No longer. Hundreds of private space companies are building a new industry that already has US$300 billion in annual revenue.

I’m a professor of astronomy who has written a book and a number of articles about humans’ future in space. Today, all activity in space is tethered to Earth. But I predict that in around 30 years, people will start living in space, and soon after, the first off-Earth baby will be born.

The Players in Space

Space started as a duopoly as the United States and the Soviet Union vied for supremacy in a geopolitical contest with loud military overtones. But while NASA achieved the moon landings in 1969, its budget has since shrunk by a factor of three. Russia is no longer an economic superpower, and its presence in space is a pale shadow of the program that launched the first satellite and the first person into orbit.

The new kid on the block is China. After a late start, the Chinese space program is surging, fueled by a budget that has recently grown faster than its economy. China is building a space station, the country has landed probes on the moon and Mars, and it is planning a moon base. On its current trajectory, China will soon be the dominant space power.

Governments will continue to launch rockets, but it would be safe to say that the future of private space flight arrived in 2016 when, for the first time, commercial launches outnumbered launches by all the world’s countries combined. But the most exciting progress is being made by private space companies that are marketing space for tourism and recreation. Elon Musk’s goal for SpaceX is to carry 100 people at a time to the moon, Mars, and beyond, although in public presentations he is coy about giving a timeline. Jeff Bezos’ company, Blue Origin, also aims to colonize the solar system. Such grandiose plans have skeptics, but remember that these are the two richest people in the world.

Living on the Moon or Mars

For a spacecraft, the trip to Mars is about 1,000 times farther than a trip to the moon, so the moon will be humanity’s first home away from home.

China is partnering with Russia to build a long-term facility at the moon’s South Pole sometime between 2036 and 2045. NASA plans to put “boots on the moon” in 2024 and establish a permanent settlement called the Artemis Base Camp within another decade. As part of the Artemis mission, NASA is also planning to launch a lunar space station in 2024 called Gateway. NASA is teaming up with SpaceX for this and future lunar projects, and the lunar station will make it easier for SpaceX to resupply the future lunar colony.

After the moon comes Mars, and the collaboration between SpaceX and NASA is accelerating the timeline for getting there. NASA’s plans are purposeful, but the organization hasn’t given a timeline. Elon Musk, on the other hand, has loudly proclaimed that he intends to have a colony on Mars by 2050. Humanity’s attempt to colonize the moon will give us a good sense of the challenges we might face on Mars.

Sex and Babies in Space

For a civilization to be really free from Earth, the population needs to grow, and that means babies. Living on the moon or Mars will be arduous and stressful, so the first inhabitants will probably spend only a few years there at a time and are unlikely to start a family.

But once people do take up permanent residency off-Earth, there are still many unknowns. First, little research has been done on the biology of pregnancy and reproductive health in a space or low-gravity environment like the moon or Mars. It’s possible there will be unexpected hazards to the fetus or mother. Second, babies are fragile, and raising them is not easy. The infrastructure of these bases would have to be sophisticated to make some version of normal family life possible, a process that will take decades.

With these uncertainties in mind, it seems likely that the first off-Earth baby will be born much closer to home. A Dutch startup called SpaceLife Origin wants to send a heavily pregnant woman 250 miles up just long enough to give birth. They talk a good story, but the legal, medical, and ethical obstacles are formidable. Another company, called Orbital Assembly Corporation, plans to open a luxury hotel in orbit in 2027 called the Voyager Station. Current plans show that it would hold 280 guests and 112 crew members, with its spinning-wheel design providing artificial gravity. But the breathless news reports omit any discussion of the difficulty and cost of such a project.

However, on April 12, 2021, NASA announced that it is considering allowing a reality TV show to send a civilian to the International Space Station and film them for 10 days. It’s plausible that this idea could be extended, with a wealthy couple booking a long-term stay for the entire process from conception to birth in orbit.

At the moment, there’s no evidence anyone has had sex in space. But with about 600 people having been in Earth orbit, including one NASA couple who kept their marriage a secret, one space historian was able to gather plenty of salacious space age moments.

My guess is that sometime around 2040, a unique individual will be born. They may carry the citizenship of their parents, or they may be born in a facility operated by a corporation and end up stateless. But I prefer to think of this future person as the first true citizen of the galaxy.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA/Dennis Davidson/Wikimedia Commons

Category: Transhumanism

SpaceX Will Have an Offshore Spaceport Ready for Starship Launches as Soon as Next Year

June 3, 2021 - 16:00

A year ago SpaceX made headlines after posting job openings for operations engineers. The task at hand? To help design and build an offshore rocket launch facility—aka, a floating spaceport. Between the job postings and Elon Musk’s tweet that the spaceports were intended for launches to Mars, the moon, and hypersonic travel around Earth, the whole thing seemed somewhat outlandish.

A year later, though, SpaceX is forging ahead with its plans. Musk tweeted this week that construction on the first spaceport has begun, and rockets may launch from it as soon as next year.

Ocean spaceport Deimos is under construction for launch next year

— Elon Musk (@elonmusk) May 30, 2021

The floating spaceport plans have actually been in motion since almost a year ago, when a SpaceX-affiliated LLC bought two offshore oil rigs in July of 2020. The rigs were sold by Valaris, the world’s biggest offshore drilling company, which is headquartered in Houston and incorporated in the UK. After filing for Chapter 11 protection in August of 2020, the company completed a financial restructuring and came out of bankruptcy this past April.

The rigs SpaceX bought are classified as “ultra-deepwater semi-submersible,” and they sold for $3.5 million each. A semi-submersible is an offshore drilling platform that can be moved from place to place; while most of it floats above the water’s surface, it ‘anchors’ itself using pontoon-type columns submerged under water. Ultra-deepwater drilling takes place at depths of 1,500 meters (~5,000 feet) or deeper.

What this all adds up to? SpaceX bought some of the sturdiest floating rigs out there—aka, what you’d expect for a place rockets will launch from and land on. And not just any rockets—the biggest ever used in spaceflight. Starship’s 160-foot spacecraft plus 230-foot booster makes for a rocket 394 feet tall (taller than a football field is long) and 30 feet wide.

Both rigs are located in the Port of Brownsville at the southern tip of Texas, very near the border with Mexico—and conveniently, near SpaceX’s Starship development facility in Boca Chica (whose name Musk wants to change to “Starbase.” I mean, if you lived there, that name alone would give you some bragging rights, wouldn’t it?).

SpaceX quickly renamed the rigs, from “rigs 8500 and 8501” to Phobos and Deimos, the names of Mars’s two moons. It seems construction on Deimos is moving along first, according to Musk’s tweet.

Right now it seems entirely possible that the spaceports will be launch-ready long before the spaceships are; of the rocket’s first five high-altitude flights, three exploded on impact during landing, and the fourth exploded a few minutes after touching down. The fifth flight, which took place just a month ago, was explosion-free and thus successful. To get to the three-a-day launches Musk envisions, though, SpaceX will need a far better scorecard than one out of five.

On the plus side, the company did just hit a significant milestone in reusability when one of its B1051 boosters completed its tenth flight over the course of just 26 months.

And there are all kinds of plans in the works, from sending a Starship into orbit on its way to Hawaii to launching the “full stack” Super Heavy booster and Starship as soon as July.

We can’t be sure that SpaceX’s plans will play out on the exact timeline given (which, in the case of the spaceports, is appropriately vague; “as soon as next year” allows for a solid 11 months or so of wiggle room), but the company thus far hasn’t had many issues with a lack of follow-through. That means it’s only a matter of time until we see rockets launching off converted oil rigs and heading for the moon, all corners of Earth, and Mars.

Image Credit: SpaceX

Category: Transhumanism