Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity University

How the Wild New Materials of the Future Will Be Discovered With AI

March 22, 2018 - 16:00

How materials for computer chips, solar panels, and batteries are developed looks to be in the early stages of a radical change. The same goes for research related to areas like superconductors and thermoelectrics.

The reason? The new possibilities created by machine learning in materials science.

“This is something that is set to explode in people’s faces, as it were. Within the last five years, there has been a huge growth in materials science research teams using AI/machine learning techniques. The amount of scientific papers on the subject has been growing almost exponentially,” says Dr. James Warren, director of the Materials Genome Program in the Material Measurement Laboratory of NIST.

“We already see real-world advances based on the research, but I think we are only at the beginning. Machine learning could benefit every step of the scientific process for developing and improving new materials.”

Early Days but Real-World Solutions

I’m not an engineer, nor a scientist. I get to ask really smart people really stupid questions for a living. That is pretty much how I define being a journalist.

The way I think about materials science is that it’s about stuff. That’s also how I think about parts of engineering and manufacturing. It’s about putting stuff together. The quality of your finished product relies on the quality and abilities of the stuff used to make it.

This is why materials science is critically important to technological progress. Want a better computer chip? You need the right materials. More efficient batteries for self-driving cars or solar panels? Same answer.

A concrete example of how machine learning can aid the development of new materials comes from Stanford University where a team led by Evan Reed, assistant professor of Materials Science and Engineering, has been using it to develop better electrolytes for lithium-ion batteries.

Electrolytes are often composed of a range of materials. Finding the optimal combination and composition of said materials can be difficult.

“We have developed a machine learning model that has been outperforming experts’ intuition when predicting which materials to use,” Reed says.
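As a rough illustration of what “predicting which materials to use” can look like in practice, here is a minimal sketch of the general approach—train a model on tested mixtures, then rank untested candidates. The features, data, and model choice below are hypothetical stand-ins, not the Stanford team’s actual pipeline:

```python
# Minimal sketch: learn a property model from tested mixtures, then
# rank untested candidates. All features/data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Each row is a candidate electrolyte recipe (e.g., component fractions
# and salt concentration); y is a measured performance score.
X = rng.random((200, 4))
y = X @ np.array([0.5, -0.2, 0.8, 0.1]) + 0.05 * rng.standard_normal(200)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:150], y[:150])  # train on "already tested" recipes

# Rank the untested recipes by predicted performance.
candidates = X[150:]
best = candidates[np.argmax(model.predict(candidates))]
print("Most promising untested recipe:", best)
```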

New Examples Abound

Valentin Stanev, a research scientist at the University of Maryland, has been using machine learning in superconductor research.

“We have a list of all superconductors that we know of, but we still don’t have a good way of figuring out if something is going to be a superconductor. I applied machine learning to the process to help find ways to develop such a framework,” he explains.
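At its core, a framework like that is a classifier over composition-derived features. A minimal sketch (synthetic data and labels; not Stanev’s actual model):

```python
# Minimal sketch of a superconductor screen: a binary classifier over
# composition-derived features. Data and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical features per material (e.g., mean atomic mass,
# electronegativity spread, valence electron count, ...).
X = rng.random((300, 6))
y = (X[:, 0] + 0.5 * X[:, 3] > 0.9).astype(int)  # 1 = superconductor

clf = GradientBoostingClassifier(random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```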

Stanev sees big potential for machine learning in other areas too, such as the development of thermoelectric materials, which absorb heat and turn it into electricity.

“A huge percentage of our energy production is wasted as heat. Being able to catch just a small percentage of that will have an enormous impact,” he says.

Beyond superconductors and thermoelectric materials, scientists think machine learning could lead to advances in hydrogen storage units for fuel cells. In healthcare, it could help make new materials that better control how drugs dissolve in a stent. It could also lead to new metallic glasses, a subset of metals without a crystalline structure, which have many possible applications, including nanotube construction.

Machine learning might even have applications in scientific processes themselves.

“Many processes in materials science rely on some sort of classification or fitting. Traditionally, this has been done by hand or some simple linear model after significant data processing,” explains Shyam Dwaraknath, a computational chemistry postdoctoral fellow at Berkeley Lab. “Machine learning makes these tasks much easier while improving the quality, speed, and amount of data that can be extracted. This has yielded automated methods for constructing phase diagrams, predicting structures for new compositions, and even analyzing micrographs in place of humans.”

Data Is the Magical Ingredient

There is still some way to go, though. The machine learning and materials science revolution is very much nascent. One area of development is sorting out where machine learning does and doesn’t make sense.

“The materials science community is actively seeking to identify the areas where ideas from machine learning could have an impact, with ongoing work ranging from materials selection problems to faster and more efficient data collection and analysis,” Evan Reed says.

Shyam Dwaraknath adds, “We’re just now entering the age of big data in materials science with large databases of well-curated and directly comparable data, but the true complexity of materials is far larger than that. For comparison, all the data on the internet, about a sextillion bytes, is just now reaching the number of atoms in a grain of sand.”

Another unsolved challenge? How to turn new, theoretical materials science insights into actual materials and solutions—especially on an industrial scale.

“It is like the difference in knowing the ingredients and knowing the exact recipe for, say, a soufflé. You need to know the exact process. That is the difference between ending up with a nice, light soufflé or a brick,” James Warren says.

The Up-Swinging Curve

While there are challenges, all scientists interviewed have high expectations when it comes to machine learning’s potential in materials science.

Valentin Stanev says new applications of machine learning in the scientific process could reduce the time needed to run experiments by up to 80%.

“You can have a machine learning toolbox built into your experimental setup. It looks at the results coming out of the experiment and can algorithmically decide what experiment to do next and from these deduce the general outcome of a series of experiments. In a way, you may only need to run 10 or 20% of the experiment to get the full picture,” he explains.
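A toy version of such a closed loop—run the experiment the model is most uncertain about, retrain, repeat—might look like this. It is illustrative only; the simulated “experiment” and the acquisition rule are stand-ins:

```python
# Toy active-learning loop: run the experiment where the model is most
# uncertain, retrain, repeat. The "experiment" is a stand-in function.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(x):
    return np.sin(3 * x) + 0.1 * np.random.randn()  # pretend measurement

grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)    # candidate settings
X = [[0.0], [2.0]]                                  # two seed experiments
y = [run_experiment(0.0), run_experiment(2.0)]

gp = GaussianProcessRegressor()
for _ in range(10):                                 # small experiment budget
    gp.fit(np.array(X), np.array(y))
    _, std = gp.predict(grid, return_std=True)
    x_next = float(grid[np.argmax(std), 0])         # most informative point
    X.append([x_next])
    y.append(run_experiment(x_next))
```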

Other possibilities include handing partial control of experiments over to an AI system that autonomously makes decisions on what next steps to take.

And according to Evan Reed, machine learning might even be used for a kind of reverse engineering.

“Imagine that you need a battery that has a certain set of properties. You feed those into the machine learning model that then automatically runs through all available, known materials and suggests a range of batteries consisting of different materials that meet your specifications.”
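A heavily simplified sketch of that inverse workflow: screen a materials table against a target spec. The database, property names, and numbers below are invented for illustration; in a real pipeline the property values would come from the trained model’s predictions rather than a hand-written table.

```python
# Sketch of inverse screening: filter a (hypothetical) materials table
# against a target spec. Names and numbers are invented for illustration.
materials = [
    {"name": "A", "energy_density": 250, "cycle_life": 1200, "cost": 95},
    {"name": "B", "energy_density": 310, "cycle_life": 800,  "cost": 140},
    {"name": "C", "energy_density": 280, "cycle_life": 1500, "cost": 110},
]

spec = {"energy_density": 260, "cycle_life": 1000}  # required minimums

matches = [m for m in materials
           if all(m[key] >= minimum for key, minimum in spec.items())]
print([m["name"] for m in matches])  # -> ['C']
```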

James Warren sees potential uses coming sooner rather than later.

“Many of these advances are not nearly as far off as people think—in many cases we are talking a few years, tops. A lot of people in the community have a sense of, ‘What the hell just happened?’ Hopefully, others will too,” he says with a semi-laugh.

Warren believes machine learning is a key to future advances in the space, helping scientists push back the theoretical limits of materials and perhaps leading to development of many exciting new kinds of materials.

Image Credit: Jackie Niam


Powerful New Algorithm Is a Big Step Towards Whole-Brain Simulation

March 21, 2018 - 16:00

The renowned physicist Dr. Richard Feynman left two now-famous lines on his final blackboard: “What I cannot create, I do not understand,” and, “Know how to solve every problem that has been solved.”

An increasingly influential subfield of neuroscience has taken Feynman’s words to heart. To theoretical neuroscientists, the key to understanding how intelligence works is to recreate it inside a computer. Neuron by neuron, these whizzes hope to reconstruct the neural processes that lead to a thought, a memory, or a feeling.

With a digital brain in place, scientists can test out current theories of cognition or explore the parameters that lead to a malfunctioning mind. As philosopher Dr. Nick Bostrom at the University of Oxford argues, simulating the human mind is perhaps one of the most promising (if laborious) ways to recreate—and surpass—human-level ingenuity.

There’s just one problem: our computers can’t handle the massively parallel nature of our brains. Squished within a three-pound organ are over 100 billion interconnected neurons and trillions of synapses.

Even the most powerful supercomputers today balk at that scale: so far, machines such as the K computer at the Advanced Institute for Computational Science in Kobe, Japan can tackle at most ten percent of neurons and their synapses in the cortex.

This ineptitude is partially due to software. As computational hardware inevitably gets faster, algorithms increasingly become the linchpin of whole-brain simulation.

This month, an international team completely revamped the structure of a popular simulation algorithm, developing a powerful piece of technology that dramatically slashes computing time and memory use.

Using today’s simulation algorithms, only small progress (dark red area of center brain) would be possible on the next generation of supercomputers. However, the new technology allows researchers to simulate larger parts of the brain while using the same amount of computer memory. This makes the new technology more appropriate for future use in supercomputers for whole-brain level simulation. Image Credit: Forschungszentrum Jülich/Frontiers

The new algorithm is compatible with a range of computing hardware, from laptops to supercomputers. When future exascale supercomputers hit the scene—projected to be 10 to 100 times more powerful than today’s top performers—the algorithm can immediately run on those computing beasts.

“With the new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers,” said study author Jakob Jordan at the Jülich Research Center in Germany, who published the work in Frontiers in Neuroinformatics.

“It’s a decisive step towards creating the technology to achieve simulations of brain-scale networks,” the authors said.

The Trouble With Scale

Current supercomputers are composed of hundreds of thousands of subdomains called nodes. Each node has multiple processing centers that can support a handful of virtual neurons and their connections.

A main issue in brain simulation is how to effectively represent millions of neurons and their connections inside these processing centers to cut time and power.

One of the most popular simulation algorithms today is the Memory-Usage Model. Before scientists simulate changes in their neuronal network, they need to first create all the neurons and their connections within the virtual brain using the algorithm.

Here’s the rub: for any neuronal pair, the model stores all information about connectivity in each node that houses the receiving neuron—the postsynaptic neuron.

In other words, the presynaptic neuron, which sends out electrical impulses, is shouting into the void; the algorithm has to figure out where a particular message came from by solely looking at the receiver neuron and data stored within its node.

It sounds like a strange setup, but the model allows all the nodes to construct their particular portion of the neural network in parallel. This dramatically cuts down boot-up time, which is partly why the algorithm is so popular.

But as you probably guessed, it comes with severe problems in scaling. The sender node broadcasts its message to all receiver neuron nodes. This means that each receiver node needs to sort through every single message in the network—even ones meant for neurons housed in other nodes.

That means a huge portion of messages get thrown away in each node, because the addressee neuron isn’t present in that particular node. Imagine overworked post office staff skimming an entire country’s worth of mail to find the few that belong to their jurisdiction. Crazy inefficient, but that’s pretty much what goes on in the Memory-Usage Model.

The problem becomes worse as the size of the simulated neuronal network grows. Each node needs to dedicate memory storage space to an “address book” listing all its neural inhabitants and their connections. At the scale of billions of neurons, the “address book” becomes a huge memory hog.
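To make the inefficiency concrete, here is a toy model of the broadcast-and-discard scheme just described (the node layout is hypothetical; real simulators batch spikes, but the wasted work scales the same way):

```python
# Toy model of the broadcast scheme: every node receives every spike
# and discards those addressed elsewhere. Layout is hypothetical.
nodes = {0: {"n0", "n1"}, 1: {"n2", "n3"}, 2: {"n4", "n5"}}
spikes = [("n0", "n2"), ("n4", "n1"), ("n3", "n5")]

wasted = 0
for sender, receiver in spikes:
    for node_id, residents in nodes.items():  # broadcast to all nodes
        if receiver not in residents:
            wasted += 1                       # skimmed, then thrown away

print(f"{wasted} of {len(spikes) * len(nodes)} deliveries wasted")  # 6 of 9
```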

Size Versus Source

The team hacked the problem by essentially adding a zip code to the algorithm.

Here’s how it works. The receiver nodes contain two blocks of information. The first is a database that stores data about all the sender neurons that connect to the nodes. Because synapses come in several sizes and types that differ in their memory consumption, this database further sorts its information based on the type of synapses formed by neurons in the node.

This setup already dramatically differs from its predecessor, in which connectivity data is sorted by the incoming neuronal source, not synapse type. Because of this, the node no longer has to maintain its “address book.”

“The size of the data structure is therefore independent of the total number of neurons in the network,” the authors explained.

The second chunk stores data about the actual connections between the receiver node and its senders. Similar to the first chunk, it organizes data by the type of synapse. Within each type of synapse, it then separates data by the source (the sender neuron).

In this way, the algorithm is far more specific than its predecessor: rather than storing all connection data in each node, the receiver nodes only store data relevant to the virtual neurons housed within.

The team also gave each sender neuron a target address book. During transmission the data is broken up into chunks, with each chunk containing a zip code of sorts directing it to the correct receiving nodes.

Rather than a computer-wide message blast, here the data is confined to only the receiver nodes it’s supposed to go to.
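And here is the same toy network under the routed scheme: each sender’s target nodes are computed once at network construction, so a spike only visits nodes that house one of its receivers (again a simplified sketch, not the authors’ actual implementation):

```python
# Toy model of the routed scheme: each sender keeps a target "address
# book" built once at startup, so spikes visit only relevant nodes.
neuron_to_node = {"n0": 0, "n1": 0, "n2": 1, "n3": 1, "n4": 2, "n5": 2}
connections = {"n0": ["n2"], "n3": ["n5"], "n4": ["n1"]}

# Precomputed per-sender target nodes (the "zip codes").
targets = {s: {neuron_to_node[r] for r in rs}
           for s, rs in connections.items()}

spikes = [("n0", "n2"), ("n4", "n1"), ("n3", "n5")]
visits = sum(len(targets[sender]) for sender, _ in spikes)
print(f"{visits} deliveries, none wasted")  # 3 instead of 9
```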

Speedy and Smart

The modifications panned out.

In a series of tests, the new algorithm performed much better than its predecessors in terms of scalability and speed. On the supercomputer JUQUEEN in Germany, the algorithm ran 55 percent faster than previous models on a random neural network, mainly thanks to its streamlined data transfer scheme.

At a network size of half a billion neurons, for example, simulating one second of biological events took about five minutes of JUQUEEN runtime using the new algorithm. Its predecessor clocked in at six times that.

This really “brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes…within our reach,” said study author Dr. Markus Diesmann at the Jülich Research Center.

As expected, several scalability tests revealed that the new algorithm is far more proficient at handling large networks, reducing the time it takes to process tens of thousands of data transfers by roughly threefold.

“The novel technology profits from sending only the relevant spikes to each process,” the authors concluded. Because computer memory is now uncoupled from the size of the network, the algorithm is poised to tackle brain-wide simulations, the authors said.

While revolutionary, the team notes that a lot more work remains to be done. For one, mapping the structure of actual neuronal networks onto the topology of computer nodes should further streamline data transfer. For another, brain simulation software needs to regularly save its progress so that in case of a computer crash, the simulation doesn’t have to start over.

“Now the focus lies on accelerating simulations in the presence of various forms of network plasticity,” the authors concluded. With that solved, the digital human brain may finally be within reach.

Image Credit: Jolygon


New MIT Startup Targets Working Fusion Reactor in 15 Years. Can It Be Done?

March 20, 2018 - 17:15

The joke is that nuclear fusion is 20 years away, and always will be. This joke, now a cliché, arose from optimistic scientists suggesting in the 1950s (and then in most subsequent decades) that nuclear fusion was just 20 years away.

Yet the fact remains that when an MIT spin-out startup, Commonwealth Fusion Systems, promises to have a working fusion reactor in the next 15 years, there is a mismatch between promise and expectation. The promise: cheap, clean, limitless energy, a solution to the crisis of fossil fuels and climate change. MIT’s press release exults “potentially an inexhaustible and zero-carbon source of energy.”

The only problem: we’ve heard this a few times before. Is anything different now?

Another great cliché about fusion energy: “The idea is simple; you put the sun in a bottle. The only problem is building the bottle.” Nuclear fusion powers the stars, but it requires incredibly hot and dense conditions in plasma to work.

A vast amount of energy can be released when two light nuclei fuse together: deuterium-tritium fusion, the type pursued by the ITER experiment, releases 17.6 MeV per reaction, over a million times more energy per molecule than you get from blowing up TNT. But to release this energy, you need to overcome immense electrostatic repulsion between the nuclei, which are both positively charged. The strong force takes over at short distances and causes fusion, releasing all that energy, but you need to get the nuclei very close together—femtometers apart. This can happen in stars due to the immense gravitational pressure from the material, but it’s a little trickier on Earth.
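The “million times” comparison checks out on the back of an envelope, using commonly cited figures for TNT (roughly 4.6 MJ/kg, molar mass about 227 g/mol):

```python
# Back-of-envelope check of the "million times TNT" comparison.
MEV_TO_JOULES = 1.602e-13
fusion_energy = 17.6 * MEV_TO_JOULES           # per D-T reaction

# TNT: roughly 4.6 MJ/kg, molar mass about 227 g/mol.
AVOGADRO = 6.022e23
tnt_energy = 4.6e6 * 0.227 / AVOGADRO          # per TNT molecule

print(f"ratio: {fusion_energy / tnt_energy:.2e}")  # ~1.6e+06
```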

For a start, you will struggle to find materials that won’t be badly damaged by temperatures of hundreds of millions of degrees Celsius.

Plasma is made up of charged particles—matter whose electrons have been stripped away. This means it can be contained by a magnetic field that can twist the plasma into a circle. Twists and kinks in the magnetic field can allow the plasma to compress as well. In the 1950s and 1960s, a whole generation of devices with exotic-sounding nicknames—the Stellarator, the Perhapsatron, the Z-Pinch—were being developed. But the plasma they contained was unstable. Plasma itself generates electromagnetic fields; it can be described by the fiendishly complicated theory of magnetohydrodynamics. Slight deviations or imperfections on the surface of the plasma grew wildly out of control. These devices didn’t work as expected.

A device invented in the Soviet Union, the tokamak, offered greatly improved performance. At the same time, the laser was invented, allowing for a new type of fusion to occur—inertial confinement fusion.

Here, you don’t aim to contain the plasma for a sustained burn through magnetic fields, but compress it explosively with lasers for a small amount of time. But inertial confinement experiments suffered from similar instabilities. Experiments have been conducted since the 1970s, and perhaps one day the idea will work—but the biggest one to date, the National Ignition Facility in Livermore, California, has not yet “broken even,” that is, produced more energy than is required to run it.

The hopes of the world for fusion mostly rest with ITER, the construction of the world’s largest-ever tokamak for magnetic confinement fusion.

The project hopes to ignite plasma for 20 minutes to produce 500 MW of power with 50 MW nominal input. Full fusion experiments are scheduled for 2035, but issues with the international collaboration between the US, the USSR (as was), Japan, and Europe have meant that the project is delayed and over budget—already costing €13 billion and running 12 years late. This is not uncommon for projects that require vast construction of complex scientific facilities; similar problems have plagued previous fusion reactors.

The official ITER timeline has the first fusion reactor that will work as a power plant, with ignition and sustained burn—DEMO—scheduled for 2040, or perhaps 2050. In other words, fusion power is 20 years away. The trend has been to try to address instabilities and tokamak problems by constructing ever-bigger power plants; ITER will be bigger than JET in Culham, the world’s largest working tokamak, and DEMO will presumably be bigger still.

Over the years, several teams have hoped to beat the international collaboration to the punch with smaller designs. It’s not just a question of speed, but of practicality; if a fusion reactor really takes billions of dollars and decades to build, can it ever compete economically? Who will pay that initial capital cost? It may be the case that if fusion requires tokamaks this big, some combination of solar and storage will be cheaper than fusion by the time it works. Some of these designs—see the furor around “cold fusion” and “bubble fusion”—have been pathological science, where experimental results were incorrect or, perhaps in some cases, falsified.

Others are more legitimate attempts. Startups with novel fusion reactor designs—or, in some cases, revived versions of older attempts—are beginning to appear.

Tri Alpha hopes to fire clouds of plasma at each other in a design similar to the Large Hadron Collider, then confine the fusing plasma in a magnetic field for long enough to break even and generate power. They have achieved the required temperatures and confinement of plasma for a few milliseconds, and they’ve also seen over $500 million in venture capital funding from Goldman Sachs and Microsoft co-founder Paul Allen, amongst others.

Lockheed Martin’s Skunk Works team, famous for its secret development projects, made waves in 2013 by announcing it was working on a compact fusion reactor, producing 100 MW and about the size of a jet engine. At the time, they said a prototype would be ready in five years. Naturally, they didn’t reveal too many details about the design. The announcement was met with skepticism by many in the mainstream fusion community, and details have been scarce since, although in 2016 the company confirmed it was still funding the project.

It’s in this context, with a range of smaller alternatives to ITER starting to crop up, that the MIT researchers are throwing their hats into the ring. Bob Mumgaard, CEO of Commonwealth Fusion Systems, stated: “The aspiration is to have a working power plant in time to combat climate change. We think we have the science, speed, and scale to put carbon-free fusion power on the grid in 15 years.”

MIT’s new venture is sticking with the tokamak design, as they have done in the past. Their device, SPARC, would hope to produce 100 MW of power in 10-second confinement pulses. Getting energy from fusion in pulses has been achieved before, but of course the real symbolic and scientific goal is to break even.

The special sauce, then, is newly available high-temperature superconducting magnets (HTSCs) made of yttrium barium copper oxide. Given that HTSCs can produce higher magnetic fields at the same temperature, it may be possible to compress the plasma with less input power and a smaller magnetic apparatus, and to achieve fusion conditions in a device 1/65th the volume of ITER. This, at any rate, is the plan. They hope to have the superconducting magnets constructed in the next three years. Preliminary calculations using reactor systems codes have, in the past, suggested that HTSCs could make fusion energy cheaper and perhaps cost-competitive in the future.
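The reason field strength buys so much is a standard tokamak rule of thumb: at a fixed plasma pressure ratio (beta), fusion power density scales roughly as the fourth power of the magnetic field. A back-of-envelope sketch with illustrative field values (not SPARC’s published design parameters):

```python
# Rule of thumb: at fixed beta, fusion power density scales ~ B**4.
# Field values are illustrative, not published design parameters.
B_conventional = 5.3   # tesla, roughly an ITER-class field on axis
B_htsc = 12.0          # tesla, the regime HTS magnets aim for

gain = (B_htsc / B_conventional) ** 4
print(f"~{gain:.0f}x higher power density")  # ~26x -> similar power
                                             # from a far smaller plasma
```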

A visualization of Commonwealth Fusion Systems’s SPARC tokamak experiment. Image Credit: Ken Filar, PSFC research affiliate

The researchers are optimistic: “Our strategy is to use conservative physics, based on decades of work at MIT and elsewhere,” says Martin Greenwald, deputy director of MIT’s Plasma Science and Fusion Center. “If SPARC does achieve its expected performance, my sense is that’s sort of a Kitty Hawk moment for fusion, by robustly demonstrating net power, in a device that scales to a real power plant.”

There are many other designs and startups that similarly promise to circumvent the ever-ballooning tokamaks and budgets of the international collaboration. It’s hard to say whether any of the dark horses will prove to have the secret ingredient for fusion, or whether ITER, with the weight of the scientific community and government funding, will win out. And even then, it’s harder to say if, or when, fusion will become the superlative energy source that those who work on it (and write press releases) dream of. I wish everyone attempting to make this dream a reality the very best of luck. But if there’s one thing we’ve learned from the last 70 years of attempts, it’s that fusion is hard. I wouldn’t bet on a timeline.

Image Credit: maytree


Republish Our Content—Singularity Hub Launches Creative Commons

March 20, 2018 - 16:00

It’s been a long time coming, but we’re excited to finally announce we have officially launched a new way for publishers to easily (and legally) republish our articles on their sites.

We believe an important component of the future of learning is the free flow of information.

So we’ve made some of our content immediately republishable under a Creative Commons Attribution-NoDerivatives 4.0 International license and will add more articles in the coming months. Please note you can only republish content that includes the “republish” button; you can see it in the sidebar of this article, above our featured post.

As with this post, future and past content tagged Creative Commons will display the republish button in the sidebar. If you’re looking for our republishable content in one place, you can easily find all our Creative Commons articles here.

Under the Creative Commons license we chose, CC BY-ND 4.0, anyone is free to:

Share — copy and redistribute the material in any medium or format for any purpose, even commercially. The licensor cannot revoke these freedoms as long as you follow the license terms.

But there’s a bit more to it than that. Please carefully read our Republishing Guidelines to better understand best practices. We selected this license as it simplifies many issues with websites running ads or selling products and allows greater potential distribution of Singularity Hub articles.

By joining the ranks of incredible news sites, such as Aeon and The Conversation, that release content under Creative Commons, we hope to further empower and engage readers with our authors’ perspectives and thought leadership on the future.

Don’t hesitate to reach out to us with questions. You can also pitch us your ideas and articles to reach a wider audience! And lastly, don’t forget to sign up for our newsletters to get the latest news on technological breakthroughs and issues shaping the future.

Image Credit: Tithi Luadthong


$10 million XPRIZE Aims for Robot Avatars That Let You See, Hear, and Feel by 2021

March 19, 2018 - 16:00

Ever wished you could be in two places at the same time? The XPRIZE Foundation wants to make that a reality with a $10 million competition to build robot avatars that can be controlled from at least 100 kilometers away.

The competition was announced by XPRIZE founder Peter Diamandis at the SXSW conference in Austin last week, with an ambitious timeline of awarding the grand prize by October 2021. Teams have until October 31st to sign up, and they need to submit detailed plans to a panel of judges by the end of next January.

The prize is sponsored by Japanese airline ANA, and the organizers have given contestants little guidance on how they expect them to solve the challenge, other than saying solutions need to let users see, hear, feel, and interact with the robot’s environment as well as the people in it.

XPRIZE has also not revealed details of what kind of tasks the robots will be expected to complete, though they’ve said tasks will range from “simple” to “complex,” and it should be possible for an untrained operator to use them.

That’s a hugely ambitious goal that’s likely to require teams to combine multiple emerging technologies, from humanoid robotics to virtual reality, high-bandwidth communications, and high-resolution haptics.

If any of the teams succeed, the technology could have myriad applications, from letting emergency responders enter areas too hazardous for humans to helping people care for relatives who live far away or even just allowing tourists to visit other parts of the world without the jet lag.

“Our ability to physically experience another geographic location, or to provide on-the-ground assistance where needed, is limited by cost and the simple availability of time,” Diamandis said in a statement.

“The ANA Avatar XPRIZE can enable creation of an audacious alternative that could bypass these limitations, allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time, and cultures,” he added.

Interestingly, the technology may help bypass an enduring handbrake on the widespread use of robotics: autonomy. By having a human in the loop, you don’t need nearly as much artificial intelligence analyzing sensory input and making decisions.

Robotics software is doing a lot more than just high-level planning and strategizing, though. While a human moves their limbs instinctively without consciously thinking about which muscles to activate, controlling and coordinating a robot’s components requires sophisticated algorithms.

The DARPA Robotics Challenge demonstrated just how hard it was to get human-shaped robots to do tasks humans would find simple, such as opening doors, climbing steps, and even just walking. These robots were supposedly semi-autonomous, but on many tasks they were essentially tele-operated, and the results suggested autonomy isn’t the only problem.

There’s also the issue of powering these devices. You may have noticed that in a lot of the slick web videos of humanoid robots doing cool things, the machine is attached to the roof by a large cable. That’s because they suck up huge amounts of power.

Possibly the most advanced humanoid robot—Boston Dynamics’ Atlas—has a battery, but it can only run for about an hour. That might be fine for some applications, but you don’t want it running out of juice halfway through rescuing someone from a mine shaft.

When it comes to the link between the robot and its human user, some of the technology is probably not that much of a stretch. Virtual reality headsets can create immersive audio-visual environments, and a number of companies are working on advanced haptic suits that will let people “feel” virtual environments.

Motion tracking technology may be more complicated. While even consumer-grade devices can track people’s movements with high accuracy, you will probably need to don something more like an exoskeleton that can both pick up motion and provide mechanical resistance, so that when the robot bumps into an immovable object, the user stops dead too.

How hard all of this will be is also dependent on how the competition ultimately defines subjective terms like “feel” and “interact.” Will the user need to be able to feel a gentle breeze on the robot’s cheek or be able to paint a watercolor? Or will simply having the ability to distinguish a hard object from a soft one or shake someone’s hand be enough?

Whatever the fidelity they decide on, the approach will require huge amounts of sensory and control data to be transmitted over large distances, most likely wirelessly, in a way that’s fast and reliable enough that there’s no lag or interruptions. Fortunately, 5G is launching this year, with speeds of up to 10 gigabits per second and very low latency, so this problem should be solved by 2021.
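For the 100-kilometer range, at least, raw physics isn’t the obstacle; a rough latency budget shows the speed-of-light delay is tiny compared with typical human reaction times (the network overhead figure below is an assumed placeholder):

```python
# Rough latency budget for tele-operation at 100 km.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

one_way_ms = 100_000 / SPEED_OF_LIGHT * 1000   # ~0.33 ms
round_trip_ms = 2 * one_way_ms                 # ~0.67 ms

network_overhead_ms = 10.0                     # assumed processing budget
total = round_trip_ms + network_overhead_ms
print(f"~{total:.1f} ms")  # well under typical human reaction times
```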

And it’s worth remembering there have already been some tentative attempts at building robotic avatars. Telepresence robots have solved the seeing, hearing, and some of the interacting problems, and MIT has already used virtual reality to control robots to carry out complex manipulation tasks.

South Korean company Hankook Mirae Technology has also unveiled a 13-foot-tall robotic suit straight out of a sci-fi movie that appears to have made some headway with the motion tracking problem, albeit with a human inside the robot. Toyota’s T-HR3 does the same, but with the human controlling the robot from a “Master Maneuvering System” that marries motion tracking with VR.

Combining all of these capabilities into a single machine will certainly prove challenging. But if one of the teams pulls it off, you may be able to tick off trips to the Seven Wonders of the World without ever leaving your house.

Image Credit: ANA Avatar XPRIZE


This 3D Printed House Goes Up in a Day for Under $10,000

March 18, 2018 - 16:00

There aren’t a ton of ways to build a house other than the way houses have always been built, which is to say, by putting up four walls then adding a roof. This ages-old technique had to be modernized at some point, though, and as with everything else in our lives these days, technology’s delivering that modernization. In this case, instead of being built the old-fashioned way, houses can now be printed.

Last week at the South By Southwest festival in Austin, Texas, construction technologies startup ICON and housing nonprofit New Story unveiled their version of a 3D printed house. The model is 650 square feet and consists of a living room, kitchen, bedroom, bathroom, and shaded porch. It went from zero to finished in under 24 hours, and it cost less than $10,000. Equivalent homes built in developing countries will cost a mere $4,000 each.

This isn’t the first 3D printed house to spring up (or, rather, to be plopped down); there are similar structures created with similar technology in Russia, Dubai, Amsterdam, and elsewhere, but this is the first permitted 3D printed home to go up in the US.

ICON’s crane-like printer is called the Vulcan, and it pours a concrete mix into a software-dictated pattern; instead of one wall going up at a time, one layer is put down at a time, the whole structure “growing” from the ground up. The printer consists of an axis set on a track, giving it a flexible and theoretically unlimited print area.

“With 3D printing, you not only have a continuous thermal envelope, high thermal mass, and near zero-waste, but you also have speed, a much broader design palette, next-level resiliency, and the possibility of a quantum leap in affordability,” said Jason Ballard, ICON’s co-founder. “This isn’t 10 percent better, it’s 10 times better.”

The house has a greater purpose than just wowing techies, though. ICON and New Story’s vision is one of 3D printed houses acting as a safe, affordable housing alternative for people in need. New Story has already built over 800 homes in Haiti, El Salvador, Bolivia, and Mexico, partnering with the communities they serve to hire local labor and purchase local materials rather than shipping everything in from abroad.

New Story is in the process of raising $600,000 to fund a planned 100-home community in El Salvador. It will be the first-ever community of 3D printed homes. Printing will begin later this year, and the goal is for families to be moving in by Q3 of 2019. Donors can fund a full house with just $4,000.

Six hundred and fifty square feet may not sound like much space for more than one to two people, but it’s a huge step up from the lean-tos and shacks that make up the slums where millions of people live. ICON and New Story hope the Salvadorian community will serve as a scalable model that can be exported to developing countries around the world, providing a high-quality housing option for the millions who currently lack one.

Image Credit: Adam Brophy

“Instead of waiting for profit motivation to bring construction advances to the global south, we are fast-tracking innovations like 3D home printing that can be a powerful tool toward ending homelessness,” said Alexandria Lafci, COO of New Story.

The homes are built to the International Building Code structural standard and are expected to last as long or longer than standard concrete masonry unit homes.

While 3D printed houses are a great alternative to the flimsy lean-tos millions of people call home, there are some limitations to consider in terms of them being a solution to global housing shortages.

The biggest need for affordable, safe housing in the developing world is in or near big cities; take the slums of Cape Town, Nairobi, or Mumbai as an example. Replacing families’ current homes in these locations with printed houses may prove difficult simply due to space constraints; 3D printed communities are far more practical in rural areas where there’s less population density, and may not be a truly scalable solution in urban areas until the communities get vertical. 3D printed high-rises are already in the works, though not yet for the purpose of affordable housing.

If skyscrapers can be printed and used as offices, it’s only a matter of time before they can be used for housing as well. And in the meantime, $4,000 a pop for a safe, cozy home where there was no home before is a solid step in the right direction.

Image Credit: New Story


This Week’s Awesome Stories From Around the Web (Through March 17)

March 17, 2018 - 16:00

China Wants to Shape the Global Future of Artificial Intelligence
Will Knight | MIT Technology Review
“China’s booming AI industry and massive government investment in the technology have raised fears in the US and elsewhere that the nation will overtake international rivals in a fundamentally important technology. In truth, it may be possible for both the US and the Chinese economies to benefit from AI. But there may be more rivalry when it comes to influencing the spread of the technology worldwide. ‘I think this is the first technology area where China has a real chance to set the rules of the game,’ says Ding.”


Astronaut’s Gene Expression No Longer Same as His Identical Twin, NASA Finds
Susan Scutti | CNN
“Preliminary results from NASA’s Twins Study reveal that 7% of astronaut Scott Kelly’s genetic expression—how his genes function within cells—did not return to baseline after his return to Earth two years ago. The study looks at what happened to Kelly before, during and after he spent one year aboard the International Space Station through an extensive comparison with his identical twin, Mark, who remained on Earth.”


This Cheap 3D-Printed Home Is a Start for the 1 Billion Who Lack Shelter
Tamara Warren | The Verge
“ICON has developed a method for printing a single-story 650-square-foot house out of cement in only 12 to 24 hours, a fraction of the time it takes for new construction. If all goes according to plan, a community made up of about 100 homes will be constructed for residents in El Salvador next year. The company has partnered with New Story, a nonprofit that is vested in international housing solutions. ‘We have been building homes for communities in Haiti, El Salvador, and Bolivia,’ Alexandria Lafci, co-founder of New Story, tells The Verge.”


Our Microbiomes Are Making Scientists Question What It Means to Be Human
Rebecca Flowers | Motherboard
“Studies in genetics and Watson and Crick’s discovery of DNA gave more credence to the idea of individuality. But as scientists learn more about the microbiome, the idea of humans as a singular organism is being reconsidered: ‘There is now overwhelming evidence that normal development as well as the maintenance of the organism depend on the microorganisms…that we harbor,’ they state (others have taken this position, too).”


Stephen Hawking, Who Awed Both Scientists and the Public, Dies
Joe Palca | NPR
“Hawking was probably the best-known scientist in the world. He was a theoretical physicist whose early work on black holes transformed how scientists think about the nature of the universe. But his fame wasn’t just a result of his research. Hawking, who had a debilitating neurological disease that made it impossible for him to move his limbs or speak, was also a popular public figure and best-selling author. There was even a biopic about his life, The Theory of Everything, that won an Oscar for the actor, Eddie Redmayne, who portrayed Hawking.”

Image Credit: NASA/JPL-Caltech/STScI


Stephen Hawking: Martin Rees Looks Back on Colleague’s Spectacular Success Against All Odds

March 16, 2018 - 16:00

Soon after I enrolled as a graduate student at Cambridge University in 1964, I encountered a fellow student, two years ahead of me in his studies, who was unsteady on his feet and spoke with great difficulty. This was Stephen Hawking. He had recently been diagnosed with a degenerative disease, and it was thought that he might not survive long enough even to finish his PhD. But he lived to the age of 76, passing away on March 14, 2018.

It really was astonishing. Astronomers are used to large numbers. But few numbers could be as large as the odds I’d have given against witnessing this lifetime of achievement back then. Even mere survival would have been a medical marvel, but of course he didn’t just survive. He became one of the most famous scientists in the world—acclaimed as a world-leading researcher in mathematical physics, for his best-selling books and for his astonishing triumph over adversity.

Perhaps surprisingly, Hawking was rather laid back as an undergraduate student at Oxford University. Yet his brilliance earned him a first class degree in physics, and he went on to pursue a research career at the University of Cambridge. Within a few years of the onset of his disease, he was wheelchair-bound, and his speech was an indistinct croak that could only be interpreted by those who knew him. In other respects, fortune had favored him. He married a family friend, Jane Wilde, who provided a supportive home life for him and their three children.

Early Work

The 1960s were an exciting period in astronomy and cosmology. This was the decade when evidence began to emerge for black holes and the Big Bang. In Cambridge, Hawking focused on the new mathematical concepts being developed by the mathematical physicist Roger Penrose, then at University College London, which were initiating a renaissance in the study of Einstein’s theory of general relativity.

Using these techniques, Hawking worked out that the universe must have emerged from a “singularity”—a point at which all laws of physics break down. He also realized that the area of a black hole’s event horizon—a boundary from which nothing can escape—could never decrease. In the subsequent decades, the observational support for these ideas has strengthened—most spectacularly with the 2016 announcement of the detection of gravitational waves from colliding black holes.

Hawking was elected to the Royal Society, Britain’s main scientific academy, at the exceptionally early age of 32. He was by then so frail that most of us suspected that he could scale no further heights. But, for Hawking, this was still just the beginning.

He worked in the same building as I did. I would often push his wheelchair into his office, and he would ask me to open an abstruse book on quantum theory—the science of atoms, not a subject that had hitherto much interested him. He would sit hunched motionless for hours—he couldn’t even turn the pages without help. I remember wondering what was going through his mind, and if his powers were failing. But within a year, he came up with his best ever idea—encapsulated in an equation that he said he wanted on his memorial stone.

Scientific Stardom

The great advances in science generally involve discovering a link between phenomena that seemed hitherto conceptually unconnected. Hawking’s “eureka moment” revealed a profound and unexpected link between gravity and quantum theory: he predicted that black holes would not be completely black, but would radiate energy in a characteristic way.
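In what became the standard formula—reportedly the equation he wanted on his memorial—the temperature of this radiation is inversely proportional to the black hole’s mass:

```latex
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{\mathrm{B}}}
```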

This radiation is only significant for black holes that are much less massive than stars—and none of these have been found. However, “Hawking radiation” had very deep implications for mathematical physics—indeed one of the main achievements of a theoretical framework for particle physics called string theory has been to corroborate his idea.

Indeed, the string theorist Andrew Strominger from Harvard University (with whom Hawking recently collaborated) said that this paper had caused “more sleepless nights among theoretical physicists than any paper in history.” The key issue is whether information that is seemingly lost when objects fall into a black hole is in principle recoverable from the radiation when it evaporates. If it is not, this violates a deeply believed principle of general physics. Hawking initially thought such information was lost, but later changed his mind.

Hawking continued to seek new links between the very large (the cosmos) and the very small (atoms and quantum theory) and to gain deeper insights into the very beginning of our universe—addressing questions like “was our big bang the only one?”. He had a remarkable ability to figure things out in his head. But he also worked with students and colleagues who would write formulas on a blackboard—he would stare at it, say whether he agreed and perhaps suggest what should come next.

He was especially influential in his contributions to “cosmic inflation”—a theory that many believe describes the ultra-early phases of our expanding universe. A key issue is to understand the primordial seeds which eventually develop into galaxies. Hawking proposed (as, independently, did the Russian theorist Viatcheslav Mukhanov) that these were “quantum fluctuations” (temporary changes in the amount of energy at a point in space)—somewhat analogous to those involved in “Hawking radiation” from black holes.

He also made further steps towards linking the two great theories of 20th century physics: the quantum theory of the microworld and Einstein’s theory of gravity and space-time.

Declining Health and Cult Status

In 1987, Hawking contracted pneumonia. He had to undergo a tracheotomy, which removed even the limited powers of speech he then possessed. It had been more than ten years since he could write, or even use a keyboard. Without speech, the only way he could communicate was by directing his eye towards one of the letters of the alphabet on a big board in front of him.

But he was saved by technology. He still had the use of one hand; and a computer, controlled by a single lever, allowed him to spell out sentences. These were then declaimed by a speech synthesizer, with the androidal American accent that thereafter became his trademark.

His lectures were, of course, pre-prepared, but conversation remained a struggle. Each word involved several presses of the lever, so even a sentence took several minutes to construct. He learnt to economize with words. His comments were aphoristic or oracular, but often infused with wit. In his later years, he became too weak to control this machine effectively, even via facial muscles or eye movements, and his communication—to his immense frustration—became even slower.

Stephen Hawking in zero gravity with Peter Diamandis. Image Credit: Jim Campbell

At the time of his tracheotomy operation, he had a rough draft of a book, which he’d hoped would describe his ideas to a wide readership and earn something for his two eldest children, who were then of college age. On his recovery from pneumonia, he resumed work with the help of an editor. When the US edition of A Brief History of Time appeared, the printers made some errors (a picture was upside down), and the publishers tried to recall the stock. To their amazement, all copies had already been sold. This was the first inkling that the book was destined for runaway success, reaching millions of people worldwide.

And he quickly became something of a cult figure, featuring on popular TV shows ranging from The Simpsons to The Big Bang Theory. This was probably because the concept of an imprisoned mind roaming the cosmos plainly grabbed people’s imagination. If he had achieved equal distinction in, say, genetics rather than cosmology, his triumph probably wouldn’t have achieved the same resonance with a worldwide public.

As shown in the feature film The Theory of Everything, which tells the human story behind his struggle, Hawking was far from being the archetypal unworldly or nerdish scientist. His personality remained amazingly unwarped by his frustrations and handicaps. He had robust common sense, and was ready to express forceful political opinions.

However, a downside of his iconic status was that his comments attracted exaggerated attention even on topics where he had no special expertise—for instance, philosophy, or the dangers from aliens or from intelligent machines. And he was sometimes involved in media events where his “script” was written by the promoters of causes about which he may have been ambivalent.

Ultimately, Hawking’s life was shaped by the tragedy that struck him when he was only 22. He himself said that everything that happened since then was a bonus. And what a triumph his life has been. His name will live in the annals of science and millions have had their cosmic horizons widened by his best-selling books. He has also inspired millions by a unique example of achievement against all the odds—a manifestation of amazing willpower and determination.

This article was originally published on The Conversation. Read the original article.

Image Credit: NASA/Paul E. Alers


Everyone Is Talking About AI—But Do They Mean the Same Thing?

March 15, 2018 - 16:00

In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, “AI” has become a hot buzzword. But does it even exist yet?

At the World Economic Forum, Dr. Kai-Fu Lee, a Taiwanese venture capitalist and the founding president of Google China, remarked, “I think it’s tempting for every entrepreneur to package his or her company as an AI company, and it’s tempting for every VC to want to say ‘I’m an AI investor.’” He then observed that some of these AI bubbles could burst by the end of 2018, referring specifically to “the startups that made up a story that isn’t fulfillable, and fooled VCs into investing because they don’t know better.”

However, Dr. Lee firmly believes AI will continue to progress and will take many jobs away from workers. So, what is the difference between legitimate AI, with all of its pros and cons, and a made-up story?

If you parse through just a few stories that are allegedly about AI, you’ll quickly discover significant variation in how people define it, with a blurred line between emulated intelligence and machine learning applications.

I spoke to experts in the field of AI to try to find consensus, but the very question opens up more questions. For instance, when is it important to be accurate to a term’s original definition, and when does that commitment to accuracy amount to the splitting of hairs? It isn’t obvious, and hype is oftentimes the enemy of nuance. Additionally, there is now a vested interest in that hype—$12 billion, to be precise.

This conversation is also relevant because world-renowned thought leaders have been publicly debating the dangers posed by AI. Facebook CEO Mark Zuckerberg suggested that naysayers who attempt to “drum up these doomsday scenarios” are being negative and irresponsible. On Twitter, business magnate and OpenAI co-founder Elon Musk countered that Zuckerberg’s understanding of the subject is limited. In February, Elon Musk engaged again in a similar exchange with Harvard professor Steven Pinker. Musk tweeted that Pinker doesn’t understand the difference between functional/narrow AI and general AI.

Given the fears surrounding this technology, it’s important for the public to clearly understand the distinctions between different levels of AI so that they can realistically assess the potential threats and benefits.

As Smart As a Human?

Erik Cambria, an expert in the field of natural language processing, told me, “Nobody is doing AI today and everybody is saying that they do AI because it’s a cool and sexy buzzword. It was the same with ‘big data’ a few years ago.”

Cambria mentioned that AI, as a term, originally referenced the emulation of human intelligence. “And there is nothing today that is even barely as intelligent as the most stupid human being on Earth. So, in a strict sense, no one is doing AI yet, for the simple fact that we don’t know how the human brain works,” he said.

He added that the term “AI” is often used in reference to powerful tools for data classification. These tools are impressive, but they’re on a totally different spectrum than human cognition. Additionally, Cambria has noticed people claiming that neural networks are part of the new wave of AI. This is bizarre to him because that technology already existed fifty years ago.

However, technologists no longer need to perform the feature extraction by themselves. They also have access to greater computing power. All of these advancements are welcomed, but it is perhaps dishonest to suggest that machines have emulated the intricacies of our cognitive processes.

“Companies are just looking at tricks to create a behavior that looks like intelligence but that is not real intelligence, it’s just a mirror of intelligence. These are expert systems that are maybe very good in a specific domain, but very stupid in other domains,” he said.

This mimicry of intelligence has inspired the public imagination. Domain-specific systems have delivered value in a wide range of industries. But those benefits have not lifted the cloud of confusion.

Assisted, Augmented, or Autonomous

When it comes to matters of scientific integrity, the issue of accurate definitions isn’t a peripheral matter. In a 1974 commencement address at the California Institute of Technology, Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” In that same speech, Feynman also said, “You should not fool the layman when you’re talking as a scientist.” He opined that scientists should bend over backwards to show how they could be wrong. “If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.”

In the case of AI, this might mean that professional scientists have an obligation to clearly state that they are developing extremely powerful, controversial, profitable, and even dangerous tools, which do not constitute intelligence in any familiar or comprehensive sense.

The term “AI” may have become overhyped and confused, but there are already some efforts underway to provide clarity. A recent PwC report drew a distinction between “assisted intelligence,” “augmented intelligence,” and “autonomous intelligence.” Assisted intelligence is demonstrated by the GPS navigation programs prevalent in cars today. Augmented intelligence “enables people and organizations to do things they couldn’t otherwise do.” And autonomous intelligence “establishes machines that act on their own,” such as autonomous vehicles.

Roman Yampolskiy is an AI safety researcher who wrote the book “Artificial Superintelligence: A Futuristic Approach.” I asked him whether the broad and differing meanings might present difficulties for legislators attempting to regulate AI.

Yampolskiy explained, “Intelligence (artificial or natural) comes on a continuum and so do potential problems with such technology. We typically refer to AI which one day will have the full spectrum of human capabilities as artificial general intelligence (AGI) to avoid some confusion. Beyond that point it becomes superintelligence. What we have today and what is frequently used in business is narrow AI. Regulating anything is hard, technology is no exception. The problem is not with terminology but with complexity of such systems even at the current level.”

When asked if people should fear AI systems, Dr. Yampolskiy commented, “Since capability comes on a continuum, so do problems associated with each level of capability.” He mentioned that accidents are already reported with AI-enabled products, and as the technology advances further, the impact could spread beyond privacy concerns or technological unemployment. These concerns about the real-world effects of AI will likely take precedence over dictionary-minded quibbles. However, the issue is also about honesty versus deception.

Is This Buzzword All Buzzed Out?

Finally, I directed my questions towards a company that is actively marketing an “AI Virtual Assistant.” Carl Landers, the CMO at Conversica, acknowledged that there are a multitude of explanations for what AI is and isn’t.

He said, “My definition of AI is technology innovation that helps solve a business problem. I’m really not interested in talking about the theoretical ‘can we get machines to think like humans?’ It’s a nice conversation, but I’m trying to solve a practical business problem.”

I asked him if AI is a buzzword that inspires publicity and attracts clients. According to Landers, this was certainly true three years ago, but those effects have already started to wane. Many companies now claim to have AI in their products, so it’s less of a differentiator. However, there is still a specific intention behind the word. Landers hopes to convey that previously impossible things are now possible. “There’s something new here that you haven’t seen before, that you haven’t heard of before,” he said.

According to Brian Decker, founder of Encom Lab, machine learning algorithms only work to satisfy their preexisting programming, not out of an interior drive for better understanding. Therefore, he views the debate over AI as an entirely semantic argument.

Decker stated, “A marketing exec will claim a photodiode controlled porch light has AI because it ‘knows when it is dark outside,’ while a good hardware engineer will point out that not one bit in a register in the entire history of computing has ever changed unless directed to do so according to the logic of preexisting programming.”

Although it’s important for everyone to be on the same page regarding specifics and underlying meaning, AI-branded products are already moving past these debates by creating immediate value for humans. And ultimately, humans care more about value than they do about semantic distinctions. In an interview with Quartz, Kai-Fu Lee revealed that algorithmic trading systems have already given him an 8X return over his private banking investments. “I don’t trade with humans anymore,” he said.

Image Credit: vrender /

Kategorie: Transhumanismus

In Landmark Study, Human Stem Cells Restore Monkeys’ Movement After Spinal Cord Injury

14 Březen, 2018 - 16:30

Stem cell therapy is highly attractive in its intuitive simplicity: you clean out injured cells, plop down a gang of healthy replacements, sit back, and wait for the body to heal itself.

For spinal cord injuries the potential of stem cells to restore movement seems especially within reach.

But as it happens, the body isn’t quite such a simple find-and-replace system. When transplanted alone, stem cells often don’t take, dying off inside the host’s hostile environment before they have a chance to restore function.

For the last three decades, neuroscientists have been scratching their heads, testing cocktail after cocktail of special molecules to boost stem cell survival. And while there have been successes in rodent models, scaling the therapy to work in primates—a critical step towards human trials—has foundered.

Until now. Last month, a “landmark” study published in Nature Medicine detailed a recipe for transplanted human stem cells to survive and integrate inside the injured spines of monkeys.

Nine months after surgery, the cells extended hundreds of thousands of branches that formed synapses with the monkeys’ surviving spinal cord neurons. What’s more, the hosts’ spinal neurons also welcomed the human cells as their own, reaching out to form new connections that restored the animals’ ability to grasp objects.

“The growth we observe from these cells is remarkable—and unlike anything I thought possible even ten years ago,” said lead author Dr. Mark Tuszynski at the University of California, San Diego Translational Neuroscience Institute. “We definitely have more confidence to do this type of treatment in humans.”

The Rodent Trap

Trauma to the spinal cord shears the long, delicate neuronal branches—the axons—that the brain uses to talk to the rest of the body. To restore motor function, scientists need to coax the body to repair or regenerate these connections.

But here’s the problem. After injury the spinal cord quickly reorganizes the extracellular matrix—an intricate web of structural molecules—around the damaged site. Like roadblocks, these proteins effectively inhibit transplanted stem cells from extending out their long-reaching axon branches. What’s more, the injured site also lacks supportive growth factors and other beneficial molecules that act as a nurturing cocoon for the stem cells.

To get around this double whammy, scientists have formulated dozens of growth-promoting cocktails to give the transplanted stem cells a boost. The strategy seems to work.

Back in 2014, Tuszynski took skin cells from a healthy human donor, reprogrammed them into induced pluripotent stem cells (iPSCs), and embedded these artificial stem cells into a matrix containing growth factors.

After grafting into rats with two-week-old spinal cord injuries, the human cells fully matured into new neurons, extending axons along the rats’ spinal cord. But shockingly, the team didn’t see any improvement in function, partially due to scarring at the transplant site.

“We are trying to do as much as we possibly can to identify the best way of translating neural stem cell therapies for spinal cord injury to patients,” Tuszynski said at the time.

A New Hope

True to his word, Tuszynski took his transplant protocol to monkeys, which are a much better model for the human spinal cord.

The team cut into a section of the monkey’s spinal cord, and two weeks later—a good approximation for the wait time for patients to stabilize—grafted human stem cells into the injured site along with growth factors.

The transplanted human cells were genetically engineered to glow bright green under fluorescent light. After transplant at the injured site (labeled “graft”), the cells shot out hundreds of thousands of brand new connections (white arrows) within the monkey spinal cord. Image Credit: Mark Tuszynski, UC San Diego School of Medicine

It didn’t work. In the first four monkeys, the grafts didn’t even stay in place.

“Had we attempted human transplantation without prior large animal testing, there would have been substantial risk of clinical trial failure,” Tuszynski said.

The team quickly realized that they needed to boost the amount of a crucial protein ingredient in their recipe to better “glue” the graft in place.

The team also found issues with immunosuppression, timing, and the surgical procedure. For example, they had to tilt the surgical table during the operation to prevent the cerebral spinal fluid—a liquid buffer in the spinal cord—from washing the graft away. In addition, the monkeys required a heftier dose of immunosuppressive drugs to prevent the body from attacking the human cells.

With the tweaks in place, the grafts, each containing roughly 20 million human neural stem cells, stayed in place in the remaining five monkeys.

The results were stunning. As early as two months after transplant the team found an explosion of new neuronal branches. From the injured site, the stem cells developed into mature neurons, sprouting up to 150,000 axons that snaked along the monkey’s spinal cord.

Some of the branches traveled as far as 50 millimeters from the graft site, roughly the length of two spinal segments in humans. Along the way, they made extensive connections with the monkeys’ undamaged cells.

Even more promising, the monkeys’ own axons also formed synapses with the human neural graft, forming reciprocal connections. These connections are crucial for voluntary arm movements in humans, and this is some of the first solid evidence that transplanted stem cells could form such circuits.

Nine months later, the new neural connections helped the injured monkeys regain some movement in their affected forelimbs, giving them back the ability to grasp soft, squishy objects (for example, an orange) at will. In contrast, the monkeys with failed grafts had little control over fine movements in their palm and fingers—they could only rest the orange on their knuckles.

That may not seem like much, but the authors say that nine months is just a blink in time for functional recovery.

“Grafts, and the new circuitry they were part of, were still maturing at the end of our observations, so it seems possible that recovery might have continued,” said study author Dr. Ephron Rosenzweig.

Although the functional improvements were only partial, Dr. Gregoire Courtine at the Swiss Federal Institute of Technology in Lausanne (EPFL) calls the study “a landmark in regeneration medicine.”

“It is not surprising given that the functional integration of new cells and connections into the operation of the nervous system would require time and specific rehabilitation procedures,” he said, adding that the study offers valuable insights for potential human use.

Dr. Steve Goldman, a neurologist at the University of Rochester not involved with the work, agrees.

“It’s a big leap to go from rodents to primates,” he said. “This is a really heroic study from that standpoint.”

To Tuszynski, the work is only beginning. For one, not all stem cells are created equal, and his team is trying to determine which ones are most effective at functional repair.

For another, he is also exploring additional ways to further boost the functionality of the regenerated neurons, so that their axons can extend across the injured site and completely replace those lost to injury.

“Patience will be required when moving to humans,” he cautioned, adding that additional safety trials will be necessary before clinical trials. But care pays off.

“There is clearly significant potential here that we hope will benefit humans with spinal cord injury,” he said.

Image Credit: Kamol Jindamanee /

Kategorie: Transhumanismus

An Innovator’s City Guide to Shanghai

14 Březen, 2018 - 15:00

Shanghai is a city full of life. With its population of 24 million, Shanghai embraces vibrant growth, fosters rising diversity, and attracts visionaries, innovators, and adventurers. Fintech, artificial intelligence, and e-commerce are booming. Now is a great time to explore this multicultural, inspirational city as it experiences rapid growth and ever greater influence.

Meet Your Guide

Qingsong (Dora) Ke
Singularity University Chapter: Shanghai
Profession: Associate Director for Asia Pacific, IE Business School and IE University; Mentor, Techstars Startup Weekend; Mentor, Startupbootcamp; China President, Her Century

Your City Guide to Shanghai, China

Top three industries in the city: Automotive, Retail, and Finance

1. Coworking Space: Mixpace

With 10 convenient locations in the Shanghai downtown area, Mixpace offers affordable prices and various office and event spaces to both foreign and local entrepreneurs and startups.

2. Makerspace: XinCheJian

China’s first hackerspace and a non-profit, XinCheJian was founded to support projects in physical computing, open source hardware, and the Internet of Things. It hosts regular events and talks to facilitate the development of hackerspaces in China.

3. Local meetups/networks: FinTech Connector

FinTech Connector is a community connecting local fintech entrepreneurs and start-ups with global professionals, thought leaders, and investors for the purpose of disrupting financial services with cutting-edge technology.

4. Best coffee shop with free WiFi: Seesaw

Clean and modern décor, convenient locations, a quiet environment, and high-quality coffee make Seesaw one of the most popular coffee shops in Shanghai.

5. The startup neighborhood: Knowledge & Innovation Community (KIC)

Located near 10 prestigious universities and over 100 scientific research institutions, KIC attempts to integrate Silicon Valley’s innovative spirit with the artistic culture of the Left Bank in Paris.

6. Well-known investor or venture capitalist: Nanpeng (Neil) Shen

Global executive partner at Sequoia Capital, founding and managing partner at Sequoia China, and co-founder of Ctrip and Home Inn, Neil Shen was named Best Venture Capitalist by Forbes China in 2010–2013 and ranked as the best Chinese investor among Global Best Investors by Forbes in 2012–2016.

7. Best way to get around: Metro

Shanghai’s 17 well-connected metro lines cover every corner of the city at affordable prices, making the metro the best way to get around.

8. Local must-have dish and where to get it: Mini Soupy Bun (steamed dumplings, xiaolongbao) at Din Tai Fung in Shanghai.

Named one of the top ten restaurants in the world by the New York Times, Din Tai Fung makes the best xiaolongbao: delicate steamed dumplings filled with hot soup.

9. City’s best-kept secret: Barber Shop

This underground bar gets its name from the barber shop it’s hidden behind. Visitors must discover how to unlock the door leading to Barber Shop’s sophisticated cocktails and engaging music. (No website for this underground location, but the address is 615 Yongjia Road).

10. Touristy must-do: Enjoy the nightlife and the skyline at the Bund

On the east side of the Bund stand the most modern skyscrapers, including Shanghai Tower, Shanghai World Financial Centre, and Jin Mao Tower. The west side of the Bund features 26 buildings in diverse architectural styles, including Gothic, Baroque, and Romanesque, which give the area its reputation for exotic architecture.

11. Local volunteering opportunity: Shanghai Volunteer

Shanghai Volunteer is a platform to connect volunteers with possible opportunities in various fields, including education, elderly care, city culture, and environment.

12. Local University with great resources: Shanghai Jiao Tong University

Established in 1896, Shanghai Jiao Tong University is the second-oldest university in China and one of the country’s most prestigious. It boasts notable alumni in government and politics, science, engineering, business, and sports, and it regularly collaborates with government and the private sector.

This article is for informational purposes only. All opinions in this post are the author’s alone and not those of Singularity University. Neither this article nor any of the listed information therein is an official endorsement by Singularity University.

Image Credits: Qingsong (Dora) Ke

Banner Image Credit: ESB Professional /

Kategorie: Transhumanismus

What If the AI Revolution Is Neither Utopia nor Apocalypse, but Something in Between?

13 Březen, 2018 - 16:00

Why does everyone assume that the AI revolution will either lead to a fiery apocalypse or a glorious utopia, and not something in between? Of course, part of this is down to the fact that you get more attention by saying “The end is nigh!” or “Utopia is coming!”

But part of it is down to how humans think about change, especially unprecedented change. Millenarianism doesn’t have anything to do with being a “millennial,” being born in the 90s and remembering Buffy the Vampire Slayer. It is a way of thinking about the future that involves a deeply ingrained sense of destiny. A definition might be: “Millenarianism is the expectation that the world as it is will be destroyed and replaced with a perfect world, that a redeemer will come to cast down the evil and raise up the righteous.”

Millenarian beliefs, then, intimately link together the ideas of destruction and creation. They involve the idea of a huge, apocalyptic, seismic shift that will destroy the fabric of the old world and create something entirely new. Similar belief systems exist in many of the world’s major religions, and also the unspoken religion of some atheists and agnostics, which is a belief in technology.

Look at some futurist beliefs around the technological Singularity. In Ray Kurzweil’s vision, the Singularity is the establishment of paradise. Everyone is rendered immortal by biotechnology that can cure our ills; our brains can be uploaded to the cloud; inequality and suffering wash away under the wave of these technologies. The “destruction of the world” is replaced by a Silicon Valley buzzword favorite: disruption. And, as with many millenarian beliefs, your mileage varies on whether this destruction paves the way for a new utopia—or simply ends the world.

There are good reasons to be skeptical and interrogative towards this way of thinking. The most compelling reason is probably that millenarian beliefs seem to be a default mode of how humans think about change; just look at how many variants of this belief have cropped up all over the world.

These beliefs are present in aspects of Christian theology, although they only really became mainstream in their modern form in the 19th and 20th centuries. Consider the Tribulation: many years of hardship and suffering that precede the Rapture, when the righteous will be raised up and the evil punished. After this destruction, the world will be made anew, or humans will ascend to paradise.

Despite being dogmatically atheist, Marxism has many of the same beliefs. It is all about a deterministic view of history that builds to a crescendo. In the same way as Rapture-believers look for signs that prophecies are beginning to be fulfilled, so Marxists look for evidence that we’re in the late stages of capitalism. They believe that, inevitably, society will degrade and degenerate to a breaking point—just as some millenarian Christians do.

In Marxism, this is when the exploitation of the working class by the rich becomes unsustainable, and the working class bands together and overthrows the oppressors. The “tribulation” is replaced by a “revolution.” Sometimes revolutionary figures, like Lenin, or Marx himself, are heralded as messiahs who accelerate the onset of the Millennium; and their rhetoric involves utterly smashing the old system such that a new world can be built. Of course, there is judgment, when the righteous workers take what’s theirs and the evil bourgeoisie are destroyed.

Even Norse mythology has an element of this, as James Hughes points out in his essay in Nick Bostrom’s book Global Catastrophic Risks. Ragnarok involves men and gods being defeated in a final, apocalyptic battle—but because that was a little bleak, the myth adds that a new earth will arise where the survivors live in harmony.

Judgment day is a cultural trope, too. Take the ancient Egyptians and their beliefs around the afterlife: the lord of the underworld, Osiris, weighs the mortal’s heart against a feather. “Should the heart of the deceased prove to be heavy with wrongdoing, it would be eaten by a demon, and the hope of an afterlife vanished.”

Perhaps in the Singularity, something similar goes on. As our technology and hence our power improve, a final reckoning approaches: our hearts, as humans, will be weighed against a feather. If they prove too heavy with wrongdoing—with misguided stupidity, with arrogance and hubris, with evil—then we will fail the test, and we will destroy ourselves. But if we pass, and emerge from the Singularity and all of its threats and promises unscathed, then we will have paradise. And, like the other belief systems, there’s no room for non-believers; all of society is going to be radically altered, whether you want it to be or not, whether it benefits you or leaves you behind. A technological rapture.

It almost seems like every major development provokes this response. Nuclear weapons did, too. Either this would prove the final straw and we’d destroy ourselves, or the nuclear energy could be harnessed to build a better world. People talked at the dawn of the nuclear age about electricity that was “too cheap to meter.” The scientists who worked on the bomb often thought that with such destructive power in human hands, we’d be forced to cooperate and work together as a species.

When we see the same response over and over again to different circumstances, cropping up in different areas, whether it’s science, religion, or politics, we need to consider human biases. We like millenarian beliefs; and so when the idea of artificial intelligence outstripping human intelligence emerges, these beliefs spring up around it.

We don’t love facts. We don’t love information. We aren’t as rational as we’d like to think. We are creatures of narrative. Physicists observe the world and weave their observations into narrative theories, stories about little billiard balls whizzing around and hitting each other, or space and time that bend and curve and expand. Historians try to make sense of an endless stream of events. We rely on stories: stories that make sense of the past, justify the present, and prepare us for the future.

And as stories go, the millenarian narrative is a brilliant and compelling one. It can lead you towards social change, as in the case of the Communists, or the Buddhist uprisings in China. It can justify your present-day suffering, if you’re in the tribulation. It gives you hope that your life is important and has meaning. It gives you a sense that things are evolving in a specific direction, according to rules—not just randomly sprawling outwards in a chaotic way. It promises that the righteous will be saved and the wrongdoers will be punished, even if there is suffering along the way. And, ultimately, a lot of the time, the millenarian narrative promises paradise.

We need to be wary of the millenarian narrative when we’re considering technological developments and the Singularity and existential risks in general. Maybe this time is different, but we’ve cried wolf many times before. There is a more likely, less appealing story. Something along the lines of: there are many possibilities, none of them are inevitable, and lots of the outcomes are less extreme than you might think—or they might take far longer than you think to arrive. On the surface, it’s not satisfying. It’s so much easier to think of things as either signaling the end of the world or the dawn of a utopia—or possibly both at once. It’s a narrative we can get behind, a good story, and maybe a nice dream.

But dig a little below the surface, and you’ll find that the millenarian beliefs aren’t always the most promising ones, because they remove human agency from the equation. If you think that, say, the malicious use of algorithms, or the control of superintelligent AI, are serious and urgent problems that are worth solving, you can’t be wedded to a belief system that insists utopia or dystopia are inevitable. You have to believe in the shades of grey—and in your own ability to influence where we might end up. As we move into an uncertain technological future, we need to be aware of the power—and the limitations—of dreams.

Image Credit: Photobank gallery /

Kategorie: Transhumanismus

How to Overhaul Your Business to Take Advantage of the Internet of Things

12 Březen, 2018 - 16:00

If you’re not learning, you’re missing out on earnings

It’s easy to write off the Internet of Things (IoT) as a great technology solution looking for a problem; yet another acronym clogging up the hype cycle.

High-performance organizations, however, see IoT very differently. For them, IoT is already on the front line, where data and machine learning combine to power them exponentially ahead. When these organizations look at IoT, they don’t see a new technology to connect things. Instead, they see a business decision—and a better way to inform it.

A Nervous System For Your Business

The rate of technological innovation continues to dramatically increase the power and miniaturization of mobile phones, computers, and sensors, putting ever smarter, cheaper, and faster services at our fingertips.

For smart organizations, this opens up new opportunities to gather data more granularly in real time, putting it in the hands of decision-makers at all levels of the business. They know IoT is not just about connecting devices; it’s a tool to connect intelligence. This tool can be your business’s eyes, ears, and memory—one that never sleeps and never stops.

Combining IoT devices and machine learning capabilities creates a nervous system for your organization, helping you learn how it operates and how customers interact with your products, then feeding in insights that allow you to make data-informed decisions.

These decisions range from local and tactical—like when to restock shop shelves—to global and strategic, like whether to continue with a brand, supplier, or category. Businesses based around this sort of data-informed decision-making outperform and out-earn their competition.

In short, if you’re not learning, then you’re missing out on earnings. Or worse still, you are investing valuable resources without a clear understanding of the outcome your investment yields.

A healthy IoT nervous system fuels this learning and enables software to better fit with your business reality. In turn, this leads you to better business decisions, a more responsive organization, and higher performance.

The Connected Learning Loop

We developed a simple, three-stage model to help people think through how they might apply IoT to their own business: meet the Connected Learning Loop.

Target Value Hypothesis

Before diving straight into the technology solution, any new initiative needs to target a business hypothesis that these new tools could help address. Key questions to consider include:

What questions do we struggle to address as a business?

Which indicators would help guide our thinking?

Where is the richest source of insight?

Who needs to know more, or is best positioned to act in response to this information if we had it?

Answering these questions with the strengths of connected devices and machine learning capabilities in mind is the first step to a more responsive organization.

Connect and Collect

Once we have defined a question to answer, it’s time to get started collecting data by connecting devices.

This shouldn’t mean providing a mountain of raw data streams from exciting new sensors. Maintain your focus on the business questions you’ve identified, and ensure the data being ingested and analyzed is directed at the hypothesis you’re aiming to gather evidence for.

Machine learning is the key enabler here, taking what might have been a previously unmanageable volume of data and processing it to be decision-ready. For example, these algorithms could turn a raw camera feed of a retail store—manually analyzed by people after the fact—into a responsive feed that includes live counting, path tracking, and group size monitoring of customers in a service environment.
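As a rough illustration of what “decision-ready” means, here is a minimal sketch of counting moving customers in a video feed with the open source OpenCV library. It is not any vendor’s actual pipeline, and the video source and blob-size cutoff are illustrative assumptions:

```python
# Minimal sketch: turn a raw camera feed into a live count of moving
# customers using background subtraction. Real retail analytics
# pipelines are far more sophisticated than this.
import cv2

cap = cv2.VideoCapture("store_camera.mp4")        # hypothetical feed
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                # foreground = movement
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to plausibly be a person
    # (the area cutoff is an assumed, camera-dependent value).
    people = [c for c in contours if cv2.contourArea(c) > 5000]
    print(f"moving customers in frame: {len(people)}")

cap.release()
```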

Actionable Insight

Once data is collected and processed as appropriate, it’s time to trial how well it can inform responses to our targeted value hypothesis.

The most effective applications of collected data are uncovered by putting new information in the hands of those most able to act upon it. This means those on the front line of the business, active on the outer edges, can preempt issues, delight customers, or optimize daily operations in real time based on real evidence. This closes the Connected Learning Loop and feeds insight into additional value hypotheses to target.
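To make the loop concrete, here is a minimal sketch of a single pass through all three stages. The functions read_shelf_sensor, predict_stockout, and notify_staff are hypothetical stand-ins for whatever devices, models, and channels a business actually uses:

```python
# One pass through the Connected Learning Loop, reduced to a skeleton.

def read_shelf_sensor():
    """Connect and Collect: one reading from a connected device."""
    return {"shelf": "A3", "items_remaining": 4}

def predict_stockout(reading):
    """Machine learning step, stubbed as a simple threshold here."""
    return reading["items_remaining"] < 5

def notify_staff(reading):
    """Actionable Insight: route the signal to whoever can act on it."""
    print(f"Restock shelf {reading['shelf']} now: "
          f"only {reading['items_remaining']} items left.")

# Target Value Hypothesis: restocking sooner prevents lost sales.
reading = read_shelf_sensor()
if predict_stockout(reading):
    notify_staff(reading)   # the outcome feeds the next hypothesis
```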

Start Learning Today

The most exciting thing about this approach is that it doesn’t take a multi-year, multi-million-dollar program to start realizing the benefits of IoT. Anyone, in any organization, can get started today.

Together, Transport for London (TfL) and TAB set out to answer a simple question: is there a more efficient way to test the brakes on a Tube train? Existing technology was cumbersome, expensive, and required trains to be removed from service for testing, resulting in costly disruption to the network.

Over the course of just five days, a small team conducted a technical proof-of-concept showing that an iPad could test the brakes of a Tube train as accurately as, and much more cost-effectively than, the existing technology. Over subsequent months, the product was developed and tested alongside the existing solution. The new tool, known as TfL Decelerator, is now in the pilot phase. Across three lines alone, Decelerator is projected to save almost $500,000 per year. Scale that across the network, and the savings are considerable indeed.
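The physics underlying such a tool is simple: braking performance is just the rate at which speed falls. Here is a minimal sketch of that calculation; both the speed samples and the pass threshold are entirely hypothetical, not TfL’s actual test criteria:

```python
# Sketch: estimate average braking deceleration from time-stamped
# speed readings, as a tablet's sensors might log during a test stop.
samples = [  # (time in seconds, speed in m/s) during a braking run
    (0.0, 20.0), (0.5, 17.4), (1.0, 14.9), (1.5, 12.3), (2.0, 9.8),
]

REQUIRED_DECEL = 1.0  # m/s^2, illustrative pass threshold only

(t0, v0), (t1, v1) = samples[0], samples[-1]
decel = (v0 - v1) / (t1 - t0)   # average deceleration over the run

print(f"average deceleration: {decel:.2f} m/s^2")
print("PASS" if decel >= REQUIRED_DECEL else "FAIL")
```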

Experiment With Connected Learning Loops

You don’t need a mountain of time or money to realize the benefits IoT has to offer. As projects like TfL Decelerator show, smartphones and tablets offer connectivity, light, audio, and motion sensors that can provide the minimum viable infrastructure for new insights right out of the box.

All you need is a small and empowered cross-functional team. Give this team a clear question to tackle and get them to work through the Connected Learning Loop. Ensure they are feeding back lessons learned as they go, and use it to inform future actions. This lightweight, small-scale approach to real-world business challenges enables you to trial new innovations and gain the essential evidence you need to see if it’s worth scaling across your entire business.

It’s time to get learning and start earning.

Image Credit: Krunja /

Kategorie: Transhumanismus

If Energy Becomes Free in the Future, How Will That Affect Our Lives?

11 Březen, 2018 - 16:00

Technology is making the cost of many things trend towards zero. Things we used to have to pay a lot for are now cheap or even free—think about how much it costs to buy a computer, make long-distance calls, take pictures, watch movies, listen to music, or even travel to another state or country. Down the road even more of our day-to-day needs will join this list—including, possibly, electricity.

That’s great, right? Because, free stuff! Who doesn’t love free stuff?

The energy case, though, is more complex.

The cost of burning coal can only go so low, but the cost of harvesting energy from the sun just keeps dropping. October 2017 saw bids for a Saudi Arabian solar plant as low as 1.79 cents per kilowatt hour, breaking the previous record in Abu Dhabi of 2.42 cents/kWh. Granted, it’s no coincidence that these uniquely low prices are coming from some of the sunniest parts of the world. For comparison’s sake, the average residential price for electricity in the US in 2017 was 12.5 cents/kWh.
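To put those figures in perspective, here is the back-of-the-envelope arithmetic for a household, assuming a round 10,000 kWh of annual consumption (a rough ballpark for a US home):

```python
# Back-of-the-envelope cost comparison. Annual usage is an assumed
# round number, not a measured figure.
annual_kwh = 10_000          # assumed typical household consumption
saudi_bid = 0.0179           # $/kWh, October 2017 Saudi solar bid
us_residential = 0.125       # $/kWh, average US residential price, 2017

print(f"at the Saudi bid:   ${annual_kwh * saudi_bid:,.0f} per year")
print(f"at US retail rates: ${annual_kwh * us_residential:,.0f} per year")
# ~$179 vs. ~$1,250: roughly a sevenfold difference, before grid costs.
```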

Just when we think prices can’t go any lower, they do—and perhaps the most amazing part about the continual price decline is that it’s in spite of, not thanks to, batteries. Cheap, efficient batteries are still the biggest bottleneck for renewables, but once we figure them out, the sky—or, in this case, the floor?—is truly the limit. It’s also only a matter of time until transparent solar cells become a reality and turn every outdoor glass surface into a small-scale power plant.

So what would a world of free energy for all look like? Electricity would become ubiquitous in the many parts of the world where that’s not yet the case. In other places, electric bills would disappear—but that would be the least of it. Manufacturing costs would plummet, as would transportation costs, as would, well, pretty much all costs.

The money we’d save on energy could be put to use on social programs, maybe even spawning a universal basic income that would help bring about more just and equitable societies. If everything cost less, we wouldn’t need to work as much to earn as much money, freeing up our time to pursue creative endeavors or other personal passions.

There’s a flip side to every coin, though, and the old adage about the best things in life being free unfortunately doesn’t necessarily hold true in this case. Let’s look at what’s happened when we’ve made other resources free or cheap.

In the US we made food cheap and abundant by learning how to process it and manufacture it at scale—and now we’re fatter and sicker than we’ve ever been. We figured out how to produce plastic bottles and bags for pennies, and now the oceans are choked with our abundantly cheap, non-biodegradable garbage.

The Jevons Paradox holds that as technological progress increases the efficiency of a product or resource, the rate of consumption of that resource rises because of increasing demand, effectively canceling out any savings in efficiency. That’s right—humanity appears to be, at our core, a species that takes, and free electricity would be no exception.
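A toy model makes the paradox concrete. If demand responds to price with a constant elasticity greater than one (the value below is purely illustrative), total consumption, and even total spending, rises as the price falls:

```python
# Toy Jevons model: consumption = k * price**(-elasticity).
# The elasticity and baseline figures are illustrative assumptions.
baseline_price = 0.125   # $/kWh
baseline_use = 10_000    # kWh per year at the baseline price
elasticity = 1.2         # >1: demand grows faster than price falls

k = baseline_use * baseline_price**elasticity

for price in (0.125, 0.06, 0.0179):
    use = k * price**(-elasticity)
    print(f"price ${price:.4f}/kWh -> {use:>9,.0f} kWh/yr, "
          f"total spend ${use * price:,.0f}")
# As the price drops ~7x, modeled consumption rises ~10x, so total
# energy drawn (and even total spending) goes up, not down.
```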

Middle Eastern countries, where electricity prices are the cheapest in the world, present a telling example. Excessive use of energy is commonplace, and there’s no incentive to rein in use. Ideally, a country’s energy use per capita should be matched by its GDP per capita, but Kuwait, Bahrain, and Saudi Arabia are all out of balance on this metric, using far more energy than their economic output would justify.

As energy becomes cheaper in other parts of the world, people will use more of it, and the first victim will be the planet. Even though the energy will be renewable, that doesn’t mean there won’t be environmental costs; there could be repercussions we haven’t even imagined yet, just as whoever invented plastic probably never envisioned it poisoning marine life.

So as energy gets cheaper and ultimately moves toward being free, how do we handle its abundance wisely? Government regulation will play a role, and so will market forces, even though near-free energy would weaken the usual price signals. As with any new technological development, we may have a phase of adjustment where we go too far, catch ourselves, and swing back the other way.

Free, clean energy will undeniably bring many benefits with it. But we can’t afford to forget that there’s usually a price to pay, too—it’s just not always obvious from the outset.

Image Credit: Len Green /

Kategorie: Transhumanismus

This Week’s Awesome Stories From Around the Web (Through March 10)

10 Březen, 2018 - 17:20

Google Thinks It’s Close to ‘Quantum Supremacy.’ Here’s What That Really Means.
Martin Giles and Will Knight | MIT Technology Review
“Seventy-two may not be a large number, but in quantum computing terms, it’s massive. This week Google unveiled Bristlecone, a new quantum computing chip with 72 quantum bits, or qubits—the fundamental units of computation in a quantum machine…John Martinis, who heads Google’s effort, says his team still needs to do more testing, but he thinks it’s ‘pretty likely’ that this year, perhaps even in just a few months, the new chip can achieve ‘quantum supremacy.'”


How Project Loon Built the Navigation System That Kept Its Balloons Over Puerto Rico
Amy Nordrum | IEEE Spectrum
“Last year, Alphabet’s Project Loon made a big shift in the way it flies its high-altitude balloons. And that shift—from steering every balloon in a huge circle around the world to clustering balloons over specific areas—allowed the project to provide basic Internet service to more than 200,000 people in Puerto Rico after Hurricane Maria.”


The Grim Conclusions of the Largest-Ever Study of Fake News
Robinson Meyer | The Atlantic
“The massive new study analyzes every major contested news story in English across the span of Twitter’s existence—some 126,000 stories, tweeted by 3 million users, over more than 10 years—and finds that the truth simply cannot compete with hoax and rumor.”


Magic Leap Raises $461 Million in Fresh Funding From the Kingdom of Saudi Arabia
Lucas Matney | TechCrunch
“Magic Leap still hasn’t released a product, but they’re continuing to raise a lot of cash to get there. The Plantation, Florida-based augmented reality startup announced today that it has raised $461 million from the Kingdom of Saudi Arabia’s sovereign investment arm, The Public Investment Fund…Magic Leap has raised more than $2.3 billion in funding to date.”


Social Inequality Will Not Be Solved by an App
Safiya Umoja Noble | Wired
“An app will not save us. We will not sort out social inequality lying in bed staring at smartphones. It will not stem from simply sending emails to people in power, one person at a time…We need more intense attention on how these types of artificial intelligence, under the auspices of individual freedom to make choices, forestall the ability to see what kinds of choices we are making and the collective impact of these choices in reversing decades of struggle for social, political, and economic equality. Digital technologies are implicated in these struggles.”

Image Credit: topseller /

Kategorie: Transhumanismus

Your Shopping Experience Is on the Verge of a Major Transformation. Here’s Why.

9 Březen, 2018 - 17:30

Exponential technologies (AI, VR, 3D printing, and networks) are radically reshaping traditional retail.

E-commerce giants (Amazon, Walmart, Alibaba) are digitizing the retail industry, riding the exponential growth of computation.

Many brick-and-mortar stores have already gone bankrupt, or migrated their operations online.

Massive change is occurring in this arena.

For those “real-life stores” that survive, an evolution is taking place from a product-centric mentality to an experience-based business model by leveraging AI, VR/AR, and 3D printing.

Let’s dive in.

E-Commerce Trends

Last year, 3.8 billion people were connected online. By 2024, thanks to 5G, stratospheric platforms, and space-based satellites, we will grow to 8 billion people online, each with megabit-to-gigabit connection speeds.

These 4.2 billion new digital consumers will begin buying things online, a potential bonanza for the e-commerce world.

At the same time, entrepreneurs seeking to service these four-billion-plus new consumers can now skip the costly steps of procuring retail space and hiring sales clerks.

Today, thanks to global connectivity, contract production, and turnkey pack-and-ship logistics, an entrepreneur can go from an idea to building and scaling a multimillion-dollar business from anywhere in the world in record time.

And while e-commerce sales have been exploding (growing from $34 billion in Q1 2009 to $115 billion in Q3 2017), e-commerce only accounted for about 10 percent of total retail sales in 2017.

In 2016, global online sales totaled $1.8 trillion. Remarkably, this $1.8 trillion was spent by only 1.5 billion people — a mere 20 percent of Earth’s global population that year.
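A quick calculation from those same figures shows the headroom. Naively extrapolating today’s per-shopper spend to the coming wave of new consumers is a deliberately crude assumption, but it conveys the scale:

```python
# Crude headroom estimate built from the article's own figures.
# Extrapolating per-capita spend to new consumers is a simplification.
total_2016 = 1.8e12      # global online sales in 2016, dollars
shoppers_2016 = 1.5e9    # people who did that spending
new_consumers = 4.2e9    # projected new digital consumers

per_capita = total_2016 / shoppers_2016
potential = per_capita * new_consumers

print(f"average online spend per shopper: ${per_capita:,.0f}")
print(f"naive new-market potential: ${potential / 1e12:.1f} trillion")
# $1,200 per shopper; ~$5 trillion if new consumers spent the same.
```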

There’s plenty more room for digital disruption.

AI and the Retail Experience

For the business owner, AI will demonetize e-commerce operations with automated customer service, ultra-accurate supply chain modeling, marketing content generation, and advertising.

In the case of customer service, imagine an AI that is trained by every customer interaction, learns how to answer any consumer question perfectly, and offers feedback to product designers and company owners as a result.

Facebook’s handover protocol allows live customer service representatives and language-learning bots to work within the same Facebook Messenger conversation.

Taking it one step further, imagine an AI that is empathic to a consumer’s frustration, that can take any amount of abuse and come back with a smile every time. As one example, meet Ava. “Ava is a virtual customer service agent, to bring a whole new level of personalization and brand experience to that customer experience on a day-to-day basis,” says Greg Cross, CEO of Ava’s creator, a New Zealand company called Soul Machines.

Predictive modeling and machine learning are also optimizing product ordering and the supply chain process. For example, Skubana, a platform for online sellers, leverages data analytics to provide entrepreneurs constant product performance feedback and maintain optimal warehouse stock levels.
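At its core, this kind of stock-level optimization often comes down to a reorder-point calculation. Here is the minimal textbook version, not Skubana’s proprietary model, with illustrative numbers throughout:

```python
# Classic reorder-point formula: reorder when stock falls to what you
# expect to sell during resupply lead time, plus a safety buffer.
daily_demand = 40     # units per day (in practice, a model's forecast)
lead_time_days = 7    # days for a new shipment to arrive
safety_stock = 60     # buffer against demand spikes

reorder_point = daily_demand * lead_time_days + safety_stock
print(f"reorder when inventory drops to {reorder_point} units")  # 340
```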

Blockchain is set to follow suit in the retail space. ShipChain and Ambrosus plan to introduce transparency and trust into shipping and production, further reducing costs for entrepreneurs and consumers.

Meanwhile, for consumers, personal shopping assistants are shifting the psychology of the standard shopping experience.

Amazon’s Alexa marks an important user interface moment in this regard.

Alexa is in her infancy with voice search and vocal controls for smart homes. Already, Amazon’s Alexa users, on average, spend more when purchasing than standard Amazon Prime customers: $1,700 versus $1,400.

As I’ve discussed in previous posts, the future combination of virtual reality shopping, coupled with a personalized, AI-enabled fashion advisor will make finding, selecting, and ordering products fast and painless for consumers.

But let’s take it one step further.

Imagine a future in which your personal AI shopper knows your desires better than you do. Possible? I think so. After all, our future AIs will follow us, watch us, and observe our interactions — including how long we glance at objects, our facial expressions, and much more.

In this future, shopping might be as easy as saying, “Buy me a new outfit for Saturday night’s dinner party,” followed by a surprise-and-delight moment in which the outfit that arrives is perfect.

In this future world of AI-enabled shopping, one of the most disruptive implications is that advertising is now dead.

In a world where an AI is buying my stuff, and I’m no longer in the decision loop, why would a big brand ever waste money on a Super Bowl advertisement?

The dematerialization, demonetization, and democratization of personalized shopping has only just begun.

The In-Store Experience: Experiential Retailing

In 2017, over 6,700 brick-and-mortar retail stores closed their doors, surpassing the previous record for store closures, set in 2008 during the financial crisis. Regardless, business is still booming.

As shoppers seek the convenience of online shopping, brick-and-mortar stores are tapping into the power of the experience economy.

Rather than focusing on the practicality of the products they buy, consumers are instead seeking out the experience of going shopping.

The Internet of Things, artificial intelligence, and computation are exponentially improving the in-person consumer experience.

As AI dominates curated online shopping, AI and data analytics tools are also empowering real-life store owners to optimize staffing, marketing strategies, customer relationship management, and inventory logistics.

In the short term, retail store locations will serve as the next big user interface for production 3D printing (custom 3D printed clothes at the Ministry of Supply), virtual and augmented reality (DIY skills clinics), and the Internet of Things (checkout-less shopping).

In the long term, we’ll see how our desire for enhanced productivity and seamless consumption balances with our preference for enjoyable real-life consumer experiences — all of which will be driven by exponential technologies.

One thing is certain: the nominal shopping experience is on the verge of a major transformation.


The convergence of exponential technologies has already revamped how and where we shop, how we use our time, and how much we pay.

Twenty years ago, Amazon showed us how the web could offer each of us the long tail of available reading material, and since then, the world of e-commerce has exploded.

And yet we still haven’t experienced the cost savings coming our way from drone delivery, the Internet of Things, tokenized ecosystems, the impact of truly powerful AI, or even the other major applications for 3D printing and AR/VR.

Perhaps nothing will be more transformed than today’s $20 trillion retail sector.

Hold on, stay tuned, and get your AI-enabled cryptocurrency ready.

Join Me

Abundance Digital Online Community: I’ve created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital.

Abundance Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo /

Kategorie: Transhumanismus

How We Can ‘Robot-Proof’ Education to Better Adapt to Automation

8 Březen, 2018 - 17:00

Like millions of other individuals in the workforce, you’re probably wondering if you will one day be replaced by a machine. If you’re a student, you’re probably wondering if your chosen profession will even exist by the time you’ve graduated. From driving to legal research, there isn’t much that technology hasn’t already automated (or begun to automate). Many of us will need to adapt to this disruption in the workforce.

But it’s not enough for students and workers to adapt, become lifelong learners, and re-skill themselves. We also need to see innovation and initiative at an institutional and governmental level. According to research by The Economist, almost half of all jobs could be automated by computers within the next two decades, and no government in the world is prepared for it.

While many see the current trend in automation as a terrifying threat, others see it as an opportunity. In Robot-Proof: Higher Education in the Age of Artificial Intelligence, Northeastern University president Joseph Aoun proposes educating students in a way that will allow them to do the things that machines can’t. He calls for a new paradigm that teaches young minds “to invent, to create, and to discover”—filling the relevant needs of our world that robots simply can’t fill. Aoun proposes a much-needed novel framework that will allow us to “robot-proof” education.

Literacies and Core Cognitive Capacities of the Future

Aoun lays a framework for a new discipline, humanics, which discusses the important capacities and literacies for emerging education systems. At its core, the framework emphasizes our uniquely human abilities and strengths.

The three key literacies include data literacy (being able to manage and analyze big data), technological literacy (being able to understand exponential technologies and conduct computational thinking), and human literacy (being able to communicate and evaluate social, ethical, and existential impact).

Beyond the literacies, at the heart of Aoun’s framework are four cognitive capacities that are crucial to develop in our students if they are to be resistant to automation: critical thinking, systems thinking, entrepreneurship, and cultural agility.

“These capacities are mindsets rather than bodies of knowledge—mental architecture rather than mental furniture,” he writes. “Going forward, people will still need to know specific bodies of knowledge to be effective in the workplace, but that alone will not be enough when intelligent machines are doing much of the heavy lifting of information. To succeed, tomorrow’s employees will have to demonstrate a higher order of thought.”

Like many other experts in education, Joseph Aoun emphasizes the importance of critical thinking. This is important not just when it comes to taking a skeptical approach to information, but also being able to logically break down a claim or problem into multiple layers of analysis. We spend so much time teaching students how to answer questions that we often neglect to teach them how to ask questions. Asking questions—and asking good ones—is a foundation of critical thinking. Before you can solve a problem, you must be able to critically analyze and question what is causing it. This is why critical thinking and problem solving are coupled together.

The second capacity, systems thinking, involves being able to think holistically about a problem. The most creative problem-solvers and thinkers are able to take a multidisciplinary perspective and connect the dots between many different fields. According to Aoun, it “involves seeing across areas that machines might be able to comprehend individually but that they cannot analyze in an integrated way, as a whole.” It represents the absolute opposite of how most traditional curricula are structured, with their emphasis on isolated subjects and content knowledge.

Among the most difficult-to-automate tasks or professions is entrepreneurship.

In fact, some have gone so far as to claim that in the future, everyone will be an entrepreneur. Yet traditionally, initiative has been something students show in spite of or in addition to their schoolwork. For most students, developing a sense of initiative and entrepreneurial skills has often been part of their extracurricular activities. It needs to be at the core of our curricula, not a supplement to it. At its core, teaching entrepreneurship is about teaching our youth to solve complex problems with resilience, to become global leaders, and to solve grand challenges facing our species.

Finally, in an increasingly globalized world, there is a need for more workers with cultural agility: the ability to operate across different cultural contexts and norms.

One of the major trends today is the rise of the contingent workforce. We are seeing an increasing percentage of full-time employees working via the cloud. Multinational corporations have teams of employees collaborating at different offices across the planet. Collaboration across online networks requires a skillset of its own. As education expert Tony Wagner points out, within these digital contexts, leadership is no longer about commanding with top-down authority, but rather about leading by influence.

An Emphasis on Creativity

The framework also puts an emphasis on experiential or project-based learning, wherein the heart of the student experience is not lectures or exams but solving real-life problems and learning by doing, creating, and executing. Unsurprisingly, humans continue to outdo machines when it comes to innovating and pushing intellectual, imaginative, and creative boundaries, making jobs involving these skills the hardest to automate.

In fact, technological trends are giving rise to what many thought leaders refer to as the imagination economy. This is defined as “an economy where intuitive and creative thinking create economic value, after logical and rational thinking have been outsourced to other economies.” Consequently, we need to develop our students’ creative abilities to ensure their success against machines.

In its simplest form, creativity represents the ability to imagine radical ideas and then go about executing them in reality.

In many ways, we are already living in our creative imaginations. Consider this: every invention or human construct—whether it be the spaceship, an architectural wonder, or a device like an iPhone—once existed as a mere idea, imagined in someone’s mind. The world we have designed and built around us is an extension of our imaginations and is only possible because of our creativity. Creativity has played a powerful role in human progress—now imagine what the outcomes would be if we tapped into every young mind’s creative potential.

The Need for a Radical Overhaul

What is clear from the recommendations of Aoun and many other leading thinkers in this space is that an effective 21st-century education system is radically different from the traditional systems we currently have in place. There is a dramatic contrast between these future-oriented frameworks and the way we’ve structured our traditional, industrial-era and cookie-cutter-style education systems.

It’s time for a change, and incremental changes or subtle improvements are no longer enough. What we need to see are more moonshots and disruption in the education sector. In a world of exponential growth and accelerating change, it is never too soon for a much-needed dramatic overhaul.

Image Credit: Besjunior /

Kategorie: Transhumanismus

This Sensor Lets Scientists See Neuron-Level Brain Activity in Real Time

7 Březen, 2018 - 17:00

Picture this: you’re at a boisterous party, trying to listen in on a group conversation. People are talking over each other and going a mile a minute, but you can only pick up snippets from one person at a time.

Confusing? Sure! Frustrating? Absolutely!

Yet this is how neuroscientists eavesdrop on all the electrical chatter going on in our heads. So much depends on understanding these neuronal conversations: deciphering their secret language is key to understanding—and manipulating—the memories, habits, and other cognitive processes that define us.

To monitor the signals zipping through a network of neurons, scientists often stick a tiny electrode into each single contributor and track its activity. It’s not easy to tease out an entire conversation that way—the process is tedious and prone to serious misunderstandings.

“If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” said Dr. Ed Boyden at MIT. A pioneer of optogenetics and the inflatable brain, the neuroscience wunderkind has spent the past decade developing creative neurotechnological toolkits that have sparked excitement and garnered praise.

Now Boyden may have a way to tap into an entire neuronal group chat.

With the help of a robot, the team designed a protein that tunnels into the outer shell, or membrane, of a neuron. If there’s a slight change in the voltage, as when the neuron fires, the protein immediately transforms into a fluorescent torch that’s easy to spot under a microscope.

With a whole network of neurons, the embedded sensors spark like fireworks.

A light-sensitive protein embedded in neuron membranes emits a fluorescent signal related to the amount of voltage in the cell. The method could allow the study of neurons in real time. Image Credit: Kiryl Piatkevich and Erica Jung/MIT

“Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other,” said Boyden.

But the new sensor isn’t even the big advance. The robotic system, pieced together from easily available components, allows other neuroscientists to develop their own sensors.

By releasing the blueprint in Nature Chemical Biology, Boyden and his team hope the community will rapidly evolve stronger and more sensitive activity probes for the brain, thereby lighting the way to finally figuring out what exactly is a thought, a decision, or a feeling.

The Neural Lighthouse

To be fair, Boyden is far from the first to come up with these so-called “voltage sensors.”

But finding the perfect one has eluded neuroscientists for two decades. To precisely report neuronal firing, these proteins need to be able to rapidly turn on their light beams after the neuron fires, with a reaction time on the order of a thousandth of a second, if not faster (neurons spike on a millisecond timescale).

What’s more, they also need to be able to find the best seat in the house: smack on the neuronal membrane, where the voltage change happens, as opposed to inside a cell.

Finally, they need to shine long and bright. Lots of sensors lose their glow rapidly after exposure to light—dubbed “photobleaching,” the bane of neural cartographers. To match neuronal activity to behaviors, the indicators need to stay bright for at least several seconds.

Developing these sensors has traditionally been an extremely tedious affair. Scientists often start with a known sensor, swap some of its constituent molecules with others like Lego pieces, test the resulting new sensor in cells, and hope for the best. The process can take weeks, if not months.

Boyden’s robotic system, on the other hand, screens hundreds of thousands of potential new sensors in a few hours.

It works like this:

In a process that resembles accelerated evolution, the team started with a known light-sensitive sensor and randomly introduced mutations into the protein, making 1.5 million (!!) versions in total.

They then inserted all of the variants into mammalian cells—one variant per cell—and waited for the sensors to reach the cell’s membrane. Next, they programmed a microscope to automatically take photos of the cells.

It’s a powerful algorithm. “This version was modified from previous versions to be compatible with any microscope…camera and/or other optional hardware,” the authors said.

Once the microscope identified each individual cell, a robotic arm sucked the cell up into its own glass tube and examined whether the sensor variant satisfied all the requirements. Here, the team specifically focused on two criteria: the protein’s location and its brightness.

In this way, the team rapidly identified the top five candidates, and then subjected them to another round of mutations generating eight million (!!!) new variants. With help from their trusty robot cell picker, they narrowed the best performers down to seven proteins, which they then characterized using good old electrical recordings to see how fast the sensors responded to voltage fluctuations.

In the end, only two sensors met all criteria, and the authors named them Archon1 and Archon2 respectively.

Normally it’s excruciatingly hard to find sensors that excel in multiple domains, the authors say. The robotic screen works so well because it acts like a multi-round game show. To remain a candidate, each variant has to stand out in each round of testing, whether for its brightness, location, or speed.
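In spirit, the screen is an evolutionary filter applied round after round. Here is a stripped-down toy sketch of that idea; the random scores below stand in for the paper’s real assays of localization, brightness, and speed:

```python
# Toy version of multi-round screening: mutate, score, keep only
# variants that excel in each successive round.
import random

CRITERIA = ["localization", "brightness", "speed"]

def mutate():
    """Generate one random variant with a score per criterion."""
    return {c: random.random() for c in CRITERIA}

def screen(n_variants, cutoff=0.9):
    pool = [mutate() for _ in range(n_variants)]
    for criterion in CRITERIA:
        # Each round tests one property; only standouts survive.
        pool = [v for v in pool if v[criterion] > cutoff]
    return pool

survivors = screen(100_000)
print(f"{len(survivors)} of 100,000 variants passed every round")
```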

“(It’s) a very clever high-throughput screening approach,” said Harvard professor Dr. Adam Cohen, who was not involved in this study. Cohen previously developed a sensor called QuasAr2 (get it?) that Boyden used here as a starting point to generate his mutant forms.

Brain Fireworks

Putting Archon1 to the test, the team inserted the protein into the membranes of cortical neurons in mice. These cells come from the outermost region of the brain—the cortex—often considered the seat of higher cognitive functions.

Archon1 performed fabulously in brain slices from these mice. When stimulated with reddish-orange light, the protein emitted a longer wavelength of red light that tracked the neuron’s voltage swings—the brighter the protein, the higher the voltage.

The sensor was extremely quick on its feet, capable of reporting each time a neuron fired in near real time.
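In principle, turning such a brightness trace into spike times takes little more than a linear calibration and a threshold. The sketch below is purely illustrative: it assumes a hypothetical, perfectly linear sensor, whereas real calibration and spike detection involve far more care.

```python
import numpy as np

def fluorescence_to_voltage(trace, f_rest, f_per_mv, v_rest=-70.0):
    """Map a fluorescence trace (arbitrary units) onto membrane voltage (mV),
    assuming brightness scales linearly with voltage."""
    return v_rest + (trace - f_rest) / f_per_mv

def detect_spikes(voltage, threshold_mv=0.0):
    """Return sample indices where voltage first crosses the threshold upward."""
    above = voltage > threshold_mv
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic example: a noisy resting trace with two brief "action potentials."
rng = np.random.default_rng(0)
trace = 100.0 + rng.normal(0, 0.5, 1000)  # baseline brightness plus noise
trace[300:305] += 40                      # spike 1
trace[700:705] += 40                      # spike 2

v = fluorescence_to_voltage(trace, f_rest=100.0, f_per_mv=0.4)
print(detect_spikes(v))                   # indices near 300 and 700
```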

The team also tested Archon1 in two of neuroscience’s darling translucent animal models: the zebrafish and a tiny worm called C. elegans. Don’t underestimate these critters: zebrafish are often used to study how the brain encodes vision, hearing, movement, or fear, whereas C. elegans has shed light on the circuits that drive eating, socializing, and even sex.

Their see-through bodies make it particularly easy to watch neurons light up in action, thanks to a higher signal-to-noise ratio. As in the mouse brain, Archon1 performed beautifully, rapidly emitting light that lasted at least eight minutes.

“(This) supports recordings of neural activity over behaviorally relevant timescales,” the authors said.

Even cooler, Archon1 can be used in conjunction with optogenetic tools. In a proof-of-concept, the team used blue light to activate a neuron in C. elegans and watched Archon1 light up in response—an amazing visual readout, especially since neuroscientists usually have to rely on electrical recordings to see whether their optogenetic tricks worked.

Brighter Future

The team is now looking to test their sensor in living mice as the animals perform various behaviors and tasks.

The sensor “opens up the exciting possibility of simultaneous recordings of large populations of neurons” and of capturing each individual firing from every single neuron, the authors said. We’ll be watching neural computations happen in real time under the microscope.

And the best is yet to come. Scientific-grade cameras are increasingly capable of capturing images at faster speeds, at higher resolutions, and across a broader field of view. Mapping the brain with Archon1 and future-generation sensors will no doubt yield buckets of new findings and theories about how the brain works.

“Over the next five years or so we’re going to try to solve some small brain circuits completely,” said Boyden.

Image Credit: Rost9 /

Category: Transhumanism

New Malicious AI Report Outlines Biggest Threats of the Next 5 Years

6 March, 2018 - 17:00

Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Watch the synthetic footage of Barack Obama generated from his past speeches, or listen to Lyrebird’s voice impersonations: today, or in the very near future, you could create a forgery that might be indistinguishable from the real thing. What would that do to politics?

Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to respond to genuine dangers, is already undermined by a lack of agreement on the facts. Once you can’t believe the evidence of your own senses, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.

How to solve the problem? Some have suggested that social media platforms like Facebook or Twitter should run software that probes every video to determine whether it’s a deep fake and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.

The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?

The point is, in the same way that you don’t need human-level, general AI or humanoid robotics to create systems that can cause disruption in the world of work, you also don’t need a general intelligence to threaten security and wreak havoc on society. AI researcher Andrew Ng says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” But whether or not he’s right, there are clearly risks that arise even from the simple algorithms we have today.

The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.

Some of the concerns the report explores are enhancements to familiar threats.

Automated hacking can get better and smarter, with algorithms adapting to changing security protocols. “Phishing emails,” where people are scammed by messages impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a great volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets by impersonating family members, but it is labor intensive. If AI algorithms enable every phishing scam to be targeted this precisely, a lot more people are going to get scammed.

Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.

These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A rather scary exploit is the adversarial example. Machine learning algorithms are often used for image recognition. But it’s possible, if you know a little about how the algorithm is structured, to construct the perfect pattern of noise to add to an image and fool the machine. The two images can be almost completely indistinguishable to the human eye, but by adding this cleverly calculated noise, hackers can fool the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing adversarial examples onto physical stickers.
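The core trick can be demonstrated on a toy model. The sketch below applies the “fast gradient sign” idea to a simple linear classifier; it is not the OpenAI demonstration itself, just the same principle: nudge every pixel by a tiny, carefully chosen amount, and the prediction flips.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)   # stand-in for a trained model's weights
x = rng.normal(size=1000)   # stand-in for an image's pixels
label = np.sign(w @ x)      # the model's original prediction (+1 or -1)

# For a linear score w @ x, the gradient with respect to the input is just w.
# Push every "pixel" a tiny step against the predicted class: with enough
# pixels, these tiny steps add up and overwhelm the original score.
epsilon = 0.1               # per-pixel change, far too small to notice
x_adv = x - epsilon * label * np.sign(w)

print("original prediction:   ", np.sign(w @ x))
print("adversarial prediction:", np.sign(w @ x_adv))
print("largest pixel change:  ", np.abs(x_adv - x).max())  # equals epsilon
```

Deep networks are not linear, but they behave linearly enough near any given input that the same one-step attack, computed from the network’s gradient, works alarmingly well.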

Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.

As OpenAI freely admits, worrying about whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”

There are ways to defend against these attacks.

Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.
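As a sketch of what that arms race looks like in code, here is adversarial training on the same kind of toy linear model as above: each update first crafts the worst-case perturbation against the current model, then trains on the perturbed input. All names, the toy ground-truth rule, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(100)           # toy linear model, trained from scratch
epsilon, lr = 0.1, 0.01     # attacker's budget and learning rate

for step in range(5000):
    x = rng.normal(size=100)
    y = 1.0 if x[0] > 0 else -1.0        # toy ground-truth rule
    # Inner step: worst-case perturbation within the epsilon budget.
    x_adv = x - epsilon * y * np.sign(w)
    # Outer step: hinge-loss gradient update on the *perturbed* input.
    if y * (w @ x_adv) < 1.0:
        w += lr * y * x_adv

# Evaluate against fresh attacks on the trained model.
hits = 0
for _ in range(1000):
    x = rng.normal(size=100)
    y = 1.0 if x[0] > 0 else -1.0
    hits += y * (w @ (x - epsilon * y * np.sign(w))) > 0
print("robust accuracy:", hits / 1000)
```

Even in this toy setting, the defense only covers the attack it trained against; change the attacker’s strategy and the model can be blind again.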

Just look at the Meltdown and Spectre vulnerabilities, which lurked unnoticed in processors for more than 20 years and could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.

This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but autonomous or consumer drones, which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.

As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.

Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.

Image Credit: lolloj /

Category: Transhumanism

Hyperloop and Flying Cars Are Battling It Out for the Future of Transportation

5 March, 2018 - 17:00

Tech titans are eager to reimagine how we will travel in the coming decades, but whose vision will win out?

Last week Elon Musk and Uber CEO Dara Khosrowshahi got into a back-and-forth on Twitter over whether flying cars will be the next big thing in transportation. Musk was responding to Khosrowshahi’s reported claim that the technology would make it unnecessary to dig tunnels for a Hyperloop—Musk’s vision of passenger pods flying through vacuum tubes at hundreds of miles per hour.

If you love drones above your house, you’ll really love vast numbers of “cars” flying over your head that are 1000 times bigger and noisier and blow away anything that isn’t nailed down when they land

— Elon Musk (@elonmusk) February 22, 2018

Challenge accepted. Improved battery tech (thx 2 @elonmusk) and multiple smaller rotors will be much more efficient and avoid noise + environmental pollution.

— dara khosrowshahi (@dkhos) February 22, 2018

The Tesla and SpaceX founder pointed out the noise and potential eyesore widespread use of flying cars would cause. Khosrowshahi, whose company is pushing flying cars with its Elevate initiative, responded by saying improved batteries and multiple smaller rotors would reduce both noise and pollution.

Both of their visions for the future of transport are still very much in the concept phase, but the spat raises the interesting question: Which is the more compelling?

Flying cars have been just around the corner for decades, and despite concept vehicles dating back to the middle of the last century, none has ever made it into production. There are reasons to believe things are starting to change though.

Drone technology is providing a compelling model for how to reimagine flying cars. While the concepts of the past were generally cars with wings, recent years have seen the emergence of a new breed of electrically-powered, often autonomous, multi-rotor passenger vehicles.

Chinese startup Ehang released footage last month of people riding in its autonomous passenger drone. According to CNN, the company has been flying passengers since 2015, but this is the first time they’ve released the footage.

Airbus’ self-piloting Vahana multi-rotor concept also completed its first test flight in January, and at the time, the company said it plans to have a production version ready by 2020. As I detailed last July, there are a number of other startups with working prototypes as well.

Building these things is only part of the problem. At present it’s not clear what rules these vehicles would be governed by—standard aviation rules or new drone regulations being formulated to deal with the desire of companies like Amazon to carry out airborne deliveries.

This will probably depend on the level of autonomy these vehicles have. The fully autonomous Ehang is essentially not that different from a delivery drone, just with human cargo. But the vision Uber outlines in its Elevate whitepaper is of human pilots supported by partial autonomy, an arrangement the company says could already operate in urban areas under the same rules as helicopters.

Uber concedes that for the idea to scale, new air traffic control systems able to cope with thousands of drones and flying cars at low altitude will be needed. NASA is currently researching just such a system, due to be implemented by 2025 at the latest, and Uber recently announced it was participating in the project, so the problem may have been solved by the time these vehicles are ready to take to the sky.

It remains to be seen what the business model would be, though. Uber envisages the same kind of ride-hailing service it provides today, but with passengers embarking at dedicated rooftop “Vertiports.”

Given the infrastructure investment required to build a comprehensive network of such hubs, the large upfront costs of a fleet of flying cars, the limited range provided by near-term battery technology, and the cost of training pilots (or developing fool-proof autonomy), it’s hard to see it being an affordable and accessible service in the near term, despite Uber’s case to the contrary.

It may prove a greener, more seamless way for wealthy executives to flit from one rooftop helipad to another. But it seems unlikely to be the immediate solution to the recent finding that ride-hailing companies like Uber and Lyft are actually clogging our roads by pulling people off public transport.

Musk’s Hyperloop vision on the other hand is firmly centered on mass transit. He first outlined the idea of firing passenger pods down a vacuum tube between Los Angeles and San Francisco at 760 miles per hour back in 2013.

Just five years later, there are proposed projects from Chicago to Mumbai and several companies dedicated to bringing his open source design to life—most notably Virgin Hyperloop One, which is backed by billionaire business magnate Richard Branson.

While the Hyperloop is designed to provide a cheaper, greener, and faster alternative to short-haul flights or high-speed rail between cities, Musk has also revealed his visions for a more modest Loop system designed to work within urban areas. This would see a network of tunnels dug beneath a city with autonomous “electric sleds” running at 125 miles per hour that could transport cars or passenger pods between elevators that open directly onto the street.

While Virgin Hyperloop One’s passenger pod has hit a top speed of 192 miles per hour in an experimental 500-meter-long tube, the technology is further from realization than flying cars. And if funding the infrastructure to support the latter seems challenging, the cost of digging hundreds of miles of tunnels and filling them with depressurized, electrified tubes will make you wince.

In an effort to tackle this problem, Musk has started The Boring Company, which aims to cut the cost of tunnelling in the US from as much as $1 billion per mile by more than a factor of 10. But even tens of millions of dollars per mile is a colossal amount of money.

Even if the money can be found, building these tunnels is likely to take a long time, particularly a sub-city network big enough to have a significant impact on congestion. And as UCLA urban planning expert Michael Manville notes in Popular Science, getting permission for even moderate extensions to existing subway networks has come up against decades-long legal challenges.

However, the project’s mass transit focus may make it more likely to garner support from policy-makers than efforts to bring flying cars to our cities. Bent Flyvbjerg, an Oxford University economist specializing in mega-projects, told The Guardian major infrastructure has always relied on large subsidies, so considering the environmental and efficiency gains, he sees no reason why Hyperloops shouldn’t get them too.

He also notes that Musk has been adept at corralling government funding in the form of grants, tax breaks, and environmental credits with his other ventures, even though the Boring Company has said it will not seek public funds for its projects.

Ultimately, flying cars and high-speed, sub-surface travel are probably not in competition. They fill different niches, so the success of one is unlikely to prevent the development of the other.

Would you prefer the future of transportation to be flying cars or Hyperloop?

— Singularity Hub (@singularityhub) March 5, 2018

At the moment, flying cars look closer to realization, but it’s unclear whether they can become more than an upgrade on the private helicopters most of us never use. The combination of Hyperloops and Loops, on the other hand, could live up to Musk’s promise of a “fifth mode of transport” that revolutionizes mass transit…if we can find the money to build it.

Image Credit: u3d /

Category: Transhumanism