Singularity HUB

News and Insights on Technology, Science, and the Future from Singularity University

How Artificial Intelligence Could Help Us Live Longer


What if we could generate novel molecules to target any disease, overnight, ready for clinical trials?

Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

It’s a multibillion-dollar opportunity that can help billions.

The worldwide pharmaceutical market, one of the slowest monolithic industries to adapt, surpassed $1.1 trillion in 2016.

In 2018, the top 10 pharmaceutical companies alone are projected to generate over $355 billion in revenue.

At the same time, it currently costs more than $2.5 billion (sometimes up to $12 billion) and takes over 10 years to bring a new drug to market. Nine out of 10 drugs entering Phase I clinical trials will never reach patients.

As the population ages, we don’t have time to rely on this slow, costly production rate. Some 12 percent of the world population will be 65 or older by 2030, and “diseases of aging” like Alzheimer’s will pose increasingly greater challenges to society.

But a world of pharmaceutical abundance is already emerging.

As artificial intelligence converges with massive datasets in everything from gene expression to blood tests, novel drug discovery is about to become more than 100 times cheaper, faster, and more intelligently targeted.

One of the hottest startups I know in this area is Insilico Medicine.

Leveraging AI in its end-to-end drug pipeline, Insilico Medicine is extending healthy longevity through drug discovery and aging research.

Their comprehensive drug discovery engine uses millions of samples and multiple data types to a) discover signatures of disease and b) identify the most promising targets for billions of molecules. These molecules either already exist or can be generated de novo with the desired set of parameters.

Insilico’s CEO Dr. Alex Zhavoronkov recently joined me on an Abundance Digital webinar to discuss the future of longevity research.

Just this week, Insilico announced the completion of a strategic round of funding led by WuXi AppTec’s Corporate Venture Fund, with participation from Pavilion Capital, Juvenescence, and my venture fund BOLD Capital Partners.

What they’re doing is extraordinary, and it’s an excellent lens through which to view converging exponential technologies.

Case Study: Leveraging AI for Drug Discovery

You’ve likely heard of deep neural nets: multilayered networks of artificial neurons, able to ‘learn’ from massive amounts of data and essentially program themselves.

Build upon deep neural nets, and you get generative adversarial networks (GANs), the revolutionary technology that underpins Insilico’s drug discovery pipeline.

What are GANs?

By pitting two deep neural nets against each other (“adversarial”), GANs enable the imagination and creation of entirely new things (“generative”).

Introduced by Ian Goodfellow and his colleagues in 2014, GANs have been used to output almost photographically accurate pictures from textual descriptions (as seen below).

Source: Reed et al., 2016

Insilico and its researchers are the first in the world to use GANs to generate molecules.

“The GAN technique is essentially an adversarial game between two deep neural networks,” as Alex explains.

While one network generates candidate output from random noise, the other evaluates the generator’s output. Both networks thereby learn to produce increasingly convincing output.
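To make the adversarial game concrete, here is a deliberately tiny, pure-Python sketch (not Insilico’s system; every function and constant is invented for illustration): a two-parameter generator learns to produce samples near the mean of a “real” distribution, while a logistic discriminator tries to tell real from fake, each improving in response to the other.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from a target distribution centered at 4.0.
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator: maps noise z to a candidate sample (parameters a, b).
def generator(z, a, b):
    return a * z + b

# Discriminator: probability that a sample is real (parameters w, c).
def discriminator(x, w, c):
    return sigmoid(w * x + c)

def gan_step(a, b, w, c, lr=0.02):
    z = random.gauss(0.0, 1.0)
    x_real, x_fake = real_sample(), generator(z, a, b)
    d_real, d_fake = discriminator(x_real, w, c), discriminator(x_fake, w, c)
    # Discriminator ascends log D(real) + log(1 - D(fake)).
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1.0 - d_real) - d_fake)
    # Generator ascends log D(fake): it adjusts to fool the discriminator.
    d_fake = discriminator(generator(z, a, b), w, c)
    grad = (1.0 - d_fake) * w  # gradient of log D(G(z)) w.r.t. G's output
    a += lr * grad * z
    b += lr * grad
    return a, b, w, c

a, b, w, c = 1.0, 0.0, 0.1, 0.0
for _ in range(3000):
    a, b, w, c = gan_step(a, b, w, c)
# b, the generator's offset, should drift toward the real mean of 4.0.
```

Real GANs play the same game with deep networks and high-dimensional outputs (images, or in Insilico’s case molecular structures), but the adversarial update loop has this same shape.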

In Insilico’s case, that output consists of perfected molecules. Generating novel molecular structures for diseases both with and without known targets, Insilico is pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s Disease, Alzheimer’s Disease, ALS, diabetes, and many others. Once rolled out, the implications would be profound.

Alex’s ultimate goal is to develop a fully-automated Health as a Service (HaaS) / Longevity as a Service (LaaS) engine. Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.

But what does this tangibly look like?

Insilico’s End-to-End Pipeline

First, Insilico leverages AI—in the form of GANs—to identify targets (as seen in the first stage of their pipeline below). To do this, Insilico uses gene expression data from both healthy tissue samples and those affected by disease. (Targets are the cellular or molecular structures involved in a given pathology that drugs are intended to act on.)

Source: Insilico Medicine via Medium

Within this first pipeline stage, Insilico can identify targets, reconstruct entire disease pathways and understand the regulatory mechanisms that result in aging-related diseases.
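As a cartoon of this first stage, the sketch below flags genes whose mean expression shifts sharply between healthy and diseased samples. It is a crude stand-in for the statistical models a real discovery pipeline would use; the gene names, expression values, and threshold are all invented.

```python
import statistics

# Hypothetical expression readings; gene names and values are invented.
healthy = {"GENE_A": [5.1, 4.9, 5.0], "GENE_B": [2.0, 2.1, 1.9]}
diseased = {"GENE_A": [5.0, 5.2, 4.8], "GENE_B": [6.0, 5.8, 6.2]}

def disease_signature(healthy, diseased, threshold=2.0):
    # Flag genes whose mean expression shifts strongly in disease: a crude
    # stand-in for the statistical models in a real discovery pipeline.
    hits = []
    for gene in healthy:
        shift = statistics.mean(diseased[gene]) - statistics.mean(healthy[gene])
        if abs(shift) >= threshold:
            hits.append(gene)
    return hits

signature = disease_signature(healthy, diseased)
# signature == ["GENE_B"]: only GENE_B is strongly up-regulated in disease.
```

Genes that surface this way become candidate drug targets; the real work lies in the statistics and the scale, not in the loop itself.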

This alone enables breakthroughs for healthcare and medical research. But it doesn’t stop there.

After understanding the underlying mechanisms and causality involved in aging, Insilico uses GANs to ‘imagine’ novel molecular structures. With reinforcement learning, Insilico’s system lets you generate a molecule with any of up to 20 different properties to hit a specified target.

This means that we can now identify targets like never before, and then generate custom molecules de novo such that they hit those specific targets.

At scale, this would also involve designing drugs with minimized side effects, a pursuit Insilico scientist Polina Mamoshina is already advancing in collaboration with Oxford University’s Computational Cardiovascular Team.

One of Insilico’s newest initiatives—to complete the trifecta, if you will—involves predicting the outcomes of clinical trials. While still in the early stages of development, accurate clinical trial predictors would enable researchers to identify ideal pre-clinical candidates.

That’s a 10X improvement from today’s state of affairs.

Currently, over 90 percent of molecules discovered through traditional techniques and tested in mice end up failing in human clinical trials. Accurate clinical trial predictors would result in an unprecedented cutting of cost, time, and overhead in drug testing.

The 6 D’s of Drug Discovery

The digitization and dematerialization of drug discovery has already happened.

Thanks to converging breakthroughs in machine learning, drug discovery and molecular biology, companies like Insilico can now do with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

As computing power improves, we’ll be able to bring novel therapies to market at lightning speeds, at much lower cost, and with no requirement for massive infrastructure and investments. These therapies will demonetize and democratize as a result.

Add to this anticipated breakthroughs in quantum computing, and we’ll soon witness an explosion in the number of molecules that can be tested (with much higher accuracy).

Finally, AI enables us to produce sophisticated, multi-target drugs. “Currently, the pharma model in general is very simplistic. You have one target and one disease—but usually a disease is not one target, it is many targets,” Alex has explained.

Final Thoughts

Inefficient, slow-to-innovate, and risk-averse industries will all be disrupted in the years ahead. Big Pharma is an area worth watching right now, no matter your industry.

Converging technologies will soon enable extraordinary strides in longevity and disease prevention, with companies like Insilico leading the charge.

Fueled by massive datasets, skyrocketing computational power, quantum computing, blockchain-enabled patient access, cognitive surplus capabilities and remarkable innovations in AI, the future of human health and longevity is truly worth getting excited about.

Rejuvenational biotechnology will be commercially available sooner than you think. When I asked Alex for his own projection, he set the timeline at “maybe 20 years—that’s a reasonable horizon for tangible rejuvenational biotechnology.”

Alex’s prediction may even be conservative.

My friend Ray Kurzweil often discusses the concept of “longevity escape velocity”—the point at which, for every year that you’re alive, science is able to extend your life for more than a year.

With a record-breaking prediction accuracy of 86 percent, Ray predicts “It’s likely just another 10 to 12 years before the general public will hit longevity escape velocity.”

How might you use an extra 20 or more healthy years in your life? What impact would you be able to make?

Image Credit: Jackie Niam /

Category: Transhumanism

Leading With Purpose: How to Turn Your Mission Into a Movement

June 21, 2018 - 20:03

Jennifer Dulski is an entrepreneur, social impact change agent, and business leader. Dulski is currently the head of Facebook Groups, which helps more than a billion people participate in communities that matter to them, including topics like health, parenting, and mobilization around disaster response.

Dulski has spent a career launching new ventures and leading global teams at Yahoo! and Google, and most recently served as the COO and president of a global platform that inspires millions of citizens around the world to ignite and support positive change. Dulski’s new book Purposeful: Are You a Manager or a Movement Starter? provides the playbook on how to become a successful catalyst of positive change.

Lisa Kay Solomon: You wrote Purposeful after nearly two decades of leading positive change in education, social impact, and most recently, as head of Groups at Facebook. What inspired you to write this book now?

Jennifer Dulski: I’ve been fortunate in my career to support movement starters from all walks of life and to witness what is possible when everyday people stand up to make the world better. Some are activists, some are entrepreneurs—and all of them are making a difference. In Purposeful I tell their stories and share their lessons, as well as some lessons from my own career, to show how we can all be movement starters for the causes that matter to us. At a time when our world seems increasingly divided, I believe it is important to highlight what can happen when we come together with a common purpose.

LKS: In Purposeful, you describe a new type of leader you call “movement starters.” Can you briefly describe some of the core aspects of a movement starter and some patterns of successful ones?

JD: I draw a distinction between managers and movement starters. Managers do their best with what they are given, and movement starters push to go beyond what is currently possible and mobilize others. I’ve seen that successful movement starters, regardless of cause or industry, are all effective at the same core skills: creating a compelling vision, mobilizing support, effectively persuading decision-makers, navigating criticism, and overcoming obstacles.

In Purposeful, I walk through these steps in detail, highlighting stories and tips from leaders who illustrate each one. We can all learn from a young woman with Down’s Syndrome who helped persuade Congress to pass the largest law benefiting disabled Americans since 1990, an entrepreneur revolutionizing the way we think about personal nutrition, and a high school student who convinced multinational beverage companies to remove a harmful chemical from their products, among many others. My hope is that by offering tangible advice alongside inspiring stories, people will feel empowered to stand up and start their own movements.

LKS: You talk about how important building allies and connections are in the process of fostering movements. What are some strategies for doing that in a time when we seem increasingly divided?

JD: There are two strategies I have seen to be particularly effective at building support for a movement. The first is having the courage to share a personal story. The more vulnerable people are willing to be in sharing why something matters to them, the more others will rally behind them. Sharing your personal story will help make connections with others who may have had a similar experience or feel the same way. And given the technology that’s available to all of us now, it’s easier than ever to spread these stories and mobilize people quickly.

The second strategy I’ve seen work well is to trust those around us. It’s tempting to think we need to do everything ourselves or be afraid to ask for help. Unfortunately, movements don’t exist with just a single, passionate person; they need a team of supporters. By trusting people around us to participate and asking for help when we need it, we can mobilize armies of support.

LKS: In a world that seems dominated by speed, you talk about the importance of pacing. This comes, in part, from your early experiences as a coxswain of a national champion crew team. Can you share more about this?

JD: While it’s possible for movements to prompt change quickly, most movements build over time with determination, patience, and ongoing action. Motivation of teams is as much an art as it is a science, and when you are building a movement with the help of others, it’s crucial to know the fine line between inspiring people and pushing them too hard.

In rowing, there’s a technique called a “Power 10” when rowers in a boat take ten strokes at their absolute maximum power, usually to try to move past another boat in a race. As a high school and collegiate coxswain, I was responsible for deciding when to take a Power 10. I found that a team can usually take 2–3 in any race—more than that and they stop being effective because the team gets too tired; too few, and you may end up behind another team who’s taking its own Power 10.

This same idea is applicable for leaders of movements. When you need to rally people behind your vision and ensure they feel bought in, a few well-placed sprints or “Power 10s” can work miracles. The key is to know the most strategic time to call for a Power 10—such as having a deadline before a big decision or brainstorming to overcome a particular obstacle—and to use them sparingly.

LKS: In nearly every powerful movement or entrepreneurial effort, there are inevitable setbacks and obstacles. What are some effective ways of getting through them?

JD: Whether you’re trying to enact change in your workplace, build a company, or get legislation passed, you are going to face criticism and obstacles. One key to surviving these challenges is to expect them. The more you can get comfortable knowing setbacks will be part of the package, the easier it becomes to navigate them. My daughters had a great math teacher in elementary school who used to tell them that math wasn’t about getting the right answer, it was about “the struggle.” The best mathematicians were the ones who could keep working on the same problem for years, through many failed attempts, without giving up until they finally solved it. And of course, each attempt taught them something new about what would and would not work.

The same is true of movement starters. Whether traditional activists or entrepreneurs, those that can master the struggle are the ones most likely to be successful. How they do that varies. For example, you can try to leverage naysayers to your advantage, or view yourself as a professional athlete. The key is resilience, because as Mary Pickford said, “This thing we call ‘failure’ is not the falling down, but the staying down.”

LKS: You talk about movements creating a sense of hope. What gives you hope these days?

JD: All the stories featured in the book give me hope, as does the renewed wave of activism we are seeing in communities all around us. From teenagers to grandparents, and from veterans to violent crime survivors, people all over the world are rallying others to create change.

While we live in a world that is increasingly divided, hope lives within all of us. It appears in what we do and say, how we treat each other, and what we stand up to fight for. My goal with Purposeful is to give people the belief and the tools they need to turn that hope into movements, whether in their workplaces, their neighborhoods, or the world.

If more people believe they can stand up and start a movement, and muster the courage to do it, imagine how much stronger and more compassionate our world could be.

Image Credit: WHYFRAME /



Why We Need to Fine-Tune Our Definition of Artificial Intelligence

June 20, 2018 - 17:30

Sophia’s uncanny-valley face, made of Hanson Robotics’ patented Frubber, is rapidly becoming an iconic image in the field of artificial intelligence. She has been interviewed on shows like 60 Minutes, made a Saudi citizen, and even appeared before the United Nations. Every media appearance sparks comments about how artificial intelligence is going to completely transform the world. This is pretty good PR for a chatbot in a robot suit.

But it’s also riding the hype around artificial intelligence, and more importantly, people’s uncertainty around what constitutes artificial intelligence, what can feasibly be done with it, and how close various milestones may be.

There are various definitions of artificial intelligence.

For example, there’s the cultural idea (from films like Ex Machina, for example) of a machine that has human-level artificial general intelligence. But human-level intelligence or performance is also seen as an important benchmark for those that develop software that aims to mimic narrow aspects of human intelligence, for example, medical diagnostics.

The latter software might be referred to as narrow AI, or weak AI. Weak it may be, but it can still disrupt society and the world of work substantially.

Then there’s the philosophical idea, championed by Ray Kurzweil, Nick Bostrom, and others, of a recursively self-improving superintelligent AI that would eventually compare to human intelligence the way human intelligence compares to that of bacteria. Such a scenario would clearly change the world in ways that are difficult to imagine and harder to quantify; weighty tomes are devoted to studying how to navigate the perils, pitfalls, and possibilities of this future. The ones by Bostrom and Max Tegmark epitomize this type of thinking.

This, more often than not, is the scenario that Stephen Hawking and various Silicon Valley luminaries have warned about when they view AI as an existential risk.

Those working on superintelligence as a hypothetical future may lament for humanity when people take Sophia seriously. Yet without hype surrounding the achievements of narrow AI in industry, and the immense advances in computational power and algorithmic complexity driven by these achievements, they may not get funding to research AI safety.

Some of those who work on algorithms at the front line find the whole superintelligence debate premature, casting fear and uncertainty over work that has the potential to benefit humanity. Others even call it a dangerous distraction from the very real problems that narrow AI and automation will pose, although few actually work in the field. But even as they attempt to draw this distinction, surely some of their VC funding and share price relies on the idea that if superintelligent AI is possible, and as world-changing as everyone believes it will be, Google might get there first. These dreams may drive people to join them.

Yet the ambiguity is stark. Someone working on, say, MIT Intelligence Quest or Google Brain might be attempting to reach AGI by studying human psychology and learning or animal neuroscience, perhaps attempting to simulate the simple brain of a nematode worm. Another researcher, who we might consider to be “narrow” in focus, trains a neural network to diagnose cancer with higher accuracy than any human.

Where should something like Sophia, a chatbot that flatters to deceive as a general intelligence, sit? Its creator says: “As a hard-core transhumanist I see these as somewhat peripheral transitional questions, which will seem interesting only during a relatively short period of time before AGIs become massively superhuman in intelligence and capability. I am more interested in the use of Sophia as a platform for general intelligence R&D.” This illustrates a further source of confusion: people working in the field disagree about the end goal of their work, how close an AGI might be, and even what artificial intelligence is.

Stanford’s Jerry Kaplan is one of those who lays some of the blame at the feet of AI researchers themselves. “Public discourse about AI has become untethered from reality in part because the field doesn’t have a coherent theory. Without such a theory, people can’t gauge progress in the field, and characterizing advances becomes anyone’s guess.” He would prefer a less mysticism-loaded term like “anthropic computing.” Defining intelligence is difficult enough, but efforts like Stanford’s AI index go some way towards establishing a framework for tracking progress in different fields.

The ambiguity and confusion surrounding AI is part of a broader trend. A combination of marketing hype and the truly impressive pace of technology can cause us to overestimate our own technological capabilities or achievements. In artificial intelligence, which requires highly valued expertise and expensive hardware, the future remains unevenly distributed. For all the hype over renewables in the last 30 years, fossil fuels have declined from providing 88 percent of our energy to 85 percent.

We can underestimate the vulnerabilities. How many people have seen videos of Sophia or Atlas or heard hype about AlphaGo? Okay, now how many know that some neural networks can be fooled by adversarial examples that could be printed out as stickers? Overestimating what technology can do can leave you dangerously dependent on it, or blind to the risks you’re running.
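The adversarial-sticker result rests on a simple fact: a small, carefully chosen perturbation can flip a model’s decision. The sketch below shows the idea on a toy linear classifier; the weights and input are made up, and real attacks (such as the fast gradient sign method) apply the same sign-of-gradient trick to deep networks.

```python
# A tiny linear "classifier" standing in for a trained network; the weights
# and the input below are made up for illustration.
w, b = [2.0, -3.0], 0.5

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

x = [0.4, 0.5]  # score = 0.8 - 1.5 + 0.5 = -0.2, so class 0

# FGSM-style perturbation: nudge every feature a small step (epsilon) in
# the direction that raises the score; each feature moves by at most 0.2.
eps = 0.2
x_adv = [xi + eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]
# x_adv = [0.6, 0.3], score = 1.2 - 0.9 + 0.5 = 0.8, so class 1: flipped.
```

A perturbation this small can be imperceptible to a human, which is exactly what makes printed adversarial stickers unsettling.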

At the same time, there is a very real risk that technological capacities and impacts are underestimated, or missed entirely. Take the recent controversy over social media engineering in the US election: no one can agree over the impact that automated “bots” have had. Refer to these algorithms as “artificial intelligence,” and people will think you’re a conspiracy theorist. Yet they can still have a societal impact.

Those who work on superintelligence argue that development could accelerate rapidly, that we could be in the knee of an exponential curve. Given that the problem they seek to solve (“What should an artificial superintelligence optimize?”) is dangerously close to “What should the mind of God look like?”, they might need all the time they can get.

We urgently need to move away from an artificial dichotomy between techno-hype and techno-fear; oscillating from one to the other is no way to ensure safe advances in technology. We need to communicate with those at the forefront of AI research in an honest, nuanced way and listen to their opinions and arguments, preferably without using a picture of the Terminator in the article.

Those who work with AI and robotics should ensure they don’t mislead the public. We need to ensure that policymakers have the best information possible. Luckily, groups like OpenAI are helping with this.

Algorithms are already reshaping our society; regardless of where you think artificial intelligence is going, a confused response to its promises and perils is no good thing.

Image Credit: vs148 /


How DeepMind’s Latest AI Hints at Machines That Think More Like Us

June 19, 2018 - 17:00

I once asked a deep learning researcher what he’d like for Christmas.

His answer? “More labeled datasets.”

Nerd jokes aside, the lack of so-called “labeled” training data in deep learning is a real problem. Deep learning relies on millions upon millions of examples to tell the algorithm what to look for—cat faces, vocal patterns, or humanoid things strolling on the street. A deep learning algorithm is only as good as the data it’s trained on—“garbage in, garbage out”—so accurately gathering and labeling existing data is essential.

For the human researchers tasked with the job, carefully parsing the training data is a horrendously boring and time-consuming process.

It’s not just researcher agony. The lack of training datasets is often the linchpin in developing new applications for deep learning.

But what if this reliance on data is actually non-essential? What if there’s a way for machines to learn as quickly and flexibly as humans do? A toddler rapidly figures out what a pizza is from a few examples, even if the toppings are vastly different. State-of-the-art deep neural nets can’t.

In last week’s issue of Science, DeepMind unveiled an algorithm that displays the first steps in transfer learning. When shown a series of related 2D images from a camera, the algorithm can figure out the 3D environment and predict new views of the scene—even those it had never before encountered in its training examples.

The deep neural net, dubbed the Generative Query Network (GQN), could analyze 2D views via a simulated camera to control a digital robotic arm to navigate a 3D environment, suggesting potential applications in robotics.

That’s not even the impressive part: when researchers peeked into its AI brain, they found that the network captured the “essence” of each 3D scene, in that its internal structure represents meaningful aspects of the scene—for example, the type or color of an object. These real-world-based representations allowed the deep neural net to easily replace an object with another in the same scene without crashing.

“[This is] probably most fascinating about the study,” said Dr. Matthias Zwicker at the University of Maryland, who wasn’t involved in the study.

The technique “introduces a number of crucial contributions” that will likely help robots navigate real-world, complex environments in the future, he added.

Greedy Algorithms

You’ve probably heard of deep learning by now: it’s the method that’s rejuvenated the entire field of machine learning, lending itself to face recognition algorithms, voice-mimicking systems, Google Now, machine translation, self-driving cars, AI-based cancer diagnosticians, automatic caption generators, and…I could go on.

But deep learning has a serious problem: it needs human help.

Mimicking the human brain, deep learning relies on artificial neural networks with layers of “neurons” that stack like a gourmet sandwich. Like biological brains, these artificial neurons receive input from multiple peers, perform calculations, and forward the outputs to other neurons. Within a neural net, the pattern of connections is fixed; training adjusts the weights that shape each neuron’s output. The training process is iterative and similar to tuning a guitar, with humans acting as supervisors telling the network what exactly it’s trying to learn (is it a pizza or not?).
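A minimal example of that supervised loop, using a single sigmoid neuron and a hand-labeled toy dataset (the logical OR function stands in for “pizza or not”; everything here is illustrative, not any production system):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs pushed through a sigmoid activation.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Labeled data: the human "supervisor" provides the right answer for
# each input (here, the logical OR function).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights, bias, lr = [0.0, 0.0], 0.0, 1.0
for _ in range(500):
    for x, y in data:
        p = neuron(x, weights, bias)
        err = y - p  # supervised error signal: label minus prediction
        weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        bias += lr * err

predictions = [round(neuron(x, weights, bias)) for x, _ in data]
# predictions == [0, 1, 1, 1]: the neuron has learned the labels.
```

Notice that nothing works without the labels: remove the `y` column and the error signal, and there is nothing to tune the weights against. That dependence is exactly the bottleneck the article describes.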

Unshackling deep learning from this labeled data requirement would fundamentally change the intelligence landscape. Humans, after all, often don’t rely on other people for learning new skills. This is partly what unsupervised transfer learning is trying to do: have a machine figure out the goal based on raw input data alone, without requiring additional information from their human supervisors.

Encode-Decode Network

DeepMind’s new GQN relies on a process remarkably similar to how humans learn.

First, an encoder network analyzes visual inputs—the raw pixels—and churns the data into models of a scene. It forms a mathematical “representation” of what the scene describes, and each additional observation by the neural net adds to the richness of that representation. Here, the network can adapt to changes in the environment by efficiently encoding important details into a compact, high-level “idea.”

The next part is a decoder network: this part interprets the representations and offers solutions depending on the specific task at hand. In this way, the encoder network (and its “understandings”) can be used with different decoders, generating solutions to a vast array of problems. This setup allows the GQN to account for uncertainty, the authors explained, so that it can try to understand scenes that are severely occluded from view.
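The division of labor can be sketched in a few lines, with a plain dictionary standing in for the learned representation (everything here is a made-up toy, not DeepMind’s architecture): one encoder aggregates overlapping observations into a compact scene description, and multiple decoders answer different queries from that same representation.

```python
# Encoder: fold a series of observations into one compact scene
# representation. A dict of object color -> position stands in for the
# GQN's learned latent vector; repeat observations of an object merge.
def encode(observations):
    scene = {}
    for color, position in observations:
        scene[color] = position
    return scene

# Decoder #1: answer "where is the <color> object?" from the representation.
def decode_where(scene, color):
    return scene.get(color)

# Decoder #2: reuse the very same representation for a different query.
def decode_count(scene):
    return len(scene)

# Two views overlap (the red object is seen twice), but the representation
# stays compact: two objects, not three observations.
views = [("red", (1, 2)), ("blue", (3, 0)), ("red", (1, 2))]
scene = encode(views)
```

The real GQN learns both halves end to end from pixels, but the key design choice is the same: the encoder never knows which question the decoder will ask, so it must keep everything relevant about the scene.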

It’s like a child gaining an understanding of what counts as a pizza: they may not be able to describe it, but can use their mental picture of a pizza to figure out if the thing in front of them falls into that category—even if it’s a dish they’ve never seen before.

Of course, compared to human eyes, computers are at a disadvantage: rather than seeing scenes in 3D, cameras capture the environment in 2D. The DeepMind team cleverly tackles the problem by rendering 3D environments from 2D images using a standard computer algorithm.

“This approach allows them to produce 2D views from any desired viewpoint,” explained Zwicker.

Using two nearby views from the same scene as input, the team then trained their GQN to predict new views of the 3D environment—a type of transfer learning. Because the encoder network doesn’t know what types of questions the decoder needs to tackle, it learns to generate representations including all information in a scene—for example, what the objects are, where they’re located in space, the room layout, and so on.

“The GQN will learn by itself what these factors are, as well as how to extract them from pixels,” the authors said. The decoder network then takes this concise, abstract description of a scene and fills in any gaps in the representations.

When pitted against a virtual maze with multiple rooms and corridors, the GQN could tease out its structure by aggregating multiple observations from different vantage points. In another test, the GQN was tasked to control a virtual robotic arm reaching for a colored ball. Rather than the conventional approach of analyzing raw pixels, the GQN zipped up the scene into compact representations, which were in turn used to control the robotic arm.

It makes the process much more efficient and faster, all without human help, the authors said.

The algorithm reconstructs a maze based on a few viewpoints. Image Credit: DeepMind

This isn’t the first try at unsupervised learning. Reinforcement learning, adapted from psychology, “teaches” algorithms to learn based on how much reward they get for an action. DeepMind and OpenAI are both betting on deep reinforcement learning as a way to get to transfer learning, with OpenAI releasing a digital “gym” to train a single algorithm to learn everything that can be learned within the digital realm.
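Reward-driven learning can be illustrated with the simplest possible case, a two-armed bandit (the reward model and all values are invented): the agent never sees a labeled answer, only the reward each action returns, yet it converges on the better action.

```python
import random

random.seed(1)

# Two possible actions; action 1 pays more reward on average, but the
# agent is never told this directly.
def reward(action):
    return random.gauss(1.0 if action == 1 else 0.2, 0.1)

q = [0.0, 0.0]   # running estimate of each action's value
counts = [0, 0]
for _ in range(500):
    # Epsilon-greedy: explore 10% of the time, otherwise exploit.
    a = random.randrange(2) if random.random() < 0.1 else q.index(max(q))
    r = reward(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]  # incremental mean update

# From reward alone, the agent learns that action 1 is the better one.
```

Deep reinforcement learning replaces the two-entry value table with a neural network and the two actions with Go moves or robot commands, but the learn-from-reward loop is the same.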

Already there have been successes: AlphaGo Zero, for example, combined deep reinforcement learning with playing itself (“self-play”) to become arguably the strongest Go player in history, without an ounce of human input. OpenAI’s Unsupervised Sentiment Neuron learns to predict the next character in Amazon reviews, but can transfer that learning for sentiment analysis into text. Recently, they launched the “Retro Contest” to encourage machine learning researchers to develop algorithms that can generalize from previous experience.

DeepMind’s current study takes transfer learning out of the gaming world and sticks it into something more concrete: parsing scenes. That’s not to say that the GQN is ready for the real world. Most experiments were restricted to simple 3D “toy” scenes comprising only a few objects, so it’ll still take some work for the algorithm to understand our complex and messy world.

But the technique offers a path towards that goal.

“Our work illustrates a powerful approach to machine learning…that holistically extracts these representations from images, paving the way toward fully unsupervised scene understanding, imagination, planning and behavior,” the authors concluded.

In other words, everything that’ll make a machine think more like a human.

Image Credit: DeepMind

Category: Transhumanism

Pulling Water, Fuel, and Power From Thin Air Is Getting Practical

18 June, 2018 - 17:00

Pulling things from thin air is generally considered a magic trick. But several recent research papers suggest we could soon be extracting valuable resources like water, fuel, and power from the atmosphere.

Startup Carbon Engineering published a new technique for turning atmospheric CO2 into fuel last week that is starting to make the approach seem economically viable. And just a day later, a pair of papers outlined new ways to pull water from the air, one by harvesting fog more effectively and another that can extract moisture even in the driest deserts.

The idea of pulling gases from the air is not new; air separation plants have long been used to extract abundant nitrogen and oxygen for use in heavy industry. But the low proportion of CO2 in the atmosphere (roughly 0.04 percent) means these approaches can’t extract it economically.

Image Credit: Carbon Engineering

Carbon capture and storage technology is used to extract the gas from the exhausts of power stations and other CO2-producing plants, and in April I reported on an analysis from the University of Toronto predicting that within five to ten years, it could be economically viable to convert it into fuels and chemicals.

But in a paper in the journal Joule, Carbon Engineering and researchers from Harvard University estimated they can pull CO2 directly from the atmosphere for as little as $94 per metric ton and no more than $232, which works out to between $1 and $2.50 to remove the carbon dioxide a modern car would produce burning through a gallon of gasoline.
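The per-gallon conversion can be checked with back-of-envelope arithmetic. The 8.89 kg of CO2 per combusted gallon used below is the EPA tailpipe estimate, an assumption on my part; counting upstream emissions raises the per-gallon mass, which is likely how the paper arrives at its slightly higher $1 to $2.50 range.

```python
# Capture cost per metric ton of CO2, from the Joule paper's range.
cost_low_usd, cost_high_usd = 94, 232

# Assumed tailpipe CO2 per gallon of gasoline burned (EPA figure).
kg_co2_per_gallon = 8.89

# Cost to re-capture the CO2 from one gallon's worth of driving.
per_gallon_low = cost_low_usd * kg_co2_per_gallon / 1000    # ~$0.84
per_gallon_high = cost_high_usd * kg_co2_per_gallon / 1000  # ~$2.06
```

Either way, the order of magnitude holds: recapturing a gallon's worth of emissions costs on the order of a dollar or two, not tens of dollars.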

Their approach uses giant fans to draw air into structures resembling cooling towers, whose interiors are designed to bring the air into contact with a liquid solution that traps the CO2. The CO2 is then extracted using chemical processes similar to those in paper mills, and combined with hydrogen to create carbon-neutral fuels. The company is already testing the approach at a pilot plant in British Columbia.

Pulling water from the air has a longer pedigree—people have been harvesting condensation for millennia—but doing it at scale is much harder. Finding solutions could have a massive impact on water scarcity, though, because some of the worst-affected areas also have high humidity, like the coastal regions of the Middle East.

Mesh nets that catch water droplets, known as fog harvesters, are a low-tech way of collecting water from the atmosphere that’s already widely used. But in a paper in Science Advances, engineers from MIT described how they made this process far more efficient by using electrodes to electrically charge incoming water droplets so they were attracted to a metal mesh.

The engineers have created a startup called Infinite Cooling off the back of the research that plans to use the devices to reclaim the huge amounts of water that billow out of cooling towers at power plants.

The same issue of Science Advances also included another paper from UC Berkeley researchers, who exploited a super-absorbent material called a metal-organic framework (MOF) to extract water from the air in one of the driest environments in the world, the Arizona desert. The material sucks up water at night, when humidity is comparatively high, and the heat of the sun then drives it back out during the day so it can be collected.

These techniques are already moving from academia to the real world. The Water Abundance XPRIZE was set up to challenge teams to create technology that could extract 2,000 liters of water a day from the air for no more than 2 cents (1.4 pence) per liter. Earlier this year five finalists were selected to compete for the top prize, with testing due to start next month.

Thin air can even be a direct source of energy. Researchers from the University of Washington have demonstrated it’s possible to harvest useful amounts of electricity from the TV, radio, cell-phone, and WiFi signals constantly bathing the modern world.

The amounts are very small, but by combining this energy harvesting with a technique called backscattering, which reflects WiFi signals in a way that encodes new information in them, the approach could create wireless gadgets that never need to have their batteries replaced. Startup Jeeva Wireless is already commercializing the idea, and its battery-free prototypes have beamed data as far as 100 feet.

So next time someone claims to be able to pull something from thin air, it may pay to be less skeptical. Maybe they’re actually pitching you a novel idea for a startup!

Image Credit: Everyonephoto Studio /

Category: Transhumanism

Elon Musk’s Boring Company to Build 150 mph Chicago Loop to O’Hare

17 June, 2018 - 17:00

What’s the best way to get to and from the airport? The subway is cheap, but it means hauling your luggage around and sitting there through all the stops that aren’t yours. Cabs and ride services get you straight there, but they’re more expensive and run the risk of getting caught in traffic. What if there was an option that was a little more expensive than the subway, a little cheaper than a cab, but guaranteed to be fast, clean, and easy?

At Chicago’s O’Hare airport, it seems that option will soon exist, and it’s being brought to Chicagoans by none other than Elon Musk.

After receiving proposals from several organizations, the Chicago Infrastructure Trust chose Musk’s The Boring Company to build and operate a high-speed underground shuttle between the city center and the already-huge, soon-to-be-huger O’Hare. The company states its goal in building environmentally friendly public transit systems like this one is “to alleviate soul-destroying traffic.”

According to the Airports Council International 2017 rankings, O’Hare is the world’s second-busiest airport in terms of take-offs and landings (after Atlanta’s Hartsfield-Jackson International Airport), and the sixth-busiest in terms of total passenger traffic—over 79.8 million people flew through O’Hare in 2017.

Not every one of those people needs to get to and from the city center—some are just connecting to other destinations—but of the tens of millions that do, how many would opt to use Musk’s Chicago Express Loop, as the project has been dubbed? More importantly, what is this thing? A Tesla-in-a-tunnel? A mini-hyperloop?

It’s sort of a combination of the two. The Express Loop will run in 14-foot-wide tunnels 30-60 feet underground at speeds of up to 150 miles per hour. The Boring Company calls the vehicles themselves “electric skates”: platforms on wheels propelled by multiple electric motors. Built on a modified Tesla Model X chassis, they’re battery-powered and produce no emissions. They look like futuristic train cars, and each skate will hold up to 16 passengers.

Image Credit: The Boring Company

Hyperloop pods, if they ever come to be, will travel much faster than Express Loop (600+ miles per hour), and the pods will need to be pressurized, as there will be a partial vacuum inside the tube to significantly reduce air friction. Express Loop won’t need to reduce air friction or have pressurized cars.

The trip from downtown Chicago to the airport is projected to take just 12 minutes (for comparison’s sake, the Blue Line subway currently takes about 45 minutes). As the crow flies, it’s 16 miles from downtown to O’Hare. The city departure point will be located in Block 37, an existing mall/subway station that takes up a full square city block, and the O’Hare arrival point will be somewhere in between Terminals 1 and 3.
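The quoted figures are easy to sanity-check: 16 miles covered in 12 minutes implies the average speed below, comfortably under the 150 mph top speed, leaving headroom for acceleration, braking, and slower curves.

```python
# Quoted Express Loop figures: 16 miles as the crow flies,
# projected 12-minute trip, 150 mph top speed.
distance_miles = 16
trip_minutes = 12

# Implied door-to-door average speed.
avg_mph = distance_miles / (trip_minutes / 60)  # 80 mph average
```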

Building two stations, digging 16 miles of big underground tunnels, and putting high-speed electric vehicles in them sounds like an enormous expense—who’s going to foot the bill?

Chicago’s mayor, Rahm Emanuel, has stated that the project won’t receive any public or city funds, and must be 100 percent privately funded. The Boring Company claims it can get the entire job done for under $1 billion, and has a unique strategy for dropping construction costs (comparable projects have cost up to $1 billion per mile, mostly because tunneling is very costly).

The main cost saving comes from making the tunnels smaller: at 14 feet, they’ll be half the standard 28-foot diameter. The machine that digs them will also operate faster thanks to partial automation and improved cooling and power supply technology.

The estimated fare to ride the Express Loop is $25, though that’s still not finalized. Taking the Blue Line to O’Hare costs $2.25, and taking it the other direction costs $5. A cab is around $40, and Uber or Lyft anywhere from $15-35 (depending on time of day, demand, etc.). After the novelty of trying it once, will Chicagoans and visitors be up for trading money for time?

Mayor Emanuel thinks they will be. “Getting from downtown to O’Hare or O’Hare to downtown is a race against time. We’re gonna give people a leg up. People have paid for time [forever]. Time is the one commodity you can’t expand,” he said.

As someone who’s missed multiple flights out of O’Hare airport but also has a certain degree of claustrophobia, I’m personally undecided as to whether Express Loop will live up to its hype, and be able to pay down its cost.

Image Credit: The Boring Company

Category: Transhumanism

This Week’s Awesome Stories From Around the Web (Through June 16)

16 June, 2018 - 17:00

This Wild, AI-Generated Film Is the Next Step in ‘Whole-Movie Puppetry’
Sam Machkovech | Ars Technica
“Their plan required Benjamin to do the following: cobble together footage from public domain films, face-swap the duo’s database of human actors into that footage, insert spoken voices to read Benjamin’s script, and score the film. This was all on top of writing the screenplay, a process that has been refined since Benjamin’s last 2016 splash.”


Bitcoin’s Price Was Artificially Inflated Last Year, Researchers Say
Nathaniel Popper | The New York Times
“A concentrated campaign of price manipulation may have accounted for at least half of the increase in the price of Bitcoin and other big cryptocurrencies last year, according to a paper released on Wednesday by an academic with a history of spotting fraud in financial markets.”


Deep Fake Videos Are Getting Impossibly Good
George Dvorsky | Gizmodo
“Fake news sucks, and as those eerily accurate videos of a lip-synced Barack Obama demonstrated last year, it’s soon going to get a hell of a lot worse. As a newly revealed video-manipulation system shows, super-realistic fake videos are improving faster than some of us thought possible.”


Rest Easy Cryptocurrency Fans: Ether and Bitcoin Aren’t Securities
Louise Matsakis | Wired
“Taken together, the two sets of remarks provide the clearest understanding of how the regulatory agency views the cryptocurrency market. In essence, when a cryptocurrency becomes sufficiently decentralized, as the widely popular bitcoin and ether have, the agency no longer views it as a security.”


The Menace and Promise of Autonomous Vehicles
Jacob Silverman | Longreads
“While computers are, of course, prone to make mistakes—and vulnerable to hacking—the driverless future, we are told, will feature far less danger than the auto landscape to which we’ve been accustomed. To get there, though, more people are going to die.”


Elon Musk Chosen to Build, Operate O’Hare Express Dubbed ‘Tesla in a Tunnel’
Fran Spielman | Chicago Sun Times
“The Boring Company’s goal is a one-way fare in the $20-to-$25-range, maybe less. That’s half the cost of a cab or ride-share. And while it can take about 40 minutes to ride the Blue Line from the Clark/Lake station to O’Hare, the Tesla-like ‘electric skate vehicles’ would travel at speeds that could reach 150 mph—though slower along curves—to get you there in 12 minutes.”


Gone But Not Deleted
Luke O’Neil | Boston Magazine
“Our devices are where we carry out the business of living our lives and are increasingly our primary means of communicating with the people in them. Should they also be where we lug around our memories of the deceased? More to the point, do the digital ghosts the dead leave behind make it harder to let them go at all?”

Image Credit: Yuliya Evstratenko /

Category: Transhumanism

The More People With Purpose, the Better the World Will Be

15 June, 2018 - 17:00

It almost sounds too cliché to say that in order to live a fulfilling life, each of us must find our purpose. From a very young age, we encourage our youth to identify their passion and pursue it.

Yet most of us struggle with this throughout our lives. Even if you know what you want to dedicate your life to, the practical path forward is not always so clear. Many people continue to climb corporate ladders and chase higher salaries and bigger-sounding titles in the mistaken assumption that this will eventually bring them some sort of fulfillment. Most people will live and work for decades with no true sense of higher purpose. This is one of the many tragedies of our modern-day materialistic world.

But it is not enough to tell everyone—especially our youth—that they need to find and follow their purpose. In order to bring more meaning into our societies, we also need to consider how each of us can find our purpose and how we can pursue it.

Ikigai: A Japanese Tool to Find your Purpose

One of the most notable strategies in finding one’s purpose is the Japanese concept of Ikigai, which translates to “reason for being.” Your Ikigai is an indicator of what drives, excites, and fulfills you.

In his noteworthy TED talk, Dan Buettner describes how the increased life expectancy of the residents of Okinawa, Japan (the place where people have the longest lifespan) is partially attributed to Ikigai. Almost all Okinawans know what it is that gives them a reason to wake up every morning, and consequently, the concept of retirement is almost non-existent. The residents also live with lower rates of stress and better health in comparison to most Americans.

Broadly speaking, Ikigai lies at the intersection of what you are passionate about, what you are good at, what can allow you to earn a living, and above all, what the world needs. It is a combination of passion, vocation, and mission.

Image Credit: Kishore B /

Conducting the Ikigai process involves identifying each component of the intersection and how they add up. But it’s important to note that it rarely works as a one-time exercise. Finding one’s Ikigai is a matter of lifelong self-reflection and experimentation. You may identify more than one Ikigai, and your Ikigai may change as your identity and circumstances evolve over time.

But the Ikigai equation doesn’t just consider you as a variable. It also takes into account how your passions align with what the world needs. At a certain level, it’s about aligning your interests with the skills and careers that are in demand in the workforce. But even more, it’s about aligning your drive with the needs of society.

Many of us overlook how having a positive impact on other people’s lives can be a true source of fulfillment. It has long been known, and is backed by scientific evidence, that helping others contributes to greater happiness and well-being. So when thinking about building a purposeful and gratifying career, consider not just what you can do to better yourself, but also how you can use your abilities to better the world.

As the film The School of Life points out so brilliantly, we would all get much more clarity in life if we asked ourselves not “What do I want to do for a living?” but rather “What do I want my mission to be?” History has demonstrated this is the first step towards accomplishing the extraordinary.

This is where the value of having a purpose goes beyond individual well-being and becomes a powerful force for humanity at large.

Massive Transformative Purpose: A Key for Innovation

Many thought leaders have recently pointed out that purpose matters not just for individuals, but also for companies and movements. That is where having a massive transformative purpose (MTP) comes into play.

An MTP is essentially an “aspirational tagline” that describes the purpose of the company and the big problem that it solves. Salim Ismail, in his book Exponential Organizations (co-authored by Mike Malone and Yuri van Geest), analyzed the 100 fastest-growing organizations and studied their key traits. The authors discovered that all of these game-changing organizations or individuals have an MTP.

With an MTP, the focus and purpose of a company becomes less on profits and more on contributing to humanity in a significant way. An MTP also describes why you do what you do. As best-selling author Simon Sinek points out, people don’t buy what you do, they buy why you do it. The why of a company or individual is even more important than the how or the what. Companies with a grand unifying vision and purpose have consistently been more successful than companies without one. This is where purpose becomes a key to disruptive innovation.

A Piece of the Puzzle

The last century has seen remarkable advances in science and technology. Yet, globally, we are living through a mental health epidemic, with increasing rates of depression and anxiety. Most people do not look at their jobs with a sense of excitement.

Ultimately, having a purpose ignites meaning and lasting happiness. It means waking up in the morning with a sense of anticipation for the day. After all, a human life is far too precious to be spent on meaningless or mediocre goals.

Ikigai and MTP, as concepts, urge us to align our passions with a mission to better the world. But sometimes, what will better the world is living a life of purpose to begin with. In the words of Howard Thurman, “Don’t ask what the world needs. Ask what makes you come alive, and go do it. Because what the world needs is people who have come alive.”

Image Credit: Ditty_about_summer /

Category: Transhumanism

Video Gamers May Soon Be Paid More Than Top Pro Athletes

14 June, 2018 - 17:00

Your interest in sports may have started out as a hobby when you were just a kid. You were better at it than others, and some even said you were gifted. Maybe you had a chance to develop into a professional athlete.

Colleges would soon line up to extend full scholarships. If you pushed hard enough, practiced countless hours and kept a cool head, lucrative contracts and international fame awaited.

This fantasy plays out for many North American kids who dream of “making it to the big leagues.”

Whether they play hockey, football or basketball, even the most remote possibility of turning their love of the game into a respected career is worth sacrificing for.

Enter video games.

In less than a decade, the realm of professional sport has been taken by storm by the rise of eSports (short for electronic sports). These video game events now compete with—and in some cases outperform—traditional sports leagues for live viewership and advertising dollars.

For the top eSports players, this means sponsorship contracts, endorsements, prize money and yes, global stardom.

Games on TV Still Command High Ad Dollars

This week, dozens of professional video game players will descend on Toronto during NXNE, an annual music and arts festival, to compete in different games for prizes of up to US$1,000. Not a bad payday, perhaps, but still chump change in the eSports scene.

For example, Dota 2, a popular battle arena game published by Valve, recently handed out US$20 million to its top players during its finale.

What does this mean for traditional sports? And sports TV viewership?

The lasting broadcast success of sports league games can be explained by the fact that they are meant to be shared happenings and are best experienced live. As such, they have been resilient to disruptions within the media landscape and somewhat spared by the advent of on-demand streaming services such as Netflix and Amazon Prime.

The ability to capture a sizeable number of “eyeballs,” long enough and at a precise time, is the reason why professional sports leagues still command huge TV rights and advertising dollars.

In the past few years, the “Big Four” North American sports leagues have all struck new deals worth hundreds of millions of dollars.

Shifts in Sports Culture

Some leagues, like Major League Baseball with its one-time subsidiary Advanced Media division (MLBAM), have long embraced technological innovations to enhance audiences’ experience.

Meanwhile, media and telecommunication giants have been slower to catch on.

In 2016, John Skipper, then president of ESPN, referring to cable TV packages said: “We are still engaged in the most successful business model in the history of media, and see no reason to abandon it.”

This attitude, at the time, was not only symptomatic of a lag or inability to adopt technological innovations, but also raised concerns about the company’s future.

But the decline of traditional linear broadcasting is inevitable, and the risk of losing relevance in this digital, broadband, tech-savvy media landscape is forcing these media giants to question their traditional business models and focus on online audiences.

Along with this shift, a new, popular and expansive trend for the new generation has emerged—eSports.

Whether eSports are actual sports or not is a whole other debate; however, the emergence of the global video game competition field demands attention and strategic investment.

Why eSports Is Doing So Well

As a spectator sport, video games generate viewership at least on par with professional leagues.

Take, for instance, 2016’s League of Legends tournament, which drew 36 million viewers, five million more than the NBA Finals, in front of a sellout crowd at the famous Bird’s Nest stadium in China.

eSports mimic traditional sports leagues principles: Exciting content, likeable stars, catchy team names, slow motion highlights, intense competition and an uncertain outcome.

These games attract audiences because they are no longer designed simply to be played, but increasingly to be watched.

Age-wise, compared to traditional sports that struggle to diversify their audience demographics, eSports have successfully attracted younger viewers.

The fan base is pretty young, with 61 per cent of fans falling in the 18-34 age range. Young men, in particular, are a desirable market for many advertisers.

eSports Attracts Advertisers

The economic outlook for video gaming sports is staggering. According to NewZoo, eSports “on its current trajectory is estimated to reach US$1.4 billion by 2020.” And a “more optimistic scenario places revenues at US$2.4 billion.”

Companies like Red Bull, Coca-Cola and Samsung, all usual suspects when it comes to advertising and young people, are flocking to eSports.

In recent years, eSports has made efforts to monetize across traditional revenue streams, such as merchandise sales, subscription plans, ticket sales and broadcast rights. It is, once again, taking a page straight out of traditional sports leagues’ playbook.

So, what can established leagues and media giants do? Given the choice between fighting eSports or joining them, many appear to have chosen the latter. Recall ESPN resisting change in 2016. Then fast forward to their recent strategic investments in the digital platform BAMTech, once MLB Advanced Media, in order to launch ESPN streaming services.

As a result, Disney, ESPN’s majority owner, now has a say in League of Legends streaming, because the game’s publisher, Riot Games, had signed a seven-year US$350 million broadcast deal with BAMTech.

FIFA just partnered with Electronic Arts on an online tournament that drew 20 million players and 30 million viewers. Also hoping to create platform synergies and reach new audiences, Amazon acquired Twitch, the leading game streaming service, in 2014.

These examples show that eSports are popular not just with gamers, but also among sports leagues and media giants. Both stand to learn from each other. No wonder Activision’s CEO said that he wanted to “become the ESPN of eSports.”

This popularity also opens up more opportunities to compete on the professional level and earn huge endorsements, prize money and salaries just like LeBron James, Serena Williams, Danica Patrick or Sidney Crosby.

In fact, higher education eSports programs are already launching across the country and college scholarships are now commonplace, further acknowledging the economic viability and social acceptability of this phenomenon.

And with talks of introducing eSports in the Paris 2024 Olympic Games, Canada’s “Own the Podium” program may soon have to follow suit.

In any case, it turns out that our parents were wrong all along: You can stay glued to your console in the basement all day and still make it pro.

This article was originally published on The Conversation. Read the original article.

Image Credit: Roman Kosolapov /

Category: Transhumanism

Pioneering Stem Cell Trial Seeks to Cure Babies Before Birth

13 June, 2018 - 17:00

Even before she was born, Elianna Constantino had already cheated death.

Elianna has a rare inherited blood disorder called alpha thalassemia major, which prevents her red blood cells from forming properly. The disease, which has no cure, is usually fatal for a developing fetus.

But while still in her mother’s womb, Elianna received a highly daring treatment. Doctors isolated healthy blood stem cells from her mother and injected them through a blood vessel that runs down the umbilical cord. Four months later, Elianna was born with a loud cry and a glistening head of hair, defying all medical odds.

Elianna is the first in a pioneering clinical trial that pushes the boundaries of stem cell transplants. Led by Dr. Tippi MacKenzie, a pediatrician and fetal surgeon at the UCSF Benioff Children’s Hospital in San Francisco, the audacious trial asks: what if we could cure people of inherited diseases before they were born?

Fetal Medicine

MacKenzie’s trial isn’t completely new. Since the 1990s, doctors have wondered if it’s possible to perform a stem cell transplant on a fetus before birth. Many inherited blood diseases could potentially be cured this way: alpha thalassemia, beta thalassemia (the more common variant), or the big one—sickle cell disease, in which red blood cells grow into sharp sickle-like shapes that damage blood vessels and prevent effective oxygen delivery.

The idea that you can treat a fetus inside a mother’s womb is pretty radical. Doctors long thought that fetuses are encased in an impermeable barrier, one that shields the developing human from outside insults.

Early experiments with fetal stem cell transplants seemed to support the dogma. Most trials using the father’s stem cells failed, leading doctors to believe that the procedure couldn’t be done.

But subsequent research in animals discovered a crucial tidbit of information: the mother’s immune system, not the fetus, was rejecting the father’s stem cells.

There’s more: rather than being quarantined, fetuses continuously exchange cells with their mothers, so much so that fetal cells can actually be isolated from a mother’s bloodstream.

This exchange quiets both parties’ immune systems. Because the fetus carries part of the father’s DNA, a portion of its cells is foreign to the mother. The back-and-forth trafficking of cells “teaches” both the mom’s and the fetus’s immune system to calm down: even though the cells aren’t a complete genetic match, the fetus’s cells tolerate the mother’s cells, and vice-versa. In this way, during pregnancy the fetal immune system holds its fire against the mother.

This harmonious truce changes once the baby is born. The child’s immune system grinds into action, attacking any cells that are foreign to its body. After birth, a bone marrow transplant requires drugs to kill off the infant’s own bone marrow cells and make room for healthy ones. It also requires high doses of immunosuppressants to keep the infant’s immune system at bay while the new, healthy cells do their job.

Obviously the treatment is extremely dangerous for a newborn. Because of high fatality rates, many expecting parents are given the option to terminate the pregnancy if a child has an inherited blood-based disorder like alpha thalassemia.

But the animal studies also offered a glimmer of hope: fetuses are the perfect candidates for a stem cell transplant. And their mothers, if healthy, are the perfect donors.

Battle Against Fate

Elianna’s parents knew something was wrong with their child at 18 weeks. An ultrasound found an enlarged heart—twice the normal size—and fluid building in the lungs and other organs. Her brain seemed to lack oxygen, even though blood was rushing through the organ abnormally rapidly. All signs pointed to alpha thalassemia major.

Treatment options were limited. Elianna could receive blood transfusions to replace her malformed red blood cells, which carry oxygen across the body. If born alive, she would need these blood replacements for life. And even then the risk for significant brain damage would be high.

But there was a silver lining: Elianna’s mother had a healthy copy of the genes contributing to alpha thalassemia major. The disease involves two main genes, each with two copies. In order for symptoms to show, all four copies need to be bad. Because the mom had healthy copies, her stem cells were also healthy, which in theory could produce functional red blood cells for her fetus.
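The four-copy genetics can be sketched with a simple Mendelian enumeration. The genotypes below are illustrative assumptions, not the family's actual ones: the classic high-risk scenario is two carrier parents who each have a two-gene deletion on one chromosome ("--") and a normal chromosome ("aa"), and each passes one chromosome at random.

```python
from itertools import product

# Assumed carrier parent: one chromosome with both alpha-globin
# copies deleted ("--"), one normal chromosome ("aa").
carrier_chromosomes = ["--", "aa"]

# Every combination of one maternal and one paternal chromosome.
children = list(product(carrier_chromosomes, carrier_chromosomes))

# Alpha thalassemia major: all four gene copies defective.
affected = [child for child in children if child == ("--", "--")]

risk = len(affected) / len(children)  # 0.25 per pregnancy
```

Under these assumptions each pregnancy carries a one-in-four risk of the major form, which is why a parent with healthy copies is both a plausible carrier and, crucially for this trial, a source of healthy stem cells.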

The team took a sample of bone marrow from Elianna’s mother and isolated blood stem cells. These cells have the potential to become any type of blood cell. Roughly 50 million of these healthy cells were then injected through the mom’s abdomen into the umbilical vein during a blood transfusion. Once released, the cells could circulate with the blood and develop into healthy, mature blood cells.

It’s an enormous number, but for good reason. Stem cells home in on the bone marrow like little torpedoes. The larger the number of transplanted cells, the higher the chance that the mother’s healthy stem cells could elbow out the fetus’s defective ones.

After three more blood transfusions in the next few months, Elianna was born against all odds.

MacKenzie is thrilled. “This is the first case in which we’re using a high dose of the mother’s cells placed directly into the fetus’ bloodstream,” she explained, adding that doing all three things may lead to a higher chance of success.

A Fetal Cure

For now, the team doesn’t know if Elianna is completely cured. So far in her young life she still needs a blood transfusion every three weeks, but the team has found her mother’s stem cells in her bloodstream.

The next step is to see whether the mom’s cells permanently nested inside her child’s bone marrow—called “engrafting”—and if those cells can generate healthy blood cells throughout the child’s life. But this phase one trial is already a success in that neither the mom nor Elianna seemed to experience any negative side effects such as immunorejection.

“I’m thrilled that it was safe and it was feasible,” MacKenzie said.

The team plans to continue optimizing the treatment protocol to potentially increase the chance of a successful stem cell engraft.

“If we find that this is safe and effective for this one disease, we hope to apply it to a host of other blood diseases, the most common of which would be sickle cell disease,” said MacKenzie. Regardless, she adds, we need to “get the message out that fetal transfusions for alpha thalassemia are lifesaving.”

MacKenzie’s trial is just one illustration of how fast fetal medicine—once thought to be impossible—is progressing. Earlier this year, a team successfully prevented three children with a rare genetic disorder from being born without sweat glands. In that case, the fetuses were injected with a protein that their bodies couldn’t make themselves—although the protein drug didn’t work in babies, it allowed the fetuses to develop sweat glands, which are crucial for maintaining body temperature.

The results show that it may be possible to catch certain genetic disorders early—well before birth, when the fetus is undergoing rapid developmental changes. If doctors can target malfunctioning genes or cells at a critical time point before they wreak havoc in a newborn, then it may be possible to treat a previously incurable disease—before a baby is born.

Image Credit: By u3d /

Category: Transhumanism

Can Hawaii Go Carbon Neutral by 2045?

12 June, 2018 - 17:00

If you’ve done any research into climate change, you’re almost certainly familiar with the Keeling Curve. This sawtoothed upward curve tracks the inexorable increase in carbon dioxide in Earth’s atmosphere, and the rate of increase is proportional to the amount of fossil fuels burned that year (around half of what we emit is absorbed by the oceans, making them more acidic, or by plants in land sinks).

Since the measurements began at Mauna Loa Observatory in 1958, the Keeling Curve has tracked how far carbon emissions have taken us away from the pre-industrial atmosphere our species is used to and towards the atmosphere of the Anthropocene.

Now, Hawaii—home of Mauna Loa—has passed a new law to try to stop its emissions from climbing, aiming to make the state carbon neutral by 2045.

Scientists and policymakers continue to learn more about the effects of climate change, and discuss how best to tackle the problem—adaptation, mitigation, or even geoengineering. But everyone agrees that carbon emissions need to fall dramatically. The Paris Agreement, signed by nearly every nation on the planet, states that by the second half of this century there needs to be a “balance between anthropogenic sources and removal by sinks of greenhouse gases.” In other words, the world needs to go carbon neutral.

Hawaii’s law joins others: the Maldives aim to become carbon neutral by 2020, Costa Rica by 2021, Norway by 2030, Iceland by 2040, Sweden by 2045, and New Zealand by 2050. In the case of the Maldives and Hawaii, you can see the motivation; sea level rise threatens the Maldives so badly that they are building artificial islands to house the population. Hawaii’s legislation cites a recent report that suggests $19 billion in damages due to sea level rise if nothing is done to cut emissions globally. Other countries, like the UK, have laws that promise to reduce net carbon emissions by 80 percent by 2050.

These goals may seem ambitious in the few short decades at our disposal. In the UK, half of electricity is still generated by burning fossil fuels; in the US, 63 percent is—and electricity production is just one sector. Yet, when the targets for 1.5 or 2 degrees Celsius of warming require carbon neutrality for everyone by 2100, ambitious action like this is needed by leading, wealthy nations. There’s already concern that, globally, the pledges are far from sufficient: even if everyone fulfills their current promises, the world is still well on track for three to four degrees of warming by the end of the century.

The concern is that many of the plans are lacking in detail. It’s all very well making a promise to be fulfilled by 2050 or by 2100 when, as a politician, you know you won’t be around to foot the bill if it’s not fulfilled.

Hawaii, for example, already has legislation in place that pledges 100 percent renewable electricity by 2045 (as an added bonus, the Aloha State won’t need to rely on expensive oil imports—its electricity is already the most expensive of any state and most of that generation is from oil). The renewable share for electricity is already 25 percent, the best out of all US states, but as you add more renewables to the grid, it becomes more difficult to increase that share. Solar generation will always be less during the winter and at night, and wind generation is also intermittent. Converting the last quarter of Hawaii’s electricity to renewables will be a much greater challenge.

Hawaii is already attempting to address this with battery parks. In March last year, a 52 megawatt-hour (MWh) battery system started operations, and a 100 MWh battery farm started construction this year.

How much storage would Hawaii need under 100 percent renewables? This is less clear—academic studies suggest for the US as a whole, the range could be from 8 to 16 weeks of electricity consumption. Hawaii used 734,000 MWh of electricity in 2016, so this would correspond to 113,000 to 226,000 MWh. In other words, another thousand of those battery farms, which already rank among the biggest in the world. Of course, energy markets change, technologies improve, and smart grids might reduce the storage requirements.
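The arithmetic behind that range is easy to check. The calculation below reproduces it; the 8-to-16-week figure is the assumption taken from the studies cited above, and the 100 MWh farm size refers to the battery farm mentioned earlier:

```python
# Rough estimate of storage needed for a 100 percent renewable Hawaii,
# assuming (per the studies cited above) 8 to 16 weeks of consumption.
ANNUAL_CONSUMPTION_MWH = 734_000  # Hawaii electricity use, 2016

def storage_needed(weeks: float) -> float:
    """Storage (MWh) equal to `weeks` of average consumption."""
    return ANNUAL_CONSUMPTION_MWH * weeks / 52

low, high = storage_needed(8), storage_needed(16)
farms_low, farms_high = low / 100, high / 100  # in 100 MWh battery farms

print(f"{low:,.0f} to {high:,.0f} MWh")
print(f"{farms_low:,.0f} to {farms_high:,.0f} farms of 100 MWh each")
```

Running this gives roughly 113,000 to 226,000 MWh, or on the order of a thousand or more of those record-setting battery farms.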

But decarbonization is about so much more than electricity generation; in the US, this sector accounts for only a third of carbon emissions. Many analysts consider that, given the slow adoption and environmental problems associated with many biofuels, such as the US’s hardly-green corn bioethanol, transportation and industry will have to be electrified. Domestic heating needs to be electrified. Industrial processes like producing textiles and cement—which accounts for five percent of global emissions on its own—need to be done in a carbon neutral way.

And while electric cars can be far more efficient than their fossil-fuel-burning counterparts (and could even help solve the energy storage problem if deployed wisely), all this means ever-more electricity consumption and a need for greater capacity. A carbon-neutral Hawaii will need still more electricity than it does today.

In fact, of the vast array of technologies that might be needed to go carbon neutral, the impressive progress in solar panels and renewable energy in general in recent years is an outlier. Aviation and shipping are industries that have especially lagged behind. Electric cars are already on the roads. Electric planes are still prototypes. For Hawaii, a state where 51 percent of energy consumption is due to transportation and most of that is down to jet fuel, this is a serious problem if you want to go carbon neutral.

As an island state, Hawaii depends on aviation and shipping: 90 percent of the state’s food is imported, so growing and eating locally are also green targets. In fact, if you express it in terms of primary energy rather than electricity, that 25 percent turns into 10 percent from renewable sources (including biomass, 2015, EIA). Even optimists expect that electric planes will just be heading into the mainstream in 20 to 30 years—around the time Hawaii is supposed to be completely carbon neutral.

While Hawaii’s mayors recently committed to phasing out carbon emissions in ground transportation by 2045, the plan is scant on details. This will presumably mean a ban on new sales of fossil fuel cars, something it’s unclear Hawaii mayors can enact. Several countries have announced similar measures, but none have been legislated yet. And given that the average American holds onto a car for 6.5 years, such a ban would have to arrive in the 2030s to meet the 2045 deadline without buying back or exchanging cars, but there is no sign of it as yet.

Evidently, achieving decarbonization on this scale will be immensely challenging, even for a relatively wealthy state like Hawaii. In perhaps a subtle acknowledgement of this, an accompanying bill mentions initiatives to create a carbon offset program, and discusses afforestation and soil management as techniques that might help draw down carbon dioxide from the atmosphere. Yet if Hawaii achieves “carbon neutral” status by trading emissions with other parties, it won’t set a roadmap that the rest of the world can hope to follow. And while soil carbon sequestration and afforestation can help reduce carbon emissions, they are not magic bullets, and a full accounting would need to be done.

Negative emissions technologies might ultimately be preferable to or cheaper than eliminating the last few vestiges of carbon dioxide from industry, but substantial investment would be required. Hawaii has no carbon capture and storage facilities, and globally, progress here has stalled.

The recent bill gives the newly formed Greenhouse Gas Sequestration Task Force a deadline of 2023 to craft a plan Hawaii can use to become carbon neutral. As with the other countries that have made these bold and necessary pledges, the world will watch to see what they come up with.

“We’re small,” Scott Glenn, head of the state’s Environmental Quality Office, said to Fast Company. “We’re a rounding error to the emissions that California has. But [others] say, if Hawaii can do it, we can do it. If an island in the middle of the Pacific can make this happen, then we can make it happen. That’s what we try to do. That’s the role we see ourselves having within our national dialogue.”

For those that want to make this dream a reality, the hard work starts here.

Image Credit: Shane Myers Photography /

Category: Transhumanism

This New Chip Design Could Make Neural Nets More Efficient and a Lot Faster

11 June, 2018 - 17:00

Neural networks running on GPUs have achieved some amazing advances in artificial intelligence, but the two are accidental bedfellows. IBM researchers hope a new chip design tailored specifically to run neural nets could provide a faster and more efficient alternative.

It wasn’t until the turn of this decade that researchers realized GPUs (graphics processing units) designed for video games could be used as hardware accelerators to run much bigger neural networks than previously possible.

That was thanks to these chips’ ability to carry out lots of computations in parallel rather than having to work through them sequentially like a traditional CPU. That’s particularly useful for simultaneously calculating the weights of the hundreds of neurons that make up today’s deep learning networks.

While the introduction of GPUs saw progress in the field explode, these chips still separate processing and memory, which means a lot of time and energy is spent shuttling data between the two. That has prompted research into new memory technologies that can both store and process this weight data at the same location, providing a speed and energy efficiency boost.

This new class of memory devices relies on adjusting their resistance levels to store data in analog—that is, on a continuous scale rather than the binary 1s and 0s of digital memory. And because information is stored in the conductance of the memory units, it’s possible to carry out calculations by simply passing a voltage through all of them and letting the system work through the physics.

But inherent physical imperfections in these devices make their behavior inconsistent, so attempts to use them to train neural networks have so far resulted in markedly lower classification accuracies than when using GPUs.

“We can perform training on a system that goes faster than GPUs, but if it is not as accurate in the training operation that’s no use,” said Stefano Ambrogio, a postdoctoral researcher at IBM Research who led the project, in an interview with Singularity Hub. “Up to now there was no demonstration of the possibility of using these novel devices and being as accurate as GPUs.”

That was until now. In a paper published in the journal Nature last week, Ambrogio and colleagues describe how they used a combination of emerging analog memory and more traditional electronic components to create a chip that matches the accuracy of GPUs while running faster and on a fraction of the energy.

The reason these new memory technologies struggle to train deep neural networks is that the process involves nudging the weight of each neuron up and down thousands of times until the network is perfectly aligned. Altering the resistance of these devices requires their atomic structure to be reconfigured, and this process isn’t identical each time, says Ambrogio. The nudges aren’t always exactly the same strength, which results in imprecise adjustment of the neurons’ weights.

The researchers got around this problem by creating “synaptic cells” each corresponding to individual neurons in the network, which featured both long- and short-term memory. Each cell consisted of a pair of Phase Change Memory (PCM) units, which store weight data in their resistance, and a combination of three transistors and a capacitor, which stores weight data as an electrical charge.

PCM is a form of “non-volatile memory,” which means it retains stored information even when there is no external power source, while the capacitor is “volatile” and can only hold its electrical charge for a few milliseconds. But the capacitor has none of the variability of the PCM devices, and so can be programmed quickly and accurately.

When the network is trained on images to complete a classification task, only the capacitor’s weights are updated. After several thousand images have been seen, the weight data is transferred to the PCM unit for long-term storage.

The variability of PCM means there’s still a chance the transfer of the weight data could contain errors, but because the unit is only updated occasionally, it’s possible to double-check the conductance without adding too much complexity to the system. This is not feasible when training directly on PCM units, said Ambrogio.
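A toy numerical sketch can make this two-tier scheme concrete. The class and parameter values below are illustrative assumptions, not IBM's actual design: frequent, precise weight nudges accumulate in a "capacitor" value, which is only occasionally folded into a noisy "PCM" value for long-term storage.

```python
import random

random.seed(0)

class SynapticCell:
    """Toy model of a mixed memory cell: a precise but volatile
    'capacitor' weight plus a noisy but non-volatile 'PCM' weight.
    The noise level and transfer interval are illustrative guesses."""

    PCM_NOISE = 0.05  # relative write variability of phase-change memory

    def __init__(self):
        self.pcm = 0.0        # long-term weight (writes are noisy)
        self.capacitor = 0.0  # short-term correction (writes are precise)

    def update(self, delta: float) -> None:
        # Each training nudge goes to the capacitor, which programs accurately.
        self.capacitor += delta

    def transfer(self) -> None:
        # Occasionally fold the accumulated correction into PCM.
        # The write is imprecise, but it happens rarely enough to verify.
        noisy = self.capacitor * (1 + random.gauss(0, self.PCM_NOISE))
        self.pcm += noisy
        self.capacitor = 0.0

    @property
    def weight(self) -> float:
        return self.pcm + self.capacitor

cell = SynapticCell()
for step in range(1, 5001):
    cell.update(0.001)      # many small, precise nudges
    if step % 1000 == 0:
        cell.transfer()     # rare, noisy transfer to long-term storage

print(round(cell.weight, 2))  # close to the ideal total of 5.0
```

The point of the split is visible in the numbers: 5,000 imprecise PCM writes would compound the variability on every step, while five imprecise writes on top of precise accumulation leave the final weight close to its ideal value.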

To test their device, the researchers trained their network on a series of popular image recognition benchmarks, achieving accuracy comparable to Google’s leading neural network software TensorFlow. But importantly, they predict that a fully built-out chip would be 280 times more energy-efficient than a GPU and would be able to carry out 100 times as many operations per square millimeter.

It’s worth noting that the researchers haven’t fully built out the chip. While real PCM units were used in the tests, the other components were simulated on a computer. Ambrogio said they wanted to check that the approach was viable before dedicating time and effort to building a full chip.

They decided to use real PCM devices, as simulations for these are not yet very reliable, he said, but simulations for the other components are, and he’s highly confident they will be able to build a complete chip based on this design.

It is also only able to compete with GPUs on fully connected neural networks, where each neuron is connected to every neuron in the previous layer, Ambrogio said. Many neural networks are not fully connected or only have certain layers fully connected.

“Crossbar arrays of non-volatile memories can accelerate the training of fully connected neural networks by performing computation at the location of the data.” Credit: IBM Research

But Ambrogio said the final chip would be designed so it could collaborate with GPUs, processing fully-connected layers while they tackle others. He also thinks a more efficient way of processing fully-connected layers could lead to them being used more widely.

What could such a specialized chip make possible?

Ambrogio said there are two main applications: bringing AI to personal devices and making datacenters far more efficient. The latter is a major concern for big tech companies, whose servers burn through huge sums in electricity bills.

Implementing AI directly in personal devices would prevent users having to share their data over the cloud, boosting privacy, but Ambrogio says the more exciting prospect is the personalization of AI.

“This neural network implemented in your car or smartphone is then also continuously learning from your experience,” he said. “If you have a phone you talk to, it specializes to your voice, or your car can specialize to your particular way of driving.”

Image Credit: spainter_vfx /

Category: Transhumanism

Microsoft’s Wild New Project Puts Servers at the Bottom of the Ocean

10 June, 2018 - 17:00

Last week, a few miles off the northern coast of Scotland, a cylinder the size of a shipping container was carefully lowered to the ocean floor. Sounds like a clever (if complex) way to bury evidence, or treasure, or something similarly mysterious.

But what’s actually inside the undersea capsule are 864 Microsoft servers. The contents themselves, then, aren’t all that enticing—but the details of this incredible project from one of the world’s biggest tech companies are.

Project Natick, a name Microsoft says has no special significance, completed its first phase in November 2015, after a similar vessel was deployed in the Pacific off the US coast. Phase 1 tested the concept of subsea datacenters, and the company was able to operate the “lights out” datacenter remotely and efficiently for extended periods of time.

A lights out data center is one that ups its efficiency and security by limiting human access—without people coming and going, lighting isn’t needed, and climate control is easier. The data centers of the future will increasingly be lights out as technologies like robotics and AI enable them to be managed remotely and autonomously.

They’ll also, apparently, be located underwater. Eight of the world’s ten biggest cities are on or near a coastline, and according to Microsoft, half the world’s population lives near the ocean—and that’s just one reason to store data there.

Another reason is that getting a storage facility up and running (or, down and bubbling) underwater is much, much faster. Building a datacenter on land has typically taken about two years (though that timeframe is dropping as technology improves); Microsoft claims its deep-sea cylinders will take less than 90 days to go from factory ship to operation.

Project Natick shown being deployed at sea. Image Credit: Project Natick / Microsoft

Equally impressive is the fact that the centers will run completely on renewable energy. The cylinder that was deployed last week is connected to the electrical grid for Scotland’s Orkney Islands, which is powered by a blend of off-shore tidal generators and on-shore wind and solar power. Microsoft plans to study the possibility of powering future Natick datacenters directly using offshore wind or tidal energy, no grid connection needed.

Speaking of energy, Natick cylinders will use a lot less of it than their landlocked counterparts. The vast majority of electricity for datacenters on land is gobbled up by their cooling process. But submersion in the northern ocean’s frigid water provides a constant natural source of cooling, which also makes the servers less likely to overheat and crash.

The odds of a crash, or any other big problem for that matter, had better be low, because the one big drawback to this mostly-brilliant idea is that if anything goes wrong, getting to those servers inside that cylinder 100 feet under water is going to be one heck of a challenge. Artificial intelligence helps monitor the servers for early signs of failure; Project Natick’s website says the planned length of maintenance-free operation for its underwater datacenters is five years.

To boot, the cylinders themselves are made from recycled material, and the company plans to recycle them again at the end of their lifespan.

“We aspire to create a sustainable datacenter which leverages locally produced green energy, providing customers with additional options to meet their own sustainability requirements,” Microsoft states.

As more of the world’s population comes online, the need for datacenters is going to skyrocket, and having a fast, green solution like this would prove remarkably useful. At less than a week in the water, though, we’ll have to wait and see if Project Natick goes as swimmingly as Microsoft is hoping it will.

Image Credit: Project Natick / Microsoft

Category: Transhumanism

This Week’s Awesome Stories From Around the Web (Through June 9)

9 June, 2018 - 17:00

The World’s Most Powerful Supercomputer Is Tailor Made for the AI Era
Martin Giles | MIT Technology Review
“The team at Oak Ridge says Summit is the first supercomputer designed from the ground up to run AI applications, such as machine learning and neural networks.”


One Firm Is Way Ahead of Wall Street on Bitcoin
Nathaniel Popper | The New York Times
“Now the firm is opening trading to a small group of its 500 clients, with plans to expand. The move is the latest sign that the virtual currency markets, which were once relegated to the fringes of the financial world, are being embraced by big, mainstream investors.”


Waymo Announces 7 Million Miles of Testing, Putting It Far Ahead of Rivals
Timothy B. Lee | Ars Technica
“What makes this truly remarkable is that Waymo announced its last milestone—6 million miles—less than a month ago. …[Waymo CTO Dmitri] Dolgov didn’t specify exactly when Waymo reached the 6 million-mile mark. So the last million miles might have actually taken a bit more than a month to rack up. Still, Waymo’s pace of testing is clearly accelerating as the company gears up to launch its driverless taxi service later this year.”


Microsoft Just Put a Data Center at the Bottom of the Ocean
Daniel Oberhaus | Motherboard
“The shipping container-sized data center holds 864 servers and is completely powered by renewable energy. …In addition to cutting down the amount of time needed to create a data center on land from about 2 years to around 90 days, the submarine data center has the added benefit of natural cooling from the ocean, eliminating one of the biggest costs of running a data center on land.”


What If ET Is an AI?
Caleb Scharf | Aeon
“…a recognizable encounter with even one savant machine would indeed change everything. It would tell us that the galaxy is awash with intelligence, and could suggest that our future might be one of a vestigial, fading biological presence.”


Building Flying Cars Is Less Complicated Than Figuring Out Traffic Control For Them
Kate Fane | Motherboard
“Scaling up this human workforce to monitor millions of airborne vehicles would be unrealistic, especially as autonomous, unpiloted passenger vehicles are the ultimate goal of most in the industry. So instead of writing job postings, companies and government agencies are looking for fully automated systems that could communicate with flying vehicles and automatically redirect them if they flew too close to anything else that might be occupying the sky, or to an airport’s runways.”

Image Credit: OLCF at ORNL via Flickr / CC BY 2.0

Category: Transhumanism

We Discovered That Life May Be Billions of Times More Common in the Multiverse

8 June, 2018 - 17:30

Why is there life in our universe? The existence of galaxies, stars, planets, and ultimately life seem to depend on a small number of finely tuned fundamental physical constants. Had the laws of physics been different, we would not have been around to debate the question. So how come the laws of our universe just happen to be the way they are—is it all a lucky coincidence?

In the last few decades, an increasingly popular theory has come to the fore. The multiverse theory suggests that our universe is just one of many in an infinite multiverse where new universes are constantly being born. It seems likely that baby universes are produced with a wide range of physical laws and fundamental constants, but that only a tiny fraction of these are hospitable for life. It would therefore make sense that there is a universe with the strange fundamental constants we see, finely tuned to be hospitable for life.

But now our new discovery, published in the Monthly Notices of the Royal Astronomical Society, complicates things by suggesting that life may actually be a lot more common in parallel universes than we had thought.

While there is no physical evidence that parallel universes exist (at the moment), the theories that explain how our universe came to be seem to suggest that they are inevitable. Our universe started with a Big Bang, followed by a period of very rapid expansion, known as inflation. However, according to modern physics, inflation is unlikely to have been a single event. Instead, many different patches of the cosmos could suddenly start inflating and expand to huge volumes—each bubble creating a universe in its own right.

Some believe that we may one day be able to witness imprints of collisions with parallel universes in the cosmic microwave background, which is the radiation left over from the birth of the universe. Others, however, believe the multiverse is a mathematical quirk rather than a reality.

Dark Energy

One hugely important constant in the universe is a mysterious, unknown force dubbed dark energy. Today, it makes up about 70 percent of our universe. Rather than our universe slowing down as it expands, dark energy causes its expansion to accelerate.

But many current theories suggest that dark energy should be much more plentiful than this across the multiverse. Most universes should have an abundance of dark energy that is around a million, billion, billion, billion, billion, billion, billion times larger than in our universe. But if dark energy were this abundant, the universe would rip itself apart before gravity could bring together matter to form galaxies, stars, planets or people.

While our universe has a strangely low value of dark energy, it is this low value that makes our universe hospitable to life. The multiverse theory can help us explain why it is so low—there will always be some universes with unlikely values in an infinite multiverse.

However, the theory nevertheless requires that our universe’s value for the abundance of dark energy is close to the maximum allowed for intelligent life to exist. This is because larger values of dark energy should be more common in the multiverse than lower values. At the same time, we expect life to exist only in a small group of universes with a value below a certain maximum—those in which matter can still clump together to form stars and galaxies. This means that universes hospitable to life with a comparatively high value of dark energy (close to the maximum) should be more numerous, and therefore more likely, than those with low values (close to the minimum).

So do we live in such a universe? Through our study, we set out to find out what this maximum level is and whether we are close to it.

Computer Simulations

Our computer model of the universe, the EAGLE project, has been successful at explaining the observed properties of galaxies in our universe. The simulations take the laws of physics and follow the formation of stars and galaxies as the universe expands after the Big Bang. The galaxies that emerge in our model look remarkably like those seen in the night sky through telescopes.

Each simulation led to a universe with specific structure.

This success makes it possible to convincingly investigate how the formation of stars and galaxies would proceed in other parts of the multiverse. We created a series of computer-generated universes that were identical apart from having different amounts of dark energy. Initially, the universes all expanded at similar rates but, as the energy left over from the Big Bang dissipated, the power of dark energy became important. The universes with abundant dark energy accelerated vigorously.

To our surprise, however, we discovered that baby universes with ten or even 100 times more abundant dark energy (compared to our own) produce almost as many stars and planets as our own universe. That means our own universe does not have a value of dark energy that is close to the maximum for life to exist. The effects of gravity are much more robust than we had previously thought. Life, it seems, would be rather common throughout the multiverse, perhaps a million, billion, billion, billion, billion times more common than we previously thought.

Our discovery puts the idea that an infinite multiverse can explain the low abundance of dark energy on very rocky ground. Interestingly, in his last published paper, Stephen Hawking argued that the multiverse is far from infinite, and that it is more likely to contain a finite number of rather similar parallel universes.

We are forced to an uncomfortable conclusion. The value of dark energy we observe is far too unlikely for the multiverse to explain why we are here. It seems that a new physical law, or a new approach to understanding dark energy, is needed to account for the deeply puzzling properties of our universe. But the good news is that we are one step closer to cracking it.

This article was originally published on The Conversation. Read the original article.

Image Credit: Jaime Salcido/EAGLE Collaboration/Durham University

Category: Transhumanism

5 Reasons Car Companies Are Betting Big on Energy Storage

7 June, 2018 - 17:30

An increasing number of major car manufacturers are developing solutions in a space that at first glance may seem like a strange choice: energy storage.

BMW recently signed a contract that adds 500 of its i3 battery packs to the UK national energy grid. Audi is running a pilot project. Renault is turning some of its Zoe batteries into a home energy storage solution, and in Japan, both Toyota and Nissan have announced that they will offer battery energy storage.

A closer look reveals many good reasons for car companies to pursue energy storage. Here are five of the biggest.

Electric Is the Future

The strangeness of adding energy storage to the mix fades a bit when considering how much money car companies are investing in batteries. Volkswagen recently announced it plans to spend a whopping $48 billion on batteries in the coming years.

The future of transportation is a subject dear to the hearts of car companies. Their view seems to increasingly be that transportation is going to be a) self-driving and b) electric.

The question for companies becomes: why stop at cars? If you are going to be making batteries anyway, why not explore other markets that need batteries?


Tesla has been doing exactly that, with big success in both large-scale projects in Latin America and Australia and in general with its Powerpack and Powerwall systems for homes and companies. Last year, it sent heads spinning in top offices of major car manufacturers by briefly overtaking them to become the most valuable American car company.

Tesla’s goal is to be much more than a car company, though. As Elon Musk explained to Fast Company, “This is the integrated future. You’ve got an electric car, a Powerwall, and a Solar Roof. It’s pretty straightforward, really.”

Other car companies seem to be thinking along the lines of “well, if Tesla can do it, so can we—and we better get going before they corner the market.”

The Future Is Smart (and Bundled)

Another important point from the Musk quote is that we are heading towards an integrated future. One of a smart grid consisting of not just stand-alone energy storage units, but also things we attach and detach from the grid. Like electric cars, for example. Our four-wheeled friends can be of use for much more than the short periods of time they ferry us around. Using them as energy storage during stand-still hours makes excellent sense.

Finding ways to link cars and energy storage will be increasingly valuable, as will the ability to offer complete packages. The logic goes that if you have a Tesla or a Powerwall, you are much more likely to buy a product from the same manufacturer than from a competitor.

Projected Market Growth

Tesla and other car companies are probably also eyeing the projected growth for battery energy storage. A report from Markets Insider predicts that the market for grid-connected battery energy storage will grow from $3.3 billion in 2016 to $14 billion by 2021. A quick calculation shows that to be equal to a compound annual growth rate (CAGR) of around 33.5 percent. If the market were to grow at or near that rate through the 2020s, it would be worth north of $100 billion by the end of the decade—a significant opportunity.
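The quick calculation mentioned above is straightforward to verify (the 2030 extrapolation is an assumption for illustration, not a forecast from the report):

```python
# Verify the quoted compound annual growth rate (CAGR) using the
# Markets Insider figures cited above: $3.3B in 2016 to $14B in 2021.
start, end, years = 3.3e9, 14e9, 5

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ≈ 33.5%

# Extrapolating the same rate from 2021 out to 2030 (an assumption):
value_2030 = end * (1 + cagr) ** 9
print(f"2030 at constant CAGR: ${value_2030 / 1e9:.0f}B")  # well north of $100B
```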

Where Batteries Go to Die

Saving what may be the most important point for last: One of the uncomfortable truths of current batteries for electric cars is that they get old long before they die. By that I mean that each charging cycle takes a small toll on storage capacity, and over time, batteries become less and less useful for cars. It’s like having a diesel car whose fuel tank gets a little smaller every time you fill it.

The batteries can still be useful and store power for, say, a household, though. By moving into energy storage, car companies are creating a more sustainable (and profitable) energy ecosystem for their electric cars and trucks.

As Sam Korus from investment management firm ARK Invest put it to Salon, “You can either recycle the materials from an electric-car battery or you could sell the battery as an energy storage device, which requires much less performance.”
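The capacity fade described above can be sketched with a toy model. The per-cycle fade rate below is a made-up round number, not measured battery data, and the 80 percent retirement threshold is a commonly cited rule of thumb for EV packs:

```python
# Illustrative only: toy model of per-cycle battery capacity fade.
FADE_PER_CYCLE = 0.0003  # 0.03% capacity lost per full cycle (assumed)
EV_RETIREMENT = 0.80     # packs are often considered "retired" near 80%

capacity = 1.0
cycles = 0
while capacity > EV_RETIREMENT:
    capacity *= 1 - FADE_PER_CYCLE
    cycles += 1

print(f"~{cycles} cycles until the pack drops below 80% capacity")
# Below 80% the pack is a poor fit for a car, but still a usable
# stationary store: the "second life" described above.
```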

Not All Going to Plan

While this may sound straightforward, it isn’t necessarily so. For example, Mercedes, an early(ish) entrant into energy storage, has announced it will no longer offer energy storage batteries for homes.

One reason given for its exit was that while the company’s batteries were excellently suited to propelling vehicles, they were less adept at sitting still in people’s homes storing energy. A car battery needs extra design and safety features that a stationary storage battery doesn’t, which made the product more expensive than it had to be.

Despite all the reasons for car companies to invest in energy storage technology, then, this is one point they’ll have to contend with before that investment pays off.

Image Credit: Fishman64 /

Kategorie: Transhumanismus

The Biggest New Laws to Regulate Tech Giants—and Why They Matter

6 Červen, 2018 - 17:00

The world’s first stop sign appeared in Michigan around 1915, decades after the first privately owned passenger car came into being. Ever since, laws and regulations have piled up concerning everything from insurance requirements to vehicle safety, road safety, and driver education.

Such is the natural order of things. First come the transformative technologies—in this case, the car—then public debates about safety, ethics, and the need for laws or restrictions. These conversations can continue for years or even decades.

Today, however, the breakneck speed of digital transformation leaves the public and governments behind so quickly that they often never catch up before the next iteration takes hold. The result is unbridled advances in technology that have dazzled the world, bringing benefits but simultaneously trampling some business models, not always considering what’s best for consumers, and even affecting the outcomes of elections.

And it isn’t just the speed of digital technologies that makes them difficult to regulate. A further complication is that the technologies and platforms are global and at the same time governed by many jurisdictions and polities.

Of greatest urgency are tech issues involving law enforcement and human rights. This year, governments took major steps to address the problems that have arisen in these fields.

Protecting Privacy

On May 25, the European Union introduced laws to protect people’s privacy and rights by launching its General Data Protection Regulation. This legislation applies to all 28 EU member states and shifts control of data away from the technology companies that have sold it to the highest bidders and back to customers. The Europeans are also proposing laws to crack down on the proliferation of libel, hate, fraud, and propaganda across the internet.

Sharing Data

Besides these initiatives on privacy, governments are beginning to catch up on security matters in a wired, globalized world.

On March 23, the US signed into law the Clarifying Lawful Overseas Use of Data Act, or Cloud Act. This allows federal law enforcement officials to compel US-based tech companies (via warrant or subpoena) to provide requested data stored on servers regardless of whether the data is stored in the US or on foreign soil.

The Cloud Act came about following law enforcement difficulties the FBI had with obtaining remote data as part of a 2013 drug trafficking investigation involving a US citizen’s emails that were stored in one of Microsoft’s remote servers in Ireland. Microsoft legitimately argued that existing laws did not cover data stored outside the US and the case was to be adjudicated in April before the Supreme Court. But after the Cloud Act became law, the case was withdrawn, and the new law obligates Microsoft and any other tech companies to hand over pertinent data.

Complications could arise when requests for e-evidence contravene the privacy laws of the countries where the servers operate. But the American and European initiatives are aimed at the tech companies themselves, which will behoove them to cooperate or to move their servers to jurisdictions where they can provide access.

That’s why it is significant that just one month later, in April, the European Commission proposed laws giving its members the power to order companies in their jurisdictions to hand over “e-evidence” they own even if held in remote servers anywhere in the world.

When Is It OK to Violate Privacy?

Another thorny law enforcement issue was dramatically illustrated in 2016, when Apple refused to comply with an FBI request to unlock a terrorist’s cryptographically protected iPhone after a mass murder in San Bernardino. The phone was locked with a four-digit password and was programmed to eliminate all its data after ten failed password attempts.
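A quick bit of arithmetic shows why the auto-wipe, rather than the short passcode itself, was the real obstacle:

```python
# Brute-force odds against a four-digit passcode with a ten-try wipe.
digits = 4
keyspace = 10 ** digits   # 10,000 possible four-digit passcodes
allowed_attempts = 10     # the wipe triggers after ten failures

# Chance of guessing the code before the data is erased:
p_success = allowed_attempts / keyspace
print(f"{keyspace} codes, {p_success:.1%} chance within {allowed_attempts} tries")
```

Without the wipe, 10,000 guesses is trivial for automated tooling; with it, an attacker gets a 0.1 percent shot.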

Apple had promised these special security protections for its customers, and argued in court that this was why it must refuse to write new software to breach them. The matter was never legally resolved, but the phone was eventually hacked after the government found a third party able and willing to unlock it.

This case and others have led to a proposal in the Senate with sweeping implications for tech companies, the Internet of Things, and cryptocurrencies like Bitcoin. The bill proposes to criminalize the intentional concealment of information. There is little doubt that encryption and anonymity can be abused, and in many cases the trail runs cold for police when criminals are able to launder money or engage in other forms of financial crime.

This issue casts a long shadow over the use of sensors and cryptocurrencies like Bitcoin.

The regulation of digital currency proposal was tabled in December and hearings followed. Testifying was John Cassara, an anti-money laundering expert and former law enforcement officer. “Trying to follow money when it comes to digital currencies…[is] extremely challenging. Digital currencies are a small fraction of the threat that we face. That’s not going to be the case 5-10 years from now. We’re right at a crossroads,” he said.

Faces in Databases

Another looming hot-button concern is facial recognition technology, which is increasingly being used in law enforcement and even in some workplaces. The latest facial recognition service, sold by Amazon, has recently drawn criticism from the American Civil Liberties Union.

In China, facial recognition software is converging with artificial intelligence—predictive algorithms are applied to anticipate criminal behavior in public places.

Facial recognition firm Cloud Walk is working on an AI system to predict crimes before they happen, giving law enforcement a chance to step in before it’s too late.

The system can identify people even if they’re in a different location wearing different clothing, and it can trace people’s paths across a wide geographic range.

Bill Gates joined the debate about tech and law enforcement in a recent interview, warning that tech companies may be “inviting government intervention” by not doing more to address public and government concerns.

Gates alluded to giants like Facebook, Google, and Apple for having an “enthusiasm about making financial transactions anonymous and invisible, and their view that even a clear mass-murdering criminal’s communication should never be available to the government.”

As Gates pointed out, no one can be above the law, and that includes tech giants.

Cars were as novel and exciting an invention in the 19th century as artificial intelligence and the Internet of Things are now—but today’s tech is proving harder for regulators to keep a handle on. As technology continues to infuse our lives, the race to make sure it’s as safe and fair as possible will continue.

Image Credit: enzozo /

Kategorie: Transhumanismus

An Innovator’s City Guide to Amsterdam, the Netherlands

6 Červen, 2018 - 16:00

Amsterdam is a perfect innovation hub, as well as a great launchpad into the Dutch and European markets. Although small in size, Amsterdam has it all: a creative and stable business environment, a diverse talent pool, an entrepreneurial mindset, and a great quality of life, with friendly hangouts and a vibrant atmosphere everywhere you go. Its work-life balance is among the best on the planet. So, be part of a thriving business ecosystem while enjoying the great life in Amsterdam.

Meet Your Guide

Peter Maarten Westerhout

Singularity University Chapter: Amsterdam, the Netherlands

Profession: Peter Maarten Westerhout is a creative innovator, entrepreneur, and speaker. Passionate about bringing technology to life, he is a co-founder of TimeLabz, an agency that helps managers and entrepreneurs set their course in the new digital age.

Your City Guide to Amsterdam, the Netherlands

Top three industries in the city: creative, ICT, and financial and business services.

1. Coworking Space: FreedomLab

FreedomLab is both a co-working space and a consulting agency. It offers consulting, coaching, and small-group sessions as well as spacious, light-filled desks and lounge areas.

2. Makerspace: Makerversity

Makerversity, which recently opened in Amsterdam, is a makerspace that will have a textile workshop, 3D printers, vinyl and laser cutters, and an assembly space. The woodworking and general making facilities are now open and available for use.

Image Credit: Kars Alfrink / CC BY 2.0

3. Local meetups/networks: Startup Boot

Startup Boot attempts to connect founders, startup CEOs, and entrepreneurs by hosting meetups and workshops on blockchain innovation, strategies to reach out to potential clients, and other topics. Many of the organization’s events have taken place on boats on Amsterdam’s famed canals.

4. Best coffee shop with free WiFi: De Stadskantine

De Stadskantine offers affordable food, friendly staff, and a great environment to meet locals who work and live in Amsterdam. Although the coffee is not the finest in town, De Stadskantine is definitely one of the best local spots for nomad innovators.

5. The startup neighborhood: NDSM Werf

Once one of the largest wharfs in the world, the NDSM has morphed from an industrial shipyard into a cultural hub. It hosts vintage markets, rotating festivals, and a waterfront café constructed from old shipping containers.

Image Credit: Gareth Lowndes /

6. Well-known investor or venture capitalist: Patrick de Zeeuw

Patrick de Zeeuw is cofounder and CEO of Startupbootcamp in Amsterdam and an active angel investor in media, online, and mobile companies.

7. Best way to get around: OV-fiets

OV-fiets is a convenient bicycle rental service that costs only €3.85 for a 24-hour rental period. Bicycles can be borrowed from nearly 300 locations all over the Netherlands, including train stations, bus and metro stops, and town centers.

Image Credit: Peter Maarten Westerhout

8. Local must-have dish and where to get it: Kroket & Frikandel speciaal at Febo

Febo is a chain of Dutch walk-up fast food restaurants of the automat type, where food can be purchased from vending machines. Founded in 1941 in Amsterdam, Febo’s most popular items are krokets, frikandellen, hamburgers, and Kaassoufflés.

9. City’s best-kept secret: De Ceuvel

On the heavily polluted site of a former shipyard, De Ceuvel is the creation of a team of architects who converted old houseboats into a workspace and cultural center. The development also includes a café-restaurant on the waterfront with good food and creative programming.

Image Credit: Peter Maarten Westerhout

10. Touristy must-do: Foodhallen

The Foodhallen is an indoor food hall inside a former tram depot that has over a dozen independent restaurant stands and a wide variety of culinary options.

11. Local volunteering opportunity: Amsterdam Cares

Amsterdam Cares is an active network of socially minded young professionals. With over 9,000 registered volunteers, the network supports more than 100 community organizations.

12. Local university with great resources: University of Amsterdam (UvA)

The UvA’s mission is to provide academic education for tomorrow’s leaders and innovators, carry out pioneering research, and use the results to develop socially relevant applications. Visit one of the many open buildings and libraries in the historic city center.

Image Credit: Dirk M. de Boer /

This article is for informational purposes only. All opinions in this post are the author’s alone and not those of Singularity University. Neither this article nor any of the listed information therein is an official endorsement by Singularity University.

Banner Image Credit: INTERPIXELS  /

Kategorie: Transhumanismus

Robots Will Be Able to Feel Touch With This Artificial Nerve

5 Červen, 2018 - 17:00

When the disembodied cockroach leg twitched, Yeongin Kim knew he had finally made it.

A graduate student at Stanford, Kim had been working with an international team of neuroengineers on a crazy project: an artificial nerve that acts like the real thing. Like sensory neurons embedded in our skin, the device—which kind of looks like a bendy Band-Aid—detects touch, processes the information, and sends it off to other nerves.

Yup, even if that downstream nerve is inside a cockroach leg.

Of course, the end goal of the project isn’t to fiddle with bugs for fun. Rather, the artificial nerve could soon provide prosthetics with a whole new set of sensations.

Touch is just the beginning: future versions could include a sense of temperature, feelings of movement, texture, and different types of pressure—everything that helps us navigate the environment.

The artificial nerve fundamentally processes information differently than current computer systems. Rather than dealing with 0s and 1s, the nerve “fires” like its biological counterpart. Because it uses the same language as a biological nerve, the device can directly communicate with the body—whether it be the leg of a cockroach or residual nerve endings from an amputated limb.

But prosthetics are only part of it. The artificial nerve can potentially combine with an artificial “brain”—for example, a neuromorphic computer chip that processes input somewhat like our brains—to interpret its output signals. The result is a simple but powerful multi-sensory artificial nervous system, ready to power our next generation of bio-robots.

“I think that would be really, really interesting,” said materials engineer Dr. Alec Talin at Sandia National Laboratory in California, who was not involved in the work. The team described their device in Science.

Feeling Good

Current prosthetic devices are already pretty remarkable. They can read a user’s brain activity and move accordingly. Some have sensors embedded, allowing the user to receive sparse feelings of touch or pressure. Newer experimental devices even incorporate a bio-hack that gives its wearer a sense of movement and position in space, so that the user can grab a cup of coffee or open a door without having to watch their prosthetic hand.

Yet our natural senses are far more complex, and even state-of-the-art prosthetics can generate a sense of “other,” often resulting in the device being abandoned. Replicating all the sensors in our skin has been a longtime goal of bioengineers, but hard to achieve without—here’s the kicker—actually replicating how our skin’s sensors work.

Embedded inside a sliver of our skin are thousands of receptors sensitive to pressure, temperature, pain, itchiness, and texture. When activated, these sensors shoot electrical signals down networks of sensory nerves, integrating at “nodes” along the way. Only if the signals are strong enough—if they reach a threshold—does the information get passed on to the next node, and eventually, to the spinal cord and brain for interpretation.

This “integrate-and-fire” mode of neuronal chatter is partly why our sensory system is so effective. It manages to ignore hundreds of insignificant, noisy inputs and only passes on information that is useful. Ask a classic computer to process all these data in parallel—even if running state-of-the-art deep learning algorithms—and it chokes.
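The integrate-and-fire behavior described above can be sketched as a minimal leaky neuron. The threshold and leak values here are illustrative, not physiological measurements:

```python
# A minimal leaky integrate-and-fire neuron: weak, noisy inputs decay
# away, while strong inputs accumulate past threshold and fire.
def simulate(inputs, threshold=1.0, leak=0.9):
    """Accumulate inputs with leak; emit a spike (1) when the membrane
    potential crosses threshold, then reset to zero."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak input never reaches threshold; a strong burst integrates and fires:
print(simulate([0.1, 0.1, 0.1, 0.1]))  # [0, 0, 0, 0] -- noise is ignored
print(simulate([0.6, 0.6, 0.6, 0.6]))  # [0, 1, 0, 1] -- signal gets through
```

This filtering is exactly why the scheme is efficient: sub-threshold noise simply leaks away instead of being passed upstream.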

Neuromorphic Code

One thing was clear to Kim and his colleagues: forget computers, it’s time to go neural.

Working with Dr. Zhenan Bao at Stanford University and Dr. Tae-Woo Lee at the Seoul National University in Seoul, South Korea, Kim set his sights on fabricating a flexible organic device that works like an artificial nerve.

The device contained three parts. The first is a series of sensitive touch sensors that can detect the slightest changes in pressure. Touching these sensors sparks an electrical voltage, which is then picked up by the next component: a “ring oscillator.” This is just a fancy name for a circuit that transforms voltage into electrical pulses, much like a biological neuron.

The pulses are then passed down to the third component, a synaptic transistor. That’s the Grand Central Station of the device: it takes in the electrical pulses from all active sensors and integrates the signals. If the input is sufficiently strong, the transistor fires off a chain of electrical pulses of various frequencies and magnitudes, similar to those produced by biological neurons.

In other words, the outputs of the artificial nerve are electrical patterns that the body can understand—the “neural code.”
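The three-stage chain above (sensor, ring oscillator, synaptic transistor) can be caricatured in a few lines of code. The pulse counts and threshold are made-up illustrative numbers, not values from the paper:

```python
# Toy sketch of the artificial nerve: pressure sets a pulse rate, and
# a shared "synaptic transistor" sums pulses across sensors.
def ring_oscillator(pressure, max_pulses=10):
    """More pressure -> more pulses per time window, like a
    voltage-controlled oscillator."""
    return int(round(max_pulses * min(pressure, 1.0)))

def synaptic_transistor(pulse_counts, threshold=8):
    """Integrate pulses from all active sensors; produce output only
    if the combined signal crosses threshold."""
    total = sum(pulse_counts)
    return total if total >= threshold else 0

# One light touch stays silent, but several together integrate past threshold:
light = ring_oscillator(0.3)                       # 3 pulses
print(synaptic_transistor([light]))                # 0 -- below threshold
print(synaptic_transistor([light, light, light]))  # 9 -- fires
```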

“The neural code is at the same time rich and efficient, being an optimal choice to design artificial systems for sensing and perception,” explained Dr. Chiara Bartolozzi at the Italian Institute of Technology in Genova, who was not involved in the work.

Neural Magic

In a series of tests, the team proved her right.

In one experiment, they moved a small rod across the pressure sensor in different directions and found that it could distinguish between each movement and provide an estimate of the speed.

Another test showed that a more complicated artificial nerve could differentiate between various Braille letters. The team hooked up two sets of synaptic transistors with oscillators. When the device “felt” the Braille characters, the pressure signals integrated, generating a specific output electrical pattern for each letter.

“This approach mimics the process of tactile information processing in a biological somatosensory system,” said the authors, adding that raw inputs are partially processed at synapses first before delivery to the brain.

Then there was the cockroach experiment. Here, the team hooked up the device to a single, detached cockroach leg. They then applied pressure to the device in tiny increments, which was processed and passed on to the cockroach through the synaptic transistor. The cockroach’s nervous system took the outputs as its own, twitching its leg more or less vigorously depending on how much pressure was initially applied.

The device can be used in a “hybrid bioelectronics reflex arc,” the authors explained, in that it can be used to control biological muscles. Future artificial nerves could potentially act the same way, giving prosthetics and robots both touch sensations and reflexes.

The work is still in its infancy, but the team has high hopes for their strategy. Because organic electronics like the ones used here are small and cheap to make, bioengineers could potentially pack more sensors into smaller areas. This would allow multiple artificial nerves to transmit a wider array of sensations for future prosthetic wearers, transforming the robotic appendage into something that feels more natural and “self.”

Natural haptic feedback could help users with fine motor control in prosthetic hands, such as gently holding a ripe banana. When embedded in the feet of lower-limb prosthetics, the artificial nerves could help the user walk more naturally because of pressure feedback from the ground.

The team also dreams of covering entire robots with the stretchy device. Tactile information could help robots better interact with objects, or allow surgeons to more precisely control remote surgical robots that require finesse.

And perhaps one day, the artificial nerve could even be combined with a neuromorphic chip—a computer chip that acts somewhat like the brain—and result in a simple but powerful multi-sensory artificial nervous system for future robots.

“We take skin for granted but it’s a complex sensing, signaling, and decision-making system,” said study author Dr. Zhenan Bao at Stanford University. “This artificial sensory nerve system is a step toward making skin-like sensory neural networks for all sorts of applications.”

Image Credit: Willyam Bradberry /

Kategorie: Transhumanismus

How Cyanobacteria Could Help Save the Planet

4 Červen, 2018 - 17:00

It’s very easy to forget that complex life on Earth almost missed the boat entirely. As the Sun’s luminosity gradually increases, the oceans will boil away, and the planet will no longer be in the habitable zone for life as we know it. Okay, we likely have a billion years before this happens—by which point our species will probably have destroyed itself or moved away from Earth—but Earth itself is 4.5 billion years old or so, and eukaryotic life only started to diversify 800 million or so years ago, at the end of the “boring billion.”

In other words, life seems to have arisen around four billion years ago, shortly after Earth formed, but then a few billion years passed before anything complex evolved. Another few hundred million years of bacteria, algae, and microbes sliding around in the anoxic sludge of the boring billion, and intelligent life might never have evolved at all.

Unraveling the geologic mysteries of the boring billion, and why it ended when it did, is a complex scientific question. Different parts of the earth system, including plate tectonics, the atmosphere, and the biosphere of simple lichens and cyanobacteria interacted to eventually produce the conditions for life to diversify, flourish, and grow more complex. But it is generally accepted that simple cyanobacteria (single-celled organisms that can produce oxygen through photosynthesis) were key players in providing Earth’s atmosphere and oceans with oxygen, which then allowed complex life to flourish.

Those cyanobacteria were ancestors of modern blue-green algae. It’s these simple life forms that some scientists hope will play a crucial role once again, this time in preserving the ideal atmospheric conditions for intelligent life on Earth.

As humans have already increased the concentration of CO2 in the atmosphere by around 50 percent, and look set to do much more than that, many climate scientists and politicians have invoked the concept of negative emissions—sucking carbon dioxide back out of the atmosphere. This could be done by enhancing natural processes. Rocks like olivine naturally suck up carbon dioxide on geological timescales, and some have suggested that enhanced weathering—grinding up these rocks and sprinkling them over land to act as a carbon sink—could be a useful tool to help economies become carbon neutral, and even carbon negative.
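For a sense of the numbers, the idealized weathering reaction for forsterite olivine (Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg2+ + 4 HCO3- + H4SiO4) lets us estimate the theoretical maximum CO2 uptake from molar masses alone:

```python
# Back-of-envelope: tonnes of CO2 bound per tonne of olivine, assuming
# complete weathering of pure forsterite (an idealized upper bound).
M_MG, M_SI, M_O, M_C = 24.305, 28.086, 15.999, 12.011  # g/mol

olivine = 2 * M_MG + M_SI + 4 * M_O  # molar mass of Mg2SiO4, ~140.7 g/mol
co2 = M_C + 2 * M_O                  # molar mass of CO2, ~44.0 g/mol

ratio = 4 * co2 / olivine            # 4 mol CO2 per mol olivine
print(f"~{ratio:.2f} t CO2 per t olivine (theoretical maximum)")
```

Real-world uptake would be lower: natural olivine isn't pure forsterite, and weathering is slow and incomplete.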

Others, such as Klaus Lackner, have suggested directly scrubbing carbon dioxide from the air using artificial trees: plastic sorbents that absorb CO2 from the atmosphere and can be scrubbed and re-used. The problem with this approach is that it’s expensive. When we can’t even convince more than a few dozen fossil fuel power plants to capture and bury CO2 when it’s billowing from smoke stacks in highly-concentrated form, it’s hard to imagine billions of dollars of investment to scrub diluted carbon dioxide from the atmosphere without any hope of profit beyond cleaning up our mess.

That’s not to say direct air capture couldn’t follow the same incredible cost curve that solar panels did, and R&D investment is important. But solar panels and wind turbines produce energy you can sell, while direct air capture consumes energy to filter out carbon dioxide, most of which you’ll probably have to bury at considerable expense. There’s a very big difference.

It’s this pragmatism that has led most of the economic models that allow negative emissions to use bio-energy with carbon capture and storage (BECCS); biofuels that draw down carbon in life are then burned and the carbon dioxide buried. The process is, theoretically, net-emissions negative, but at least companies can profit from the energy produced by burning the biofuels, so it’s not economically prohibitive.

The only issue is that many of the models don’t take sufficient account of land use, and BECCS-heavy scenarios have come under criticism; in some cases, three times the land area of India may need to be devoted to biofuel production, which raises the question: where will we grow the food to feed billions of people? In other words, the negative emissions industry would need to be at least comparable in size to the industry that burns fossil fuels today; we need a range of options.

It’s in this context that blue-green algae and cyanobacteria have some distinct advantages. Cyanobacteria and microalgae can use land that’s not productive for agriculture and, in the case of so-called Marine BECCS, they can be harvested from the oceans. Algae that are efficient at capturing carbon dioxide have already been used at plants like Sweden’s Algoland project to neutralize the CO2 produced as a byproduct of making cement.

What’s more, the algae can produce protein that can be used as a food additive for animals; Algoland offsets part of its costs in this way. Given that people regularly discuss using insects as a source of protein for a growing human population, we might just as easily be able to stomach protein from algae, too.

There’s another reason to bet on algae: fossil fuel companies are starting to throw research and development budgets behind them.

Exxon Mobil has funded researchers who genetically modified algae to double the rate at which they draw down carbon dioxide. Other companies are hoping to research pyrolysis, whereby algal biomass can be converted into biofuels that, with some processing, could be used in the transportation industry.

For the fossil fuel companies, it makes sense: a full switch to renewable electricity that powers an electrified transport system would threaten their business model; liquid biofuels allow the infrastructure that currently supports the oil industry to continue to exist. For this reason, those with the resources may be more inclined to invest.

Another advantage advocates will note is that the biochar produced as a byproduct of this process can return carbon to depleted soils, improving agricultural productivity.

Given that topsoil erosion due to unsustainable farming practices is an unsung environmental crisis that threatens the future of agriculture for billions, any procedure that can help to alleviate this problem will be useful. If biochar can be produced and its benefits realized at scale, marine BECCS using algae may actually help food and soil security in the future rather than threaten it.

The scale of what’s required, both to make a dent in carbon emissions and wean our economy off fossil fuels, is huge.

While optimistic analyses imply that an area around three times the size of Texas could meet global demand for liquid fuels and supply ten times the annual protein production from soy, they also acknowledge that such an industry would require a large supply of renewable energy to power the industrial processes. And they may even need some form of direct air capture to supply the enhanced CO2 to the algae. The demand for phosphorus as a nutrient, which is already strained by agriculture, is also a cause for concern.

The scientific literature is full of ideas that have the “technical potential” to supply the world’s energy needs without the use of fossil fuels. It can feel like you hear about a new one every week, and everyone has their favorite: solar panels in the Sahara would do it. Political and economic feasibility are the barriers.

Ultimately, no one technology is likely to be a magic bullet, especially not when we need to remake entire industries. But it’s tantalizing to think that the descendants of the cyanobacteria that rescued our planet from the doldrums and paved the way for intelligent life could also provide the tools we use to make sure that life has a future.

Image Credit: LeStudio /

Kategorie: Transhumanismus