Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity University

Sci-Fi Movies Are the Secret Weapon That Could Help Silicon Valley Grow Up


If there’s one line that stands the test of time in Steven Spielberg’s 1993 classic Jurassic Park, it’s probably Jeff Goldblum’s exclamation, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Goldblum’s character, Dr. Ian Malcolm, was warning against the hubris of naively tinkering with dinosaur DNA in an effort to bring these extinct creatures back to life. Twenty-five years on, his words are taking on new relevance as a growing number of scientists and companies are grappling with how to tread the line between “could” and “should” in areas ranging from gene editing and real-world “de-extinction” to human augmentation, artificial intelligence and many others.

Despite growing concerns that powerful emerging technologies could lead to unexpected and wide-ranging consequences, innovators are struggling with how to develop beneficial new products while being socially responsible. Part of the answer could lie in watching more science fiction movies like Jurassic Park.

Hollywood Lessons in Societal Risks

I’ve long been interested in how innovators and others can better understand the increasingly complex landscape around the social risks and benefits associated with emerging technologies. Growing concerns over the impacts of tech on jobs, privacy, security and even the ability of people to live their lives without undue interference highlight the need for new thinking around how to innovate responsibly.

New ideas require creativity and imagination, and a willingness to see the world differently. And this is where science fiction movies can help.

Sci-fi flicks are, of course, notoriously unreliable when it comes to accurately depicting science and technology. But because their plots are often driven by the intertwined relationships between people and technology, they can be remarkably insightful in revealing social factors that affect successful and responsible innovation.

This is clearly seen in Jurassic Park. The movie provides a surprisingly good starting point for thinking about the pros and cons of modern-day genetic engineering and the growing interest in bringing extinct species back from the dead. But it also opens up conversations around the nature of complex systems that involve both people and technology, and the potential dangers of “permissionless” innovation that’s driven by power, wealth and a lack of accountability.

Similar insights emerge from a number of other movies, including Spielberg’s 2002 film Minority Report—which presaged a growing capacity for AI-enabled crime prediction and the ethical conundrums it’s raising—as well as the 2014 film Ex Machina.

As with Jurassic Park, Ex Machina centers around a wealthy and unaccountable entrepreneur who is supremely confident in his own abilities. In this case, the technology in question is artificial intelligence.

The movie tells a tale of an egotistical genius who creates a remarkable intelligent machine—but he lacks the awareness to recognize his limitations and the risks of what he’s doing. It also provides a chilling insight into potential dangers of creating machines that know us better than we know ourselves, while not being bound by human norms or values.

The result is a sobering reminder of how, without humility and a good dose of humanity, our innovations can come back to bite us.

The technologies in Jurassic Park, Minority Report, and Ex Machina lie beyond what is currently possible. Yet these films are often close enough to emerging trends that they help reveal the dangers of irresponsible, or simply naive, innovation. This is where these and other science fiction movies can help innovators better understand the social challenges they face and how to navigate them.

Real-World Problems Worked Out On-Screen

In a recent op-ed in the New York Times, journalist Kara Swisher asked, “Who will teach Silicon Valley to be ethical?” Prompted by a growing litany of socially questionable decisions amongst tech companies, Swisher suggests that many of them need to grow up and get serious about ethics. But ethics alone are rarely enough. It’s easy for good intentions to get swamped by fiscal pressures and mired in social realities.

Elon Musk has shown that brilliant tech innovators can take ethical missteps along the way. Image Credit: AP Photo/Chris Carlson

Technology companies increasingly need to find some way to break from business as usual if they are to become more responsible. High-profile cases involving companies like Facebook and Uber as well as Tesla’s Elon Musk have highlighted the social as well as the business dangers of operating without fully understanding the human consequences of their actions.

Many more companies are struggling to create socially beneficial technologies and discovering that, without the necessary insights and tools, they risk blundering about in the dark.

For instance, earlier this year, researchers from Google and DeepMind published details of an artificial intelligence-enabled system that can lip-read far better than people. According to the paper’s authors, the technology has enormous potential to improve the lives of people who have trouble speaking aloud. Yet it doesn’t take much to imagine how this same technology could threaten the privacy and security of millions—especially when coupled with long-range surveillance cameras.

Developing technologies like this in socially responsible ways requires more than good intentions or simply establishing an ethics board. People need a sophisticated understanding of the often complex dynamic between technology and society. And while, as Mozilla’s Mitchell Baker suggests, scientists and technologists engaging with the humanities can be helpful, it’s not enough.

An Easy Way into a Serious Discipline

The “new formulation” of complementary skills Baker says innovators desperately need already exists in a thriving interdisciplinary community focused on socially responsible innovation. My home institution, the School for the Future of Innovation in Society at Arizona State University, is just one part of this.

Experts within this global community are actively exploring ways to translate good ideas into responsible practices. And this includes the need for creative insights into the social landscape around technology innovation, and the imagination to develop novel ways to navigate it.

People love to come together as a movie audience. Image Credit: The National Archives UK, CC BY 4.0

Here is where science fiction movies become a powerful tool for guiding innovators, technology leaders and the companies where they work. Their fictional scenarios can reveal potential pitfalls and opportunities that can help steer real-world decisions toward socially beneficial and responsible outcomes, while avoiding unnecessary risks.

And science fiction movies bring people together. By their very nature, these films are social and educational levelers. Look at who’s watching and discussing the latest sci-fi blockbuster, and you’ll often find a diverse cross-section of society. The genre can help build bridges between people who know how science and technology work, and those who know what’s needed to ensure they work for the good of society.

This is the underlying theme in my new book Films from the Future: The Technology and Morality of Sci-Fi Movies. It’s written for anyone who’s curious about emerging trends in technology innovation and how they might potentially affect society. But it’s also written for innovators who want to do the right thing and just don’t know where to start.

Of course, science fiction films alone aren’t enough to ensure socially responsible innovation. But they can help reveal some profound societal challenges facing technology innovators and possible ways to navigate them. And what better way to learn how to innovate responsibly than to invite some friends round, open the popcorn and put on a movie?

It certainly beats being blindsided by risks that, with hindsight, could have been avoided.

Andrew Maynard, Director, Risk Innovation Lab, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Fred Mantel /

Category: Transhumanism

The Spatial Web Will Map Our 3D World—And Change Everything In the Process

16 November 2018 - 16:00

The boundaries between digital and physical space are disappearing at a breakneck pace. What was once static and boring is becoming dynamic and magical.

For all of human history, looking at the world through our eyes was the same experience for everyone. Beyond the bounds of an over-active imagination, what you see is the same as what I see.

But all of this is about to change. Over the next two to five years, the world around us is about to light up with layer upon layer of rich, fun, meaningful, engaging, and dynamic data. Data you can see and interact with.

This magical future ahead is called the Spatial Web and will transform every aspect of our lives, from retail and advertising, to work and education, to entertainment and social interaction.

Massive change is underway as a result of a series of converging technologies, from 5G global networks and ubiquitous artificial intelligence, to 30+ billion connected devices (known as the IoT), each of which will generate scores of real-world data points every second, everywhere.

The current AI explosion will make everything smart, autonomous, and self-programming. Blockchain and cloud-enabled services will support a secure data layer, putting data back in the hands of users and allowing us to build complex rule-based infrastructure in tomorrow’s virtual worlds.

And with the rise of online-merge-offline (OMO) environments, two-dimensional screens will no longer serve as our exclusive portal to the web. Instead, virtual and augmented reality eyewear will allow us to interface with a digitally-mapped world, richly layered with visual data.

Welcome to the Spatial Web. Over the next few months, I’ll be doing a deep dive into the Spatial Web (a.k.a. Web 3.0), covering what it is, how it works, and its vast implications across industries, from real estate and healthcare to entertainment and the future of work. In this blog, I’ll discuss the what, how, and why of Web 3.0—humanity’s first major foray into our virtual-physical hybrid selves (BTW, this year at Abundance360, we’ll be doing a deep dive into the Spatial Web with the leaders of HTC, Magic Leap, and High-Fidelity).

Let’s dive in.

What is the Spatial Web?

While we humans exist in three dimensions, our web today is flat.

The web was designed for shared information, absorbed through a flat screen. But as proliferating sensors, ubiquitous AI, and interconnected networks blur the lines between our physical and online worlds, we need a spatial web to help us digitally map a three-dimensional world.

To put Web 3.0 in context, let’s take a trip down memory lane. In the late 1980s and early 1990s, the newly-birthed world wide web consisted of static web pages and one-way information—a monumental system of publishing and linking information unlike any unified data system before it. To connect, we had to dial up through unstable modems and struggle through insufferably slow connection speeds.

But emerging from this revolutionary (albeit non-interactive) infodump, Web 2.0 has connected the planet more in one decade than empires did in millennia.

Granting democratized participation through newly interactive sites and applications, today’s web era has turbocharged information-sharing and created ripple effects of scientific discovery, economic growth, and technological progress on an unprecedented scale.

We’ve seen the explosion of social networking sites, wikis, and online collaboration platforms. Consumers have become creators; physically isolated users have been handed a global microphone; and entrepreneurs can now access billions of potential customers.

But if Web 2.0 took the world by storm, the Spatial Web emerging today will leave it in the dust.

While there’s no clear consensus about its definition, the Spatial Web refers to a computing environment that exists in three-dimensional space—a twinning of real and virtual realities—enabled via billions of connected devices and accessed through the interfaces of virtual and augmented reality.

In this way, the Spatial Web will enable us to both build a twin of our physical reality in the virtual realm and bring the digital into our real environments.

It’s the next era of web-like technologies:

  • Spatial computing technologies, like augmented and virtual reality;
  • Physical computing technologies, like IoT and robotic sensors;
  • And decentralized computing: both blockchain—which enables greater security and data authentication—and edge computing, which pushes computing power to where it’s most needed, speeding everything up.

Geared with natural language search, data mining, machine learning, and AI recommendation agents, the Spatial Web is a growing expanse of services and information, navigable with the use of ever-more-sophisticated AI assistants and revolutionary new interfaces.

Where Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and social media on two-dimensional screens. But converging technologies are quickly transcending the laptop, and will even disrupt the smartphone in the next decade.

With the rise of wearables, smart glasses, AR / VR interfaces, and the IoT, the Spatial Web will integrate seamlessly into our physical environment, overlaying every conversation, every road, every object, conference room, and classroom with intuitively-presented data and AI-aided interaction.

Think: the Oasis in Ready Player One, where anyone can create digital personas, build and invest in smart assets, do business, complete effortless peer-to-peer transactions, and collect real estate in a virtual world.

Or imagine a virtual replica or “digital twin” of your office, each conference room authenticated on the blockchain, requiring a cryptographic key for entry.

As I’ve discussed with my good friend and “VR guru” Philip Rosedale, I’m absolutely clear that in the not-too-distant future, every physical element of every building in the world is going to be fully digitized, existing as a virtual incarnation, or even as any number of them. “Meet me at the top of the Empire State Building?” “Sure, which one?”

This digitization of life means that suddenly every piece of information can become spatial, every environment can be smarter by virtue of AI, and every data point about me and my assets—both virtual and physical—can be reliably stored, secured, enhanced, and monetized.

In essence, the Spatial Web lets us interface with digitally-enhanced versions of our physical environment and build out entirely fictional virtual worlds—capable of running simulations, supporting entire economies, and even birthing new political systems.

But while I’ll get into the weeds of different use cases next week, let’s first make this concrete.

How Does It Work?

Let’s start with the stack. In the PC days, we had a database accompanied by a program that could ingest that data and present it to us as digestible information on a screen.

Then, in the early days of the web, data migrated to servers. Information was fed through a website, with which you would interface via a browser—whether Mosaic or Mozilla.

And then came the cloud.

Resident at either the edge of the cloud or on your phone, today’s rapidly proliferating apps now allow us to interact with previously read-only data, interfacing through a smartphone. And just as Siri and Alexa have brought us voice interfaces, AI-geared phone cameras can now determine your identity, and sensors are beginning to read our gestures.

And now we’re not only looking at our screens but through them, as the convergence of AI and AR begins to digitally populate our physical worlds.

While Pokémon Go sent millions of mobile game-players on virtual treasure hunts, IKEA is just one of the many companies letting you map virtual furniture within your physical home—simulating everything from cabinets to entire kitchens. No longer the one-sided recipients, we’re beginning to see through sensors, creatively inserting digital content in our everyday environments.

Let’s take a look at how the latest incarnation might work. In this new Web 3.0 stack, my personal AI would act as an intermediary, accessing public or privately-authorized data through the blockchain on my behalf, and then feed it through an interface layer composed of everything from my VR headset, to numerous wearables, to my smart environment (IoT-connected devices or even in-home robots).
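The stack described above can be sketched as a toy dataflow: a data layer secured by a ledger, a personal AI as intermediary, and an interface layer of devices. Everything in this sketch is hypothetical and invented purely for illustration (the class names, the `spatial://` address scheme, the owner-only authorization rule); no real Spatial Web API is implied.

```python
# Toy sketch of the Web 3.0 stack: ledger-secured data layer -> personal AI
# intermediary -> interface layer. All names here are hypothetical.

class LedgerStore:
    """Stands in for blockchain-secured data: address -> (owner, payload)."""
    def __init__(self):
        self._records = {}

    def put(self, address, owner, payload):
        self._records[address] = (owner, payload)

    def get(self, address, requester):
        owner, payload = self._records[address]
        # Toy authorization rule: only the owner may read.
        if requester != owner:
            raise PermissionError(f"{requester} not authorized for {address}")
        return payload


class PersonalAI:
    """The intermediary: fetches authorized data and routes it to interfaces."""
    def __init__(self, user, store, interfaces):
        self.user, self.store, self.interfaces = user, store, interfaces

    def present(self, address):
        payload = self.store.get(address, requester=self.user)
        # Fan the same payload out to every device in the interface layer.
        return {iface: f"[{iface}] {payload}" for iface in self.interfaces}


store = LedgerStore()
store.put("spatial://office/room-42", owner="alice", payload="Q3 sales overlay")
ai = PersonalAI("alice", store, interfaces=["vr-headset", "smart-watch"])
print(ai.present("spatial://office/room-42"))
```

The point of the sketch is the separation of layers: the interface devices never touch the data layer directly; everything is mediated by the user’s AI, which carries the user’s authorization.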

But as we attempt to build a smart world with smart infrastructure, smart supply chains and smart everything else, we need a set of basic standards with addresses for people, places, and things. Just like our web today relies on the Internet Protocol (TCP/IP) and other infrastructure, by which your computer is addressed and data packets are transferred, we need infrastructure for the Spatial Web.

And a select group of players is already stepping in to fill this void. Proposing new structural designs for Web 3.0, some are attempting to evolve today’s web model from text-based web pages in 2D to three-dimensional AR and VR web experiences located in both digitally-mapped physical worlds and newly-created virtual ones.

With a spatial markup language analogous to HTML, imagine building a linkable address for any physical or virtual space, granting it a format that then makes it interchangeable and interoperable with all other spaces.

But it doesn’t stop there.

As soon as we populate a virtual room with content, we then need to encode who sees it, who can buy it, who can move it…

And the Spatial Web’s eventual governing system (for posting content on a centralized grid) would allow us to address everything from the room you’re sitting in, to the chair on the other side of the table, to the building across the street.

Just as we have a DNS for the web and the purchasing of web domains, once we give addresses to spaces (akin to granting URLs), we then have the ability to identify and visit addressable locations, physical objects, individuals, or pieces of digital content in cyberspace.

And these not only apply to virtual worlds, but to the real world itself. As new mapping technologies emerge, we can now map rooms, objects, and large-scale environments into virtual space with increasing accuracy.

We might then dictate who gets to move your coffee mug in a virtual conference room, or when a team gets to use the room itself. Rules and permissions would be set in the grid, decentralized governance systems, or in the application layer.
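One way to picture such rules is a permission grid mapping (address, action) pairs to the users allowed to perform them. This is a minimal sketch under invented names; the `spatial://` addresses, the action vocabulary, and the flat lookup table are all hypothetical stand-ins for whatever grid or decentralized governance layer eventually emerges.

```python
# Toy permission grid for spatial content. Addresses, actions, and users
# are all hypothetical, chosen only to illustrate per-object rules.

PERMISSIONS = {
    # (address, action) -> set of authorized users
    ("spatial://hq/conf-room-1", "enter"): {"alice", "bob"},
    ("spatial://hq/conf-room-1/mug", "move"): {"alice"},
    ("spatial://hq/conf-room-1/mug", "view"): {"alice", "bob", "carol"},
}

def allowed(user, address, action):
    """Check whether `user` may perform `action` on the object at `address`."""
    return user in PERMISSIONS.get((address, action), set())

# Bob may look at the mug in the virtual conference room, but not move it.
assert allowed("bob", "spatial://hq/conf-room-1/mug", "view")
assert not allowed("bob", "spatial://hq/conf-room-1/mug", "move")
```

Real systems would of course need delegation, expiry, and payment hooks on top, but the core idea is the same: once spaces and objects are addressable, rules can attach to the addresses themselves.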

Taken one step further, imagine then monetizing smart spaces and smart assets. If you have booked the virtual conference room, perhaps you’ll let me pay you 0.25 BTC to use it instead?

But given the Spatial Web’s enormous technological complexity, what’s allowing it to emerge now?

Why Is It Happening Now?

While countless entrepreneurs have already started harnessing blockchain technologies to build decentralized apps (or dApps), two major developments are allowing today’s birth of Web 3.0:

  • High-resolution wireless VR/AR headsets are finally catapulting virtual and augmented reality out of a prolonged winter.

The International Data Corporation (IDC) predicts the VR and AR headset market will reach 65.9 million units by 2022. In the next 18 months alone, 2 billion devices will be AR-enabled. And tech giants across the board have long been investing heavy sums.

In early 2019, HTC is releasing the VIVE Focus, a wireless self-contained VR headset. At the same time, Facebook is charging ahead with its Project Santa Cruz—the Oculus division’s next-generation standalone, wireless VR headset. And Magic Leap has finally rolled out its long-awaited Magic Leap One mixed reality headset.

  • Mass deployment of 5G will drive 10- to 100-gigabit connection speeds in the next six years, matching hardware progress with the needed speed to create virtual worlds.

We’ve already seen tremendous leaps in display technology. But as connectivity speeds converge with accelerating GPUs, we’ll start to experience seamless VR and AR interfaces with ever-expanding virtual worlds.

And with such democratizing speeds, every user will be able to develop in VR.

But accompanying these two catalysts is also an important shift towards the decentralized web and a demand for user-controlled data.

Converging technologies, from immutable ledgers and blockchain to machine learning, are now enabling the more direct, decentralized use of web applications and creation of user content. With no central point of control, middlemen are removed from the equation and anyone can create an address, independently interacting with the network.

Enabled by a permissionless blockchain, any user—regardless of birthplace, gender, ethnicity, wealth, or citizenship—would thus be able to establish digital assets and transfer them seamlessly, granting us a more democratized Internet.

And with data stored on distributed nodes, this also means no single point of failure. One could have multiple backups, accessible only with digital authorization, leaving users immune to any single server failure.

Implications Abound–What’s Next…

With a newly-built stack and an interface built from numerous converging technologies, the Spatial Web will transform every facet of our everyday lives—from the way we organize and access our data, to our social and business interactions, to the way we train employees and educate our children.

We’re about to start spending more time in the virtual world than ever before. Beyond entertainment or gameplay, our livelihoods, work, and even personal decisions are already becoming mediated by a web electrified with AI and newly-emerging interfaces.

In our next blog on the Spatial Web, I’ll do a deep dive into the myriad industry implications of Web 3.0, offering tangible use cases across sectors.

Join Me

Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘on ramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level.

Image Credit: Comeback01 /


How Quantum Computing is Enabling Breakthroughs in Chemistry

15 November 2018 - 16:30

Note: Mark Jackson is Scientific Lead of Business Development at Cambridge Quantum Computing 

Quantum computing is expected to solve computational questions that cannot be addressed by existing classical computing methods. It is now accepted that the very first discipline that will be greatly advanced by quantum computers is quantum chemistry.

Quantum Computers

In 1982, the Nobel Prize-winning physicist Richard Feynman observed that simulating and then analyzing molecules was so difficult for a digital computer as to be impossible for any practical use. The problem was not that the equations governing such simulations were difficult.

In fact, they were comparatively straightforward, and had already been known for decades. The problem was that most molecules of interest contained hundreds of electrons, and each of these electrons interacted with every other electron in a quantum mechanical fashion—resulting in millions of interactions that even powerful computers could not handle.

To overcome the quantum nature of the equations, Feynman proposed quantum computers, which perform calculations based on the laws of quantum physics, as the ultimate answer. Unfortunately, such precise manipulation of individual quantum objects was far from technically possible. The joke for the past 35 years has been that quantum computing is always ten years away.

In the past few years, what was once a distant dream has slowly become a reality. Not only do quantum computers now exist, but millions of programs have been executed on them via the cloud, and useful applications have started to emerge.

The power of a quantum computer can be roughly estimated by the number of qubits, or quantum bits: each qubit can represent a 1 and 0 state simultaneously. There are a number of promising hardware approaches to quantum computing, including superconducting, ion trap, and topological. Each has advantages and disadvantages, but superconducting has taken an early lead in terms of scalability. Google, IBM, and Intel have each used this approach to fabricate quantum processors ranging from 49 to 72 qubits. Qubit quality has also improved.
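Feynman’s difficulty, and the appeal of qubits, can be made concrete with a little arithmetic: fully describing an n-qubit quantum state (or n quantum-mechanically interacting particles) takes 2^n complex amplitudes, so the memory a classical simulator needs doubles with every qubit added. The short sketch below just computes that growth; the byte estimate assumes 16 bytes per complex amplitude (two 64-bit floats).

```python
# Why classical simulation of quantum systems blows up: an n-qubit state
# needs 2**n complex amplitudes, doubling the storage with each added qubit.

def amplitudes_needed(n_qubits):
    """Number of complex amplitudes in a full n-qubit state vector."""
    return 2 ** n_qubits

for n in (10, 50, 300):
    amps = amplitudes_needed(n)
    # Assume 16 bytes per complex amplitude (two 64-bit floats).
    print(f"{n} qubits -> {amps:.3e} amplitudes (~{amps * 16:.3e} bytes)")
```

Fifty qubits already exceeds the memory of any supercomputer; a few hundred exceeds the number of atoms in the observable universe. A quantum processor sidesteps this by physically *being* such a state rather than storing it.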

Chemistry Breakthrough

The breakthrough by scientists at Cambridge Quantum Computing (CQC) and their partners at JSR Corp was the ability to model multi-reference states of molecules. Multi-reference states are often needed to describe the “excited states” arising when molecules interact.

The reason such modeling is significant is that “classical” digital computers find it virtually impossible to tackle multi-reference states; in many cases, classical computing methods fail not only quantitatively but also qualitatively in the description of the electronic structure of the molecules.

An outstanding problem—and the one recently solved—is to find ways that a quantum computer can run calculations efficiently and with the required chemical accuracy to make a difference in the real world. The program was run on IBM’s 20-qubit processor, as both CQC and JSR are members of the IBM Q Network.

Why is chemistry of such interest? It is one of the first commercially lucrative applications for a variety of reasons. Researchers hope to discover more energy-efficient materials to be used in batteries or solar panels. There are also environmental benefits: about two percent of the world’s energy supply goes toward fertilizer production, which is known to be grossly inefficient and could be improved by sophisticated chemical analysis.

Finally, there are applications in personalized medicine, with the possibility of predicting how pharmaceuticals would affect individuals based on their genetic makeup. The long-term vision is the ability to design a drug for a particular individual to maximize treatment benefit and minimize side effects.

There were two strategies employed by CQC and JSR Corp that allowed the researchers to make this advance. First, they used CQC’s proprietary compiler to most efficiently convert the computer program into instructions for qubit manipulation. Such efficiency is particularly essential on today’s low-qubit machines, in which every qubit is needed and speed of execution is critical.

Second, they utilized quantum machine learning, a special sub-field of machine learning that uses vector-like amplitudes rather than mere probabilities. The method of quantum machine learning being used is specially designed for low-qubit quantum computers, offloading some of the calculations to conventional processors.
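The hybrid idea of offloading part of the work to classical processors can be illustrated with a toy variational loop in the spirit of such algorithms: a (simulated) quantum circuit evaluates an energy for a given parameter, and a classical optimizer adjusts the parameter. This is not CQC’s actual method; the one-parameter “circuit” whose energy is cos(theta) is a stand-in chosen purely for illustration.

```python
import math

# Toy hybrid quantum-classical loop. A simulated one-qubit "circuit"
# reports an energy <Z> = cos(theta); a classical optimizer tunes theta.
# Illustrative only -- not any company's actual algorithm.

def energy(theta):
    """Expectation value <Z> after rotating |0> by angle theta."""
    return math.cos(theta)

def optimize(steps=200, lr=0.1):
    theta = 0.5  # initial guess for the circuit parameter
    for _ in range(steps):
        # Parameter-shift-style gradient, estimated from two circuit
        # evaluations (as is done on real quantum hardware):
        # for cos(theta) this recovers exactly -sin(theta).
        grad = (energy(theta + math.pi / 2) - energy(theta - math.pi / 2)) / 2
        theta -= lr * grad  # classical gradient-descent update
    return theta, energy(theta)

theta, e = optimize()
print(f"theta = {theta:.3f}, energy = {e:.3f}")  # converges near theta = pi
```

The division of labor is the point: the quantum device only ever evaluates energies, while all the bookkeeping and optimization runs on conventional processors, which is what makes the approach feasible on today’s low-qubit machines.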

The next few years will see a dramatic advance in both quantum hardware and software. As calculations become more refined, more industries will be able to take advantage of applications including quantum chemistry. The Gartner Report states that within 4 years, 20 percent of corporations will have a budget for quantum computing. Within ten years, it should be an integral component of technology.

Image Credit: Egorov Artem /


Why Scientists Are Rushing to Catalog the World’s Poop

15 November 2018 - 16:00

If a group of scientists is successful, the Svalbard Global Seed Vault will be getting a cousin—one that may initially sound rather strange. Instead of gathering seeds to preserve plant species, this project involves gathering fecal samples from people all over the globe.

The effort is known as the Global Microbiome Conservancy (GMC), and its goal is to catalog and safeguard the different kinds of gut bacteria found in human digestive systems across the planet. It’s a diversity that could be under threat from changing diets and lifestyles.

Healthy Mysteries

Each of us is a generous host to an almost uncountable number of microorganisms, including bacteria, fungi, and viruses, collectively known as the microbiome. These microorganisms play a central part in areas like our immune system and metabolism. For example, the state of your gut bacteria seems to play a vital role in relation to allergies, diabetes, and some forms of cancer, as well as how well you respond to certain types of medicine. There also seems to be a link between gut bacteria and psychological states like anxiety and depression.

Scientists believe that altering the composition of gut bacteria in the right way can lead to a range of health benefits. Eric Alm, an MIT microbiologist and one of the founders of GMC, believes there are many more potential treatments linked to gut bacteria out there than we know of today.

“I’m 100 percent confident that there are relevant medical applications for hundreds of strains we’ve screened and characterized,” he told Science.

The Diverse Gut Bacteria

Alm and his collaborators have been collecting gut bacteria samples from individuals across several continents. The process itself is less than glamorous. Plastic bowls are handed out and people poop in them and hand them back. The samples are then processed, fixed, and dried for DNA sequencing and measurement of lipid content. Samples are split into small tubes and shipped back to a lab, where the different bacterial strains are isolated and then preserved in freezers.

So far, the GMC has analyzed samples from people in North America, the Arctic, and Africa. The strains cataloged have included five previously unknown bacterial genera from North American contributors, while the strains from Africa and the Arctic have included 55 unknown genera.

GMC’s budget will support collection trips until 2021. By then the team hopes to have visited about 34 countries, covering the Arctic, Africa, Asia, Oceania, and South America. The team hopes to raise additional funds to expand their research.

A Connection to Diabetes

The results from Africa and the Arctic illustrate how indigenous people living on traditional diets tend to have more diverse gut biomes, a fact that appears to be linked to the absence of certain diseases among indigenous people.

People living in Western, more urbanized societies, whose diets tend to include more processed foods and whose food production makes heavier use of antibiotics, tend to have less diverse gut biomes, which can lead to health issues.

“There is a critical connection between autoimmune disorders and a decline in gut microbe diversity,” according to Ramnik Xavier, co-director of MIT’s Center for Microbiome Informatics and Therapeutics.

Xavier was part of a 2015 study that looked at infants in Finland who were genetically predisposed to develop type 1 diabetes. The study found that there was a connection between changes in the gut microbiome and the onset of type 1 diabetes.

“These and other discoveries begin to paint a picture of the ‘missing microbiome’ and underscore the importance of identifying gut microbes that may be depleted from industrialized societies,” Xavier explained.

Diversity Under Threat

GMC’s effort to find new potential cures in gut bacteria is turning into a race against time. The rapid westernization of many traditional societies is changing diets in ways that could lead to the disappearance of certain kinds of gut bacteria.

“Strains that co-evolved with humans are currently disappearing,” Alm told Science.

This could have long-term effects on efforts to understand precisely how the microbiome helps fend off disease, and how it could be used as a tool to improve our health.

Image Credit: Anatomy Insider

Category: Transhumanism

Designer Babies, and Their Babies: How AI and Genomics Will Impact Reproduction

14 November 2018 - 16:00

As if stand-alone technologies weren’t advancing fast enough, we’re in an age where we must study the intersection points of these technologies. How is what’s happening in robotics influenced by what’s happening in 3D printing? What could be made possible by applying the latest advances in quantum computing to nanotechnology?

Along these lines, one crucial tech intersection is that of artificial intelligence and genomics. Each field is seeing constant progress, but Jamie Metzl believes it’s their convergence that will really push us into uncharted territory, beyond even what we’ve imagined in science fiction. “There’s going to be this push and pull, this competition between the reality of our biology with its built-in limitations and the scope of our aspirations,” he said.

Metzl is a senior fellow at the Atlantic Council and author of the upcoming book Hacking Darwin: Genetic Engineering and the Future of Humanity. At Singularity University’s Exponential Medicine conference last week, he shared his insights on genomics and AI, and where their convergence could take us.

Life As We Know It

Metzl explained how genomics as a field evolved slowly—and then quickly. In 1953, James Watson and Francis Crick identified the double helix structure of DNA, and realized that the order of the base pairs held a treasure trove of genetic information. There was such a thing as a book of life, and we’d found it.

In 2003, when the Human Genome Project was completed (after 13 years and $2.7 billion), we learned the order of the genome’s 3 billion base pairs, and the location of specific genes on our chromosomes. Not only did a book of life exist, we figured out how to read it.

Jamie Metzl at Exponential Medicine

Fifteen years after that, it’s 2018 and precision gene editing in plants, animals, and humans is changing everything, and quickly pushing us into an entirely new frontier. Forget reading the book of life—we’re now learning how to write it.

“Readable, writable, and hackable, what’s clear is that human beings are recognizing that we are another form of information technology, and just like our IT has entered this exponential curve of discovery, we will have that with ourselves,” Metzl said. “And it’s intersecting with the AI revolution.”

Learning About Life Meets Machine Learning

In 2016, DeepMind’s AlphaGo program outsmarted the world’s top Go player. In 2017 AlphaGo Zero was created: unlike AlphaGo, AlphaGo Zero wasn’t trained using previous human games of Go, but was simply given the rules of Go—and in four days it defeated the AlphaGo program.

Our own biology is, of course, vastly more complex than the game of Go, and that, Metzl said, is our starting point. “The system of our own biology that we are trying to understand is massively, but very importantly not infinitely, complex,” he added.

Getting a standardized set of rules for our biology—and, eventually, maybe even outsmarting our biology—will require genomic data. Lots of it.

Multiple countries are already starting to produce this data. The UK’s National Health Service recently announced a plan to sequence the genomes of five million Britons over the next five years. In the US the All of Us Research Program will sequence a million Americans. China is the most aggressive in sequencing its population, with a goal of sequencing half of all newborns by 2020.

“We’re going to get these massive pools of sequenced genomic data,” Metzl said. “The real gold will come from comparing people’s sequenced genomes to their electronic health records, and ultimately their life records.” Getting people comfortable with allowing open access to their data will be another matter; Metzl mentioned that Luna DNA and others have strategies to help people consent to sharing their private information. But this is where China’s lack of privacy protection could end up being a significant advantage.

To compare genotypes and phenotypes at scale—first millions, then hundreds of millions, then eventually billions, Metzl said—we’re going to need AI and big data analytic tools, and algorithms far beyond what we have now. These tools will let us move from precision medicine to predictive medicine, knowing precisely when and where different diseases are going to occur and shutting them down before they start.

But, Metzl said, “As we unlock the genetics of ourselves, it’s not going to be about just healthcare. It’s ultimately going to be about who and what we are as humans. It’s going to be about identity.”

Designer Babies, and Their Babies

In Metzl’s mind, the most serious application of our genomic knowledge will be in embryo selection.

Currently, in-vitro fertilization (IVF) procedures can extract around 15 eggs, fertilize them, then do pre-implantation genetic testing. “Right now what’s knowable is single-gene mutation diseases and simple traits like hair color and eye color. As we get to the millions and then billions of people with sequences, we’ll have information about how these genetics work, and we’re going to be able to make much more informed choices,” Metzl said.

Imagine going to a fertility clinic in 2023. You give a skin graft or a blood sample, and using in-vitro gametogenesis (IVG)—infertility be damned—your skin or blood cells are induced to become eggs or sperm, which are then combined to create embryos. The dozens or hundreds of embryos created from artificial gametes each have a few cells extracted from them, and these cells are sequenced. The sequences will tell you the likelihood of specific traits and disease states were that embryo to be implanted and taken to full term. “With really anything that has a genetic foundation, we’ll be able to predict with increasing levels of accuracy how that potential child will be realized as a human being,” Metzl said.

This, he added, could lead to some wild and frightening possibilities: if you have 1,000 eggs and you pick one based on its optimal genetic sequence, you could then mate your embryo with somebody else who has done the same thing in a different genetic line. “Your five-day-old embryo and their five-day-old embryo could have a child using the same IVG process,” Metzl said. “Then that child could have a child with another five-day-old embryo from another genetic line, and you could go on and on down the line.”

Sounds insane, right? But wait, there’s more: as Jason Pontin reported earlier this year in Wired, “Gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable.”

From Crazy to Commonplace?

It’s a slippery slope from gene editing and embryo-mating to a dystopian race to build the most perfect humans possible. If somebody’s investing so much time and energy in selecting their embryo, Metzl asked, how will they think about the mating choices of their children? IVG could quickly leave the realm of healthcare and enter that of evolution.

“We all need to be part of an inclusive, integrated, global dialogue on the future of our species,” Metzl said. “Healthcare professionals are essential nodes in this.” Not least among this dialogue should be the question of access to tech like IVG; are there steps we can take to keep it from becoming a tool for a wealthy minority, and thereby perpetuating inequality and further polarizing societies?

As Pontin points out, at its inception 40 years ago IVF also sparked fear, confusion, and resistance—and now it’s as normal and common as could be, with millions of healthy babies conceived using the technology.

The disruption that genomics, AI, and IVG will bring to reproduction could follow a similar story cycle—if we’re smart about it. As Metzl put it, “This must be regulated, because it is life.”

Image Credit: hywards

Category: Transhumanism

Ears Grown From Apples? The Promise of Plants for Engineering Human Tissue

13 November 2018 - 16:00

Inspiration for game-changing science can seemingly come from anywhere. A moldy bacterial plate gave us the first antibiotic, penicillin. Zapping yeast with a platinum electrode led to a powerful chemotherapy drug, cisplatin.

For Dr. Andrew Pelling at the University of Ottawa, the radical idea came from a sci-fi cult classic called The Little Shop of Horrors. Specifically, he was intrigued by the movie’s main antagonist, a man-eating plant called Audrey II.

“What you have here is a plant-like creature with mammalian features,” said Pelling at the Exponential Medicine conference in San Diego last week. “So we started wondering: can we grow this in the lab?”

Pelling’s end goal, of course, isn’t to bring a sci-fi monster to life. Rather, he wanted to see whether grocery-store-bought plants can supply the necessary structure for engineering replacement human tissues.

Andrew Pelling at Exponential Medicine

The Rise of Mechanobiology

Growing a human ear out of apples may seem irrational, but Pelling’s key insight is that an apple’s fibrous interior is strikingly similar to the microenvironments usually used in labs to bio-engineer human tissue.

To fabricate a replacement ear, for example, scientists normally carve or 3D print hollow support structures out of expensive bio-compatible materials. They then seed human stem cells into the structure, and painstakingly supply a cocktail of growth factors and nutrients to urge the cells to grow. Eventually, after weeks or months of incubation, the cells spread and differentiate into skin-like cells on the scaffold. The result is a bio-engineered replacement ear.

The problem? The extremely high bar to entry: stem cells, growth factors, and materials for the scaffold are all difficult and expensive to procure.

But are those key components really necessary?

“We often think about biology through the lenses of the genome or biochemistry,” said Pelling. But cells and tissue are living components—they stretch, compress, and shear, producing mechanical forces that act upon each other.

In a series of experiments, Pelling and others found that these mechanical forces aren’t just a side product of biology; rather, they seem to crucially regulate the underlying molecular machinery of the cell.

An early study found that every stage of the growth of embryos—a “fundamental process in biology”—can be regulated and controlled by mechanical information. In other words, physical forces can drive cells to divide and migrate through tissues as our genetic code guides the formation of an entire body.

In the lab, stretching and mechanically stimulating the cells seems to fundamentally change their behaviors, too. In one assay, Pelling’s team peppered cancerous cells onto a sheet of skin cells grown on the bottom of a Petri dish. The cancer cells huddled together into little balls, forming a distinct barrier between the microtumor and the skin cells.

But when the team put the entire cellular system into a device that minutely stretches it—mimicking the body’s breathing and movement—the tumor cells became aggressive, tunneling into the layer of skin cells.

“There’s no gene modification…or biochemistry going on here. This is a purely mechanical influence,” said Pelling. “There’s a fundamental link between these things.”

Even cooler: active movement isn’t necessary for mechanical forces to transform the way cells behave. The shape of their microenvironment is enough to direct their actions.

For example, when Pelling put two cell types into a physical structure with grooves, the cells self-segregated within hours, with one type growing in the troughs and the other on the higher ledges. By simply sensing the shape of that grooved surface they “learned” to separate and spatially pattern over long ranges.

The takeaway: using shape alone, it’s possible to stimulate cells to form complex three-dimensional patterns.

Here’s where the apple comes in.

Apple of My…Ear?

Under the microscope, the microenvironment of an apple is on the same length scale as engineered surfaces for fabricating replacement tissues. That discovery got the team to wonder: is it possible to exploit that surface pattern of plants to grow human organs?

To test it out, they took an apple and washed away all its plant cells, DNA, and other biomolecules. This left them with a fibrous scaffold—the stuff that usually gets stuck in your teeth. When the team stuck human and animal cells inside, the cells began to grow and spread.

Encouraged, the team then hand-carved an apple into the shape of a human ear and repeated the process above. Within weeks the cells infiltrated, turning the chunk of apple into a fleshy human ear.

Of course, having the right shape isn’t enough. The replacement tissue also has to survive inside the body.

The team next implanted an apple-based scaffold directly under the skin of a mouse. In just eight weeks, not only had the mouse’s healthy cells invaded the matrix, the rodent’s body also produced new collagen and blood vessels that helped keep the scaffold living and healthy.

That ticks three important aspects for an engineered tissue: it’s safe, it’s biocompatible, and it comes from a sustainable, ethical source.

“This thing is becoming a living part of the body and it used to be an apple, and we did this by going to the grocery store,” said Pelling.

Moving Into the Clinical Space

Pelling is especially excited by his finding because of its simplicity: it doesn’t require stem cells or exotic growth factors to work. The elegant approach exploits the physical structure of the plant.

The team is now broadening its work to three main areas of tissue engineering: soft tissue cartilage, bone, and spinal cord and nerve repair. The key is to match the specific microstructure of a plant to that of the tissue, Pelling explained.

“It’s really exciting to see these kinds of wild ideas translate this way,” he said.

And why restrict ourselves to the body parts nature gave us? If the shape of a scaffold is the sole determinant of engineering a tissue or organ, why not design our own?

Pelling took the idea and ran with it, commissioning a design company to sketch out the scaffold for three different types of ears: an average human ear, a pointy Spock-shaped one, and a wavy one designed to suppress or enhance different frequencies to—in theory—augment hearing.

“The point I want to emphasize is…the strength of blue-sky thinking is actually coupling it to the rigor of the scientific method,” Pelling concluded. Ultimately this is how we’ll create more inventions and solve problems.

Image Credit: WhiteWings

Category: Transhumanism

Breaking Out of the Corporate Bubble With Uncommon Partners

12 November 2018 - 17:30

For big companies, success is a blessing and a curse. You don’t get big without doing something (or many things) very right. It might start with an invention or service the world didn’t know it needed. Your product takes off, and growth brings a whole new set of logistical challenges. Delivering consistent quality, hiring the right team, establishing a strong culture, tapping into new markets, satisfying shareholders. The list goes on.

Eventually, however, what made you successful also makes you resistant to change.

You’ve built a machine for one purpose, and it’s running smoothly, but what about retooling that machine to make something new? Not so easy. Leaders of big companies know there is no future for their organizations without change. And yet, they struggle to drive it.

In their new book, Leading Transformation: How to Take Charge of Your Company’s Future, Kyle Nel, Nathan Furr, and Thomas Ramsøy aim to deliver a roadmap for corporate transformation.

The book focuses on practical tools that have worked in big companies to break down behavioral and cognitive biases, envision radical futures, and run experiments. These include using science fiction and narrative to see ahead and adopting better measures of success for new endeavors.

A thread throughout is how to envision a new future and move into that future.

We’re limited by the bubbles in which we spend the most time—the corporate bubble, the startup bubble, the nonprofit bubble. The mutually beneficial convergence of complementary bubbles, then, can be a powerful tool for kickstarting transformation. The views and experiences of one partner can challenge the accepted wisdom of the other; resources can flow into newly co-created visions and projects; and connections can be made that wouldn’t otherwise exist.

The authors call such alliances uncommon partners. In the following excerpt from the book, Made In Space, a startup building 3D printers for space, helps Lowe’s explore an in-store 3D printing system, and Lowe’s helps Made In Space expand its vision and focus.

Uncommon Partners

In a dingy conference room at NASA, five prototypical nerds, smelling of Thai food, laid out the path to printing satellites in space and buildings on distant planets. At the end of their four-day marathon, they emerged with an artifact trail that began with early prototypes for the first 3D printer on the International Space Station and ended in the additive-manufacturing future—a future much bigger than 3D printing.

In the additive-manufacturing future, we will view everything as transient, or capable of being repurposed into new things. Rather than throwing away a soda bottle or a bent nail, we will simply reprocess these things into a new hinge for the fence we are building or a light switch plate for the tool shed. Indeed, we might not even go buy bricks for the tool shed, but instead might print them from impurities pulled from the air and the dirt beneath our feet. Such a process would both capture carbon in the air to make the bricks and avoid all the carbon involved in making and then transporting traditional bricks to your house.

If it all sounds a little too science fiction, think again. Lowe’s has already been honored as a Champion of Change by the US government for its prototype system to recycle plastic (e.g., plastic bags and bottles). The future may be closer than you have imagined. But to get there, Lowe’s didn’t work alone. It had to work with uncommon partners to create the future.

Uncommon partners are the types of organizations you might not normally work with, but which can greatly help you create radical new futures. Increasingly, as new technologies emerge and old industries converge, companies are finding that working independently to create all the necessary capabilities to enter new industries or create new technologies is costly, risky, and even counterproductive. Instead, organizations are finding that they need to collaborate with uncommon partners as an ecosystem to cocreate the future together. Nathan [Furr] and his colleague at INSEAD, Andrew Shipilov, call this arrangement an adaptive ecosystem strategy and described how companies such as Lowe’s, Samsung, Mastercard, and others are learning to work differently with partners and to work with different kinds of partners to more effectively discover new opportunities. For Lowe’s, an adaptive ecosystem strategy working with uncommon partners forms the foundation of capturing new opportunities and transforming the company. Despite its increased agility, Lowe’s can’t be (and shouldn’t become) an independent additive-manufacturing, robotics-using, exosuit-building, AR-promoting, fill-in-the-blank-what’s-next-ing company in addition to being a home improvement company. Instead, Lowe’s applies an adaptive ecosystem strategy to find the uncommon partners with which it can collaborate in new territory.

To apply the adaptive ecosystem strategy with uncommon partners, start by identifying the technical or operational components required for a particular focus area (e.g., exosuits) and then sort these components into three groups. First, there are the components that are emerging organically without any assistance from the orchestrator—the leader who tries to bring together the adaptive ecosystem. Second, there are the elements that might emerge, with encouragement and support. Third are the elements that won’t happen unless you do something about it. In an adaptive ecosystem strategy, you can create regular partnerships for the first two elements—those already emerging or that might emerge—if needed. But you have to create the elements in the final category (those that won’t emerge) either with an uncommon partner or by yourself.

For example, when Lowe’s wanted to explore the additive-manufacturing space, it began a search for an uncommon partner to provide the missing but needed capabilities. Unfortunately, initial discussions with major 3D printing companies proved disappointing. The major manufacturers kept trying to sell Lowe’s 3D printers. But the vision our group had created with science fiction was not for vendors to sell Lowe’s a printer, but for partners to help the company build a system—something that would allow customers to scan, manipulate, print, and eventually recycle additive-manufacturing objects. Every time we discussed 3D printing systems with these major companies, they responded that they could do it and then tried to sell printers. When Carin Watson, one of the leading lights at Singularity University, introduced us to Made In Space (a company being incubated in Singularity University’s futuristic accelerator), we discovered an uncommon partner that understood what it meant to cocreate a system.

Initially, Made In Space had been focused on simply getting 3D printing to work in space, where you can’t rely on gravity, you can’t send up a technician if the machine breaks, and you can’t release noxious fumes into cramped spacecraft quarters. But after the four days in the conference room going over the comic for additive manufacturing, Made In Space and Lowe’s emerged with a bigger vision. The company helped lay out an artifact trail that included not only the first printer on the International Space Station but also printing system services in Lowe’s stores.

Of course, the vision for an additive-manufacturing future didn’t end there. It also reshaped Made In Space’s trajectory, encouraging the startup, during those four days in a NASA conference room, to design a bolder future. Today, some of its bold projects include the Archinaut, a system that enables satellites to build themselves while in space, a direction that emerged partly from the science fiction narrative we created around additive manufacturing.

In summary, uncommon partners help you succeed by providing you with the capabilities you shouldn’t be building yourself, as well as with fresh insights. You also help uncommon partners succeed by creating new opportunities from which they can prosper.

Helping Uncommon Partners Prosper

Working most effectively with uncommon partners can require a shift from more familiar outsourcing or partnership relationships. When working with uncommon partners, you are trying to cocreate the future, which entails a great deal more uncertainty. Because you can’t specify outcomes precisely, agreements are typically less formal than in other types of relationships, and they operate under the provisions of shared vision and trust more than binding agreement clauses. Moreover, your goal isn’t to extract all the value from the relationship. Rather, you need to find a way to share the value.

Ideally, your uncommon partners should be transformed for the better by the work you do. For example, Lowe’s uncommon partner developing the robotics narrative was a small startup called Fellow Robots. Through their work with Lowe’s, Fellow Robots transformed from a small team focused on a narrow application of robotics (which was arguably the wrong problem) to a growing company developing a very different and valuable set of capabilities: putting cutting-edge technology on top of the old legacy systems embedded at the core of most companies. Working with Lowe’s allowed Fellow Robots to discover new opportunities, and today Fellow Robots works with retailers around the world, including BevMo! and Yamada. Ultimately, working with uncommon partners should be transformative for both of you, so focus more on creating a bigger pie than on how you are going to slice up a smaller pie.

The above excerpt appears in the new book Leading Transformation: How to Take Charge of Your Company’s Future by Kyle Nel, Nathan Furr, and Thomas Ramsøy, published by Harvard Business Review Press.

Image Credit: Here


Category: Transhumanism

Hacking the Mind Just Got Easier With These New Tools

12 November 2018 - 17:00

For eons, the only way to access the three-pound mushy bio-computer between our ears was to physically crack the skull, or insert a sharp object up the nose.

Lucky for us, these examples of medical barbarism have been relegated to history. Yet the goal of reaching through the skull to modulate brain activity hasn’t changed. Within the brain, millions of neurons and their billions of connections hum with electrical activity, weaving intricate connective patterns that lead to thoughts, behaviors, and memories.

If we have the tools to read and tweak those circuits, we have the key to treating mental disorders, or even augmenting the mind.

What sounded completely intractable just a decade ago is now possible. Optogenetics, a technique that lets scientists control neuronal activity with light, has been used to implant false memories into mice. Scientists are playing with ultrasound to control brain circuits. It’s now possible to recapitulate what a person is seeing based on their brain activity alone. Brain-machine interfaces have given paralyzed folks the ability to walk again. Even rudimentary telepathy between people is now a thing.

Yet to Dr. Divya Chander at Stanford University, these technologies have two fundamental flaws that limit their transformative nature. First, most require invasive implants and open-brain surgery. Second, they’re often unwieldy and extremely expensive.

Last week at Singularity University’s Exponential Medicine conference in San Diego, technologists presented new non-invasive devices that seek to simplify and democratize brain modulation. Physically tunneling through the skull may soon be another thing of the past.

Openwater, the Wearable MRI

Being inside an MRI machine is not a pleasant experience. You’re in a tiny claustrophobic tube surrounded by a giant magnet, and instructed to lie extremely still as the machine churns away.

Nevertheless, state-of-the-art MRIs are the current gold standard for generating high-resolution images of your brain structure. Functional MRI, which tracks blood flow—a proxy for neural activity—has also been instrumental in teasing out the intricacies of brain activation in response to a changing environment. But they’re bulky and expensive; two-thirds of humanity has no access to the technology.

To Dr. Mary Lou Jepsen, CEO and founder of Openwater, the solution is simple in concept: shrink the machine down to the size of a ski hat, a bra, or a bandage, and manufacture the gadget at the cost of a smartphone. The trick, she explains, is to move away from magnets and instead turn to light.

The human body is translucent to red and near-infrared light, allowing our tissues—including both skull and brain—to be illuminated. The problem is that the light scatters as it passes through tissue, which prevents a sharp, clear image.

To re-focus the light, Jepsen turned to holograms. “Holography records the intensity of light and the phase of light waves,” she explained. Because it captures all light rays and photons at all positions and angles simultaneously, a hologram can be used to re-direct light rays into a single stream of light.

During the scan, the device first shoots focused ultrasound waves to a spot on the tissue. Next comes the red light, which slightly changes color to orange when it goes through the “sonic spot.”

Jepsen then matches this output orange light with another disc of similar orange light to form the hologram. “Holograms can only be made from two beams of light of the same color,” she explained. The resulting hologram is then recorded on a camera chip.

The result? All red light is filtered out, so that the setup only captures information about that particular sonic spot. Spot by spot, the device can image the entire brain.
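The tagging scheme Jepsen describes—ultrasound marks one spot, and only light that passed through that spot shifts color—can be caricatured numerically. The sketch below is purely illustrative: the carrier frequency, shift, and amplitudes are invented stand-ins, and real acousto-optic detection works on light waves, not audio-rate signals. It shows only the filtering idea: recover the weak frequency-shifted "orange" component from a signal dominated by unshifted "red."

```python
import numpy as np

FS = 10_000     # samples per second (arbitrary toy value)
F_RED = 1_000   # stand-in for the red carrier frequency, Hz (invented)
F_SHIFT = 50    # ultrasound-induced shift at the "sonic spot", Hz (invented)

t = np.arange(FS) / FS
# Detected light: mostly unshifted "red", plus a weak "orange"
# component that passed through the ultrasound-tagged spot.
red = 1.0 * np.sin(2 * np.pi * F_RED * t)
orange = 0.05 * np.sin(2 * np.pi * (F_RED + F_SHIFT) * t)
detected = red + orange

# Filter out the red carrier: keep only the frequency-shifted bin,
# analogous to keeping only the orange light in the hologram step.
spectrum = np.fft.rfft(detected)
freqs = np.fft.rfftfreq(FS, d=1.0 / FS)
shifted_bin = np.argmin(np.abs(freqs - (F_RED + F_SHIFT)))
amplitude = 2 * np.abs(spectrum[shifted_bin]) / FS
print(f"recovered orange amplitude: {amplitude:.3f}")  # ≈ 0.05
```

Because the two components sit in different frequency bins, the strong red carrier contributes nothing to the shifted bin, and the weak tagged signal is recovered cleanly—the same logic, in spirit, as imaging one sonic spot at a time.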

Openwater is currently building a prototype, and Jepsen is particularly excited about testing it on brain diseases. Because blood absorbs red light, it’s an especially attractive target to image. Tumors often carry five times the blood levels of normal tissues, making them pop under red light; in contrast, stroke restricts blood flow, which lets blood-deprived tissue show up as a dark spot on scans.

In theory, the device could even track neural activity. Scientists have long used increased oxygenated blood flow as a proxy for neural activation. Jepsen’s device can track the same changes with light.

Eventually Jepsen hopes to supply rural places, ambulances, and urgent care centers with the device. “I think…this is inevitable,” she concluded.

A Wearable Brain-Machine Interface

Mind-controlled prostheses have come a long way, yet most still require implanted electrodes to precisely capture intentions of movement.

Back in 2012, Dr. Eric Leuthardt, a neurosurgeon at Washington University in St. Louis, began experimenting with ways to capture the brain’s movement instructions using wearables.

Specifically, he explained, “I wanted to use these neurotechnologies to connect our mind and heal our brains in the setting of stroke, focusing on patients that lost control of hand functions after the attack.”

The crux of Leuthardt’s system is a peculiar electrical fingerprint in a region of the brain called the premotor cortex. This area plans movements—either real or imagined—and the signals subsequently get sent to the motor cortex on the other side of the brain and carried out.

Leuthardt found that using a cap embedded with electrodes, he could reliably pick up the low-frequency signals generated by the premotor cortex. These “planning” signals are then sent to a machine learning algorithm to parse out the intended movement. Finally, the results of the computation are used to control a prosthetic to carry out the movement.
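The pipeline described above—record scalp signals, extract low-frequency "planning" features, classify intent, drive the prosthetic—can be sketched in toy form. This is not Leuthardt's actual system: the sampling rate, frequency band, threshold, and simulated signals below are all invented for demonstration, and a real decoder would use a trained classifier rather than a fixed threshold.

```python
import numpy as np

FS = 250  # sampling rate in Hz (assumed, not from the article)

def band_power(signal, fs, lo=0.5, hi=4.0):
    """Power in a low-frequency band, computed via FFT (band limits invented)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].sum() / len(signal)

def decode_intent(signal, fs=FS, threshold=5.0):
    """Toy decoder: strong low-frequency power -> 'move', else 'rest'."""
    return "move" if band_power(signal, fs) > threshold else "rest"

# Simulated one-second epochs: a 2 Hz "planning" rhythm vs. low-amplitude noise.
t = np.arange(FS) / FS
rng = np.random.default_rng(0)
planning = 5.0 * np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(FS)
rest = 0.1 * rng.standard_normal(FS)

print(decode_intent(planning))  # "move" -> prosthetic executes the motion
print(decode_intent(rest))      # "rest" -> prosthetic stays still
```

In the real system, the machine learning step replaces the threshold, mapping the extracted features to a specific intended movement before the command reaches the prosthetic.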

With training, the stroke patients were able to use their minds to pick up a marble and place it into a cup—a remarkably complex operation. Eventually they could perform everyday tasks with their prosthetic hands, such as pulling up pants.

“What’s so cool about this technology is it’s not a drug, doesn’t require surgery, we’re simply using a technology to harvest the power of your own thoughts to change the wiring and structure of your brain,” said Leuthardt.

Another of Leuthardt’s innovative devices, the eQuility stimulator, is striving to disrupt negative thought patterns in psychiatric disorders such as depression.

In depression, the brain’s various circuits show an imbalance in activation. One way to potentially treat symptoms is to restore that balance. Scientists have been eyeing the vagus nerve—two spaghetti-like nerves that run along the neck and innervate the entire body—as a potential target. Previous stimulators were extremely bulky and needed to be implanted under the skin, making them impractical, explained Leuthardt.

eQuility takes advantage of a branch of the vagus nerve that snakes over to the ear. By packing an electrical stimulator inside a headset, the wearable can modulate vagus nerve activity directly from the ear.

Ultimately we may be reaching towards another milestone in brain modulation: one that democratizes the technologies, allowing more people to manipulate their brain activity without first going under the knife.

“In the next 30 to 50 years we are going to see a rewriting of the fabric of the human experience,” concluded Leuthardt. “Fundamentally it’s only going to be limited by our imagination.”

Image Credit: Anita Ponne /

Category: Transhumanism

5 Technologies Bringing Healthcare Systems into the Future

11 November, 2018 - 17:00

If you think you’ve got a bad case of the travel bug, get this: Dr. John Halamka travels 400,000 miles a year. That’s equivalent to fully circling the globe 16 times.

Halamka is chief information officer at Harvard’s Beth Israel Deaconess Medical Center, a professor at Harvard Medical School, and a practicing emergency physician. In a talk at Singularity University’s Exponential Medicine last week, Halamka shared what he sees as the biggest healthcare problems the world is facing, and the most promising technological solutions from a systems perspective.

“In traveling 400,000 miles you get to see lots of different cultures and lots of different people,” he said. “And the problems are really the same all over the world. Maybe the cultural context is different or the infrastructure is different, but the problems are very similar.”

Less of This, More of That

From Asia to Europe and Africa to America, societies are trying to figure out how best to manage an aging population. Japan is perhaps the most dramatic example: “In Japan 25 percent of the population is over the age of 65, the birth rate is 1.4, and hardly any primary care physicians are going into the profession,” Halamka said.

Longer lifespans around the world are a testament to medical progress, but they also mean rising healthcare costs and higher rates of chronic disease. Combine that with low birth rates and the implications are magnified; there’s not going to be anyone to pay for the care of this aging society.

That care isn’t just for the body, it’s for the mind too. Anxiety and depression have become something of an epidemic, whether due to the relentless pace of modern life, the isolation of increasingly individualist cultures, or the comparisons and competitiveness brought about by social media. “Across the world no one’s really addressing the mental health burden very well,” Halamka said.

John Halamka at Exponential Medicine

All these issues would be more solvable if there were more people to work on solving them—that is, more doctors. But there’s actually a marked shortage of clinicians, and of specialization in the most high-demand fields. “You’re not seeing a distribution of the kind of services people need,” Halamka said. This is especially a problem in rural areas.

Finally, the systems we’ve built to help with the above problems have themselves become a problem. Multiple countries are trying to figure out how to deal with a lack of interoperability and data sharing in their medical information technology.

Rather than being highly imaginative or far-reaching, Halamka noted, many of the healthcare technologies the world needs are actually quite simple and practical.

The Tech That Can Help

Machine Learning

Working for the Bush and Obama administrations, Halamka was a first-hand witness to the way regulation can create a burden for clinicians. Between the FDA recommending that doctors monitor patient implants at every visit, the CDC recommending a travel history be taken at every visit, and Medicare and Medicaid putting forth 20 quality measures for every visit, Halamka said, “By the time we were done, there were 140 required data elements to be entered at every visit while making eye contact, being empathetic and never committing malpractice. It’s not possible!”

This is where machine learning can help. Halamka joked that if AI can replace your doctor, AI should replace your doctor; the things we really want our doctors to do—listen to us, respect our care preferences, guide us through all the possibilities—can’t be done by a machine.

But AI can reduce clinicians’ burden of documentation using functions like natural language processing. Imagine a version of Alexa that listens to doctor-patient conversations, takes notes, and produces charts—all the doctor has to do is review and sign.

AI can also augment physicians’ capacity to understand evidence and make informed decisions. “There are 800 papers published in my field every week,” Halamka said. “I’m a little behind on my reading.”

Those decisions can run from which antibiotic to prescribe a patient to the amount of time to reserve an operating room for. After implementing a machine learning algorithm that predicted how much time patients would need in the OR by comparing them to thousands of similar patients, Beth Israel was able to free up 30 percent of its OR schedule and enhance its throughput.
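A scheduling prediction of this kind can be sketched as a nearest-neighbors estimate: a new patient's OR time is the average over the k most similar past cases. The features and numbers below are invented; Beth Israel's actual model is not public.

```python
# Hedged sketch of similarity-based OR scheduling: average the observed
# times of the k past patients most similar to the new one.
def predict_or_minutes(patient, history, k=3):
    """history: list of (features, observed_minutes) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(history, key=lambda rec: dist(rec[0], patient))[:k]
    return sum(minutes for _, minutes in nearest) / len(nearest)

# Invented features: (age, BMI, procedure complexity score) -> OR minutes
history = [
    ((65, 28, 3), 120), ((70, 30, 3), 130), ((68, 27, 3), 125),
    ((40, 22, 1), 45),  ((35, 24, 1), 50),
]
print(predict_or_minutes((67, 29, 3), history))  # → 125.0
```

Reserving close to the predicted time, rather than a one-size-fits-all slot, is what frees up schedule capacity.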

Internet of Things

Earlier this year, Halamka was diagnosed with primary hypertension. His lifestyle and diet essentially couldn’t be healthier—he’s a vegan who avoids both caffeine and alcohol—but it turned out the condition was hereditary. His doctor prescribed beta blockers. “Ugh,” he said. “They’re like negative espresso.” Fifty milligrams of metoprolol was the dose for a person of his size, age, and gender—but, he realized, all that had no bearing on his body’s ability to metabolize metoprolol.

So he decided to do a little experiment. While varying the dosage, he used sensors around his home and office to monitor his mood, energy, blood pressure, pulse, and other indicators. “I was able to tailor my medication to the right dose, with the right output, and the fewest side effects for me,” he said. “And that’s the kind of thing we all want.”

In the near future we’ll be able to 3D print pills, assess their efficacy with the smart devices in our homes, and tailor them to the optimal dosage for our bodies.

Big Data

Halamka pointed out that there are 26 different electronic health records (EHRs) used in the Boston region alone. But Fast Health Interoperability Resources (FHIR), an application programming interface for exchanging electronic health records, will soon enable new ways to aggregate data from different EHRs. Patients will be able to look at their lifetime experience, and not just a single silo in a single EHR.
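FHIR exchanges resources as JSON Bundles, so aggregating a patient's records across EHRs can be as simple as merging Observation entries into one timeline. The bundles below are hand-written stand-ins for responses to a query such as `GET [base]/Observation?patient=...`; the field names follow the FHIR specification, but this is a sketch, not a client implementation.

```python
# Sketch of FHIR-style aggregation: merge Observation resources from
# several systems' Bundles into a single chronological timeline.
def merge_bundles(bundles):
    """Combine FHIR Bundle entries, sorted by effectiveDateTime."""
    entries = [e["resource"] for b in bundles for e in b.get("entry", [])]
    return sorted(entries, key=lambda r: r["effectiveDateTime"])

hospital_a = {"resourceType": "Bundle", "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Hemoglobin A1c"},
                  "effectiveDateTime": "2018-03-01",
                  "valueQuantity": {"value": 6.1}}},
]}
hospital_b = {"resourceType": "Bundle", "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Hemoglobin A1c"},
                  "effectiveDateTime": "2017-09-15",
                  "valueQuantity": {"value": 6.4}}},
]}
timeline = merge_bundles([hospital_a, hospital_b])
print([r["effectiveDateTime"] for r in timeline])  # chronological across EHRs
```

The merged timeline is the "lifetime experience" view: one sequence of results instead of one silo per EHR.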

“My hope is the data of the past will inform the care of patients in the future,” Halamka said.

When his wife, who is Korean, was diagnosed with stage three breast cancer, he used an open source tool called i2b2 to mine data from Harvard’s 17 hospitals, looking at treatments and outcomes of women with the same type of cancer and of the same age and ethnicity.

He found that Taxol, the drug used to treat this cancer, causes neuropathy (numbness in the hands and feet) in Asian women. “My wife is an artist, so saying ‘you’re cured but you can’t work ever again’ wasn’t a desirable outcome,” Halamka said. So they did a clinical trial of one, taking her Taxol dose down by 50 percent. Today she’s well and functional, and her hands and feet are just as they were before treatment. That is the kind of thing we need to use big data for, he said.

Telemedicine

Halamka is a nationally recognized expert in poisonous mushrooms and plants, and he does 900 telemedicine consultations every year (he is malpractice insured in all 50 US states).

He said, “Here I am with my iPhone, receiving images and cases from all over the world, and through just a virtual interaction, developing a care plan that keeps people healthy. It’s low cost and it’s efficient. And that’s the kind of expertise we all need access to, whether we’re urban or rural, whether you’re in the US or elsewhere.”

One challenge, however, is policy. State licenses and malpractice insurance can make crossing borders complicated. If a doctor in, say, North Dakota consults Halamka for a mushroom poisoning and Halamka advises a certain treatment, the North Dakota doctor ultimately decides whether to offer the treatment or not.

Blockchain

Halamka believes one of the main use cases for blockchain in medical IT is auditing and integrity. When Harvard doctors are sued for malpractice, he said, he’s asked to provide 20 years’ worth of medical records to the plaintiff’s attorney, but there’s no guarantee or way to prove those records haven’t been altered in any way.

A blockchain audit trail would fix this problem. “When a note is signed, put a hash of that note into the blockchain; twenty years go by, you can validate the note has not been changed,” Halamka said. You could also use it to show patient consent, or to incentivize patients to contribute their data or comply with treatment regimens.
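The audit idea, hash each signed note and verify it years later, can be illustrated with a minimal hash chain. A real deployment would anchor these hashes in a distributed ledger rather than a local list; this sketch only shows why re-hashing detects tampering.

```python
# Minimal hash-chain audit trail: each note's hash incorporates the
# previous hash, so altering any note invalidates the stored chain.
import hashlib

def note_hash(note, prev_hash):
    return hashlib.sha256((prev_hash + note).encode()).hexdigest()

def build_chain(notes):
    chain, prev = [], "genesis"
    for note in notes:
        prev = note_hash(note, prev)
        chain.append(prev)
    return chain

def verify(notes, chain):
    """True only if every note re-hashes to the recorded chain entry."""
    return build_chain(notes) == chain

notes = ["2018-11-01: patient stable", "2018-11-02: dose adjusted"]
chain = build_chain(notes)
print(verify(notes, chain))                              # → True
print(verify(["2018-11-01: ALTERED", notes[1]], chain))  # → False
```

Because each hash depends on its predecessor, a single altered note breaks verification for the whole chain from that point on.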

On Their Way, Already Here

The technologies and use cases Halamka outlined aren’t decades or even years out—they’re up and running in hospitals today. Beth Israel Deaconess, he said, is using machine learning to read faxes, apply metadata, and insert information into medical records. They’re using mobile devices and the internet of things to keep congestive heart failure patients healthy in their homes. They’re pushing data across the community to track where patients are receiving care and help coordinate the best care at the lowest cost.

Robots that can perform precision surgery and AIs that can diagnose rare illnesses in minutes aren’t going to eliminate our need for physicians. In fact, if it’s applied in the right ways, tech will not only help doctors practice at the top of their licenses and hospitals run with the utmost efficiency—it will reduce the likelihood that we’ll end up in the hospital to begin with.

Image Credit: metamorworks /

Category: Transhumanism

How Do We Teach Autonomous Cars To Drive Off the Beaten Path?

9 November, 2018 - 18:00

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure

Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.

Starting Virtual

We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
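The actual pipeline trains neural networks on simulated lidar and camera feeds; as a deliberately simplified stand-in, here is the same train-in-simulation, classify-real-pixels loop using per-class mean colors. The classes, colors, and data are invented.

```python
# Toy sim-to-real loop: learn per-class mean colors from labeled simulated
# pixels, then classify pixels from a "real" camera frame by nearest mean.
def train_color_model(labeled_pixels):
    """labeled_pixels: {class: [(r, g, b), ...]} -> {class: mean color}."""
    model = {}
    for cls, pixels in labeled_pixels.items():
        n = len(pixels)
        model[cls] = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    return model

def classify(model, pixel):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda cls: dist(model[cls], pixel))

simulated = {
    "sky":  [(120, 180, 240), (110, 170, 230)],
    "tree": [(40, 90, 30), (50, 100, 40)],
    "path": [(150, 130, 100), (160, 140, 110)],
}
model = train_color_model(simulated)
print(classify(model, (45, 95, 35)))  # pixel from a real camera frame → tree
```

The point of the sketch is the workflow, not the model: labels come cheaply from the simulator, and the trained classifier is then evaluated on real imagery from the test track.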

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.

Building a Test Track

Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.

Collecting More Data

We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.

Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
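One simple way to turn the crossed lidar sweeps into a roughness estimate is to measure the spread of the sampled ground heights. The height values below are invented, and the team's actual processing is certainly more sophisticated; this only illustrates the idea.

```python
# Sketch: a lidar sweep returns height samples along the approaching
# ground; their standard deviation is a crude surface-roughness measure.
from statistics import pstdev

def surface_roughness(heights):
    """Height samples (meters) along a sweep -> population std deviation."""
    return pstdev(heights)

smooth_road  = [0.00, 0.01, -0.01, 0.00, 0.01]
rutted_trail = [0.00, 0.12, -0.10, 0.15, -0.08]
print(surface_roughness(smooth_road) < surface_roughness(rutted_trail))  # → True
```

A planner could compare such a score against a threshold to slow the vehicle before rough ground.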

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided

We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND

Category: Transhumanism

Clocking the Drugs, Drugging the Clock: The Health Impact Of Circadian Medicine

9 November, 2018 - 17:00

When it comes to healthcare disruption, technology is often at the center: wearables that track our biological stats, scanners that monitor our internal processes, or modulators that directly zap our brains into alternate states.

But to Dr. Satchin Panda, a professor at the Salk Institute in La Jolla, California, the key to health may already be within ourselves. In a talk at Singularity University’s Exponential Medicine conference in San Diego this week, he explained how research from the past two decades is slowly unveiling the rhythms of the human body. From DNA to cells, tissues to entire organs, the human body runs on millions of clocks synchronized with outside light.

Dr. Satchin Panda at Exponential Medicine

A growing body of evidence now suggests that when we align our sleeping and eating habits with our circadian rhythms, we can stave off chronic diseases, reduce drug side effects, and even extend our healthspans.

“We spend almost half of our life right now fighting with chronic diseases. Most of these chronic diseases are not caused by pathogens, but mostly due to bad lifestyle choices that we make,” said Panda.

The answer isn’t pharmaceuticals: in fact, for every person a drug helps in chronic disease management, it fails in 3 to 24 others who take it.

“We need a completely new idea about what causes disease so that we can start a new revolution, finding new treatments and a new preventative strategy,” Panda said.

His radical idea? Circadian medicine.

The Master Clock

If you’ve ever traveled across time zones or engaged in shift work, you’ve felt the disruptive effects of knocking your circadian rhythm out of whack.

Scientists have long known that our sleep cycles are regulated by a master clock within the brain. The suprachiasmatic nucleus (known by the friendlier acronym “SCN”) houses a network of neurons that program the day-to-day oscillations of our biological needs—feeding, drinking, body temperature, hormone secretions—and synchronize them to external light.

A highway of connections links neurons in the SCN directly to special cells in the retina that transmit information about blue light. Back in the early 2000s, Panda and colleagues discovered a peculiar light-sensing protein decorating the surface of roughly 5,000 cells in the retina. Dubbed “melanopsin,” this particular sensor was especially fine-tuned to light wavelengths in the blue range—similar to the light that dominates during the day. In contrast, the protein showed almost no response to candlelight or moonlight.

Thanks to the discovery of melanopsin, scientists soon uncovered the role that blue or orange light plays in regulating our sleep cycles. Too little bright light during the day, or too much blue light in the evening, directly disrupts the secretion of melatonin, a sleep-inducing hormone.

Just a few sleepless nights can negatively impact metabolism, increasing diabetes risk. But if it continues for weeks, months, or years, said Panda, then disrupted sleep can lead to over 100 different chronic diseases, including depression, metabolic disorders, cognitive issues, and inflammation—the killers of our modern age.

Fortunately, the findings are already seeping into technology design. Apple, for example, recently introduced screen dimming and Night Shift, which automatically changes screens to a warmer color at night.

Thanks to circadian medicine, we now know that “lighting for health is not lighting for vision,” said Panda.

A Clockwork Bonanza

But the master clock is just the tip of the iceberg.

In fact, research shows that almost every cell, tissue, and organ follows its own circadian rhythm. The pancreas, for example, increases its production of insulin during the day, allowing us to better regulate blood sugar levels when we eat—but these secretions slow to a crawl at night. Thousands of genes in every organ follow a tight daily schedule, switching on and off around the same time every day. Even gut bugs in our microbiomes cycle their biological processes.

“We’re designed to have 24-hour rhythms in our physiology and metabolism. These rhythms exist because, just like our brains need to go to sleep each night to repair, reset, and rejuvenate, every organ needs to have down time to repair and reset as well,” said Panda.

It’s not just basic science. Recently, Panda’s team turned its focus on an unfortunate side effect of staying up too late: excessive eating.

There’s a saying that if “your eyes are open your mouth is open,” joked Panda. The health problems of eating outside the body’s normal rhythms, however, are no joke.

In one study using two groups of genetically identical male mice, the team provided the rodents with high-fat, high-sugar food—the only difference was that one group had round-the-clock access, whereas the other group’s eating was restricted to an eight-hour window. A few weeks later, despite consuming a similar number of calories, the group that ate whenever it pleased got fat and sick. The other, despite eating the same junk food, was spared from obesity, fatty liver, and metabolic disease—simply because its feeding was restricted to a short time window.

“We went back and repeated this experiment again and again…and always saw the same thing,” said Panda. “Mice that ate within 8 to 12 hours max always remained healthy.”

Later studies found that intermittent fasting can even reverse some symptoms of metabolic damage, in mice and men.

Three years ago the team launched a website that—with informed consent—recruited citizen scientists to log their eating habits and health data. Over 50 percent of participants reported an eating window of longer than 15 hours.

When Panda and colleagues urged this group to restrict their eating to 10 hours, they found that people who adopted the practice showed improved metabolic profiles in just three months.
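The eating-window measurement behind that finding is straightforward to compute from a day's logged meal times; the timestamps below are invented.

```python
# Sketch: the eating window is the span from first to last logged intake.
def eating_window_hours(meal_times):
    """meal_times: list of 'HH:MM' strings -> window length in hours."""
    minutes = [int(t[:2]) * 60 + int(t[3:]) for t in meal_times]
    return (max(minutes) - min(minutes)) / 60

log = ["06:30", "09:15", "13:00", "19:45", "22:40"]
window = eating_window_hours(log)
print(round(window, 2))  # → 16.17
print(window > 15)       # flagged: longer than a 15-hour window → True
```

A logging app could run exactly this check and nudge users whose window regularly exceeds the study's 10-hour target.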

“What we are finding is that just controlling timing is much more powerful than any single drug that’s available,” said Panda.

From Drugs to Cancer

Better sleep and metabolism already give you an edge up in health. But Panda and others are finding that circadian medicine can potentially help two other fields: augmenting therapies, and managing cancer.

“Nearly 80 percent of FDA-approved drug targets may benefit if they’re given at the right time, and we can reduce adverse side effects,” said Panda.

Similarly, the time of day patients receive a flu shot, schedule open-heart surgery, or get chemotherapy or radiation can all impact their long-term outcomes—sometimes by as much as 20 to 25 percent.

More recently, his team has begun examining circadian rhythms in cells that are out of whack: tumor cells. In many cancers the clock breaks down. When the team reactivated the clock, the tumor cells died.

By “training the circadian clock, clocking the drugs, and drugging the clock…we can benefit almost billions of people worldwide who have these chronic conditions,” concluded Panda. “I’m really excited.”

Image Credit: Bro Crock /

Category: Transhumanism

The Fascinating, Creepy New Research in Human Hibernation for Space Travel

8 November, 2018 - 17:00

No interstellar travel movie is complete without hibernators. From Prometheus to Passengers, we’ve watched protagonists awaken in hibernation pods, rebooting their fragile physiology from a prolonged state of suspended animation—a violent process that usually involves ejecting stomach fluids.

This violent re-awakening seems to make sense. Humans, after all, don’t naturally hibernate. But a small, eclectic group of scientists is battling nature to trigger artificial hibernation in humans. If successful, they could delay aging, treat life-threatening illnesses, and get us to Mars and beyond.

Last week, a group of experts gathered in New Orleans to explore the possibility of inducing “synthetic” hibernation in humans. Scientists are learning from nature to understand the factors that lead to hibernation and re-awakening in animals.

The Hibernation Enigma

What better way to pass long stretches of life-threatening cold and food scarcity than to enter deep unconsciousness? Much of the animal kingdom hibernates through winter: bears, squirrels, hedgehogs. Even our primate cousin, the fat-tailed lemur, drastically drops its metabolic rate when food supplies dwindle.

What about us? Although we (regrettably) don’t hibernate, a handful of “miracles” suggests that a metabolic deep freeze may help preserve our injured bodies in a beneficial way.

In 1999, radiologist Dr. Anna Bagenholm fell into a frozen stream while skiing in Norway. By the time she was retrieved, she had been under ice for over 80 minutes. By all accounts she was clinically dead—no breathing, no heartbeat. Her body temperature dropped to an unprecedented 56.7 °F (13.7 °C).

Yet when doctors gradually warmed her blood, her body began to slowly heal. By the next day, her heart restarted. By day 12, she opened her eyes.  Eventually she fully recovered.

Bagenholm’s case is just one hint that humans have the ability to recover from a severely depressed metabolic state. For years doctors have employed therapeutic hypothermia—lowering body temperature by a few degrees for several days—to help keep patients with brain injuries or epilepsy in suspended animation.

This rapid cooling helps preserve tissues that have been cut off from blood supply, so they require less oxygen to function. In China, experiments have helped keep people in a deep freeze for up to two weeks.

The promise of therapeutic hypothermia is so great that in 2014, NASA partnered with the Atlanta-based SpaceWorks and gave preliminary funding to a space-travel hibernator for a mission to Mars.

Although the flight into space would only last a few months, putting astronauts into an inactive state could dramatically cut down on the required amount of food and habitat size. A hibernation state could also help prevent serious side effects from low gravity, such as changes in spinal fluid flow that diminish eyesight. Direct muscle stimulation, courtesy of the hibernation pod, could prevent muscle wasting in zero G, and a deep state of unconsciousness could potentially minimize psychological challenges like boredom and loneliness.

The project made it to the second funding round, but many questions remained. One issue with the proposal is that prolonged hypothermia is terrible for health: blood clots, bleeding, infection, and liver failure may occur. On a spaceship without sophisticated medical facilities, these complications could be deadly.

Another problem is that we don’t fully understand what happens in an animal when it goes into hibernation. That’s what the New Orleans conference tried to address.

Biological Inspiration

To Dr. Hannah Carey at the University of Wisconsin, the answer to human hibernation may not lie in medical treatments, but in nature.

Carey studies the hibernation habits of the ground squirrel, a petite omnivorous rodent that roams North American prairies. Between late September and May, the ground squirrel hibernates in underground burrows to survive bitter winters.

“The fact that there is hibernation in the primate lineage makes any discoveries with biomedical potential that much more applicable to human beings,” she said.

One peculiar observation Carey has made is that low metabolic rate doesn’t last all winter. Periodically, hibernating animals will rouse from their torpor state for half a day, raising their body temperatures back to normal. The animals still don’t eat or drink during these periods, however.

“Originally hibernation was considered a continuation of sleep, but physiologically it is very different because your metabolism is totally suspended, although it is still regulated,” said Oxford neuroscientist Dr. Vlad Vyazovskiy, who also presented at the conference. “Torpor, this extreme metabolic challenge, seems to do something to the brain or body which necessitates sleep, which in turn provides some type of restoration.”

Neuroscientists have long appreciated the benefits of sleep. For example, studies show that sleep helps the brain clear toxic waste through its glymphatic system and allows the brain’s synapses to “reset.” If hibernation contributes to a sleep-deprived state, could periodically inducing sleep be the answer to long-term torpor?

We don’t yet know. But to Carey, these findings from animals suggest that on the quest to human hibernation, unraveling the biology of natural hibernators may be preferable to hypothermia-based medical practices.

Artificial Hibernation

While Carey and Vyazovskiy study how hibernating animals remain healthy, Dr. Matteo Cerri at the University of Bologna in Italy has a different focus: how to artificially induce torpor in non-hibernating animals.

The answer may lie in a small group of neurons in a brain region called the raphe pallidus. Because metabolism dramatically slows during hibernation, hormonal and brain mechanisms are likely involved to kick-start the process, he explained.

Back in 2013, his team was one of the first to induce a hibernation-like state in rats, which don’t normally hibernate. They injected a chemical into the raphe pallidus to inhibit neuronal activity. These neurons are usually involved in the “thermoregulatory cold defense,” said Cerri, in that they’ll trigger biological responses to counteract the lowering of body temperature.

The rats were then placed into a dark, chilly room and fed high-fat diets—conditions known to lower metabolic rate.

Shutting down defense neurons for six hours resulted in a drastic temperature drop in the rats’ brains. Their heart rates and blood pressure also decreased. Eventually, their brain wave patterns began to resemble animals in natural hibernation.

The best part? When the team stopped their treatment, the rats recovered—they didn’t show any signs of abnormal behavior the next day.

Previous attempts at inducing torpor in non-hibernating animals have failed, the team said, but in this study they showed that inhibiting raphe pallidus neurons is essential to inducing a torpor-like state.

If the results hold up in larger mammals, they could be an important step toward human hibernation. Cerri and others are working to further dissect the brain’s control over torpor and how to hijack it for the purpose of inducing a hibernation-like state.

What’s Next?

Human hibernation isn’t yet on the horizon. But results from the conference presenters, among others, are gradually nailing down the molecular and neural factors that could potentially allow us to go into deep freeze.

Leopold Summerer, who leads the Advanced Concepts Team at the European Space Agency, is hopeful about future prospects for human hibernation. “We see the science has advanced enough to put some of the science fiction into the realm of science reality,” he said.

Image Credit: Rick Partington /

Category: Transhumanism

Using Big Data to Give Patients Control of Their Own Health

7 November, 2018 - 18:00

Big data, personalized medicine, artificial intelligence. String these three buzzphrases together, and what do you have?

A system that may revolutionize the future of healthcare, by bringing sophisticated health data directly to patients for them to ponder, digest, and act upon—and potentially stop diseases in their tracks.

At Singularity University’s Exponential Medicine conference in San Diego this week, Dr. Ran Balicer, director of the Clalit Research Institute in Israel, painted a futuristic picture of how big data can merge with personalized healthcare into an app-based system in which the patient is in control.

Dr. Ran Balicer at Exponential Medicine

Picture this: instead of going to a physician with your ailments, your doctor calls you with some bad news: “Within six hours, you’re going to have a heart attack. So why don’t you come into the clinic and we can fix that.” Crisis averted.

Following the treatment, you’re at home monitoring your biomarkers, lab test results, and other health information through an app with a clean, beautiful user interface. Within the app, you can toggle various lifestyle habits—smoking, drinking, insufficient sleep—up or down to see how each one affects your risk of future cardiovascular disease.

There’s more: you can also set a health goal within the app—for example, stop smoking—which automatically informs your physician. The app will then suggest pharmaceuticals to help you ditch the nicotine and automatically send the prescription to your local drug store. You’ll also immediately find a list of nearby support groups that can help you reach your health goal.

With this hefty dose of AI, you’re in charge of your health—in fact, probably more so than under current healthcare systems.

Sound fantastical? In fact, this type of preemptive care is already being provided in some countries, including Israel, at a massive scale, said Balicer. By mining datasets with deep learning and other powerful AI tools, we can predict the future—and put it into the hands of patients.

The Israeli Advantage

In order to apply big data approaches to medicine, you first need a giant database.

Israel is ahead of the game in this regard. With decades of electronic health records aggregated within a central warehouse, Israel offers a wealth of health-related data on the scale of millions of people and billions of data points. The data is richly multidimensional, covering lab tests, drugs, hospital admissions, medical procedures, and more.

One of Balicer’s early successes was an algorithm that predicts diabetes, which allowed the team to notify physicians to target their care. Clalit has also been busy digging into data that predicts winter pneumonia, osteoporosis, and a long list of other preventable diseases.

So far, Balicer’s predictive health system has only been tested on a pilot group of patients, but he is expecting to roll out the platform to all patients in the database in the next few months.

Truly Personalized Medicine

To Balicer, whatever a machine can do better, it should be welcome to do. AI diagnosticians have already enjoyed plenty of successes—but they mostly collaborate with physicians, stepping in only once the patient is already ill.

A particularly powerful use of AI in medicine is to bring insights and trends directly to the patient, such that they can take control over their own health and medical care.

For example, take the problem of tailored drug dosing. Current drug doses are based on average results from clinical trials—the dosing is not tailored to any specific patient’s genetic and health makeup. But what if a doctor had already seen millions of other patients similar to you, and could generate dosing recommendations more relevant to you based on that particular group of patients?

Such personalized recommendations are beyond the ability of any single human doctor. But with the help of AI, which can quickly process massive datasets to find similarities, doctors may soon be able to prescribe individually-tailored medications.
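
The idea of mining a database for similar patients can be sketched as a toy nearest-neighbor search. This is only an illustration of the concept, not any system Clalit has described; the feature encoding, records, and doses below are all invented.

```python
import math

def recommend_dose(patient, records, k=3):
    """Return the mean dose given to the k past patients most similar
    to `patient` (Euclidean distance over normalized features)."""
    ranked = sorted(records, key=lambda r: math.dist(patient, r["features"]))
    return sum(r["dose"] for r in ranked[:k]) / k

# Toy historical records: features are (age/100, weight_kg/100, eGFR/100)
records = [
    {"features": (0.45, 0.70, 0.90), "dose": 50},  # younger, healthy kidneys
    {"features": (0.47, 0.72, 0.88), "dose": 55},
    {"features": (0.80, 0.60, 0.40), "dose": 25},  # older, reduced kidney function
    {"features": (0.82, 0.58, 0.35), "dose": 20},
]

# A new patient resembling the first two gets a dose averaged from them
print(recommend_dose((0.46, 0.71, 0.89), records, k=2))  # 52.5
```

A production system would use far richer features, a learned similarity metric, and clinical validation; the point here is simply that "patients like you" can be made computable.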

Tailored treatment doesn’t stop there. Another issue with pharmaceuticals and treatment regimes is that they often come with side effects: potentially health-threatening reactions that may or may not happen to you, depending on your individual biology.

Back in 2017, the New England Journal of Medicine launched the SPRINT Data Analysis Challenge, which urged physicians and data analysts to identify novel clinical findings using shared clinical trial data.

Working with Dr. Noa Dagan at the Clalit Research Institute, Balicer and team developed an algorithm that recommends whether or not a patient should receive a particularly intensive treatment regime for hypertension.

Rather than simply looking at one outcome—normalized blood pressure—the algorithm takes into account an individual’s specific characteristics, laying out the treatment’s predicted benefits and harms for a particular patient.

“We built thousands of models for each patient to comprehensively understand the impact of the treatment for the individual; for example, a reduced risk for stroke and cardiovascular-related deaths could be accompanied by an increase in serious renal failure,” said Balicer. “This approach allows a truly personalized balance—allowing patients and their physicians to ultimately decide if the risks of the treatment are worth the benefits.”

This is already personalized medicine at its finest. But Balicer didn’t stop there.

We are not the sum of our biologics and medical stats, he said. A truly personalized approach needs to take into account a patient’s needs and goals, and the sacrifices and tradeoffs they’re willing to make, rather than having the physician make decisions for them.

Balicer’s preventative system adds this layer of complexity by giving weights to different outcomes based on patients’ input of their own health goals. Rather than blindly following big data, the system holistically integrates the patient’s opinion to make recommendations.
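
One way to picture this weighting, purely as an illustration and not Clalit’s actual model, is a weighted sum of predicted risk changes in which the weights come from the patient’s own priorities. Every number below is invented.

```python
def treatment_score(risk_changes, patient_weights):
    """Weighted sum of predicted absolute risk changes (negative = benefit).
    The more negative the score, the better the treatment looks for this patient."""
    return sum(change * patient_weights.get(outcome, 1.0)
               for outcome, change in risk_changes.items())

# Invented predicted effects of an intensive hypertension regime on 5-year risks
risk_changes = {"stroke": -0.03, "cv_death": -0.02, "renal_failure": 0.01}

# Patient A prioritizes avoiding stroke; patient B fears kidney damage most
patient_a = {"stroke": 2.0, "cv_death": 1.0, "renal_failure": 1.0}
patient_b = {"stroke": 1.0, "cv_death": 1.0, "renal_failure": 5.0}

print(treatment_score(risk_changes, patient_a))  # about -0.07: clear net benefit
print(treatment_score(risk_changes, patient_b))  # about 0: harms offset benefits
```

The same predicted risks yield opposite recommendations for the two patients, which is the sense in which the patient’s input, not just the big data, drives the decision.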

Balicer’s system is just one example of how AI can truly transform personalized health care. The next big challenge is to work with physicians to further optimize these systems, so that doctors can easily integrate them into their workflows and embrace the technology.

“Health systems will not be replaced by algorithms, rest assured,” concluded Balicer, “but health systems that don’t use algorithms will be replaced by those that do.”

Image Credit: Magic mine

Category: Transhumanism

AI Won’t Replace Doctors, It Will Augment Them

November 7, 2018 - 17:00

The future of medicine is a physician-patient-AI golden triangle, one in which machines augment clinical care and diagnostics—one with the patient at its heart.

That is the takeaway message of DeepMind researcher Dr. Alan Karthikesalingam, who presented his vision of AI-enabled healthcare Monday at Singularity University’s Exponential Medicine conference in San Diego.

You’ve probably heard of DeepMind: it’s the company that brought us the jaw-dropping Go-playing AI agent AlphaGo. It’s also the company that pioneered a powerful deep learning approach called deep reinforcement learning, which can train AI agents to solve increasingly complex problems without being explicitly told what to do.

“It’s clear that there’s been remarkable progress in the underlying research of AI,” said Karthikesalingam. “But I think we’re also at an interesting inflection point where these algorithms are having concrete, positive applications in the real world.”

And what better domain than healthcare to apply the fledgling technology in transforming human lives?

Dr. Alan Karthikesalingam at Exponential Medicine

Caution and Collaboration

Of course, healthcare is vastly more complicated than a board game, and Karthikesalingam acknowledges that any use of AI in medicine needs to be approached with a hefty dose of humility and realism.

Perhaps more than any other field, medicine puts safety first and foremost. Since the birth of medicine, it’s been healthcare professionals acting as the main gatekeepers to ensure new treatments and technology can demonstrably benefit patients. And for now, doctors are an absolutely critical cog in the healthcare machinery.

The goal of AI is not to replace doctors, stressed Karthikesalingam. Rather, it is to optimize physician performance, releasing them from menial tasks, and providing alternative assessments or guidance that may have otherwise slipped their notice.

This physician-guided approach is reflected by a myriad of healthcare projects that DeepMind is dipping its toes into.

A collaboration with Moorfields Eye Hospital, one of the “best eye hospitals in the world,” yielded an AI that could diagnose eye disease and perform triage. The algorithm could analyze detailed scans of the eye to identify early symptoms and prioritize patient cases based on severity and urgency.

It’s the kind of work that normally requires over twenty years of experience to perform well. When trained, the algorithm had a success rate similar to that of experts, and importantly, didn’t misclassify a single urgent case.

Roughly 300 million people worldwide suffer from eyesight loss, but in 80 to 90 percent of cases it is preventable if caught early. As technologies that image the back of the eye become increasingly sophisticated, patients may soon be able to scan their own eyes using smartphones or other portable devices. Combined with AI that diagnoses eye disease, this could dramatically reduce the personal and socioeconomic burden of eye disease worldwide.

“This was an incredibly exciting result for our team. We saw here that our algorithm was able to allocate urgent cases correctly, with a test set of just over a thousand cases,” said Karthikesalingam.

Another early collaborative success for DeepMind is in the field of cancer. Eradicating tumors with radiation requires physicians to draw out the targeted organs and tissues on a millimeter level—a task that can easily take four to eight (long, boring) hours.

Working with University College London, DeepMind developed an algorithm that can perform clinically applicable segmentation of organs. In one example, the AI could tease out the delicate optic nerve—the information highway that shuttles data from the eyes to the brain—from medical scans, thereby allowing doctors to treat surrounding tissues without damaging eyesight.

Interpretable and Scarce

“There’s a real potential for AI to be a useful tool for clinicians that benefits patients,” said Karthikesalingam.

But perhaps the largest challenge in the next five to ten years is bringing AI systems into the real world of healthcare. For algorithms to cross the chasm from proof-of-concept to useful medical associate, they need an important skill beyond diagnosis: the ability to explain themselves.

Doctors need to be able to scrutinize the decisions of deep learning systems—not to the point of mathematically understanding the inner workings of the neural networks, but at least to get an idea of how a decision was made.

You may have heard of the “black box” problem in artificial neural networks. Because of the way they are trained, researchers can observe the input (say, MRI images) and output decision (cancer, no cancer) without any insight into the inner workings of the algorithm.

DeepMind is building an additional layer into its diagnostic algorithms. For example, in addition to spitting out an end result, the eye disease algorithm also tells the doctor how confident (or not) it is in its own decision when looking through various parts of an eye scan.
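
The general idea of a model reporting its own confidence can be sketched as follows. This is not DeepMind’s architecture; it simply shows how raw class scores can be turned into probabilities and how low-confidence cases can be flagged for human review. The labels, scores, and threshold are all invented.

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def diagnose(scores, labels, threshold=0.8):
    """Return (label, confidence, is_confident); cases below the threshold
    should be referred to a doctor rather than trusted to the model alone."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best], probs[best] >= threshold

labels = ["healthy", "urgent referral", "routine referral"]
print(diagnose([0.2, 3.1, 0.5], labels))  # confident urgent referral
print(diagnose([1.0, 1.2, 0.9], labels))  # low confidence: flag for review
```

Exposing the probability alongside the label is what lets a doctor weigh the algorithm’s opinion instead of accepting a bare verdict.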

“We find this to be particularly exciting because it means that doctors will be able to assess the algorithm’s diagnosis and reach their own conclusions,” said Karthikesalingam.

Even deep learning’s other problem—its need for millions of training examples—is rapidly becoming a non-issue. Compared to online images, medical data is relatively hard to come by and expensive. Nevertheless, recent advances in deep reinforcement learning are drastically slashing the amount of actual training data needed. DeepMind’s organ segmentation algorithm, for example, was trained on only 650 images—an extremely paltry set that makes the algorithm much more clinically applicable.

Towards the Future

“At DeepMind we strongly believe that AI will not replace doctors, but hopefully will make their lives easier,” said Karthikesalingam.

The moonshot for the next five years isn’t developing better AI diagnosticians. Rather, it’s bringing algorithms into the clinic in such a way that AI becomes deeply integrated into clinical practice.

Karthikesalingam pointed out that the amount of AI research that actually crosses into practice will depend not just on efficacy, but also on trust, security, and privacy.

For example, the community needs to generate standard medical image datasets to evaluate a variety of algorithmic diagnosticians on equal footing. Only when backed by ample, reproducible evidence can AI systems be gradually accepted into the medical community and by patients.

“In the end, what we’re doing is all about patients,” said Karthikesalingam. “I think this is perhaps the most important part of all. Patients are ultimately who we hope to benefit from all the exciting progress in AI. We’ve got to start placing them at the heart of everything we do.”

Image Credit: HQuality

Category: Transhumanism

Custom-Grown Bones, and Other Wild Advances in Regenerative Medicine

November 6, 2018 - 17:00

The human body has always been an incredible machine, from the grand feats of strength and athleticism it can accomplish down to the fine details of each vein, nerve, and cell. But the way we think about the body has changed over time, as has our level of understanding of it.

In Nina Tandon’s view, there have been two different phases of knowledge here. “For so much of human history, medicine was about letting the body come to rest, because there was an assumed proportionality attributed to the body,” she said.

Then, around the turn of the last century, we started developing interchangeable parts (whether from donors, or made of plastic or metal), and thinking of our bodies a bit more like machines. “We’re each made out of 206 bones held together by 360 joints,” Tandon said. “But many of us are more than that. By the time we go through this lifetime, 70 percent of us will be living with parts of our body that we weren’t born with.”

If that percentage seems high—it did to me—consider all the things that count as ‘parts’ of our bodies that are artificial: Dental implants. Pacemakers. IUDs. Joint replacements.

Now, though, we’re moving into a third phase of bodily knowledge. “We are an ecosystem of cellular beings, trillions of cells,” Tandon said. “We finally realized that man is a modular system, and cells are the pixels in this world.”

Tandon is co-founder and CEO of EpiBone, a company working on custom-growing bones using patients’ own stem cells. In a talk at Singularity University’s Exponential Medicine in San Diego this week, Tandon shared some of her company’s work and her insights into regenerative medicine, a field with tremendous promise for improving human well-being.

Nina Tandon at Exponential Medicine

What sets the third phase of knowledge apart from the second phase is that we’re learning how to fix and rebuild our own bodies using, well, our own bodies. Some examples include CAR-T therapies, which fight cancer using a patient’s own cells; regenerative medicine, which uses stem cells to repair body parts or make new ones; and microbiome analyses, which use our gut bacteria to fashion personalized dietary treatments.

Tandon’s expertise, though, is in personalized bones (not a term you ever thought you’d hear, is it?). “Bone is the most transplanted human tissue after blood,” she said. “And we’re replacing over a million joints every year in this country alone, just because of a couple millimeters of damaged cartilage. Welcome to the hundred-billion-dollar medical device industry.”

EpiBone is working on doing it better. Here are some details of their method.

First, patients undergo a CT scan to determine the size and shape of the bone they need. Stem cells are extracted from the adipose (fatty) tissue in the abdomen. A scaffold model of the bone is created, as is a custom bioreactor to grow the bone in, while the extracted stem cells are prodded to differentiate into osteoblasts (bone cells).

When they’re ready, the stem cells are infused into the bone scaffold, and a personalized bone graft grows in the bioreactor in just three weeks. When the new bone is implanted into the patient’s body, the surrounding tissue seamlessly integrates with it; the custom size and shape ensure it will fit, there’s no risk of rejection since it contains the patient’s own cells, and since it’s made of living tissue, it’s likely to require far less revision than other types of implants.

EpiBone is hoping to start human clinical trials next year, and it’s in good company; Tandon mentioned several concurrent projects in regenerative medicine that show we’ve truly entered the “biofabrication age,” as she put it.

Humacyte is working on bioengineered acellular vessels, and is currently in phase three clinical trials. Emulate Bio miniaturizes organoids on tissue chips. CollPlant has engineered tobacco plants to produce recombinant human collagen. Ecovative uses mushrooms to engineer sustainable advanced materials. BioMASON created a concrete that self-heals its cracks using water-activated bacteria.

“Cellular therapies can also involve using bugs as drugs,” Tandon said. “Imagine a probiotic yogurt being a kind of diagnostic device in the future using these little micro machines called bacteria.” To that end, Sangeeta Bhatia’s lab at MIT has engineered bacteria to glow green in the presence of colon cancer cells.

The list goes on—companies are building tools so wild that many still sound like science fiction.

As they continue to advance, Tandon noted, we must always consider the ethics behind these technologies and how we’re using them, and the conversations need to go beyond hot-button issues like designer babies or body modification.

“Are the modalities of government grant funding, angel funding, and VC really incentivizing us to develop the technologies that we want to see?” she asked. Access to biotech tools and treatments is an ethical consideration as well; scale and cost control must be foremost in biotech developers’ minds, so as not to end up with solutions for only the wealthy and privileged.

Regenerative medicine will certainly pose challenges, but its possibilities are vast and exciting.

In closing, Tandon asked the audience to envision a future where all the extra parts our bodies need “…are made not out of metal, not out of ceramic, not out of parts carved from other peoples’ bodies—but made out of ourselves.”

Image Credit: ChooChin

Category: Transhumanism

Could Blockchain Voting Fix Democracy? Today, It Gets a Test Run

November 6, 2018 - 16:00

There’s no shortage of debate about the role tech has played in politics. From misinformation being spread via WhatsApp in Brazil to Facebook becoming a tool for hate speech in Myanmar to the Cambridge Analytica scandal in the US, many would say tech has been a burden rather than a boon.

Tech has certainly impacted the ease with which information—both true and false—is spread, and hence the way people perceive political candidates. But what about voting itself? Even as tech has affected how we decide who to vote for, the process of casting a ballot and tallying votes on election day has remained largely unchanged.

‘Modernizing’ voting by making it mobile and digital has been an ongoing conversation for years, but always comes back to the same conclusion: such a fundamental piece of democracy is too crucial to expose to cyber-risks.

But long-time opponents of internet voting now have a new player to contend with, one that’s claiming to bring the security and immutability that’s been the missing link up until now: blockchain. The midterm elections today include a small blockchain voting experiment, which many are hoping will scale up in coming years.

An Experiment in Digital Democracy

For this midterm election, overseas citizens and members of the military from twenty-four counties in West Virginia have the option to vote using an app called Voatz.

The experiment is the result of collaboration between Tusk Montgomery Philanthropies and West Virginia’s secretary of state, Mac Warner. As a member of the military and the US State Department for 28 years, Warner was troubled by how difficult it was for overseas service members to participate in elections. Political strategist and venture capitalist Bradley Tusk is the founder of Tusk Montgomery, which aims to improve American democracy by making it easier to vote.

“We’re completely polarized, and nothing gets done,” Tusk told The New Yorker. “I don’t see how democracy survives absent radically higher participation.”

With funding from Tusk Montgomery, Voatz was piloted with overseas West Virginians in May. Participants’ votes are recorded on a private blockchain, and ballots are transmitted to multiple computers that verify the validity of votes before they’re counted. The app uses end-to-end encryption and biometric verification, such as through the fingerprint or eye-scan technology built into some smartphones.
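
Voatz has not published its cryptographic protocols, but the basic tamper-evidence property a blockchain provides can be illustrated with a toy hash chain: each recorded ballot commits to the hash of the previous entry, so altering any earlier vote breaks every later link. This is a deliberately simplified sketch, nothing like a real voting system.

```python
import hashlib
import json

def _entry_hash(ballot, prev_hash):
    """Hash a ballot together with the previous entry's hash (canonical JSON)."""
    payload = json.dumps({"ballot": ballot, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_vote(chain, ballot):
    """Append a ballot whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"ballot": ballot, "prev": prev_hash,
                  "hash": _entry_hash(ballot, prev_hash)})

def verify(chain):
    """Recompute every link; any altered ballot invalidates the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        if entry["prev"] != prev_hash or entry["hash"] != _entry_hash(entry["ballot"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

chain = []
add_vote(chain, {"voter": "anon-1", "choice": "A"})
add_vote(chain, {"voter": "anon-2", "choice": "B"})
print(verify(chain))                  # True: chain is intact
chain[0]["ballot"]["choice"] = "B"    # tamper with the first recorded ballot
print(verify(chain))                  # False: tampering is detected
```

Note what this does and doesn’t give you: tampering with recorded entries is detectable, but nothing here protects the ballot on its way from the voter’s phone to the chain, which is exactly the gap critics highlight below.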

Does Easier Voting = More Voting?

As Tusk emphasized, a fundamental tenet of democracy is citizen participation and engagement. If no one’s voting—or just a select group of heavily partisan voters are—then elections aren’t serving the purpose the founding fathers intended.

UCSB’s American Presidency Project shows voter turnout in US presidential elections consistently staying below 60 percent from 1968 to 2012, and below 55 percent in more than half those elections. A study by the Pew Research Center found that the US ranked 26th in voter turnout out of 32 developed democratic states. Many of the countries that outrank the US have compulsory voting laws—for example, Australians who don’t vote must pay a $20 fine.

Voting isn’t all that hard; you register in advance, show up at a polling place on election day, and cast your ballot. You might have to wait in line, or be late to work, or face bad weather or traffic or any other number of minor annoyances—but it’s just one day every few years, and it’s a privilege millions around the world don’t have.

Despite this, what if the minor annoyances of voting are actually barriers keeping people from voting at all? Would the convenience of voting straight from our phones make a measurable difference in participation?

A study called the Cost of Voting Index found that factors like voter-registration deadlines, laws around early and absentee voting, voter ID requirements, and polling hours influenced voter participation in the 2016 presidential election, with a higher turnout in states where voting is easier.

Does Easier Voting = Better Voting?

For every ardent supporter of blockchain voting, there’s an even more ardent detractor—or two. The staunchest criticism is, unsurprisingly, security.

Blockchain is famed for its security and immutability. But, at least with the Voatz app, ballots don’t go straight from the voter to a blockchain, and there’s widespread concern about what could happen in the space between.

Rather than a blockchain-based app, Voatz can more accurately be described as an app with a blockchain attached to it, according to Marian Schneider, president of elections NGO Verified Voting, an organization wholly opposed to any form of internet voting.

A 2015 report by the U.S. Vote Foundation assessing the feasibility of end-to-end verifiable internet voting found that the risks of compromised voter authentication, client-side malware, network attacks, and DDoS (distributed denial of service) attacks outweighed the benefits of online voting, coming to the grim conclusion that “Unless and until those additional security problems are satisfactorily and simultaneously solved—and they may never be—we must not consider any Internet voting system for use in public elections.”

A team of researchers from the Initiative for CryptoCurrencies and Contracts, firmly opposed to blockchain voting, raised many of the same concerns, including the threat of interference by malware and network attacks. They also believe voting on a blockchain could make vote buying easier, and point out that Voatz (along with other makers of voting machines and online voting systems), while assuring the public of the app’s security, has declined to provide public access to its cryptographic protocols.

A New and Nebulous Political Era

Blockchain as a tool for internet voting is both imperfect in its current state and promising as a possibility. But proponents and opponents alike should keep in mind that it’s far from a mature technology.

Five years after Facebook and other social media platforms launched, we didn’t imagine these sites would eventually be used to spread hate speech or targeted propaganda, and we didn’t realize they may have influenced our political choices until they already had.

Similarly, outside of the security hurdles blockchain must clear to become a viable voting tool, it may contain risks and challenges we’re not yet aware of.

Tech has presented a slew of challenges to modern politics, and balancing the harm it can cause with the good it can do is no small task. It’s a problem that will be solved incrementally, and probably slowly at that.

As for getting more people to vote, even Bradley Tusk acknowledges blockchain may not end up being the answer. “It’s not about voting on a blockchain,” he said. “If something emerges tomorrow that is better than blockchain voting, that’s totally fine with me.”

The West Virginia experiment today will be, if nothing else, an indicator of where to go from here.

Image Credit: Breaking The Walls

Category: Transhumanism

Y Combinator’s Search For a Climate Change Unicorn

November 5, 2018 - 17:00

Silicon Valley’s premier tech incubator, Y Combinator, has decided to take on climate change. They’ve put out a call for proposals for technology that can suck CO2 from the air in the hope of reversing our seemingly inexorable march towards a catastrophic 2°C increase in global temperatures.

An incubator that became famous for spawning internet-based unicorns like Airbnb, Dropbox, and Stripe is not the most obvious vehicle for supporting next-generation geoengineering approaches. But with sluggish action on climate change, they’ve decided to bring Silicon Valley’s disruptive potential to bear on carbon sequestration technology.

The call comes hot on the heels of the IPCC’s recent warning that we need to limit warming to 1.5°C rather than 2°C to avoid serious adverse effects. And with growing pessimism around efforts to curb emissions, most scenarios in the report suggest we will need to combine dramatic expansion in renewable energy with removal of huge amounts of CO2 from the atmosphere.

In announcing the new initiative, the company concedes that the approaches they plan to support are not, and should not be, our Plan A. But they argue that we’re already past the point where clean energy can tackle climate change by itself, so we need to start preparing our plan B.

Some of the ideas verge on science fiction at this stage, as Y Combinator admits. But their aim is not to commercialize ideas that already exist; rather, they want to fund improbable moonshots with a high likelihood of failure but potentially massive returns. To that end, they’ve identified four research areas that fit that mold.

Ocean Phytoplankton

Plants are the planet’s best carbon-fixers, but huge swathes of the ocean are almost entirely devoid of them. That’s largely due to a lack of key nutrients required to carry out photosynthesis, so the idea is to either use fleets of ships to fertilize these areas or genetically engineer phytoplankton so it doesn’t require the nutrients.

The former would be a logistical nightmare, while the latter would be an enormous bioengineering challenge. In either scenario, the organisms would likely need to be engineered to convert CO2 into a stable form of carbon, like a bioplastic, to prevent it re-entering the carbon cycle when they die.

Electro-Geo Chemistry

Nature is already sequestering about one billion metric tons of CO2 every year through a process called mineral weathering, in which rocks react with CO2 to create carbon-storing minerals that eventually wash into the ocean. We could dramatically accelerate that process by using renewable energy to carry out electrolysis on saline water, which produces both hydrogen fuel and a highly reactive solution that can trap CO2 in minerals.

The hydrogen produced means roughly half the energy the process consumes can be recovered—but the main problem is that, at scale, this would be a huge drain on already-scarce clean energy.
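
A rough back-of-envelope calculation shows why energy supply is the sticking point. The gross energy figure below is an invented placeholder, not a number from any study; only the "roughly half recoverable" fraction comes from the text.

```python
# Assumed (invented) gross electrolysis energy per ton of CO2 mineralized
ENERGY_PER_TON_KWH = 2000
# From the text: roughly half the energy can be recovered as hydrogen fuel
RECOVERY_FRACTION = 0.5

def net_energy_kwh(tons_co2):
    """Net energy cost after crediting the recovered hydrogen."""
    gross = tons_co2 * ENERGY_PER_TON_KWH
    return gross * (1 - RECOVERY_FRACTION)

# Removing one gigaton (1e9 tons) of CO2 per year under these assumptions:
net_twh = net_energy_kwh(1e9) / 1e9  # convert kWh to TWh
print(net_twh)  # 1000.0 TWh/year
```

Even with the hydrogen credit, a gigaton per year would consume on the order of a thousand terawatt-hours annually under these placeholder assumptions, which is why diverting that much clean power is the real obstacle.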

Cell-Free Systems

A plethora of microbes can convert CO2 into other useful compounds, but they’re also busy doing all the other stuff required to support life, so this process isn’t as efficient as we might like. If we could isolate the enzymatic pathways responsible, though, it could be possible to create bioreactors containing carefully-designed enzyme systems that can efficiently suck CO2 from the air and potentially even use it to make valuable chemicals.

The main limitation with this idea is that we currently have almost no idea how to do this in a stable way at sufficient scale. Nonetheless, rapid advances in synthetic biology mean we already have most of the basic tools to hand, and they’re improving quickly.

Desert Flooding

In the same way much of the ocean is doing little to help us extract CO2 from the atmosphere, the 10 percent of the world that is desert also represents huge tracts of land free of carbon-fixing vegetation. Creating millions of shallow, one-square-kilometer oases that could support algal beds would not only suck up emissions, but also potentially make these areas habitable.

The benefit is that almost all the technological parts are in place already. But it would take trillions of dollars and the biggest infrastructure project the world has ever seen to make it a reality.

Meddling With Nature

It’s clear that most of these ideas are long shots, but funding long shots is how Y Combinator made its name. In this case, though, the risks aren’t only to investors. Releasing genetically-engineered algae into the ocean, pumping our seas full of minerals, or transforming deserts into networks of oases could have unpredictable effects.

Geoengineering by its very nature is a Faustian bargain—reshaping the planet’s ecosphere on a scale large enough to suck gigatons of CO2 out of the air is inevitably going to have knock-on effects. But whether Silicon Valley’s mantra of “move fast and break things” is the best guiding principle when thinking about the trade-offs is questionable.

Given the limited funding available for negative emissions technology, money could well be better spent on improving more established approaches. A recent report from the National Academies that assessed a host of more established carbon capture technologies found significant barriers, such as unsustainable land use or unreasonable costs, but suggested that sustained government investment could overcome many of these issues.

The accountability that comes with public-sector-led efforts may also be important when it comes to technologies that could dramatically reshape the world we live in. The question remains, though, whether there is the will and the means for governments to push the development of this technology fast enough.

It’s also important to recognize the danger of presenting policymakers with the promise of a ‘get out of jail free’ card. That promise may slow their attempts to reduce emissions, even though the effectiveness and scalability of all these methods are far from established.

At the end of the day, though, broad inaction on climate change from governments around the world means that any attempt to find a solution, no matter how radical, needs to be applauded. And if Y Combinator can create a climate change unicorn, more power to them.

Image Credit: Fabio Lamanna

Category: Transhumanism

Singularity University’s Exponential Medicine Kicks Off Today in San Diego

November 4, 2018 - 19:00

Technology is enabling us to explore groundbreaking new ideas in medicine, and as a result the medical field is advancing more rapidly than ever before. AI is helping find faster, better ways of diagnosing, curing, and preventing illness. Gene editing is allowing scientists to rewrite the fundamental building blocks of life. Multiple disciplines, including genomics and 3D printing, are bringing a new level of personalization to medicine—gone are the days of ‘one treatment fits all.’

The Singularity Hub team is on the ground this week at San Diego’s Hotel Del Coronado for Singularity University’s annual Exponential Medicine Summit. Our writers will bring you editorial coverage, and you can join the conversation on Singularity Hub’s Facebook page and Twitter account.

Or tune into the summit in real-time with our video live stream.

From renowned neuroscientists and surgeons to leading technologists and entrepreneurs, we’ll be learning from over 70 industry experts during the next four days.

You won’t want to miss what these and other great minds have to say about where we are in medicine—and where we’re going.

Image Credit: Dancestrokes

Category: Transhumanism

Our Voting System Is Hackable. Here’s How to Secure It

4 November 2018 - 17:00

The November 6, 2018 midterm elections are being widely regarded as a referendum on President Trump, but they will also serve another, less obvious purpose: testing the integrity of our election infrastructure, in which key technological vulnerabilities remain.

Although media coverage of Russian interference in the 2016 US presidential election has largely focused on the possibility of collusion, troll farms, and leaked DNC emails, election infrastructure was also directly targeted.

If hackers demonstrably alter votes in the midterms, the legitimacy of our republic’s core processes could be called into question. The Secure Elections Act sought to patch up these vulnerabilities and prevent such an outcome, but a committee session on the bill was cancelled following White House criticism.

According to an op-ed by Republican Senator James Lankford and Democratic Senator Amy Klobuchar, election security grant funding has not fully resolved electoral problems. “Fourteen states do not have adequate post-election auditing procedures,” they stated.

A recent DEF CON report revealed massive vulnerabilities in voting equipment. A voting tabulator used in 23 states can be remotely hacked via a network attack. A second critical vulnerability was disclosed to the vendor a decade ago, yet the machine still contains that flaw. The report also revealed that an electronic card used to activate voting terminals can be wirelessly reprogrammed. This vulnerability could allow a nefarious actor to cast an unlimited number of votes.

There are indications that a Russia-backed hacking offensive is still underway. In May, British and US security officials revealed that a Russian hacking group targeted millions of computers and infected home wifi routers. In July, a Microsoft executive told a security forum that the company detected evidence of phishing attacks, originating from a fake Microsoft domain. The targets were all candidates in the midterm elections.

A History of Vulnerability

In 2002, President Bush signed the Help America Vote Act to assist states in replacing punch card and lever-based voting systems. But massive problems remained. In 2006, the documentary Hacking Democracy exposed backdoors in software made by Diebold Election Systems.

In 2016, infrastructure vulnerabilities were exploited. The Department of Homeland Security belatedly notified 21 states that their election systems had been targeted. Some states disputed DHS’s findings.

State and federal officials claim that no votes were changed, but there are other methods of disruption. Voter registration databases can be manipulated. In 2016, many Americans were turned away at the polls even when they displayed current registration cards. These problems were attributed to electronic poll books. One of the companies providing the software, VR Systems, had been penetrated by Russian hackers months earlier.

“I don’t believe that the goal of attacking the voter registries is just to send out the message to kind of say it has been hacked,” said Carsten Schürmann, Director of DemTech Group. He told me that it’s very difficult to discern if the hackers were successful.

“We don’t know how many people were turned away from polling stations because they weren’t on the electoral roll,” he said.

The Department of Homeland Security determined that election infrastructure should be designated as a critical infrastructure sub-sector.

When asked to comment, cybersecurity expert Chuck Brooks said, “The funding and compliance issues are complex as many state officials are wary of federal control over any aspects of elections under states’ rights.”

Lawrence D. Norden, Deputy Director of the Democracy Program at the Brennan Center for Justice, also told me that the federal government has been cautious about infringing on the authority of states to run their own elections.

Some problems are exacerbated by a general naiveté.

“For the most part, Americans really do not understand the cybersecurity dimensions of voting,” said Brooks. “We are in the midst of transitioning into a digital world where data and personal records are being routinely compromised and stolen.” Antiquated voting machines are vulnerable to insider threats, negligence, and hackers who get tools from the dark web.

Many governments outsource the procurement and operations of election technologies. The AP reported that only three companies sell and service more than 90 percent of the machinery used to cast votes and tabulate results in the US. This outsourced guardianship of democracy and digital campaign efforts has, at times, included questionable practices. In 2017, private contractor Election Systems & Software left Chicago voter data publicly exposed on an Amazon cloud server. Deep Root Analytics, a data firm hired by the RNC, accidentally leaked personal details on about 61 percent of the US population.

What Happened in Georgia

Peculiar circumstances still loom over the state of Georgia’s presidential election results.

In 2017, Marilyn Marks, an elections integrity activist, was keeping an eye on Georgia’s 6th congressional district special election after hearing about the state’s reputation for questionable practices. I spoke with Marks earlier this year.

“On election night, something went kaflooey,” she told me. Democratic candidate Jon Ossoff was on track to win.

“And he had been winning, winning, winning, it was staying over 50 percent and then all of a sudden, the vote counting system goes down,” said Marks. “For two and a half hours, they don’t report anything because the system blew up. And then when it comes back, suddenly he’s down at 47 percent. And this black hole happened in-between.”

Marks litigated to try to prevent the same machines from being used in the runoff. Later on, she found out about cybersecurity researcher Logan Lamb. In August 2016, Lamb had realized that Kennesaw State’s election server was vulnerable. He informed the election center that there was a strong probability their site had already been compromised. After Kennesaw State failed to secure its infrastructure, Lamb got in the system a second time.

“They called in the FBI because they wanted to go after Logan, as if he was the bad guy,” said Marks.

After Marks got involved, technicians at Kennesaw State wiped the server clean. The deletion happened even after voters had requested an independent security review of that server.

“The destruction of the records is what seemed to create the explosion between the Secretary of State’s office and Kennesaw State University,” said Marks.

The deletion was attributed to “standard operating procedure.”

In September 2018, a federal judge in Atlanta denied a motion to force the state of Georgia to switch from electronic touch screen machines to paper ballots in advance of the midterm elections.

No Way of Knowing

Some officials insist the machines weren’t hacked, but concede that if they were, we would have no way of knowing. In his testimony before the House Committee on Oversight and Government Reform, Dr. Matt Blaze said a successful attack that exploits a software flaw might leave behind little or no forensic evidence.

Schürmann successfully hacked a WINVote machine at DEF CON 2017. He told me he agrees with Dr. Blaze’s assessment.

“The operations that we’ve carried out, I’m pretty sure have not left a trail. There is no log file that you can look at where it says somebody logged in here,” said Schürmann.

How to Secure Our Infrastructure

A Brennan Center for Justice paper titled “Securing Elections from Foreign Interference” urged Congress, states, and local governments to help election officials replace vulnerable and paperless machines, arguing we should be using electronic machines with a voter-verified paper audit trail. This software-independent record provides an important security redundancy, which both deters attacks and gives voters more confidence in electoral integrity.

Additionally, states and local governments need to update the IT infrastructure supporting their voter registration databases. Some systems still run on discontinued software like Windows XP or Windows 2000, rendering them more vulnerable to cyberattacks. Regular and comprehensive threat assessments should be conducted.

Furthermore, more states should conduct post-election audits of paper records in order to identify evidence of vote tampering. Even when states conduct audits, the standards of those audits are inadequate. “They are often insufficiently robust to ensure an election-changing software error would be found,” the paper’s authors wrote.
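The paper’s point about audit rigor can be made concrete with a risk-limiting audit. The sketch below, in Python, follows the spirit of a ballot-polling (BRAVO-style) audit for a two-candidate contest; the function name, vote shares, and risk limit are illustrative assumptions, not any state’s actual procedure.

```python
def bravo_audit(sampled_ballots, reported_winner_share, risk_limit=0.05):
    """Ballot-polling risk-limiting audit sketch (BRAVO-style) for a
    two-candidate contest. `sampled_ballots` is a sequence of booleans:
    True if a sampled paper ballot shows a vote for the reported winner.
    Returns True once the sample supports the reported outcome at the
    given risk limit; False if the sample runs out first."""
    t = 1.0  # sequential test statistic (likelihood ratio)
    for for_winner in sampled_ballots:
        if for_winner:
            t *= reported_winner_share / 0.5
        else:
            t *= (1 - reported_winner_share) / 0.5
        if t >= 1 / risk_limit:
            return True  # reported outcome confirmed at this risk limit
    return False  # inconclusive: sample more ballots or do a full recount

# With a reported 60% winner share and a 5% risk limit, a run of
# ballots for the winner confirms the outcome after a few dozen draws.
result = bravo_audit([True] * 17, 0.6)
```

The key property of such an audit is that it either confirms the reported outcome or escalates toward a full hand count; it cannot silently certify a wrong result with probability above the chosen risk limit.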

Schürmann told me, “I believe actually that it’s great to have two result paths, an electronic result path and a paper result path.” He added that it’s easy to acquire a basic set of hacker tools that can violate insecure electronic systems.

“You basically just have to type in the IP number of the server you’d like to attack and the tool does everything. It tells you what kinds of attacks are possible and which ones you should try,” he said.

In a foreword to the Brennan Center paper, former CIA Director Amb. R. James Woolsey wrote, “I am confident the Russians will be back, and that they will take what they have learned last year to attempt to inflict even more damage in future elections.”

As the midterm elections are conducted, there will be ample reason to question the security of the nation’s voting machines. Multiple sources have suggested a voter-verifiable paper audit trail as the most obvious solution, but for now, we are still open to interference.

We need to politically empower cybersecurity experts who can come in and provide third-party analysis. In the case of Logan Lamb, the cybersecurity researcher who sounded the alarm about Georgia’s exposed infrastructure, that wasn’t initially done. His concerns were dismissed. The Deep Root Analytics voter data leak was also discovered by a cybersecurity researcher.

Perhaps the mindset created by technology is partly to blame. Modern tech encourages a sense of urgency and an expectation of immediate information and gratification. This does not actually serve the integrity of our voting process. Although the problem originates in the literal machinery of our democracy, the solution must be broader than improved technology.

Image Credit: Orhan Cam

Category: Transhumanism

Why Our Current Energy Transition Is Both Unprecedented and Urgent

2 November 2018 - 16:00

The combustion of oil, gas, and coal has made possible a much higher standard of living for humans through radical innovations in technology and science over the past 150 years. Yet for decades, scientists have provided clear evidence that carbon emissions from burning fossil fuels are imperiling our species and many others.

And now, according to the latest Intergovernmental Panel on Climate Change report, the evidence indicates the window of opportunity to limit the damage may be closing.

As a historian who has studied the oil industry’s earliest years and petroleum’s role in world history, I believe that keeping the world habitable for future generations will depend on a swift transition to more sustainable energy sources. Unlike past transitions, the current one is at least partly driven by the recognition that stabilizing the climate requires a new mix of energy sources. It is an opportunity to make our energy in smarter ways and with less waste.

A Fossil-Fueled Society

Energy transitions are not a simple flip of a switch following the discovery or adoption of a new technology. For instance, for about 25 years after 1890, America’s roadways were a wild laboratory of various conveyances. From the horse-drawn buggy to the bicycle, from the Stanley Steamer to the Model T, devices serving the same purpose (including the first electric cars) derived energy from different sources, including coal, horsepower, and gasoline.

Competition and influence determined that the internal-combustion engine would power autos of the future. However, public will and political decisions also played important roles, as did zoning ordinances and other laws. Americans determined that the 20th century would be powered by fossil fuels such as petroleum. The marketplace provided them with flexibility to create a landscape of drive-thrus and filling stations.

Similarly, consider the changes to how people illuminated their homes, businesses, and public places.

Between 1850 and 1900, Americans mostly did that with oils and candles rendered from the fat of farm animals and whales, as well as from burning kerosene made first from coal and then petroleum. By the early 1900s, most American lighting was powered by electricity, initially generated from burning coal. Later in the century, that power came from a mixture of coal, natural gas, hydropower, and nuclear energy. Starting around 2000, the use of wind and solar energy began to climb.

Pittsburgh can park some of its municipal electric vehicles at solar-powered charging stations. Image Credit: AP Photo/Keith Srakocic

The same kinds of transformations occurred with heating and manufacturing. Cheap electricity, gasoline, and diesel together produced the massive amounts of power and flexibility that completely changed the human condition in the 20th century.

Rethinking Energy

Fossil fuels and nuclear reactors made it possible to do more work and accomplish more than ever before in human history. Now, another energy transition beckons.

Wind, solar, and other forms of renewable energy, paired with increased efficiency and vast amounts of storage, do not necessarily promise more power. But relying on them does point toward a more sustainable future.

I believe that this revolution requires new ways of thinking about energy that date back to the global energy crisis of the 1970s, a time of temporary oil shortages caused by Middle Eastern nations’ political discontent with Western nations. Long lines at gas stations and other inconveniences fueled a national conversation about conservation.

In 1977, Jimmy Carter made a memorable call for “the moral equivalent of war” on energy waste, and said “we must start now to develop the new, unconventional sources of energy we will rely on in the next century.”

President Jimmy Carter predicted in April 1977 that decisions about energy would “test the character of the American people.”

It had become clear all around that fossil fuels were not infinite resources. A matter of national security since World War I, energy supplies became a geopolitical touchstone of preeminent consideration in the relations between nations.

After 1980, growing awareness of the hazards posed by climate change introduced new criteria by which to select new power sources and phase out old ones. Pollution had been an obvious side effect of burning fossil fuels from the start, thanks to smog and spills. But despite some early speculation, most scientists did not initially realize that this pollution was interfering with Earth’s basic functions.

As a result, in addition to considering price, supply and output, energy sources now must be judged for the carbon that they put into the atmosphere. Under such scrutiny, and thanks to innovation and market forces, fossil fuels are no longer cheaper than solar, wind, and geothermal alternatives in a growing number of locations.

Energy accounting is beginning to change, particularly in parts of the nation and the world where carbon is capped and traded, and in countries with carbon taxes in effect.
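The arithmetic behind that shift in energy accounting is simple to sketch. The function and every figure below are illustrative placeholders rather than real market data, assuming a flat carbon price charged per ton of CO2 emitted:

```python
def effective_cost(lcoe_per_mwh, tons_co2_per_mwh, carbon_price_per_ton):
    """Levelized cost of electricity plus the extra charge a carbon
    tax (or cap-and-trade allowance price) adds to each MWh."""
    return lcoe_per_mwh + tons_co2_per_mwh * carbon_price_per_ton

# Illustrative numbers only: at a $50/ton carbon price, a coal plant
# emitting ~1 ton of CO2 per MWh picks up a $50/MWh penalty that a
# zero-carbon wind farm avoids entirely.
coal_cost = effective_cost(60.0, 1.0, 50.0)  # 110.0 $/MWh
wind_cost = effective_cost(40.0, 0.0, 50.0)  # 40.0 $/MWh
```

Under this kind of accounting, a generation source with a higher sticker price can still come out cheaper once its carbon charge is included.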

No Choice in the Long Run

Human energy use has transitioned more or less constantly since we developed the ability to control fire. Historians have long observed that when nations resist these transitions, they can fall behind for an entire generation or more.

For instance, Chinese sailors helped open the Age of Sail when their big ships first harnessed the power of the wind to widen the scope of exploration, trade, and warfare in the early 1400s. But then China essentially sat idly by, watching while other nations wove this innovation into a new global economy.

Similarly, huge technological leaps are now being driven by the recognition that climate change—with its increased temperatures, erratic weather patterns, melting ice caps, rising seas, and the heightened intensity and frequency of storms—demands new ways of thinking about energy.

And any nation that fails to accept this new reality may find itself quickly outmoded.

Brian C. Black, Distinguished Professor of History and Environmental Studies, Pennsylvania State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: lassedesignen

Category: Transhumanism