Singularity HUB

News and Insights on Technology, Science, and the Future from Singularity University

Singularity University’s Global Summit Kicks off Today in San Francisco

August 20, 2018 - 19:00

Singularity University’s (SU) third annual Global Summit begins today in San Francisco, running through Wednesday, August 22. The Singularity Hub team will be there to give you a look inside the event with articles covering all the best talks, and you can go here to watch the event livestream.

Global Summit showcases trends in emerging technologies and explores how they’re converging within several industries, as well as how they can help solve the world’s biggest challenges.

Artificial intelligence experts like Suzanne Gildert and Neil Jacobstein will discuss what a world completely transformed by machine learning and AI might look like, and what the role of humans in it will be. World Bank Lead Economist Wolfgang Fengler will talk about how we can use data to understand the impact we’re having on the world, and to solve some of our greatest challenges.

In addition, 2018 marks SU’s 10-year anniversary, and company co-founders Ray Kurzweil and Peter Diamandis will take us back to the origins of SU and look ahead to what they hope the next 10 years will bring. Additional speakers at Global Summit will dive into subjects including:

  • How exponential technologies can help distribute abundance more evenly around the world
  • How autonomous vehicles will transform the way we move from point A to point B
  • The breakthrough mixed reality applications that will transform how we see, hear, and feel
  • How decentralized networks like bitcoin can help strengthen civil liberties
  • Why the combination of data and innovation gives us the ability to feed our growing global population

To get a taste of what’s in store, check out the below selection of stories from last year’s event, and be sure to tune in for this year’s coverage all this week and next.

NASA Made This Technology for Space—Now, It Will Improve What We Eat

“A single hyperspectral image can provide information about pH, color, tenderness, lean fat analysis, and protein content. The technology combines digital imaging, or computer vision, with spectroscopy, the technique of acquiring chemical information from the light of a single pixel. A sensor processes light and measures how it’s reflected across hundreds of continuous wavelengths. ‘What this means in practice is you can take an image of a food product and understand the nutritional content, the freshness levels, and how much protein, fat, or moisture it contains,’ Ramanan said.”
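
The quoted idea, estimating composition from a single pixel's reflectance spectrum, can be sketched as a toy calibration model. Everything below (the band count, the weights, the bias, the data) is invented for illustration; real systems fit such models against lab-measured reference samples:

```python
import numpy as np

# Toy hyperspectral "image": 4x4 pixels, 200 wavelength bands per pixel.
# All numbers here are synthetic; real sensors, band ranges, and
# calibration models differ.
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 200))  # reflectance in [0, 1] per band

# A single pixel's spectrum: reflectance across all 200 bands.
pixel_spectrum = cube[0, 0, :]

# In practice, composition (protein, fat, moisture) is estimated by a
# calibration model fit to lab-measured samples. Here we fake one:
# a linear model mapping a spectrum to a "protein %" value.
weights = rng.normal(0, 0.1, size=200)  # hypothetical calibration weights

def predict_protein(spectrum, weights, bias=12.0):
    """Toy linear calibration: protein % = bias + weights . spectrum."""
    return bias + float(spectrum @ weights)

print(round(predict_protein(pixel_spectrum, weights), 2))
```

The point is only the shape of the pipeline: one pixel yields a vector of reflectances, and a fitted model maps that vector to chemical properties.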

Civilization Is Breaking Down—Here’s What We Need to Do About It

“As we move towards abundance, Ismail believes we need to move towards a social structure that embodies the female rather than the male archetype. For starters, Ismail said, ‘We need to architect our organizations and institutions for flexibility and adaptability.’ Existing incentive models in business focus heavily on short-term indicators like quarterly earnings and are not set up for long-term changes. But the most successful companies have turned these models on their heads, with leaders like Jeff Bezos, Mark Zuckerberg, and Larry Page refusing to steer their companies in the status-quo direction.”

Which of These Emerging Technologies Will Be the Next Big Thing?

“Forecasting the cycles of hope and hype in technology is still incredibly difficult, and no one gets it just right. Some exciting technologies seem to be just around the corner, only to die out or hit unexpected roadblocks and get kicked ever further down the road. Still, we live in a pretty amazing time in history, and over the decades, some emerging technologies will rise up and affect our lives profoundly.”

These 7 Forces Are Changing the World at an Extraordinary Rate

“ ‘If you think there’s an arms race going on for AI, there’s also one for HI—human intelligence,’ Diamandis said. He explained that if a genius was born in a remote village 100 years ago, he or she would likely not have been able to gain access to the resources needed to put his or her gifts to widely productive use. But that’s about to change.”

Risk Takers Are Back in the Space Race—and That’s a Good Thing

“Exponential technologies—such as 3D printing, computing, and robotics—are a big reason feats that were once the sole domain of a few governments are becoming possible for startups with a team of 50 or 100 talented workers. ‘We always talk about space being a place where spin-offs happen, where we would go spend a lot of money on Apollo and, in exchange, we get Teflon and cordless drills,’ Wagner said. ‘And it turns out, now we’re back in a part of the cycle where space is where spin-ins are happening.’ ”

The Hyperloop Is Now More Than Just a Crazy Idea, But Will It Really Take Off?

“Critics suggest maintaining the required low pressure will prove a Sisyphean task over hundreds of miles. Others say seismic activity or subsidence could push the tube dangerously out of alignment. It’ll also be difficult to protect the system from attack. And in case of emergency how easy will it be to evacuate? Then there’s the cost to build and maintain what amounts to a brand new and simply massive bit of infrastructure. These challenges, especially cost, get relevant when you go from prototype to scale. So, paper to prototype is a leap forward, but prototype to production may be a bigger one.”

Why Empowering Women Is the Best Way to Solve Climate Change

“The total atmospheric CO2 reduction of 119.2 gigatons that could result from empowering and educating women and girls makes this the number one solution to reversing global warming. ‘A girl who is allowed to be in school and come to be a woman on her terms…makes very different reproductive choices,’ Hawken said. ‘And when we modeled this we modeled family planning clinics everywhere. Not just in Africa, but in Arkansas. Women everywhere should be supported in their reproductive health and well-being for their families.’ ”

Image Credit: Stefig Photo

Category: Transhumanism

Newly-Decoded Wheat Genome Opens the Door to Engineering Superfoods

August 20, 2018 - 17:00

Tweaking the DNA of crops to make them hardier and more productive is one of the most promising applications of gene-editing technology. That’s not been possible with wheat because its complex genome has proved tricky to decode, but now an international team of researchers has finally cracked it.

Part of the reason it’s proven so hard to map wheat’s genome is its sheer size—nearly 108,000 genes compared to humans’ 20,000. It also contains three pairs of each chromosome, because the plant we know today is actually a mishmash of three species of grasses that crossbred millennia ago.

To add to the problem, more than 85 percent of the genome consists of repeated sections. Sequencing a genome is done by breaking it down into chunks to make it easier to read and then piecing them back together again, but this is hard to do if so much of it is the same.
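
Why repeats frustrate this piecing-together step can be shown with a toy sketch: reads are merged on their longest overlap, and a repeated stretch makes two different reconstructions equally consistent with the same data. All sequences below are made up for illustration; real assemblers are vastly more sophisticated:

```python
# Toy illustration of shotgun assembly: merge reads on their longest overlap.
def merge(a, b, min_overlap=3):
    """Return a+b joined on the longest suffix/prefix overlap, or None."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a[-k:] == b[:k]:
            return a + b[k:]
    return None

# Unique sequence: the reads chain together in only one way.
reads = ["ATGCCGT", "CGTTACA", "ACAGGAT"]
genome = reads[0]
for r in reads[1:]:
    genome = merge(genome, r)
print(genome)  # ATGCCGTTACAGGAT

# A repeated block ("GGGGG") makes ordering ambiguous: both merges succeed,
# so the reads alone cannot tell us which reconstruction is correct.
left = merge("AAAGGGGG", "GGGGGTTT")
right = merge("AAAGGGGG", "GGGGGCCC")
print(left, right)
```

With 85 percent of the wheat genome repetitive, this ambiguity occurs constantly, which is part of why assembling it took so long.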

That’s why it’s taken more than 200 researchers 13 years to build the first fully-annotated reference genome for bread wheat, which the International Wheat Genome Sequencing Consortium (IWGSC) presented in a paper in Science this week. The genome provides a map that will allow plant biologists to pinpoint the genes and regulatory networks responsible for a host of useful traits like yield, drought tolerance, and pest resistance.

That could be crucial, because wheat is the staple crop for about a third of the world and production will need to increase by 60 percent by 2050 to keep pace with growing populations, according to the UN’s Food and Agriculture Organization.

Researchers have been breeding new varieties of wheat using conventional cross-breeding approaches for a long time, but the process is expensive, time-consuming, and unpredictable because it’s impossible to guarantee offspring will inherit just the right mix of genes from their parents.

The International Maize and Wheat Improvement Centre (CIMMYT), based near Mexico City, has led a lot of this work, but its head of wheat research, Ravi Singh, told the Atlantic they are already starting to exploit the new genome to identify genetic code underlying important characteristics.

They were able to do this because the genome was actually provided to researchers back in January 2017. That means there’s already been plenty of work on practical application of the new knowledge, and this week’s publication was accompanied by six other studies in Science, Science Advances, and Genome Biology.

Among the discoveries made so far are a gene that makes the stems of plants stiffer (and therefore more resistant to stem-boring pests), a gene shared with rice for grain size that scientists have already used to boost this trait in lab-grown plants, and 365 genes for proteins that create allergic responses in humans, like celiac disease.

IWGSC researchers have even paired discoveries in the genome with the gene-editing technology CRISPR, identifying genes responsible for flowering time, and editing them so the plant blooms several days early. They’re also investigating how to switch specific genes on or off at different stages of development, which opens the tantalizing prospect of tailoring a crop’s genetics to its seasonal environment.

Ultimately though, the biggest challenge scientists face could be regulatory. The EU’s top court—the Court of Justice of the European Union—recently ruled that CRISPR-edited plants are subject to the bloc’s stringent GM rules, which many worry could stifle innovation. Recent revelations about CRISPR’s potential unintended consequences means we are unlikely to see attitudes shift anytime soon.

There’s also a valid argument that many of the world’s food problems are not a matter of production, but a matter of logistics. Particularly in developing countries, huge amounts of produce are wasted or spoiled due to insufficient transportation, poor storage, and lack of packaging technology. Whether money should be invested into improving supply chains rather than new crop varieties is an open question.

But with climate change making our planet increasingly inhospitable and unpredictable, the ability to quickly adapt our crops to new environments is likely to be a crucial tool in our arsenal. Having already cracked rice, maize, and soy, it’s reassuring that we’ve been able to add the world’s most widely-cultivated crop to our inventory.

Image Credit: Dilkaran Singh

Category: Transhumanism

If We Made Life in a Lab, Would We Understand It Differently?

August 19, 2018 - 19:00

What is life? For much of the 20th century, this question did not particularly concern biologists. Life is a term for poets, not scientists, argued the synthetic biologist Andrew Ellington in 2008; fittingly, he began his career studying how life began. Despite Ellington’s reservations, the related fields of origins-of-life research and astrobiology have renewed focus on the meaning of life. To recognize the different forms life might have taken four billion years ago, or the shapes it could take on other planets, researchers need to understand what, in essence, makes something alive.

Life, however, is a moving target, as philosophers have long observed. Aristotle distinguished “life” as a concept from “the living”—the collection of existing beings that make up our world, such as the neighbor’s dog, my cousin and the bacteria growing in your sink. To know life, we must study the living; but the living is always changing across time and space. In trying to define life, we must consider the life we know and the life we don’t know. As the origins-of-life researcher Pier Luigi Luisi at Roma Tre University puts it, there is life-as-it-is-now, life-as-it-could-be, and life-as-it-once-was. These categories point to a dilemma that medieval mystical philosophers addressed. Life, they noticed, is always more than the living, making it, paradoxically, permanently inaccessible to the living. Because of this gap between actual life and potential life, many definitions of life focus on its capacity to change and evolve rather than trying to pin down fixed characteristics.

In the early 1990s while advising NASA on the possibilities of life on other planets, the biologist Gerald Joyce, now at the Salk Institute for Biological Studies in California, helped to come up with one of the most widely used definitions of life. It’s known as the chemical Darwinian definition: “Life is a self-sustained chemical system capable of undergoing Darwinian evolution.” In 2009, after decades of work, Joyce’s group published a paper in which they described an RNA molecule that could catalyze its own synthesis reaction to make more copies of itself. This chemical system met Joyce’s definition of life. But nobody wanted to claim that it was alive. The problem was, it hadn’t done anything new or exciting yet. A New York Times article put it this way: “Someday their genome may surprise their creator with a word—a trick or a new move in the game of almost life—that he has not anticipated. ‘If it would happen, if it would do it for me, I would be happy,’ Dr Joyce said, adding, ‘I won’t say it out loud, but it’s alive.’ ”

Joyce seeks to understand life by trying to generate simple living systems in the lab. In doing so, he and other synthetic biologists bring new kinds of life into being. Every attempt to synthesize novel life forms points to the fact that there are still more, perhaps infinite, possibilities for how life could be. Synthetic biologists could change the way life evolves, or its capacity to evolve at all. Their work raises new questions about a definition of life based on evolution. How to categorize life that is redesigned, the product of a break in the chain of evolutionary descent?

An origin story for synthetic biology goes like this: in 1997, Drew Endy, one of the founders of synthetic biology and now a professor of bioengineering at Stanford University in California, was trying to create a computational model of the simplest life form he could find: the bacteriophage T7, a virus that infects E. coli bacteria. A crystalline head atop spindly legs, it looks like a landing capsule touching down on the Moon as it grabs onto its bacterial host. The bacteriophage is so simple that by some definitions it is not even alive. (Like all viruses, it depends on the molecular machinery of its host cell to replicate.) Bacteriophage T7 has only 56 genes, and Endy thought it might be possible to create a model that accounted for every part of the phage and how those parts worked together: a perfect representation that would predict how the phage would change if any one of its genes were moved or deleted.

Endy built a series of bacteriophage T7 mutants, systematically knocking out genes or scrambling their location in the tiny T7 genome. But the mutant phages conformed to the model only some of the time. A change that should have weakened a phage would instead yield progeny that burst open E. coli cells twice as fast as before. It wasn’t working. Eventually, Endy had a realization: “If we want to model the natural world, we have to rewrite [the natural world] to be modellable.” Instead of trying to make a better map, change the territory. Thus was born the field of synthetic biology. Borrowing techniques from software engineering, Endy began to “refactor” bacteriophage T7’s genome. He made bacteriophage T7.1, a life form designed for ease of interpretation to the human mind.

Phage T7.1 is an example of what one synthetic biologist has called supra-Darwinian life: life that owes its existence to human design, rather than natural selection. Bioengineers such as Endy approach life in dualistic terms: a physical structure on the one hand, a pattern of information on the other. In theory, a perfect representation of life would enable a seamless transition between information and matter, intention and realization: change some letters of DNA on your computer screen, print out an organism that looks and behaves just as you intended. With this approach, evolution threatens to corrupt the engineer’s blueprint. Preserving one’s biological designs might require making your engineered organisms unable to reproduce or evolve.

In contrast, Joyce’s desire for his molecules to surprise him suggests that the capacity for open-ended evolution — “inventiveness, pluripotentiality, open-endedness” — is the critical criterion of life. In accordance with this idea, Joyce now defines life as “a genetic system that contains more bits [of information] than the number that were required to initiate its operation.” But according to this definition, given two identical systems with different histories—one designed and the other evolved—only the latter would be considered alive; the rationally designed system, no matter how complex, would be just a “technological artifact.”

Design and evolution are not always opposed. Many synthetic biology projects use a mix of rational design and directed evolution: they construct a host of mutant cells—variations on a theme—and select the ones that work the best. Although Joyce’s new understanding of life still involves evolution, it evokes the abrupt temporality of emergence rather than Darwin’s longue durée. Emergent life fits a culture of disruptive innovation whose ultimate ideal approximates something like the magic of pulling a kidney out of a 3D printer: the enchantment of joining together familiar things with new and surprising results. Design and evolution are also compatible when bioengineers look at genetic diversity as a treasure trove of design elements for future life forms.

For some synthetic biologists, the path to what the mystics called life-beyond-life—life that exceeds the living as we know it—now runs through biological engineering. Endy describes his vocation in terms of a desire to contribute to life by generating new kinds of “improbable patterns that continue to thrive and exist.” Joyce imagines life and technology joining forces against the fundamental thermodynamic tendency towards disorder and decay. What new forms life will take, only time will tell.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: nobeastsofierce

Category: Transhumanism

This Week’s Awesome Stories From Around the Web (Through August 18)

August 18, 2018 - 17:00

What the Year 2050 Has in Store for Humankind
Yuval Noah Harari | Wired
“Humankind is facing unprecedented revolutions, all our old stories are crumbling and no new story has so far emerged to replace them. How can we prepare ourselves and our children for a world of such unprecedented transformations and radical uncertainties?”


The Robots Won’t Take Over Because They Couldn’t Care Less
Margaret Boden | Aeon
“When AI teams talk of aligning their program’s ‘values’ with ours, they should not be taken as speaking literally. That’s good news, given the increasingly common fear that ‘The robots will take over!’ The truth is that they certainly won’t want to.”


How Social Media Took Us From Tahrir Square to Donald Trump
Zeynep Tufekci | MIT Technology Review
“Power always learns, and powerful tools always fall into its hands. This is a hard lesson of history but a solid one. It is key to understanding how, in seven years, digital technologies have gone from being hailed as tools of freedom and change to being blamed for upheavals in Western democracies—for enabling increased polarization, rising authoritarianism, and meddling in national elections by Russia and others.”


Google Just Gave Control Over Data Center Cooling to an AI
Will Knight | MIT Technology Review
“Over the past couple of years, Google has been testing an algorithm that learns how best to adjust cooling systems—fans, ventilation, and other equipment—in order to lower power consumption. …Now, Google says, it has effectively handed control to the algorithm, which is managing cooling at several of its data centers all by itself.”


$100 Million Was Once Big Money for a Startup. Now, It’s Common.
Erin Griffith | The New York Times
“Many of the new investors, including SoftBank’s $93 billion Vision Fund, manage funds so large they dwarf the entire traditional venture capital market in the United States. These giant funds are looking for start-ups that can take large sums of money with one shot.”


The Nightmarishly Complex Wheat Genome Finally Yields to Scientists
Diana Gitig | Ars Technica
“The genome is huge, about five times larger than ours. It’s hexaploid, meaning it has six copies of each of its chromosomes. More than 85 percent of the genetic sequences among these three sets of chromosome pairs are repetitive DNA, and they are quite similar to each other, making it difficult to tease out which sequences reside where.”

Image Credit: paffy

Category: Transhumanism

Baidu, Alibaba, and Tencent: The Rise of China’s Tech Giants

August 17, 2018 - 17:00

Baidu, Alibaba, and Tencent (BAT) are now valued at a combined $1 trillion USD.

Alibaba and Tencent alone now account for almost one-third of the MSCI China Index, fueling its 47 percent gain in 2017.

As of this past March, China had skyrocketed to 164 unicorns, worth a combined $628 billion USD. Roughly 50 percent are controlled or backed by BAT.

But BAT aren’t keeping their ambitions local. Worldwide, BAT have invested in over 150 companies, spanning the gamut from AI to biotech.

And with access to more internet users than those of the US and all of Europe combined, BAT is fueled by the greatest treasure trove of user data on the planet.

Now rivaling the likes of FANG, these homegrown tech giants are driving China’s AI revolution at an unprecedented pace, building out everything from autonomous vehicles and smart cities to facial recognition and AI-driven healthcare platforms.

And with China’s favoring of homegrown players—applying tight domestic restrictions on FAMGA (kicking out Facebook and Google entirely in 2009 and 2010)—BAT work ever more closely with state-run initiatives.

In November 2017, China’s Ministry of Science and Technology announced a new wave of “open innovation platforms,” relying on Baidu for rollout of autonomous vehicles, Alibaba Cloud (Aliyun) for smart cities, and Tencent for medical imaging and diagnostics.

But while BAT are increasingly embroiled in China’s state agenda, they are also expanding their control to other APAC countries and consumers, recruiting US talent, investing in startups from Canada to Israel, and forming global partnerships in everything from predictive healthcare to conversational AI.

By adopting a platform-based business strategy—expanding horizontally via major acquisitions and equity positions—BAT have gained unparalleled influence in almost every aspect of users’ lives.

In this post, I’ll be looking at some of the biggest BAT highlights, strategies, and state-corporate collaborations catapulting these three AI giants to (global) dominance.

Baidu

Of all the BAT giants, Baidu was the first to pioneer deep learning, scoring a big win in 2014 with the hire of Andrew Ng to head Baidu’s Silicon Valley AI lab.

By 2015, Baidu’s AI algorithms had already surpassed humans in Chinese speech recognition, a full year before Microsoft achieved the same feat in English.

Fast forward to 2017, and China’s dominant search engine now heads up national initiatives in AI R&D, driverless vehicles, and international open-source platforms.

Reaching second-quarter revenues of around 21.1 billion Chinese yuan ($3.1 billion USD), Baidu continues to beat forecasts, with total revenues increasing 31 percent year on year.

And as Baidu continues expanding into adjacent markets, this growth will only accelerate.

With multiple driverless vehicle patents to its name, Baidu announced its international open-source Apollo platform last year to turbocharge autonomous driving solutions.

Hosting over 95 partners around the globe—including Nvidia, Ford, and Daimler—Apollo’s ecosystem makes source code available to everyone. This means companies can build on existing research versus starting from scratch, massively accelerating progress.

And as of June 2018, Baidu is putting driverless cars on the road.

Launching tests on an unused expressway in China’s industrial city of Tianjin, Baidu has already signed agreements with the local government of Xiong’an New Area to build an AI city, decked out with autonomous cars, smart traffic systems, facial recognition, and sensor-loaded cement.

But it doesn’t stop there.

Already heading China’s National Engineering Lab for Deep Learning Technologies, Baidu is also working on brain-inspired neural chips and intelligent robotics under China’s state-run umbrella.

And if those projects weren’t enough to convince you of Baidu’s ambitions, the search engine is getting serious about speech recognition, aiming to win big in the voice assistant market.

With voice patents in both the US and Japan, Baidu most recently launched Aladdin—a 3-in-1 smart speaker, smart lamp, and projector for the Japanese market—showcasing its product at CES 2018 (the Consumer Electronics Show).

Built on Baidu’s conversational AI platform, DuerOS, Aladdin is only the first of many Baidu consumer products that will rival the likes of Amazon’s Alexa and Google Assistant.

Hinted at by a patent published with the US, China, Europe, South Korea, and Japan, Baidu may soon be rolling out a consumer robot equipped with both voice and facial recognition.

And while Baidu takes charge of driverless vehicles and voice recognition, Alibaba’s been anointed to spearhead smart cities.

Alibaba

China’s leading e-commerce behemoth, Alibaba has made an unbelievable dent in the Chinese retail and financial sectors.

Witnessing a 62 percent rise in sales from core commerce in Q1 of this year, Alibaba has built far more than a digital marketplace.

Leading the world in fintech disruption, Alibaba’s Ant Financial Services Group controls the world’s largest money market fund, has made loans to tens of millions, and handled more payments in 2017 than Mastercard.

Home to both Tmall (B2C) and Taobao (C2C)—China’s top online marketplaces—Alibaba has taken its legacy worldwide with foreign-facing AliExpress. But the real treasure trove of real-world data lies in Alipay, Alibaba’s mobile payments platform.

While mobile payments make up less than one percent of overall in-store transaction volume in the US, they are almost indispensable in China.

Pedestrians pay street-side fruit vendors using QR codes. Charitable givers use Alipay or WeChat Wallet when donating to relief funds or directly to affected families. And at one Hangzhou-based KFC, Alipay debuted the concept of paying with your face.

Already geared with facial recognition for user sign-in, Alibaba’s Alipay has more than half a billion users worldwide. But Alibaba is setting its sights far further afield than just online retail and mobile payments. Working with several local governments, including that of Macau and Hangzhou, Alibaba is at the forefront of smart cities.

Alibaba’s AI cloud platform “ET City Brain” uses AI algorithms to predict outcomes across traffic management, healthcare and urban planning, crunching data from cameras, sensors, social media, and government data.

Aiming to revolutionize urban management, Alibaba has partnered with Nvidia for its deep-learning-based video platform for smart city services. More recently, Alibaba also led a financing round for Chinese computer vision startup SenseTime, now the highest valued AI startup in the world.

And just this year, Alibaba backed AI-based vehicle-to-vehicle network developer Nexar, and has even partnered with the Malaysian government to launch the country’s first City Brain initiative. Targeting traffic, City Brain can optimize urban traffic flow, getting emergency vehicles to the scene at record speeds.

But Alibaba’s reach extends far beyond Asia. Already with operations in over 200 countries, Alibaba is now launching a global $15 billion R&D initiative in AI, quantum computing, and emerging new tech-driven markets.

Scoring top local talent, Alibaba’s R&D center, DAMO Academy, is set to launch in Tel Aviv’s thriving tech hub, along with six other cities.

Tencent

But when it comes to Chinese tech giants with absolutely no analog in the West, Tencent takes the cake. Hands-down.

Briefly surpassing Facebook’s market cap in November of last year, Tencent was the first Chinese company to top $500 billion.

With over a billion users, Tencent’s WeChat is like a digital Swiss army knife on steroids.

Combining the functionality of Facebook, iMessage, PayPal, UberEats, Instagram, Expedia, Skype, WebMD, eVite, GroupMe and many others, WeChat is an ecosystem of epic proportions.

Businesses coordinate large-scale events, people order personal in-home masseurs, and tycoons pay hefty sums, all without ever leaving the mobile app.

After the app’s simple functionality took off among Chinese consumers, WeChat hit 100 million registered users within a year and 300 million by its second anniversary, adding functions left and right well before Western counterparts like WhatsApp thought to do the same.

Just last year, around 38,000 medical institutions reported having WeChat accounts, 60 percent of which let users register for appointments online. And when it comes to paying your hospital bill, more than 2,000 hospitals accept WeChat Wallet payments.

To solidify its loyal consumer base, Tencent has also become a leader in gaming, owning Riot Games, developer of the wildly popular League of Legends, played by over 100 million people every month.

Rapidly iterating to meet consumer demands, WeChat has made itself almost indispensable to the daily lives of its users, gaining brand loyalty that American social media platforms could only dream of.

And now, Tencent is making extraordinary new inroads in AI-based healthcare disruption under Chinese government leadership.

Hiring scores of researchers and opening an outpost in Seattle, Tencent is massively ramping up AI capabilities. Aiming at world-class status in genomics and personalized medicine, the company further invests in and partners with global startups to bring AI healthcare tech to China.

Just last April, Tencent partnered with the UK’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese users to message their symptoms and receive immediate medical feedback.

Most notably, Tencent recently participated in a $154 million mega-round for China-based healthcare AI unicorn iCarbonX. Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous American personalized medicine startups.

And in addition to Tencent’s own Miying healthcare AI platform—aimed at assisting healthcare institutions in AI-driven cancer diagnostics—Tencent is quickly expanding into the drug discovery space, participating in two multimillion-dollar, US-based AI drug discovery deals just this year.

Final Thoughts

China’s tech behemoths are disrupting everything from intelligent urban infrastructure to personalized medicine.

But they aren’t just revolutionizing these industries on their home turf. They’re bringing enormous sums of capital and cutting edge technology to startups and markets across the globe. The pie isn’t getting smaller—it’s getting bigger.

And with BAT at the helm, China shows no signs of slowing down.

Join Me

Webinar with Dr. Kai-Fu Lee: Dr. Kai-Fu Lee—one of the world’s most respected experts on AI—will discuss his latest book AI Superpowers: China, Silicon Valley, and the New World Order. Artificial Intelligence is reshaping the world as we know it. With US-Sino competition heating up, who will own the future of technology? Register here for the free webinar on September 4th, 2018 from 11am – 12:30pm PST.

Image Credit: LU JINRONG /

Category: Transhumanism

Why Everything You Know About How Companies Learn Is About to Change

August 16, 2018 - 17:00

Chris Pirie is the general manager of worldwide learning at Microsoft, focused on creating a digital, flexible, and scalable learning agenda that meets the needs of its global workforce of nearly 124,000 employees.

Pirie has had a front seat at the learning and leadership table for years, developing digital learning platforms at Oracle, serving on the board of the Association for Talent Development, and leading several NGOs in the space. He believes that digital disruption and the rapid pace of changing skills requirements are creating a ripe moment for the total transformation of corporate learning.

We discussed the trends driving the need for new learning strategies, and what leaders can do right now to best support the learning and leadership needs of their organizations.

Lisa Kay Solomon: Corporate learning and training have been around a long time. What’s unique about this moment in time?

Chris Pirie: Suddenly leaders care a lot about learning and skills at every level and across industry and the public sector. Building skills for the future used to be the exclusive domain of universities and colleges, with some gentle tweaking provided by corporate training departments to keep employees operating as good compliant citizens (see Starbucks) and prepare the workforce for their next role.

This is changing fast as skills development, and especially skills for the future, ramps up as a major preoccupation of our corporate and political leaders. There are several forces driving the accelerated change in the world, and thus, the urgency for organizations to ready and retool their workforces.

Here are a few of the most disruptive.

The Disruption of Digital Transformation. In the future (indeed now), many believe value will derive from data as much as from goods and services, and the skills to collect, manage, analyze, and derive insights from that data will be highly sought after.

The Rise of the Robots. The fourth industrial revolution is disrupting jobs at all levels, promising to automate white-collar roles that were traditionally not impacted by automation. The shelf life of existing skills is shrinking, while new emergent skills are desperately needed.

The Demands of Modern Learners. What learners want and what they truly need may be at odds. Learners have less time to learn and want instant, more customized learning experiences; their expectations of “just enough, just in time, and just for me” access and a rich selection of media are set by their consumer experiences. But real learning—acquiring skills, understanding new paradigms, and changing behaviors—takes time and costs attention.

LKS: How has digital proliferation changed the nature of what we learn and how we learn?

CP: Thanks to the internet, both content and expertise are easy to access, but ironically hard to find. Content is often free and abundant, but this endless stream of content adds incredible pressure for the learner—there is now no excuse not to know, and yet it’s hard to judge the validity and provenance of the information. Do I take a corporate course or an online MOOC from a reputable school? Do I pay, does my organization pay?

As a result, many of the corporate training departments I meet with have a deep feeling of inadequacy.

Their traditional models of classroom and top-down knowledge transfer seem wholly inadequate for the task ahead. Across a number of surveys, a pattern is emerging: corporate leaders and employees want more impactful learning. But everyone (including the learning leaders themselves) demonstrates little confidence that training groups can respond in a meaningful way.

LKS: What do you see unfolding in the future related to corporate learning?

CP: The learning scientists are coming! Within corporations, we’re going to see a fundamental rethink of the role and responsibility of learning in organizations and the creation of a new type of learning organization. The learning scientists will draw on several disciplines to make time spent learning more effective and efficient, and to build a learning culture that unlocks the natural curiosity and learning prowess we all have, giving the organization a competitive advantage.

Data Science. These hybrid experts will use the “digital exhaust” and apply sophisticated algorithms to search for behavior patterns to get a read on how knowledge and information flow through organizations and networks. Through these data-driven methods, we can see which meetings are productive, where pockets of expertise exist in the organization, who are the teachers, who are the curious, and what are the behaviors they exhibit.

Neuroscience. Secondly, we’re going to see continued progress in neuroscience to inform our understanding of how we learn and how the brain maps new knowledge and moves it from short-term to long-term memory. We’ll start to know what it looks and feels like to pay full attention and which social and physical conditions can accelerate or throttle the learning process. Organizations like the NeuroLeadership Institute are codifying the research into workable models that help learning designers leverage these brain chemistry processes and biases. I believe we will soon see diagnostic tools to help evaluate costly corporate learning programs against such standards, and tools to help learning experience designers design for maximum impact.

Social Science. Thirdly, anthropologists and social learning scientists are exposing the inadequacies of the traditional corporate approach of “we’ll tell you what to think” against the natural state of curiosity and peer-to-peer teaching and learning that is always at work in organizations (but not always in service of the leadership strategy). We’re seeing the emergence of a radical re-think of the role of learning organizations, informed by social anthropologists, cognitive scientists, behavioral economists, and culture experts who are driving dialogue across the industry on the importance of engaging social and informal learning networks.

Computer Science. Cheap and ubiquitous computing power has already fundamentally re-shaped the learning process, particularly in the context of content development, search, social learning networks, and collaboration. As today’s knowledge workers use web-hosted collaboration tools like Office 365 and Slack, as well as professional networking tools like LinkedIn, they create a trail of digital exhaust that is rich in information about how expertise and influence flow across an organization.

For example, agents/bots can recommend precise micro-learning content for a technical sales consultant based on the opportunities she is tracking in her CRM system and about to call on today—these engines learn from the research and learning habits of other sellers who have closed similar deals. How humans learn will likely not change, but the process will get a big assist as machine learning and AI specialists build technology that will help us gain new knowledge and skills more efficiently.

LKS: What can leaders do now to promote cultures and reinforce systems of learning within their organizations?

CP: In 2018, a Conference Board survey of the global C-suite revealed that talent and skills are the number one hot-button issue for talent and organization leaders. Similarly, the Deloitte Talent Report for 2017 suggested that the phrase “learning organization” no longer referred to the training department, but was now a desirable posture for the entire enterprise. The role of the leader, then, is pretty clear: job number one is to care deeply about learning!

At Microsoft our inflection point was the announcement of our new CEO, a new corporate mission, and some deep soul searching on the need to shift from a “know it all” to a “learn it all” organization.

Here are three key approaches I’ve observed that Microsoft and other organizations are taking to build a deep learning culture to create sustained competitive advantage.

  • Embrace a growth mindset culture by learning from failure, pursuing deep customer empathy, and celebrating curiosity
  • Harness the social learning already happening across and between organizations and set your teams free to learn from customers and each other
  • Remove friction. Use technology (whatever technology you have to hand) to build high-scale learning programs and knowledge networks that are woven into the fabric of work

Lastly, at Microsoft we are applying these approaches not just to our employees, but across our entire ecosystem, bringing modern learning culture and techniques to our partner and customer programs and building deeper customer trust by infusing learning principles into marketing and sales activities.

Teaching our customers and learning from them in equal measure is essential for both our own transformation and our customers’ digital transformation journey.

Image Credit: Dmitry Guzhanin /


China Is Building a Fleet of Autonomous AI-Powered Submarines. Here Are the Details

August 15, 2018 - 17:00

A fleet of autonomous, AI-powered submarines is headed into hotly contested Asian waterways. The vehicles will belong to the Chinese armed forces, and their mission capabilities are likely to raise concerned eyebrows in surrounding countries.

According to the South China Morning Post (SCMP), the submarines will be able to carry out “[…] a wide range of missions, from reconnaissance to mine placement to even suicide attacks against enemy vessels.”

If all goes to plan, the first submarines will launch in 2020.

New Non-Nuclear Threat

While details of the project remain sparse, one unnamed scientist told the SCMP that the submarines “will not be nuclear-armed.”

The onboard AI systems will be tasked with making decisions on course and depth to avoid detection, as well as identifying any craft they come across. One area of concern is whether the submarines’ AI systems are being designed not to seek input during the course of a mission. In other words, whether they will be left to make decisions such as whom to attack.

While there is some amusement to be had in trying to find a name for the submarines’ capabilities (self-swimming?), China’s neighbors will likely be anything but amused by the news. The subs will likely patrol areas in the South China Sea and the Pacific Ocean. Both are contested waters where China and countries like Japan and Vietnam disagree as to who holds the rights to various resource-rich areas and islands. Recently, the Chinese military created artificial islands in the area to use as military bases.

The country’s robotic submarines could be seen as a further escalation of the situation.

Regional unease may be intensified by the fact that AI vessels would be able to learn from similar craft. In other words, the submarines would be able to engage in continuous strategic adjustment and development, should they come to be deployed in a conflict.

The Robot Sea Battle

This is not the only military project involving autonomous vessels at sea. Lin Yang, marine technology equipment director at the Shenyang Institute of Automation, Chinese Academy of Sciences, told the SCMP that the Chinese development project had been launched in part because of similar measures undertaken by the US.

Earlier this year, DARPA handed the ACTUV (short for Anti-Submarine Warfare Continuous Trail Unmanned Vessel) experimental craft over to the US Navy. Once fully developed, the “Sea Hunter,” as it’s (thankfully) also known, will be able to carry out autonomous missions for up to three months at a time.

The video below shows a bit more about the project (Warning: soundtrack may confuse and make you think you’re watching a Transformers movie).

The US is also working with major defense contractors on two prototype autonomous submarine systems, coincidentally set to be ready by 2020: Lockheed Martin’s Orca system and Boeing’s Echo Voyager.

The Murky Waters of AI Warfare

These developments add further fuel to the fiery debate surrounding the use of AI-driven weapons systems. In the case of the submarines, questions include what would happen if they were to go rogue or become compromised, leading them to pursue the wrong goals.

As Jim Mattis put it in an interview about the use of AI and drones in warfare, “If we ever get to the point where it is completely on automatic pilot, we are all spectators. That is no longer serving a political purpose. And conflict is a social problem that needs social solutions, people—human solutions.”

Many echo such sentiments, and fear humans may be getting subtracted out of this particular equation. It is a worry that resounds within the AI industry, with dozens of CEOs—including Elon Musk—signing an open letter to the UN urging a ban on AI-powered weapons.

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” the letter warns.

Image Credit: FOTOGRIN /


Amazing New Brain Map of Every Synapse Points to the Roots of Thinking

August 14, 2018 - 17:00

Imagine a map of every single star in an entire galaxy. A map so detailed that it lays out what each star looks like, what it’s made of, and how it’s connected to the others through the grand physical laws of the cosmos.

While we don’t yet have such an astronomical map of the heavens, thanks to a momentous study published last week in Neuron, there is now one for the brain.

If every neuron were a galaxy, then synapses—small structures dotted along the serpentine extensions of neurons—would be its stars. In a technical tour-de-force, a team from the University of Edinburgh in the UK constructed the first detailed map of every single synapse in the mouse brain.

Using genetically modified mice, the team literally made each synapse light up under fluorescent light throughout the brain like the starry night. And similar to the way stars differ, the team found that synapses vastly varied, but in striking patterns that may support memory and thinking.

“There are more synapses in a human brain than there are stars in the galaxy. The brain is the most complex object we know of and understanding its connections at this level is a major step forward in unravelling its mysteries,” said lead author Dr. Seth Grant at the Centre for Clinical Brain Sciences.

The detailed maps revealed a fundamental law of brain activity. With the help of machine learning, the team categorized roughly one billion synapses across the brain into 37 sub-types. Here’s the kicker: when sets of neurons receive electrical information, such as when trying to decide between different solutions to a problem, unique sub-types of synapses spread across different neurons spark with activity in unison.

In other words: synapses come in types. And each type may control a thought, a decision, or a memory.

The neuroscience Twittersphere blew up.

“Whoa,” was the simple comment from Dr. Ben Saunders at the University of Minnesota.

It’s an “amazing paper cataloguing the diversity and distribution of synapse sub-types across the entire mouse brain,” wrote neurogeneticist Dr. Kevin Mitchell. It “highlights [the] fact that synapses are the key computational elements in the nervous system.”

The Connectome Connection

The team’s interest in constructing the “synaptome”—the first entire catalog of synapses in the mouse brain—stemmed from a much larger project: the connectome.

In a nutshell, the connectome is all the neuronal connections within you. Evangelized by Dr. Sebastian Seung in a TED Talk, the connectome is the biological basis of who you are—your memories, personality, and how you reason and think. Capture the connectome, and one day scientists may be able to reconstruct you—something known as whole brain emulation.

Yet the connectome only describes how neurons functionally talk to each other. Where in the brain is it physically encoded?

Enter synapses. Neuroscientists have long known that synapses transmit information between neurons using chemicals and electricity. There have also been hints that synapses are widely diverse in terms of what proteins they contain, but traditionally this diversity has mostly been ignored. Until recently, most scientists believed that actual computations occur at the neuronal body—the bulbous part of a neuron from which branches reach out.

So far there’s never been a way to look at the morphology and function of synapses across the entire brain, the authors explained. Rather, we’ve been focused on mapping these crucial connection points in small areas.

“Synaptome mapping could be used to ask if the spatial distribution of synapses [that differ] is related to connectome architecture,” the team reasoned.

And if so, future brain emulators may finally have something solid to grasp onto.


To construct the mouse synaptome, the authors developed a pipeline they dubbed SYNMAP. They started with genetically modified mice whose synapses glow in different colors. Each synapse is jam-packed with different proteins, with—stay with me—PSD-95 and SAP102 being two of the most prominent members. The authors added glowing proteins to these, which essentially acted as torches to light up each synapse in the brain.

The team first bioengineered a mouse with synapses that glow under fluorescent light.

Next, they painstakingly chopped up the brain into slices, used a microscope to capture images of synapses in different brain regions, and pieced the photos back together.

An image of synapses looks like a densely-packed star map to an untrained eye. Categorizing each synapse is beyond the ability (and time commitment) of any human researcher, so the team took advantage of new machine learning classification techniques, and developed an algorithm that could parse these data—more than 10 terabytes—automatically, without human supervision.
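The classification step can be pictured with a toy sketch. This is not the study's actual pipeline (which ran unsupervised over the full 10-terabyte dataset and found 37 sub-types); it is a minimal, assumed stand-in that clusters synthetic per-synapse features (size plus two marker-protein intensities) with off-the-shelf k-means:

```python
# Illustrative sketch only: cluster per-synapse features into sub-types.
# The feature set and cluster count are assumptions for demonstration,
# not the paper's actual method.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Fake measurements for 3,000 synapses: [size, PSD-95 intensity, SAP102 intensity]
features = np.vstack([
    rng.normal(loc=[1.0, 5.0, 1.0], scale=0.3, size=(1000, 3)),  # PSD-95-rich
    rng.normal(loc=[0.5, 1.0, 4.0], scale=0.3, size=(1000, 3)),  # SAP102-rich
    rng.normal(loc=[1.5, 3.0, 3.0], scale=0.3, size=(1000, 3)),  # mixed
])

# Group the synapses into three sub-types without human supervision.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))  # size of each discovered sub-type
```

In the real analysis, the features, algorithm, and number of sub-types were derived from the data itself; the three synthetic clusters here only illustrate the general idea of grouping synapses by measured properties.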

A Physical Connectome

Right off the bat, the team was struck by the “exquisite patterns” the glowing synapses formed. One tagged protein—PSD-95—seemed to hang out on the more exterior portions of the brain where higher cognitive functions occur. Although there is overlap, the other glowing protein preferred more interior regions of the brain.

Microscope images showing the two glowing synapse proteins, PSD-95 and SAP102, across brain sections.

When they looked closely, they found that the two glowing proteins represented different sets of synapses, the author explained. Each region of the brain has a characteristic “synaptome signature.” Like fingerprints that differ in shape and size, various brain regions also seemed to contain synapses that differ in their protein composition, size, and number.

Using a machine learning algorithm developed in-house, the team categorized the synapses into 37 subtypes. Remarkably, regions of the brain related to higher reasoning and thinking abilities also contained the most diverse synapse population, whereas “reptile brain regions” such as the brain stem were more uniform in synapse sub-type.

A graph of a brain cross-section showing some of the most commonly found synapse subtypes in each area. Each color represents a different synapse subtype. “Box 4” highlights the hippocampus.

To see whether synapse diversity helps with information processing, the team used computer simulations to see how synapses would respond to common electrical patterns within the hippocampus—the seahorse-shaped region crucial for learning and memory. The hippocampus was one of the regions that showed remarkable diversity in synapse subtypes, with each spread out in striking patterns throughout the brain structure.

Remarkably, each type of electrical information processing translated to a unique synaptome map—change the input, change the synaptome.

It suggests that the brain can process multiple kinds of electrical information in the same brain region, because different synaptomes are recruited.

The team found similar results when they used electrical patterns recorded from mice trying to choose between three options for a reward. Different synaptomes lit up when the choice was correct versus wrong. Like a map into internal thoughts, synaptomes drew a vivid picture of what the mouse was thinking when it made its choice.

Each behavior activates a particular synaptome. Each synaptome is like a unique fingerprint of a thought process.

Synaptome Reprogramming

Like computer code, a synaptome seems to underlie a computational output—a decision or thought. So what if the code is screwed up?

Psychiatric diseases often have genetic causes that impact proteins in the synapse. Using mice that show symptoms similar to schizophrenia or autism, the team mapped their synaptome—and found dramatic changes in how the brain’s various synapse sub-types are structured and connected.

For example, in response to certain normal brain electrical patterns, some synaptome maps only weakly emerged, whereas others became abnormally strong in the mutant mice.

Mutations can change the synaptome and potentially lead to psychiatric disorders.

It seems like certain psychiatric diseases “reprogram” the synaptome, the authors concluded. Stronger or new synaptome maps could, in fact, be why patients with schizophrenia experience delusions and hallucinations.

So are you your synaptome?

Perhaps. The essence of you—memories, thought patterns—seems to be etched into how diverse synapses activate in response to input. Like a fingerprint for memories and decisions, synaptomes can then be “read” to decipher that thought.

But as the authors acknowledge, the study’s only the beginning. Along with the paper, the team launched a Synaptome Explorer tool to help neuroscientists further parse the intricate connections between synapses and you.

“This map opens a wealth of new avenues of research that should transform our understanding of behavior and brain disease,” said Grant.

Image Credit: Derivatives of Fei Zhu et al. / University of Edinburgh / CC BY 4.0


AI Can Help Create a Better World—If We Build it Right

August 13, 2018 - 17:00

Society is rife with fears about the future of AI. For some, like Richard Branson and Ray Dalio, it’s AI’s exacerbation of the wealth gap and the looming social crisis it could bring about. For others, it’s privacy implications or sci-fi visions of robot overlords.

My worry for AI, however, is that we are vilifying a technology that may in fact be our single greatest resource for creating a just world—if we build it right. While there do exist serious ethical conundrums and uncertainties regarding the potential for artificial superintelligence, at the moment, what we are facing on Earth is mostly a complex assemblage of human problems.

It is true that automation is contributing to the increasing inequality within most countries, with the income gap widening between each country’s educated middle and upper classes and their less-educated lower classes. However, what’s systematically overlooked is that inequality between different countries is decreasing, in large part due to a combination of global trade and advances in computing, communication, shipping, and manufacturing technologies.

China is gradually approaching American and European levels of income, and the more successful African nations are gradually approaching China’s level of income. The economic rise of China has arguably decreased global wealth inequality dramatically, both within China and via China’s proactive approach to the rest of the developing world. This rise has been in part thanks to the use of AI-driven automation to improve efficiency via factory robotics, supply chain optimization, and other methods.

Looking to the future, AI will provide a critical opportunity for developing countries to continue growth in the coming decades. For China, Accenture predicts AI has the potential to add as much as 1.6 percentage points to the country’s economic growth rate by 2035. Such projections may appear highly conservative in hindsight, however, if futurists like Ray Kurzweil are correct about the advent of human-level artificial general intelligence before 2030. If exponential advances in AI lead to radical breakthroughs, in the end, this is likely to benefit the entire world—but the benefits may initially accrue most to those nations where AI has been developed most intensively.

Further, AI is leveling the playing field by advancing basic infrastructure in the developing world, allowing these countries to compete and cooperate globally. AI has helped farmers in Uganda reduce crop disease, predicted dengue outbreaks in Malaysia, and increased power supply and internet access via smart networking, among a host of other fundamental improvements.

The opportunity for AI to narrow the wealth gap between countries is immense, and we’ve already seen this trend begin to take shape. But there are powerful forces pushing in other directions. Not necessarily out of evil intentions—though those do exist—but mostly just because of the way society has self-organized, and the tendency of most people to fulfill their assigned social roles without giving it much thought.

The main obstacle to the realization of AI’s potential for global good is currently the socioeconomic organization of the AI industry, which is centered in a few big tech companies—the Googles, Facebooks, Baidus, and Tencents of the world—and large military-industrial complexes. This means AI is primarily developed toward applications that will generate revenue for these companies, or help powerful nations achieve better defense. Other applications largely emerge around the fringes as side effects of these goals.

This also means that most of the brilliant AI developers in the world have little opportunity to participate centrally in the rollout and application of AI technology, because they have limited opportunity to work for these tech companies or militaries. As a result, their cultures and voices are not included in the network of emerging AI tech. This is problematic because they are the ones who know best what their countries need.

The Africa-oriented software and hardware applications being worked on in iCog Labs, the Ethiopian AI and robotics company Getnet Aseffa and I co-founded, were not conceived by me or other Westerners imposing our own ideas of what the developing world needs. Rather, they were conceived by the software development staff and undergraduate interns at iCog—by Ethiopians themselves.

Another significant problem with the lack of global diversity in the development of AI is that the technology is being created with the biases of developers from only a handful of countries. We are teaching AI to see the world from only one perspective. Google’s algorithm, for example, still fails to effectively recognize African Americans, whom it once labeled as “gorillas.” Imagine the biases algorithms like Google’s will display when they are tasked with recognizing people and objects from non-Western countries with entirely different dress, cultural norms, and languages.

Vision-processing biases are only the most obvious reflection of a subtler problem. How do we root culture-specific cognitive biases out of a globally-distributed, emergent general intelligence? The safest approach, and the one most conducive to a high level of intelligence, creativity, and open-mindedness on the part of the AIs we are creating, is for the AI’s process of learning and formation to be as globally inclusive and participatory as possible.

What is needed is the creation of mechanisms that educate more people in the developing world with deep technical skills, and then enable them to monetize these skills to support themselves and their families. This will create a growing class of people who understand both the developing world and advanced technology.

This can only happen if we open-source and democratize the way we create AI. One of our key goals with SingularityNET is to make AI development more of a broad, participatory pursuit, to ensure that AI algorithms and services are created and contributed to by a wide variety of people with different backgrounds, knowledge, and interests.

The speed with which the bounty of AI and other advanced technologies will be shared with the developing world, and with the underclasses of the developed world, is going to depend largely on the way these technologies get refined and rolled out, and on the entities that do so. Decentralization, democratization, and open-sourcing are all forces acting primarily on the side of sharing and general benefit. Let’s push them forward as rapidly, ambitiously, and wisely as we can.

Image Credit: salajean /


A Student Took Down One of Quantum Computing’s Top Applications—Now What?

August 12, 2018 - 17:00

The possibility that quantum computing could turbocharge machine learning is one of the most tantalizing applications for the emerging technology. But an 18-year-old student just put that vision in doubt after finding a classical solution to one of its most promising real-world applications.

One of the poster boys for the incipient field of quantum machine learning (QML) was a solution to the “recommendation problem”—essentially, how Netflix determines what movie you might like—published in 2016 that was exponentially faster than any classical algorithm. But as reported in Quanta, in the process of trying to verify its unassailability, Ewin Tang came up with a classical version just as fast.

Tang, a prodigy who enrolled at the University of Texas at age 14, was set the task of proving there was no fast classical alternative to the quantum solution by quantum computing expert Scott Aaronson as an independent research project. Ultimately, he ended up taking inspiration from the quantum algorithm to design a classical one exponentially faster than any of its predecessors.
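To make the problem concrete, here is a minimal classical sketch of the recommendation setting: approximate a partially observed user-item preference matrix with a low-rank one, then suggest the highest-scoring unseen item. This is plain truncated SVD on synthetic data, an illustration of the setting only; it is not Tang's algorithm, whose contribution is achieving comparable speed through sampling without ever materializing the full matrix:

```python
# Illustrative low-rank recommendation sketch (synthetic data, assumed sizes).
import numpy as np

rng = np.random.default_rng(1)
users, items, rank = 50, 40, 3

# Ground-truth low-rank preferences; only 30% of entries are observed.
true = rng.random((users, rank)) @ rng.random((rank, items))
mask = rng.random((users, items)) < 0.3
observed = np.where(mask, true, 0.0)

# Rank-k truncated SVD of the observed matrix approximates the full one.
U, s, Vt = np.linalg.svd(observed, full_matrices=False)
approx = U[:, :rank] * s[:rank] @ Vt[:rank]

# Recommend, for user 0, the unseen item with the highest predicted score.
scores = np.where(mask[0], -np.inf, approx[0])
print(int(np.argmax(scores)))
```

The quantum algorithm and Tang's classical counterpart both aim to sample a good recommendation from such a low-rank structure far faster than computing the decomposition outright.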

His research is now undergoing peer review before publication, but has already been presented informally at a meeting of quantum computing experts who broadly agreed that the algorithm was correct, according to Quanta.

In some sense, it’s good news, because Tang has found a way to translate speedups thought to be possible only with an (as yet unbuilt) quantum computer to more conventional machines. But that’s probably little comfort if you’re one of the companies investing millions trying to build quantum hardware.

Machine learning has long been considered one of the potential early applications for quantum computers, because there are fundamental synergies between the two fields. Both are at their best when dealing with huge amounts of data; both are resistant to uncertainty; and both can tease out subtle patterns traditional computing approaches may miss. The hope is that large-scale quantum machines will ultimately be able to perform these calculations exponentially faster than machine learning running on conventional hardware.

The field of QML was really born in 2008 with the development of an algorithm now referred to as HHL, which was capable of solving the kind of matrix calculations at the heart of many machine learning problems.

Since then, a QML startup incubator has popped up at the University of Toronto; the companies building large-scale quantum computers have publicly pinned their hopes on the field; and there have been a number of early-stage demos of the approach.

Quantum computing startup Rigetti showed at the end of last year that it could run a clustering algorithm on one of its prototype quantum chips, while IBM demonstrated a simple classification task on a quantum computer in March.

Earlier, in 2017, researchers used a quantum computer built by D-Wave to crunch data from the Large Hadron Collider in a search for photon signatures that could indicate the Higgs boson. And just last month, one of the creators of the HHL algorithm developed a quantum equivalent of generative adversarial networks (GANs), which pit two neural networks against each other, one trying to trick the other with ever more convincing fakes of things like photos or audio.

But while these demonstrations have shown it is entirely possible to do machine learning on quantum computers, none of them have proved that it would be any faster, at least when dealing with classical data.

And that points to the main limitation of QML. Quantum computers operate on quantum states, not the 0s and 1s that make up the datasets of real-world machine learning problems. Converting that data into a form these machines can run on, and then translating the answers back into something human-readable, is a major outstanding challenge.
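The data-loading problem can be made concrete with a small sketch of "amplitude encoding," one proposed way to store classical data in a quantum state. The numbers here are arbitrary; the point is that a vector of length 2^k fits into the amplitudes of just k qubits, but only after normalization, and measurement only ever reveals probabilities (amplitude squared), which is why reading answers back out is hard.

```python
import math

# Sketch of "amplitude encoding": a classical vector of length 2**k can in
# principle be stored in the amplitudes of a k-qubit state. Illustrative
# only -- efficiently preparing such states is itself an open problem.

data = [3.0, 1.0, 2.0, 4.0, 0.0, 1.0, 1.0, 2.0]  # 8 values -> 3 qubits

# Quantum states must be normalized, so the raw data cannot be used as-is.
norm = math.sqrt(sum(x * x for x in data))
amplitudes = [x / norm for x in data]

num_qubits = int(math.log2(len(data)))
print(f"{len(data)} classical values fit in {num_qubits} qubits")

# A measurement returns outcome i with probability amplitude[i]**2; the
# amplitudes themselves are never read directly, so recovering the data
# requires many repeated measurements.
probs = [a * a for a in amplitudes]
print(f"measurement probabilities sum to {sum(probs):.6f}")
```

The exponential compression (n values in log2(n) qubits) is exactly what makes QML speedups plausible on paper, and the indirect readout is exactly the bottleneck the paragraph above describes.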

One area where that limitation doesn’t apply is carrying out analysis on quantum data, and so this may end up being one of the early applications of QML. Aside from that, there may be more creative—and as yet undiscovered—approaches to machine learning that exploit the underlying physics of these machines rather than simply trying to copy approaches developed for conventional computers.

And at the end of the day, even if a quantum speedup remains elusive for the time being, Tang’s new algorithm shows that research into quantum algorithms can inspire breakthroughs in classical approaches that can push machine learning forward.

Image Credit: GiroScience

Category: Transhumanism

This Week’s Awesome Stories From Around the Web (Through August 11)

August 11, 2018 - 17:00

I Tried Magic Leap and Saw a Flawed Glimpse of Mixed Reality’s Amazing Potential
Adi Robertson | The Verge
“Whether or not it’s cooler than most people think, the Magic Leap One is cooler than the vast majority of mixed reality in 2018. But it still seems a long way from realizing the promise of the medium—and it hasn’t shown that it can bridge that gap.”


The NYSE’s Owner Wants to Bring Bitcoin to Your 401(k). Are Crypto Credit Cards Next?
Shawn Tully | Fortune
“Bitcoin could be on the verge of breaking through as a mainstream currency. At least that’s the goal of a startup that is soon to be launched by one of the most powerful players on Wall Street, with backing from some of America’s leading companies.”


Inside the Very Big, Very Controversial Business of Dog Cloning
David Ewing Duncan | Vanity Fair
“Barbra Streisand is not alone. At a South Korean laboratory, a once-disgraced doctor is replicating hundreds of deceased pets for the rich and famous. It’s made for more than a few questions of bioethics.”


What’s Driving Elon Musk?
Amit Katwala | Wired
“Over two months of reporting, friends, family members, and former colleagues told Wired about the personality traits that have contributed to the billionaire’s successes and his setbacks, and what they might mean for the future of SpaceX, Tesla, and life on Mars.”


This Robot Uses AI to Find Waldo, Thereby Ruining Where’s Waldo
Dami Lee | The Verge
“If you’re totally stumped on a page of Where’s Waldo and ready to file a missing persons report, you’re in luck. Now there’s a robot called There’s Waldo that’ll find him for you, complete with a silicone hand that points him out.”

Image Credit: vrx


Could Machine Learning Mean the End of Understanding in Science?

August 10, 2018 - 17:00

Much to the chagrin of summer party planners, weather is a notoriously chaotic system. Small changes in precipitation, temperature, humidity, wind speed or direction, etc. can balloon into an entirely new set of conditions within a few days. That’s why weather forecasts become unreliable more than about seven days into the future—and why picnics need backup plans.
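That sensitivity to initial conditions is easy to demonstrate with the logistic map, one of the simplest chaotic systems. The snippet below is illustrative, not a weather model: two trajectories that start a millionth apart become completely different within a few dozen steps.

```python
# Chaos in miniature: the logistic map at r = 4 is fully chaotic, and a
# one-in-a-million difference in starting conditions balloons rapidly --
# the same effect that caps weather forecasts at about a week.

def logistic(x, r=4.0):
    """One step of the logistic map."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # initial conditions differing by one part in a million
diverged_at = None
for step in range(1, 101):
    a, b = logistic(a), logistic(b)
    if diverged_at is None and abs(a - b) > 0.1:
        diverged_at = step

print(f"trajectories differ by more than 0.1 after {diverged_at} steps")
```

The separation grows roughly exponentially (the map's Lyapunov exponent is ln 2), which is why doubling forecast range requires squaring the precision of the initial measurement.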

But what if we could understand a chaotic system well enough to predict how it would behave far into the future?

In January this year, scientists did just that. They used machine learning to accurately predict the outcome of a chaotic system over a much longer duration than had been thought possible. And the machine did that just by observing the system’s dynamics, without any knowledge of the underlying equations.

Awe, Fear, and Excitement

We’ve recently become accustomed to artificial intelligence’s (AI) dazzling displays of ability.

Last year, a program called AlphaZero taught itself to play chess from scratch in about a day, and then went on to beat the world’s best chess-playing programs. It also taught itself the game of Go from scratch and bettered the previous silicon champion, the algorithm AlphaGo Zero, which had itself mastered the game through trial and error after being fed the rules.

The behaviour of the Earth’s atmosphere is a classic example of chaos theory.
Image Credit: Eugene R Thieszen

Many of these algorithms begin with a blank slate of blissful ignorance, and rapidly build up their “knowledge” by observing a process or playing against themselves, improving at every step, thousands of steps each second. Their abilities have variously inspired feelings of awe, fear, and excitement, and we often hear these days about what havoc they may wreak upon humanity.

My concern here is simpler: I want to understand what AI means for the future of “understanding” in science.

If You Predict It Perfectly, Do You Understand It?

Most scientists would probably agree that prediction and understanding are not the same thing. The reason lies in the origin myth of physics—and arguably, that of modern science as a whole.

For more than a millennium, the story goes, people used methods handed down by the Greco-Roman mathematician Ptolemy to predict how the planets moved across the sky.

Ptolemy didn’t know anything about the theory of gravity or even that the sun was at the centre of the solar system. His methods involved arcane computations using circles within circles within circles. While they predicted planetary motion rather well, there was no understanding of why these methods worked, and why planets ought to follow such complicated rules.

Then came Copernicus, Galileo, Kepler and Newton.

Newton discovered the fundamental differential equations that govern the motion of the planets; the same equations describe every body in the solar system.

This was clearly good, because now we understood why planets move.

Solving differential equations turned out to be a more efficient way to predict planetary motion than Ptolemy’s algorithm. Perhaps more importantly, though, our trust in this method allowed us to discover new, unseen planets based on a unifying principle—the Law of Universal Gravitation—that works on rockets and falling apples and moons and galaxies.
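Newton's template can be sketched in a few lines of code. The toy integrator below (in units where GM = 1, so a circular orbit of radius 1 has period 2π) is not production orbital mechanics, but it shows the idea: one differential equation, a = -GM r/|r|³, predicts the entire orbit that Ptolemy could only approximate with nested circles.

```python
import math

# Numerically integrating Newton's inverse-square law for one orbit.
# Units chosen so GM = 1; a circular orbit of radius 1 then has period 2*pi.

GM = 1.0
x, y = 1.0, 0.0      # initial position
vx, vy = 0.0, 1.0    # initial velocity for a circular orbit
dt = 0.001

def accel(x, y):
    """Acceleration from the Law of Universal Gravitation."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

# Leapfrog (velocity Verlet) integration over one full period.
steps = int(2 * math.pi / dt)
ax, ay = accel(x, y)
for _ in range(steps):
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay
    x += dt * vx
    y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax
    vy += 0.5 * dt * ay

# After one period the planet should be back near its starting point.
print(f"after one period: ({x:.4f}, {y:.4f})")
```

The same handful of lines works unchanged for any planet, moon, or falling apple; only the initial conditions differ. That universality is what "understanding" bought us over Ptolemy's per-planet recipes.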

The Milky Way Galaxy, which contains our solar system.
Image Credit: sripfoto

This basic template—finding a set of equations that describe a unifying principle—has been used successfully in physics again and again. This is how we figured out the Standard Model, the culmination of half a century of particle physics, which accurately describes the underlying structure of every atom, nucleus, or particle. It is how we are trying to understand high-temperature superconductivity, dark matter, and quantum computers. (The unreasonable effectiveness of this method has inspired questions about why the universe seems to be so delightfully amenable to a mathematical description.)

In all of science, arguably, the notion of understanding something always refers back to this template: If you can boil a complicated phenomenon down to a simple set of principles, then you have understood it.

Stubborn Exceptions

However, there are annoying exceptions that spoil this beautiful narrative. Turbulence—one of the reasons why weather prediction is difficult—is a notable example from physics. The vast majority of problems from biology, with their intricate structures within structures, also stubbornly refuse to give up simple unifying principles.

While there is no doubt that atoms and chemistry, and therefore simple principles, underlie these systems, describing them using universally valid equations appears to be a rather inefficient way to generate useful predictions.

In the meantime, it is becoming evident that these problems will easily yield to machine-learning methods.

AI might help identify new drugs to treat antibiotic-resistant bacteria like Klebsiella, which causes about 10 percent of all hospital-acquired infections in the United States.
Image Credit: NIAID/NIH

Just as the ancient Greeks sought answers from the mystical Oracle of Delphi, we may soon have to seek answers to many of science’s most difficult questions by appealing to AI oracles.

Such AI oracles are already guiding self-driving cars and stock market investments, and will soon predict which drugs will be effective against a bacterium—and what the weather will look like two weeks ahead.

They will make these predictions much better than we ever could have, and they will do it without recourse to our mathematical models and equations.

It is not inconceivable that, armed with data from billions of collisions at the Large Hadron Collider, they might do a better job at predicting the outcome of a particle physics experiment than even physicists’ beloved Standard Model!

As with the inscrutable utterances of the priestesses of Delphi, our AI oracles are also unlikely to be able to explain why they predict what they do. Their outputs will be based on many microseconds of what might be called “experience.” They resemble that caricature of an uneducated farmer who can perfectly predict which way the weather will turn, based on experience and a gut feeling.

Science Without Understanding?

The implications of machine intelligence, for the process of doing science and for the philosophy of science, could be immense.

For example, in the face of increasingly flawless predictions, albeit obtained by methods that no human can understand, can we continue to deny that machines have better knowledge?

If prediction is in fact the primary goal of science, how should we modify the scientific method, the algorithm that for centuries has allowed us to identify errors and correct them?

If we give up on understanding, is there a point to pursuing scientific knowledge as we know it?

I don’t have the answers. But unless we can articulate why science is about more than the ability to make good predictions, scientists may soon find that a trained AI could do their job.

Amar Vutha, Assistant Professor of Physics, University of Toronto

This article was originally published on The Conversation. Read the original article.

Image Credit: Sergey Tarasov


The Astounding Growth of Chinese VC—and the Tech It’s Flowing Into

August 9, 2018 - 17:00

Over the course of the next month, I will be releasing a series of China-centered articles, leading up to my webinar with Dr. Kai-Fu Lee, one of the most plugged-in AI investors on the planet and one of the technology’s earliest pioneers.

In the next four weeks, we’ll be covering everything from China’s surging tech investments and the drivers behind China’s growing dominance in AI, to an early look at Kai-Fu Lee’s soon-to-be-released AI Superpowers: China, Silicon Valley and the New World Order.

Today, let’s dive into Chinese venture capital abroad.

Of the $154 billion worth of VC invested in 2017, 40 percent came from Asian (primarily Chinese) VCs. America’s share? Only 4 percentage points higher at 44 percent.
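The dollar figures implied by those shares are worth spelling out. The snippet below is just arithmetic on the numbers quoted above, not additional data:

```python
# Dollar amounts implied by the 2017 shares of the $154B global VC pool
# quoted above: 40 percent Asian (primarily Chinese), 44 percent American.

total_vc_2017 = 154e9
asian_share, us_share = 0.40, 0.44

asian_vc = total_vc_2017 * asian_share
us_vc = total_vc_2017 * us_share

print(f"Asian VCs: ${asian_vc / 1e9:.1f}B")
print(f"US VCs:    ${us_vc / 1e9:.1f}B")
print(f"gap:       ${(us_vc - asian_vc) / 1e9:.1f}B")
```

In other words, the four-percentage-point gap amounts to only about $6 billion on a $154 billion pool, which is what makes the near-parity striking.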

In a great push to access intellectual property and drive growth in key tech sectors, China’s VC scene is booming.

Despite a looming trade war and Chinese government restrictions on capital outflow, China-based VC funds and corporate investments continue to pump vast new sums into everything from global biotech startups to AI-equipped robotics.

China is now the second-largest VC market in the world; its VC fundraising nearly doubled between 2015 and 2017, driven in large part by the maturation of its internet sector and mounting government efforts.

This astounding growth is part of China’s explosion in private equity. But while many of these funds used to be poured into thousands of Chinese copycat companies and local tech startups, we’re seeing a dramatic outward shift in China’s biggest investments.

Let’s take a look at the increasingly international nature of startups backed by Baidu, Alibaba, and Tencent (BAT), for instance. As BAT takes on global leadership in AI, autonomous vehicles, and personalized medicine, the trio’s corporate VC targets are now split roughly evenly between China and the rest of the world.

And as I’ll discuss in a future post, China’s government aims to be the global leader in AI by 2030, a task which requires mass absorption of foreign expertise and foreign data.

Leveraging tremendous government-backed guiding funds and state-corporate collaborations, Beijing can now help bring the world’s top tech talent and intellectual property back to the mainland.

With a heavy focus on AI, Chinese VC firms are targeting three major arenas:

  1. Robotics
  2. Driverless vehicles
  3. Biotech

Back on the mainland, China’s ultra-high-powered startup ecosystem produces the world’s greatest supply of high-tech hardware and newly deployable algorithms. Sharing that wealth, Chinese VC funds abroad play a critical role in picking out the most promising collaborators to learn from at home and to work with abroad.


Robotics

Shenzhen’s Maker Movement puts China in the unequivocal lead when it comes to hardware manufacturing. With some of the most industrious (not to mention numerous) electronics engineers in the world, Shenzhen provides a huge leg up to Chinese robotics startups that need to iterate fast.

It’s this unparalleled turnaround rate that in part makes Chinese VC support so attractive to global hardware-reliant startups. And it’s why savvy Chinese VC funds and investors will often give small US startups of interest a critical gateway to Shenzhen’s abundant resources.

As we start to witness the rise of next-gen technologies like intelligent robots and autonomous drones, this access will be critical, and Chinese VCs will score top global market access in this emerging new sector.

Already in 2016, China accounted for 35 percent of the world’s robot-related patent filings, and Chinese investment has amounted to at least $10 billion per year.

Autonomous Vehicles

As I’ll discuss in a future post on Baidu, Alibaba, and Tencent, these three tech behemoths are forming a “national team,” with Baidu heading up China’s initiative in driverless cars.

Aiming to decimate traffic congestion and build new AI cities, China has a vested interest in the rapid deployment of AI behind autonomous transit.

Added to the mix is the fact that autonomous vehicles are one of China’s weakest points. Given its reliance on elite AI expertise (quality > quantity), a luxury still concentrated at Western juggernauts like Google, autonomous vehicle rollout in China will need as much foreign assistance as it can get.

And as seen here, BAT is investing significantly in auto tech, each company announcing its own autonomous driving initiative.

Baidu, for instance, officially announced its Apollo ecosystem last year, reportedly the first comprehensive opening up of automated driving technology and deep learning data.

Bringing on global partners from Nvidia to Renesas, Baidu has surpassed 10,000 developers on the project’s GitHub repository, claiming to be the most vibrant AV platform vendor in the industry.

Combining open-software and cloud-service platforms, Apollo strives to enable any automotive partner to efficiently build out its own autonomous vehicle system. And as more partners get on board, Apollo expects to amass significant driving data, speeding up AV technology deployment.

But BAT isn’t stopping at open-source platforms to learn from partner data. Alibaba’s Innovation Ventures (the company’s VC investment arm), for instance, recently participated in a $30 million Series B funding round for Israeli vehicle-to-vehicle networking tech startup Nexar.


Biotech

In line with BAT’s third most heavily-backed category, China is investing big in biotech.

In the first half of this year, China-based VC funds poured $5.1 billion into private US biotech firms, surpassing 2017’s record $4 billion. This indicates a major shift in China’s healthcare-oriented focus.

Among the Chinese investors, Tencent and Baidu Ventures made a joint investment in American deep learning-based drug discovery startup Atomwise. And of Baidu’s five other US deals, the company’s VC arm invested in healthcare AI startup Engine Biosciences.

Just as China has worked its way from a largely ‘copycat’ tech ecosystem to a thriving hub of innovation and cutting-edge research, Beijing is trying to do the same in healthcare. By investing big in foreign (particularly AI-based) biotech, China aims to shrug off its reputation as a low-grade generic pharmaceuticals manufacturer and begin pioneering novel therapies.

Given China’s enormous population and overwhelmed medical institutions, predictive analytics and personalized medicine could unleash dramatic economic value. And as biotech emerges as one of the most profitable and disruptive technologies, Chinese VCs may soon help the country cash in.

Final Thoughts

China is a true miracle story. After reaping the harvest of a cheap manufacturing sector and export-oriented economy, China is upending traditional investment flows at a rate previously unheard of.

Diving into everything from autonomous drones to AI-driven diagnostics, Chinese VC firms are gaining a foothold in global markets everywhere, amping up China’s access to elite tech expertise and valuable new data.

And as international startups position themselves to take advantage of China’s strategic generosity, these Chinese giants will solidify both their growing influence at home and their technological dominance abroad.

Image Credit: Sean Xu


Waste Heat: The Overlooked Energy Problem, and How to Solve It

August 8, 2018 - 17:00

With the exponential increase in the number of data centers and electronic devices over the last decade, waste heat has become a big but overlooked environmental problem. It is frequently hidden away, out of sight in data centers and server farms in remote locations around the world, but its environmental cost is very real.

A strong driver of the growth in energy usage is our dependence on software and the ever-increasing amount of hardware needed to support it. Waste heat may only get worse as new technology enters our lives. While a handful of companies have started to reduce or recycle waste heat in various ways, existing mitigation methods are typically inefficient and do little to curb the rise in waste-heat-related pollution. Now, though, a new generation of water-cooled GPU systems is coming online and promising to revolutionize the energy efficiency of everything from artificial intelligence to cryptocurrency mining.

Thermodynamics 101

As anyone who has taken elementary physics knows, energy can’t be created or destroyed. In a closed system, the amount of energy stays the same; it merely degrades from high-quality forms into low-quality ones, chiefly heat. That low-quality heat energy can be an environmental pollutant, just like the plastic detritus with which our oceans are now awash.

Modern Life Is Power-Hungry

Ten years ago, the International Energy Agency (IEA) predicted the growth of global energy consumption from 15,665 terawatt hours (TWh) in 2006 to 28,142 TWh by 2030. According to a more recent IEA report, though, we are well ahead of schedule: by the end of 2017 we were already more than 75 percent of the way there.
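A back-of-envelope check shows what "more than 75 percent of the way" implies. The calculation below uses only the IEA figures quoted above; the comparison to a straight-line pace is my own framing.

```python
# What "75 percent of the way" from the 2006 level to the 2030 projection
# implies, using the IEA figures quoted above.

twh_2006, twh_2030 = 15_665, 28_142

implied_2017 = twh_2006 + 0.75 * (twh_2030 - twh_2006)
print(f"75% of the way corresponds to about {implied_2017:,.0f} TWh")

# A straight-line path would only be (2017 - 2006) / (2030 - 2006) of the
# way there by the end of 2017 -- roughly 46 percent.
linear_fraction = (2017 - 2006) / (2030 - 2006)
print(f"linear pace by 2017: {linear_fraction:.0%} of the way")
```

Hitting 75 percent of a 24-year projection in 11 years is what "well ahead of schedule" means in concrete terms.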

There are good reasons for this acceleration in the rate at which we are burning resources. Modern cities cannot exist without a sprawling IT infrastructure. The services underpinned by these technologies make our lives better, safer, and more comfortable. But every single traffic light needs electricity to work—and nowadays that means not just a traffic light, but the whole supporting and coordinating system of sensors, processors, and servers. Multiply this across dozens of different systems and you have a serious problem.

GPUs and Global Warming

Power-hungry data centers are a major contributor to global warming, increasing the level of CO2 in the atmosphere with every bit they flip. The increase in new data centers shows no signs of slowing: the global data center market is estimated to reach revenues of around $174 billion by 2023, growing at a CAGR of approximately 4 percent.

Moreover, the type of data centers being built has changed. One of the most significant trends is towards graphics processing unit (GPU) mining farms. GPUs are used heavily by the gaming industry due to their ability to manipulate images and output them efficiently, but they have also found diverse applications beyond gaming. Large arrays of GPUs are used in specialist server farms for many computationally-intensive tasks involved in artificial intelligence and neural network applications. Another application is UC Berkeley’s Search for Extra-Terrestrial Intelligence (SETI) program, which uses GPUs to crunch data from radio telescopes.

In addition, GPUs are ideally suited to mining certain coins; one impact of the rise in cryptocurrency prices over the course of 2017 was a global shortage of GPUs. As demand for GPU cards rose with crypto prices, gamers and SETI found themselves short of processing power. Cryptocurrency mining alone results in the production of 33.9 kilotons of CO2 per year.

Suffice it to say that the major graphics card companies are experiencing explosive growth, driven by a slew of high-tech applications. Due to the nature of the technologies used, the efficiency of even the best data centers doesn’t exceed 20 percent, which means the other 80 percent of energy is converted to heat without any further utility—and this brings a whole new world of problems to an already hot topic.

Possible Solutions and Mitigation Strategies

The biggest players in this sphere have recently started to consider how to solve this problem. One set of solutions revolves around where our power is consumed; futurologists have predicted we will one day start building our data centers in space, for example. Even now, Microsoft has placed servers in the depths of the sea off the coast of Scotland and IBM is building a water-cooled processor for its Zurich Aquasar supercomputer. These initiatives don’t solve the problem of dirty heat, but they do make it cheaper for large corporations to service their infrastructure and optimize conditions for their hardware.

Most companies can’t afford to locate their servers on the sea bed, but Germany-based startup Comino has proposed the next-best thing: a liquid cooling system that makes it possible to capture vastly more heat and recycle a higher proportion of waste energy back into the host facility.

More specifically, Comino has adopted this approach for GPU supercomputers and servers, which is an acute need as key technologies from blockchain to AI reach a point of maturity and mass adoption. The company has already built two data centers in Europe with its unique liquid cooling technology. Its pilot project, Comino Grando, was launched in Sweden with a capacity of almost 5,000 kW and at a cost of around $30 million. By recycling energy, Grando has proven to be 40 percent more efficient than air-cooled supercomputers.

Comino’s CEO, Eugeny Vlasov, told me in an interview, “Liquid cooling systems are used to reduce PUE (Power Usage Effectiveness) to 1.05, which is the main measure of the eco-friendliness of such systems. It’s strange, but unfortunately tech companies don’t always think of using the latest and greatest technologies to reduce their environmental impact. We have a strong engineering team that works on integrating advanced tech solutions to deliver cost-efficient management of the extra heat that plagues many data centers.”
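PUE itself is a simple ratio: total facility power divided by the power that actually reaches the IT equipment, so a PUE of 1.05 means only 5 percent overhead for cooling and power conversion. The figures below are illustrative round numbers, not Comino's actual measurements.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# The kW figures here are illustrative, not measurements from any vendor.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

air_cooled = pue(total_facility_kw=1600, it_equipment_kw=1000)     # PUE 1.60
liquid_cooled = pue(total_facility_kw=1050, it_equipment_kw=1000)  # PUE 1.05

# Overhead (cooling, power conversion, etc.) as a fraction of IT load:
print(f"air-cooled:    PUE {air_cooled:.2f} -> {air_cooled - 1:.0%} overhead")
print(f"liquid-cooled: PUE {liquid_cooled:.2f} -> {liquid_cooled - 1:.0%} overhead")
```

Note that PUE measures delivery overhead, not what happens to the heat afterward; a facility can have an excellent PUE and still dump all of its waste heat, which is why heat recycling is a separate lever.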

Solutions like Comino’s aim to make the entire world more eco-friendly, even as the pace of tech growth shows no signs of slowing. “Current tasks like cryptocurrency mining, neural networks, and GPU databases eat up massive amounts of energy. Crypto mining alone consumes more energy than Denmark, twice what it needs to be, with no heat recovery. We can do much, much better,” Vlasov said.

By the middle of the next decade, data centers are projected to account for almost 10 percent of total global electricity consumption. In an age of carbon credits and green incentives, environmental concerns are typically treated as a bolt-on extra, something companies do because they have to, not because it makes sense for their bottom lines. The good news for businesses is that in this instance, environmental benefits align with financial ones, since lower energy use reduces capital costs and operating expenses. The technology works for machine learning, cutting-edge database work or rendering, smart cities, banks, manufacturing, providers, video streaming, neural networks, mining, or data centers—there are few sectors that do not stand to be revolutionized by water cooling.

Another strategy for reusing the waste energy from data centers is to use that energy to heat buildings; Amazon already employs this approach to heat its headquarters, for example. Dutch startup Nerdalize has begun trials of a solution for the domestic market: customers pay the company to install servers in their homes and receive free heating in exchange.

Of course, all of these improvements in energy efficiency won’t bring us to the ultimate goal of eliminating pollution altogether. One day, data centers, servers, and supercomputers may indeed be placed outside Earth’s atmosphere, and only then will hardware heat pollution become a thing of the past. Until that day comes, we shouldn’t stop caring about our planet.

Image Credit: Eliro


CAR-T May Be a Silver Bullet Against Cancer—and Here’s What Else It Can Do

August 7, 2018 - 17:00

CAR-T is the super-soldier serum of cell therapy: you pluck out an immune cell soldier, inject it with a dose of new genes, and send the enhanced cell back into the host body—bam! Suddenly the host has a slew of Captain America-esque superpowered cells ready to tackle cancer and all sorts of cellular enemies.

Without doubt, CAR-T is set to overhaul cancer therapy. Last year, several variants of the immunocellular technique earned the FDA’s nod of approval for blood cancers; with big pharma pouring billions into developing the technology, more are sure to come.

Yet a small group of ingenious scientists are already thinking ahead: can CAR-T do more?

To Dr. Michael Milone at the University of Pennsylvania, the answer is a clear yes: there’s the potential to “open up the application of this anti-cancer technology to the treatment of a much wider range of diseases, including autoimmunity and transplant rejection,” he said.

It’s likely to trigger “a next wave in cellular immunotherapy,” said Dr. Everett Meyer at Stanford Medical Center, who uses the technology to help islet transplants. Islets are clusters of insulin-producing cells that are destroyed by immune cells in Type I diabetes.

With CAR-T ready for autoimmune trials by 2019, here’s what’s in the works.

Civil War

In cancer therapies, immune cells called killer T cells are extracted through a process similar to dialysis and given genes that help them recognize various types of blood cancers.

The overwhelming culprit of these cancers? B cells. Normally these cells are a critical component of the immune system: they make and deploy antibodies, which hunt down invasive bacteria and nip them in the bud.

But when B cells go rogue, they trigger multiple types of deadly blood cancers. What’s more, B cells can sometimes pump out antibodies that mistake healthy tissue for infections. In autoimmune diseases, antibodies tag onto normal cells, mislabeling them as dangerous, which in turn provokes a T cell onslaught. These autoimmune attacks lead to Type I diabetes, in which insulin-producing cells are slaughtered by the body’s own immune cells, and lupus, where tissues from the lung, heart, brain, and kidneys are caught in friendly fire.

Currently there are no cures for autoimmune disorders. For severe cases, immunosuppressant drugs can help, but they increase the chances of infections and cancer.

Back in 2016, Milone’s team had a eureka moment: in traditional CAR-T therapy, T cells are often engineered to target cancerous B cells. What if the same super-soldiers could hunt down autoimmune-causing B cells?

“We thought we could adapt this technology that’s really good at killing all B cells in the body to target specifically the B cells that make antibodies that cause autoimmune disease,” Milone said at the time.

“Targeting just the cells that cause autoimmunity has been the ultimate goal for therapy in this field,” added study author Dr. Aimee Payne.

In a proof-of-concept, the team took on pemphigus vulgaris (PV), an autoimmune condition that causes the skin to gradually peel off and is almost always fatal. The team first figured out which B cells were producing the disease-causing antibodies. Like most cell types, B cells have specific protein “barcodes” on their surface—the PV-causing B cell subtype has a particular protein dubbed Dsg3 (yeah, biologists aren’t the best at giving catchy names to proteins).

Bingo, target acquired. Next, the team constructed a protein “claw” that grabs onto Dsg3. This claw is a “chimeric autoantibody receptor”—or the “CAR” in CAR-T. Armed with the claw, the genetically-enhanced T cells were then infused back into the bodies of mice.

The result was shockingly positive. “We were able to show that the treatment killed all the Dsg3-specific B cells, a proof-of-concept that this approach works,” without harming other B cells, Payne said.

The best part about the treatment? It’s plug-and-play: change the CAR, and it’s possible to target any type of B cell—and potentially treat any autoimmune disorder caused by antibodies gone wild.

New T on the Block

So far, the T immune cells used in CAR-T have all been killer T cells.

Yet these killers are only a fraction of the immune cell zoo. The new contender? T regulatory cells, or Tregs.

Tregs are the killjoys of the immune system. They shut the party down before it gets too rowdy, keeping immune attacks from spiraling out of control. Autoimmune diseases are often caused or exacerbated by ineffective Tregs. The reason is unclear: sometimes the cells have a genetic deficit, or they may resist activation because of something in their environment. Regardless, Tregs fail in autoimmune disorders—which makes them promising candidates for CAR-T.

At the forefront of Treg enhancement is TxCell, a startup based in Valbonne, France. Two years ago, the company began experimenting with giving Tregs their own protein claws against inflammation.

It’s a big step away from traditional CAR-T. Rather than targeting a specific barcode on a cell, TxCell engineers Tregs that home to a particular type of tissue ravaged by autoimmune attacks.

“For example,” explained Stéphane Boissel, CEO of TxCell, “if you have multiple sclerosis, the antigen is specifically present in the brain. If you have Crohn’s disease, the antigen has to be in the guts. In fact, given the large number of relevant antigens, we believe our technology has possibly a larger potential than CAR-T cell therapy in oncology.”

Far along the TxCell pipeline is an engineered Treg that helps treat Type I diabetes, in which immune cells attack insulin-releasing cells in the pancreas. Without insulin, the body struggles to maintain normal blood sugar levels, leading to diabetes.

Dr. Megan Levings, a researcher at the University of British Columbia in Vancouver, collaborates with TxCell on the project. In 2016, Levings and team published a paper showing that Tregs enhanced with CAR “protein claws” could help dampen the immune response to organ transplants.

“This work provides what we believe is the first proof-of-concept that CAR Tregs have the potential to be used therapeutically,” Levings’ team wrote at the time.

Just last year, Meyers backed up these data with a new study showing that engineered Tregs allow better islet transplantation in mice. “We clearly show that CAR-T with Tregs is a powerful new platform that’s very flexible for many immune diseases,” said Meyers.

“On paper, that’s a very powerful, very directed, very targeted kind of therapy,” said Boissel. “Again, we have to be cautious, but if it works, there is a very large panel of diseases we can potentially target. If you combine all autoimmune disorders…the field of autoimmune disease is probably the largest pharmaceutical market in the world.”

TxCell is aiming to launch the first CAR-T trial for boosting organ transplants by next year, which will be the first time CAR-Tregs are tested in humans. Although it may not be entirely smooth sailing, previous CAR-T trials for cancer could help pave the way to approval.

CAR-T for autoimmune is still at an early stage. And without long-term data, it’s hard to say whether suppressing the suppressors could lead to side effects like infections and cancer.

But for those suffering from autoimmune disorders and organ rejection, CAR-Tregs represent an entirely new possibility that could revolutionize treatment, much as CAR-T is doing for cancer.

That’s definitely something to be excited about.

Image Credit: Ikram920 /

Category: Transhumanism

Successfully Transplanted Lab-Grown Pig Lungs Take Us Closer to Custom Organs

6 August, 2018 - 17:00

Being able to grow new organs from a patient’s own cells could revolutionize both the safety and availability of transplants. Scientists have now overcome major hurdles in the realization of the technology—but in pigs.

Someone is added to the US national transplant waiting list every 10 minutes, and an average of 20 people a day die waiting for an organ, according to the United Network for Organ Sharing. Even those lucky enough to get a transplant face a lifetime on immune-suppressing drugs to prevent rejection, and even then the transplant may not succeed.

But decades of research into tissue engineering mean we’re tantalizingly close to being able to simply harvest cells from a patient’s body and use them to create a new organ. Such a capability would remove any risk of rejection, as the organ would be made from a person’s own cells, and, assuming it could be made affordable, it would put replacement organs on demand within reach.

Now a team at the University of Texas Medical Branch (UTMB) has taken a major leap towards making that vision a reality after they grew lungs from the cells of pigs in a lab and then successfully transplanted them into the animals.

“Our ultimate goal is to eventually provide new options for the many people awaiting a transplant,” Joan Nichols, professor of internal medicine at UTMB and one of the authors of the paper, said in a press release.

Building the new lungs was a complicated process. In a paper in Science Translational Medicine, the researchers describe how they first had to take a lung from an unrelated pig and use special chemicals to strip all the cells from it to leave behind the underlying scaffolding of proteins that supported them.

The stripped lungs were then seeded with cells taken from a lung removed from the pig due to receive the transplant and immersed in a cocktail of nutrients and growth factors in a bioreactor for 30 days. The resulting lungs were then transplanted into the recipients.

To allow the researchers to examine how the transplanted lungs developed inside their recipients, the four pigs used in the study were euthanized after ten hours, two weeks, one month, and two months. All the lungs appeared to be healthy and were not rejected by the pigs’ immune systems. They were even colonized by the microbes native to their recipients’ bodies.

The researchers had already grown a human lung in the lab back in 2014, but translating these latest results into humans will be a long road. Larger studies with more animals will be needed to examine long-term survival before the approach can even be attempted with people.

But the study overcame some major problems that have plagued efforts to grow new organs from scratch. One of the biggest issues for lab-grown organs is the difficulty of developing the complex tissue and blood vessel architecture that allows for proper oxygenation and blood flow. In previous studies on small animals, this difficulty has led to transplants failing just a few hours after surgery.

How long it could be before the approach is cleared for humans is hard to say, but the researchers think that with enough funding it may be possible within 5-10 years. And they aren’t the only ones looking for ways to create new organs on demand.

In June, biotech startup Prellis Biologics announced they had managed to 3D print tiny blood vessels known as capillaries fast enough to prevent the tissue from dying. The idea of 3D printing organs from patients’ cells has been around for a while, but has been plagued by the same blood supply issues as tissue engineering approaches. So this latest breakthrough could be significant, and the company thinks it can bring them to market within five years.

Pigs may present more than just a testing ground for tissue engineering approaches. Despite outward appearances, pigs and humans actually have fairly similar anatomies, which for decades has prompted scientists to consider whether it would be possible to transplant pig organs into humans, a process called xenotransplantation.

Tests in monkeys historically resulted in hard-to-control immune responses that led to swift rejection of transplants, but a series of recent successes have renewed optimism. Last year scientists also managed to use the gene editing tool CRISPR to deactivate viruses in pigs that could potentially make people sick if they received transplants from the animals.

So while it’s probably too early to throw away your donor card just yet, it may not be too long before transplant waiting lists are a thing of the past.

Image Credit: crystal light /

Category: Transhumanism

Graphene and Beyond: The Astonishing Properties and Promise of 2D Materials

5 August, 2018 - 17:00

Since graphene was first isolated in 2004, a Nobel Prize-winning feat that sparked a whole new exciting field of materials science research, 2D materials have had all kinds of suggested applications. Now, at the cutting edge of research, materials scientists are discovering that stacked layers of these atomically thin materials can open up a whole new world of fascinating and useful properties.

Graphene’s discovery was something of a throwback for physics. It was a far cry from the huge collaborations of LIGO (first to observe gravitational waves) and CERN (first to find the Higgs boson), which require thousands of scientists and billion-dollar equipment. Professors Geim and Novoselov discovered graphene when experimenting with some sticky tape and a block of graphite: amazingly, they were able to ‘exfoliate’ a layer a single atom thick.

The discovery may have been low-key and unexpected, but the subsequent hype over the properties of this material certainly was not. Graphene is ultra-light and immensely tough, yet flexible and stretchable. 2D materials often have excellent electronic properties as well, and can be highly electrically conductive. Some 2D materials can be stacked together or combined to achieve tunable semiconductor bandgaps, which could make them the perfect materials for producing super-efficient solar panels, perfectly tuned to the wavelengths of light from the sun.
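The link between a tunable bandgap and solar efficiency can be made concrete with a quick calculation: a semiconductor absorbs photons with energy above its bandgap, so the bandgap sets a cutoff wavelength via E = hc/λ. A minimal sketch in Python, using an illustrative bandgap value rather than a measured property of any particular 2D stack:

```python
# Rough sketch: the longest wavelength a semiconductor can absorb is
# set by its bandgap via E = h*c / lambda. The 1.34 eV value below is
# illustrative, chosen because it sits near the single-junction
# optimum for sunlight; it is not a measured 2D-material property.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def cutoff_wavelength_nm(bandgap_ev):
    """Longest absorbable wavelength (nm) for a given bandgap (eV)."""
    return H * C / (bandgap_ev * EV) * 1e9

# A stack tuned to a 1.34 eV gap absorbs out to roughly 925 nm,
# covering the visible spectrum and some near-infrared.
print(round(cutoff_wavelength_nm(1.34)))  # 925
```

Tuning a stack’s bandgap thus directly moves the slice of the solar spectrum it can harvest.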

Although the strength and flexibility of graphene led enthusiasts to imagine using it as a new, omnipresent, superior construction material, it is these optical and electronic properties that are providing the first use cases for graphene. A recent study blended various types of graphene to develop an LED that could emit light across the entire visible spectrum. Conventional LEDs emit a single wavelength—and so displays need mixtures of red, blue, and green to produce a full-color image. If the graphene-based LED could be stabilized and made more efficient, you’d only need one LED with tunable colors, allowing for flexible displays.

2D materials can seem miraculous, to the extent that experiments were even done to see if graphene could be made bulletproof. It’s not all that far-fetched—although atomically thin, graphene is very efficient at transferring momentum through its lattice, and bulletproof materials like Kevlar often work by dissipating the energy from impact across a wider area. While it took 300 layers of graphene (with gaps between each layer) to stop a specially-designed “microbullet,” scientists last year discovered that two-layer graphene can undergo a phase transition to become harder and stiffer than diamond.

Since its discovery, graphene has been joined by new 2D materials.

Stanene is atomically thin tin; stacking multiple layers of stanene could result in a phase transition to superconductivity, even though tin in bulk isn’t superconductive. As yet, the transition temperature doesn’t put bilayer stanene in the range of high-temperature superconductors, but any new manifestation of superconductivity has physicists excited.

Germanium was initially of interest due to its electronic properties. Many of the earliest transistors used germanium instead of silicon, though it was later supplanted by silicon, which is easier to use in mass manufacturing.

Now, with the isolation of germanene in 2014, individual layers of germanium are among the 2D materials touted alongside graphene. While graphene’s famous hexagonal crystal structure is flat, germanene’s crystal structure is buckled; its lattice consists of two vertically separated sub-lattices. Applying external strain or electric fields to germanene can change its bandgap; this effect stems from the double-lattice structure and could allow germanene to be used in field-effect transistors. Not to be outdone, silicon itself has a monolayer counterpart in silicene.

The early hype around graphene’s applications has been replaced by a steadier approach. We don’t have bulletproof graphene planes, trains, and automobiles yet, but graphene is slowly but surely moving towards fulfilling its potential as more research into each possible application is conducted. Graphene-based sensors are already being widely produced. Despite all the hype around replacing silicon as the basic material in electronics, some of the first commercial uses of 2D materials like graphene have been in sports gear.

In the longer term, it seems likely that graphene and other 2D materials will find their niches. In the meantime, the experimental insight that stacking together individual layers of atomically thin materials can result in new, unexpected, and useful properties has opened up a new field of research: van der Waals heterostructures. These materials exist in a transition regime—between the bulk properties of large-scale matter that we’re familiar with, and the quantum realm on the atomic level. The result is tantalizing for theoretical physicists and technologists alike.

These heterostructures are stacks of various layers of graphene, germanene, silicene, and stanene—but also molecular monolayers. They are named after the weak van der Waals forces that attract molecules to each other. These forces are due to the shifting distributions of charge in the layers of molecules interacting with each other.

These van der Waals forces are weaker than electrostatic forces and tail off more rapidly with distance, but they are enough to hold these “Lego-like” structures together. Planes of atoms in hexagonal 2D arrangements can be stacked, multiplying the range of fundamental physical properties and materials available to study. Each newly-synthesized 2D material adds more potential combinations, and stacks thirteen layers deep, composed of four different materials, have already been synthesized.
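The combinatorial explosion is easy to quantify. Under the simplifying assumption that layer order matters and materials may repeat, a stack of k layers drawn from n materials admits n^k orderings:

```python
# Back-of-envelope counting for van der Waals heterostructures. The
# "four materials, thirteen layers" figures come from the text; the
# counting model (ordered layers, repeats allowed) is a simplifying
# assumption that ignores physical constraints on which stacks form.

def stack_count(n_materials, n_layers):
    """Number of ordered layer sequences of length n_layers."""
    return n_materials ** n_layers

print(stack_count(4, 13))  # 67108864 possible orderings
```

Even this crude model shows why each new monolayer material multiplies, rather than adds to, the design space.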

Consider, for example, the quest for a high-temperature superconductor. We know that this most desirable material property is subtly linked to the structure of the crystal lattice, as in the case of the YBCO (yttrium barium copper oxide) structures. Stacking 2D materials offers an exciting new way to probe these phenomena experimentally.

“What if we mimic layered superconductors by using atomic-scale Lego? Bismuth strontium calcium copper oxide superconductors (BSCCO) can be disassembled into individual atomically thin planes. Their reassembly with some intelligently guessed differences seems worth a try, especially when the mechanism of high temperature superconductivity remains unknown,” wrote Professor Geim in Nature.

For the moment, graphene remains the most likely 2D material to see near-term applications, partly due to the funding for its research and partly because it can still be produced more swiftly. The exfoliation method of gradually pulling apart layers of graphite to obtain graphene can’t be used with every 2D material, even though it produces the purest crystals.

Many of the more exotic materials must be produced by molecular beam epitaxy—painstakingly depositing individual atoms onto a surface at conditions of high vacuum and high temperature. This will limit the mass-manufacturing possibilities, or bulk uses for 2D crystals, until MBE gets cheaper—or, perhaps more likely given the high temperatures and vacuum needed for MBE, until another manufacturing technique is perfected.

Yet it seems inevitable, given the demand for ever-improved electronic components for batteries, semiconductors and transistors, and for optoelectronics like solar panels and LEDs, that we will learn how best to exploit the astonishing properties of 2D materials. This is the dream of manufacturing reaching the cutting edge of fine-tuning fundamental physical properties with a careful choice of materials. Like kids with a Lego set, the only limit to what we can build may be our imagination.

Image Credit: koya979 /

Category: Transhumanism

This Week’s Awesome Stories From Around the Web (Through August 4)

4 August, 2018 - 17:00

Apple Is Worth One Trillion Dollars
Ian Bogost | The Atlantic
“Tech magnates crow too often about ‘changing the world,’ but this is what it really looks like, for good and for ill. It’s what railroads did once, and steel too, and oil, and automobiles as well. There is always brutality in the biggest big business, because nothing grows so large without transforming the world it leaves behind utterly, and forever.”


Hello Quantum World
Will Knight | Cosmos
“No other contender can match IBM’s pedigree in this area, though. Starting 50 years ago, the company produced advances in materials science that laid the foundations for the computer revolution. Which is why, last October, I found myself at IBM’s Thomas J. Watson Research Center to try to answer these questions: What, if anything, will a quantum computer be good for? And can a practical, reliable one even be built?”


OpenAI Sets New Benchmark for Robot Dexterity
James Vincent | The Verge
“Once fully trained, Dactyl was able to move the cube from one position to another up to 50 times in a row without dropping it. (Although the median number of times it did so was much smaller; just 13.) And in learning to move the cube around in its hand, Dactyl even developed human-like behaviors. All this was learned without any human instruction—just trial and error, for decades at a time.”


DARPA Has an Ambitious $1.5 Billion Plan to Reinvent Electronics
Martin Giles | MIT Technology Review
“The agency has just unveiled the first set of research teams selected to explore unproven but potentially powerful approaches that could revolutionize US chip development and manufacturing.”


Bioengineers Are Closer Than Ever to Lab-Grown Lungs
Robbie Gonzalez | Wired
“They’ve grown three more pig lungs since, using cells from their intended recipients, and transplanted each of them successfully without the use of immunosuppressive drugs. Taken together, the four porcine procedures, which the researchers describe in this week’s issue of Science Translational Medicine, are a major step toward growing human organs that are built to-order, using a transplant recipient’s own cells.”


The Future Is Ear: Why ‘Hearables’ Are Finally Tech’s Next Big Thing
Peter Burrows | Fast Company
“‘Ultimately, the idea is to steal time from the smartphone,’ says Gints Klimanis, Doppler’s former head of audio engineering. ‘The smartphone will probably never go away completely, but the combination of voice commands and hearing could become the primary interface for anything spontaneous.’ ”


The Explosive Race to Totally Reinvent the Smartphone Battery
Amit Katwala | Wired
“Lithium-ion batteries power everything from smartphones and laptops to electric cars and e-cigarettes. But, with lithium close to breaking point, researchers are scrambling for the next battery breakthrough.”

Image Credit: Uladzik Kryhin /

Category: Transhumanism

Designing a ‘Solar Tarp,’ a Foldable, Packable Way to Generate Power From the Sun

3 August, 2018 - 17:00

The energy-generating potential of solar panels—and a key limitation on their use—is a result of what they’re made of. Panels made of silicon are declining in price such that in some locations they can provide electricity that costs about the same as power from fossil fuels like coal and natural gas. But silicon solar panels are also bulky, rigid, and brittle, so they can’t be used just anywhere.

In many parts of the world that don’t have regular electricity, solar panels could provide reading light after dark and energy to pump drinking water, help power small household or village-based businesses or even serve emergency shelters and refugee encampments. But the mechanical fragility, heaviness, and transportation difficulties of silicon solar panels suggest that silicon may not be ideal.

Building on others’ work, my research group is working to develop flexible solar panels, which would be as efficient as a silicon panel, but would be thin, lightweight and bendable. This sort of device, which we call a “solar tarp,” could be spread out to the size of a room and generate electricity from the sun, and it could be balled up to be the size of a grapefruit and stuffed in a backpack as many as 1,000 times without breaking. While there has been some effort to make organic solar cells more flexible simply by making them ultra-thin, real durability requires a molecular structure that makes the solar panels stretchable and tough.

A small piece of a prototype solar tarp. Image: University of California, San Diego, CC BY-ND

Silicon Semiconductors

Silicon is derived from sand, which makes it cheap. And the way its atoms pack in a solid material makes it a good semiconductor, meaning its conductivity can be switched on and off using electric fields or light. Because it’s cheap and useful, silicon is the basis for the microchips and circuit boards in computers, mobile phones and basically all other electronics, transmitting electrical signals from one component to another. Silicon is also the key to most solar panels, because it can convert the energy from light into positive and negative charges. These charges flow to the opposite sides of a solar cell and can be used like a battery.

But silicon’s properties also mean it can’t be turned into flexible electronics. Silicon doesn’t absorb light very efficiently: photons can pass right through a layer that’s too thin, so silicon solar cells have to be fairly thick (around 100 micrometers, about the thickness of a dollar bill) to ensure none of the light goes to waste.
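The thickness requirement follows from exponential attenuation: the fraction of light absorbed over a path x is 1 − exp(−αx). A short sketch, where the absorption coefficient is an assumed order-of-magnitude value for silicon near its band edge, not a measured datum:

```python
import math

# Beer-Lambert sketch of why silicon cells must be thick. The
# coefficient alpha below (~1e4 per metre) is an illustrative
# near-band-edge order of magnitude, chosen for demonstration only.

def fraction_absorbed(alpha_per_m, thickness_m):
    """Fraction of incident light absorbed over a given thickness."""
    return 1.0 - math.exp(-alpha_per_m * thickness_m)

alpha = 1e4  # 1/m, assumed value
print(fraction_absorbed(alpha, 100e-6))  # 100 um wafer: ~0.63 absorbed
print(fraction_absorbed(alpha, 1e-6))    # 1 um film: ~0.01 absorbed
```

A strongly absorbing material like a perovskite, with a far larger α, captures the same fraction of light in a layer hundreds of times thinner, which is the point of the next section.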

Next-Generation Semiconductors

But researchers have found other semiconductors that are much better at absorbing light. One group of materials, called “perovskites,” can be used to make solar cells that are almost as efficient as silicon ones, but with light-absorbing layers that are one-thousandth the thickness needed with silicon. As a result, researchers are working on building perovskite solar cells that can power small unmanned aircraft and other devices where reducing weight is a key factor.

The 2000 Nobel Prize in Chemistry was awarded to the researchers who first found they could make another type of ultra-thin semiconductor, called a semiconducting polymer. This type of material is called an “organic semiconductor” because it is based on carbon, and it is called a “polymer” because it consists of long chains of organic molecules. Organic semiconductors are already used commercially, including in the billion-dollar industry of organic light-emitting diode displays, better known as OLED TVs.

Polymer semiconductors aren’t as efficient at converting sunlight to electricity as perovskites or silicon, but they’re much more flexible and potentially extraordinarily durable. Regular polymers—not the semiconducting ones—are found everywhere in daily life; they are the molecules that make up fabric, plastic, and paint. Polymer semiconductors hold the potential to combine the electronic properties of materials like silicon with the physical properties of plastic.

The Best of Both Worlds: Efficiency and Durability

Depending on their structure, plastics have a wide range of properties—including both flexibility, as with a tarp; and rigidity, like the body panels of some automobiles. Semiconducting polymers have rigid molecular structures, and many are composed of tiny crystals. These are key to their electronic properties but tend to make them brittle, which is not a desirable attribute for either flexible or rigid items.

My group’s work has been focused on identifying ways to create materials with both good semiconducting properties and the durability plastics are known for—whether flexible or not. This will be key to my idea of a solar tarp or blanket, but could also lead to roofing materials, outdoor floor tiles, or perhaps even the surfaces of roads or parking lots.

This work will be key to harnessing the power of sunlight—because, after all, the sunlight that strikes the Earth in a single hour contains more energy than all of humanity uses in a year.

Image Credit: Sonpichit Salangsing /

This article was originally published on The Conversation. Read the original article.

Category: Transhumanism

In the Future, We’ll Know Everything—Thanks to This Tech

2 August, 2018 - 17:00

We’re rapidly approaching the era of abundant knowledge—a time when you can know anything you want, anywhere you want, anytime you want. An era of radical transparency.

By 2020, it’s estimated we’ll have 50 billion connected devices, which will generate over 600 zettabytes of information.
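Those two projections imply a striking per-device figure. A quick back-of-envelope check, using only the forecast numbers quoted above:

```python
# Divide the forecast data volume by the forecast device count.
# Both inputs are the projections quoted in the text, not measurements.

ZETTABYTE = 10**21  # bytes
total_bytes = 600 * ZETTABYTE
devices = 50 * 10**9

per_device_tb = total_bytes / devices / 10**12
print(per_device_tb)  # 12.0 terabytes per connected device
```

Twelve terabytes per device makes clear that most of this data will have to be processed and discarded at the edge rather than shipped to the cloud.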

The global network of connectivity, drones, and satellites is not only connecting people; it is also connecting things, linking devices and sensors to form the Internet of Things and the Internet of Everything.

In this blog, we’ll cover four different levels of the Internet of Things:

  1. Satellite imaging the earth in meter-resolution
  2. A sky full of drones imaging everything in centimeter-resolution
  3. Autonomous vehicles sensing our streets in sub-millimeter-resolution
  4. A future of augmented reality glasses imaging everything before us

Let’s dive in.

Satellite Imaging and Orbital IoT

In an earlier blog, I discussed the coming age of microsatellite constellations from SpaceX and OneWeb.

OneWeb is working on a constellation of almost 900 satellites, while SpaceX will deploy over 12,000 mini-fridge-sized satellites. Both constellations are intended to deliver global 5G internet. But global internet is only a fraction of the potential of microsatellite constellations. Equipped with high-resolution cameras, these constellations are extending the Internet of Things, providing a massive amount of data to help solve the world’s grand challenges.

As of August 2017, there were nearly 1,800 operational satellites in orbit. Of these, 742 were communications satellites, 596 were used for Earth observation, and 108 for navigation.

We’re seeing a massive increase in the number of operational satellites as satellites become smaller and launch costs plummet.

Private companies all over the world are building out satellite technology. Planet Labs is a disruptive company using milk carton-sized imaging satellites to help entire industries obtain game-changing data. Planet Labs showcases 175+ satellites in orbit, enabling it to image anywhere on the globe at up to 3.72-meter resolution.

Alternatively, Planet Labs offers a specialized, targeted satellite option called SkySats. Thirteen of these satellites can achieve up to 72-centimeter resolution. SkySats can also capture video, which can be used to extrapolate 3D models. These satellites are built on the same technology that Google deployed to capture crisp 3D image views for Google Maps.

A Sky Full of Drones

Closer to Earth’s surface (a few hundred feet above our heads) we’re developing an extensive network of autonomous drones that are collecting valuable information for farmers, wind turbine surveyors, financial institutions, and many others.

At CES 2018, the Department of Transportation announced that over 1 million drones were officially registered with the FAA. The FAA predicts that by 2020, 7 million drones will be flying over North America.

While private and commercial drone flight rules remain moderately restrictive, this past October the US announced plans for the federal government to work with companies to start deploying large fleets of drones with more flexible flight restrictions.

As drones become more robust, larger, and more capable, they will generate massive amounts of imaging and sensor data.

A small drone fleet can easily generate 100 terabytes of data per day.
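The 100-terabytes-a-day figure is plausible under modest assumptions. Here is a sanity check where the per-drone capture rates, flight hours, and fleet size are all hypothetical illustrative values:

```python
# Sanity check of "100 TB/day from a small fleet". Every input below
# is an assumption chosen for illustration: per-drone capture of 4K
# video (~50 MB/s), lidar (~150 MB/s), and telemetry (~1 MB/s).

MB = 10**6
capture_rate_mb_s = 50 + 150 + 1   # video + lidar + telemetry, assumed
flight_hours_per_day = 8           # assumed duty cycle
fleet_size = 18                    # assumed "small fleet"

bytes_per_day = (capture_rate_mb_s * MB * 3600
                 * flight_hours_per_day * fleet_size)
print(round(bytes_per_day / 10**12, 1))  # 104.2 TB/day for this fleet
```

Eighteen drones flying eight-hour days already clears 100 TB, so the claim requires nothing exotic.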

During some of the tragic natural disasters of 2017 (wildfires, hurricanes), drone data collection was invaluable for saving lives, surveying damage, and providing search and rescue operations with crucial footage of hard-to-reach locations. As drones fly more and collect more data, we can use this data in a positive feedback loop to better train autonomous drones.

Speaking of autonomous vehicles… autonomous cars are a big part of the incoming era of abundant knowledge.

Autonomous Vehicles Seeing Everything

Intel predicts that the self-driving car industry will grow to $7 trillion by 2050.

One implication is that these autonomous cars will begin imaging everything surrounding them, all the time. Imagine millions of autonomous cars on the street, each packed with dozens of cameras, LIDAR, and radar “sensing” to help the car navigate.

One key sensor is lidar, a laser-based technology that builds a 3D map of a car’s surroundings by measuring how long it takes for millions of laser pulses to bounce off surrounding objects and return to the car. Lidar market leader Velodyne’s VLS-128 system can capture up to 9.6 million data points per second. Tesla, unlike Waymo, is avoiding lidar altogether, opting instead for an ultrasonic, radar, and camera approach to autonomous vehicles.
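That point rate translates into a substantial raw data stream. A rough estimate, where the 9.6 million points/second comes from the text but the bytes-per-point encoding (x, y, z, intensity as 4-byte floats) is an assumption:

```python
# Rough raw-data-rate estimate for a single lidar unit. The point rate
# is the VLS-128 figure quoted in the text; the per-point encoding
# (four float32 fields) is an assumed, uncompressed layout.

points_per_second = 9_600_000
bytes_per_point = 4 * 4  # assumed: x, y, z, intensity as float32

mb_per_second = points_per_second * bytes_per_point / 10**6
print(mb_per_second)  # 153.6 MB/s from one lidar unit
```

Multiply that by dozens of cameras and radars per car, and by millions of cars, and the scale of the imaging described below follows.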

The bottom line is this: when we enter the era of autonomous cars, there will never be a car, pedestrian, accident, or street-side pick-pocket that isn’t being imaged. These cars will record, in detail, an extraordinary abundance of images.

Augmented Reality Headsets

While the world now has more mobile phones than humans, we will soon see the emergence of an even more advanced technology: augmented reality headsets.

Such AR glasses will feature a multitude of forward-looking cameras that image everything at sub-millimeter resolution as you walk about your day. By the end of 2020, our smartphones, AR glasses, watches, medical wearables and smart dust are expected to constitute 50 billion connected devices, hosting a total of 1 trillion sensors.

And just as the number of connected devices is increasing exponentially, the number of sensors per connected device is also increasing exponentially.

So far, sensors on phones have doubled every four years. This means we can expect 160 sensors per mobile device by 2027, and a world jam-packed with nearly 100 trillion sensors, all of which could be accessed and interrogated by your AI to answer almost any question.
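The doubling claim can be made explicit as an exponential curve. The baseline of 10 sensors per phone in 2011 is an assumed starting point, chosen because it reproduces the 160-sensors-by-2027 figure above:

```python
# Exponential sensor growth: doubling every four years. The 2011
# baseline of 10 sensors per phone is an assumption picked so the
# curve matches the article's 160-by-2027 projection.

def sensors_per_device(year, base_year=2011, base_count=10,
                       doubling_years=4):
    """Projected sensor count per device in a given year."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

print(sensors_per_device(2027))  # 160.0
```

The same curve puts per-device counts in the thousands by the 2040s, which is the sense in which the growth, like the device count, is exponential.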

I can envision waking up in the morning, putting on my augmented reality contact lenses, and forgetting about them for the rest of the day. While in my eyes, they’ll record every conversation, every person crossing the street, and everything I look at. Drawing from this constant stream of observational data, I’ll be able to train my AI on my social graph and preferences.

Final Thought: It’s Your Questions That Matter Most!

The bottom line is, we are heading towards a future where you can know anything you want, any time you want, anywhere you want. In this future, it’s not “what you know,” but rather “the quality of the questions you ask” that will be most important.

Want to know the average spectral color of women’s blouses on Madison Avenue this morning? Ask it, and your AI can gather the image data and provide you an accurate answer in seconds. If you’re in the fashion business, you can go on to ask whether any recent advertising campaign correlates with the change in blouse color.
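A toy version of the blouse-color query shows how simple the final aggregation step is once the hard part, detecting garments in street imagery, is done. The sample data here is entirely hypothetical:

```python
# Toy "average color" aggregation. In a real pipeline the RGB triples
# would come from garment detections in street imagery; the samples
# below are made-up values for illustration.

def average_color(rgb_samples):
    """Per-channel mean of a list of (R, G, B) triples."""
    n = len(rgb_samples)
    return tuple(sum(channel) / n for channel in zip(*rgb_samples))

samples = [(200, 40, 60), (180, 60, 90), (220, 20, 30)]  # hypothetical
print(average_color(samples))  # (200.0, 40.0, 60.0)
```

The follow-up question about advertising campaigns is then just a correlation between this daily average and campaign dates, which is why the quality of the question, not the computation, is the scarce resource.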

Such an abundance of data is what I call “radical transparency,” and it leads to a few interesting conclusions, which are probably the topic of a future blog…

First, that privacy may truly be a thing of the past. And second, that it is becoming harder and harder to do anything in secret without leaving a digital trail.

Join Me

Abundance Digital Online Community: I have created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my “onramp” for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Pratchaya.Lee /

Category: Transhumanism