Singularity HUB

News and Insights on Technology, Science, and the Future from Singularity University

How Fast Is AI Progressing? Stanford’s New Report Card for Artificial Intelligence

January 18, 2018 - 17:00

When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.

Sixty years later, these problems remain unsolved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.

The index takes a unique approach, aggregating data across many domains. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: tenfold increases in academic activity since 1996, explosive growth in startups focused on AI, and corresponding venture capital investment. The trouble with these metrics is that they measure AI hype as much as AI progress. The two might be correlated, but then again, they may not be.

The index also scrapes data from GitHub, the popular coding website that hosts more source code than any other site in the world. This makes it possible to track the amount of AI-related software people are creating, as well as interest levels in popular machine learning packages like TensorFlow and Keras. The index also tracks the sentiment of news articles that mention AI: surprisingly, given concerns about an AI apocalypse and an employment crisis, articles considered “positive” outweigh the “negative” by three to one.

But again, this could all just be a measure of AI enthusiasm in general.

No one would dispute that we’re in an age of considerable AI hype, but the history of AI is littered with booms and busts, growth spurts alternating with AI winters. So the AI Index also tracks the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, though systems that combine natural language processing and image recognition still can’t answer questions about images very well.) Speech recognition on phone calls is almost at parity with humans.

In other narrow fields, AIs are still catching up to humans. Translation might be good enough that you can usually get the gist of what’s being said, but still scores poorly on the BLEU metric for translation accuracy. The AI index even keeps track of how well the programs can do on the SAT test, so if you took it, you can compare your score to an AI’s.
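The BLEU metric mentioned above scores a machine translation by counting n-gram overlap with a human reference. As a rough illustration (a simplified sketch only: real BLEU uses up to 4-grams, smoothing, and multiple references), it can be computed like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Simplified BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty that punishes too-short candidates."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by how often it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return brevity * geo_mean

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
```

A perfect match scores 1.0; a translation with no word overlap scores 0. Even a fluent translation that phrases things differently from the reference scores well below 1, which is one reason BLEU is only a proxy for translation quality.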

Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate about the best method of assessing translation or natural language understanding. The Loebner prize, a simplified question-and-answer Turing Test, recently adopted Winograd Schema type questions, which rely on contextual understanding. AI has more difficulty with these.

Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even at a more complex game like Go. Those brave enough to publish timelines found AlphaGo’s success came faster than they expected, but does that necessarily mean we’re closer to general intelligence than they thought?

Here is where it’s harder to track progress.

We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.

“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”

Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.

The AI index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.

Michael Wooldridge, head of Computer Science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, and about “charlatans and snake-oil salesmen” exaggerating the progress that has been made.

A key concern that all the experts bring up is the ethics of artificial intelligence.

Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.

Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”

For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.

We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI index, as an annual collection of relevant information, is a good start.

Image Credit: Photobank gallery /

Category: Transhumanism

Why Gene Silencing Could Launch a New Class of Blockbuster Drugs

January 17, 2018 - 17:00

Long before CRISPR, there was gene silencing.

Ever since the Human Genome Project transcribed our genetic bible in 1997, scientists have dreamt of curing inherited diseases at the source. The first audacious idea? Shoot the messenger.

Unlike CRISPR, which latches onto a gene and snips it out, gene silencing leaves the genome intact. Rather, it targets another molecule—RNA—the “messenger” that shuttles DNA instructions to the cell’s protein-making factory.

The idea is extremely elegant in its simplicity: destroy RNA and nix mutant proteins before they’re ever made.

If realized, this new class of drugs could overhaul modern medicine. Over 85 percent of proteins in the body can’t be targeted with conventional chemical drugs. Gene silencing opens up an enormous portion of the genome to intervention. Gene-silencing drugs may well be the next great class of drugs—or even the future of medicine.

Yet what followed the initial wave of excitement was two decades of failed attempts and frustration.

Pharmaceutical companies pulled out. Investment funding dried up. Interest moved to CRISPR and other gene editing tools. For a while, it seemed as if gene silencing was slowly being relegated to the annals of forgotten scientific history. Until now.

Last month, Ionis Pharmaceuticals announced positive results for a groundbreaking gene silencing trial for Huntington’s disease. The drug, an antisense oligonucleotide (ASO), successfully lowered the levels of a toxic protein in the brains of 46 early-stage patients.

This is huge. For one, it’s a proof-of-concept that gene silencing works as a therapeutic strategy. For another, it shows that the drug can tunnel through the notorious blood-brain barrier—a tight-knit wall of cells that blocks off and protects the brain from toxic molecules in the body.

The trial, though small, once again piqued big pharma’s interest. Roche, the Swiss pharmaceutical giant, licensed the drug IONIS-HTTRX upon seeing its promising results, shelling out $45 million to push its development down the pipeline towards larger trials.

If replicated in a larger patient population, this would be a true breakthrough for Huntington’s disease.

“For the first time a drug has lowered the level of the toxic disease-causing protein in the nervous system, and the drug was safe and well-tolerated,” says Dr. Sarah Tabrizi, a professor at University College London (UCL) who led the phase one trial. “This is probably the most significant moment in the history of Huntington’s since the gene [was isolated].”

But perhaps more far-reaching is this: Huntington’s disease is only one of many neurodegenerative disorders of the brain that slowly kill off resident neurons. Parkinson’s disease, amyotrophic lateral sclerosis (Lou Gehrig’s disease), and even Alzheimer’s also fall into that category.

None of these devastating diseases currently have a cure. The success of IONIS-HTTRX now lights a way. As a gene-silencing ASO, the drug is based on a simple Lego-like principle: you can synthesize similar ASOs that block the production of other misshapen proteins that lead to degenerative brain disorders.

To Dr. John Hardy, a neuroscientist at UCL who wasn’t involved in the study, the future for gene silencing looks bright.

“I don’t want to overstate this too much,” he says, “but if it works for one, why can’t it work for a lot of them? I am very, very excited.”

A Roadmap for Tinkering With Genes

ASOs may be the wild new player within our current pharmacopeia, but when it comes to genetic regulation, they’re old-school.

Lessons learned from their development will no doubt help push other gene silencing technologies or even CRISPR towards clinical success.

Within every cell, the genes encoded in DNA are “transcribed” into copies of RNA molecules. These messengers float into the cell’s protein-making factory, carrying in their sequence (made out of the familiar A, G, C, and curious U) the coding information that directs which proteins are made.

Most of our current pills directly latch onto proteins to enhance or block their function. ASOs, on the other hand, work on RNA.

In essence, ASOs are short strands of DNA not unlike the genomic data present in your body. Scientists can engineer ASO sequences that latch onto a specific RNA molecule—say, the one that makes the mutant protein mHtt that leads to symptoms of Huntington’s disease.

ASOs are bad news for RNA. In some cases, the ASO jams the messenger from delivering its genetic message to the cell’s protein factory. In others, the drug calls a “scissor-like” protein into action, causing it to chop up the target RNA. No RNA, no mutant protein, no disease.
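The “latching” described above is just Watson-Crick base pairing: an ASO is the reverse complement of a stretch of its target RNA. A toy sketch of that design principle (the sequences here are made up for illustration, not the actual IONIS-HTTRX sequence):

```python
# RNA base -> complementary DNA base (in DNA, A pairs with T; RNA's U pairs with A).
RNA_TO_DNA_COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

def design_aso(rna_target):
    """Return the antisense DNA strand for an RNA stretch: complement each
    base, then reverse, since paired strands run in opposite orientations."""
    complement = "".join(RNA_TO_DNA_COMPLEMENT[base] for base in rna_target)
    return complement[::-1]

def binds(aso, rna_target):
    """Check that an ASO is a perfect reverse complement of the target."""
    return design_aso(rna_target) == aso

# Hypothetical 12-base stretch of a target mRNA:
target = "AUGGCACUUCGA"
aso = design_aso(target)
print(aso)  # TCGAAGTGCCAT
```

Real ASO design is far harder than this sketch suggests: candidate sequences must also be screened for off-target matches elsewhere in the transcriptome and chemically modified to survive in the body, which is exactly the delivery problem discussed below.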

While the biology is solid, a surprising roadblock tripped scientists up for decades: getting ASO molecules inside the cell and nucleus, which harbors the cell’s genetic material.

In short, “naked” ASOs are the body’s prime target. Blood factors chew them up. Kidneys spit them out. Even when they manage to tunnel into the right organ, they may get directed to the cell’s waste disposal system and disintegrate before they have a chance to do their magic.

What’s more, in some cases ASOs are confused with viruses by the body’s defense system, which leads to a double whammy: not only does the drug get eaten up, the body also generates an immune response, which in some cases could be deadly.

If these troubles sound familiar, you’re right: they’re remarkably similar to the ones that CRISPR has to tackle. But IONIS-HTTRX, in its success, offers some important tips to the newcomer.

For example, covering up “trouble spots” on the drug that stimulate the immune response helps calm the body down—a strategy CRISPR scientists are likely to adopt given that a recent study found signs of antibodies against the technology’s major component, Cas9.

A True Breakthrough for Huntington’s

IONIS-HTTRX signals a potential new age for tackling devastating brain disorders. But broad impacts aside, for patients with Huntington’s disease, the success couldn’t have come soon enough.

“This is a very exciting day for patients and families affected by this devastating genetic brain disorder. For the first time we have evidence that a treatment can decrease levels of the toxic disease-causing protein in patients,” says Dr. Blair Leavitt at the University of British Columbia, who oversaw the Canadian portion of the study.

A total of 49 early-stage Huntington’s patients across nine centers in the UK, Germany and Canada participated in the study. Each patient received four doses of the drug or placebo through a direct injection into their spine to help IONIS-HTTRX reach the brain.

Not only was the drug well-tolerated and safe, it lowered the levels of the mutant protein in the spinal fluid. The effect was dose-dependent: the higher the dose a patient received, the lower the level of toxic protein found.

“If I’d have been asked five years ago if this could work, I would have absolutely said no. The fact that it does work is really remarkable,” says Hardy.

That said, the drug isn’t necessarily a cure. A patient would have to receive a dose every three to four months for the rest of their life to keep symptoms at bay. While it’s too early to put a price sticker on the treatment, the cost could run to hundreds of thousands of dollars every year.

A major step going forward is to see whether the drug improves patients’ clinical symptoms. Roche is on it—the company is throwing in millions to test the drug in larger trials.

In the meantime, participants from the safety trial are given the option to continue drug treatment, and this extension trial is bound to give scientists more insights.

Although this isn’t the first gene-silencing drug that has been shown to be successful, it is the first drug that tackles an incurable disease in the brain.

It’s a glimpse into a profound future—one where broken brains can be mended and lives can be saved, long before the first signs of disease ever manage to take hold.

Image Credit: science photo /

Category: Transhumanism

How the Science of Decision-Making Will Help Us Make Better Strategic Choices

January 17, 2018 - 16:00

Neuroscientist Brie Linkenhoker believes that leaders must be better prepared for future strategic challenges by continually broadening their worldviews.

As the director of Worldview Stanford, Brie and her team produce multimedia content and immersive learning experiences to make academic research and insights accessible and usable by curious leaders. These future-focused topics are designed to help them understand the forces shaping the future.

Worldview Stanford has tackled such interdisciplinary topics as the power of minds, the science of decision-making, environmental risk and resilience, and trust and power in the age of big data.

We spoke with Brie about why understanding our biases is critical to making better decisions, particularly in a time of increasing change and complexity.

Lisa Kay Solomon: What is Worldview Stanford?

Brie Linkenhoker: Leaders and decision makers are trying to navigate this complex hairball of a planet that we live on and that requires keeping up on a lot of diverse topics across multiple fields of study and research. Universities like Stanford are where that new knowledge is being created, but it’s not getting out and used as readily as we would like, so that’s what we’re working on.

Worldview is designed to expand our individual and collective worldviews about important topics impacting our future. Your worldview is not a static thing; it’s constantly changing. We believe it should be informed by lots of different perspectives, different cultures, by knowledge from different domains and disciplines. This is more important now than ever.

At Worldview, we create learning experiences that are an amalgamation of all of those things.

LKS: One of your marquee programs is the Science of Decision Making. Can you tell us about that course and why it’s important?

BL: We tend to think about decision makers as being people in leadership positions, but every person who works in your organization, every member of your family, every member of the community is a decision maker. You have to decide what to buy, who to partner with, what government regulations to anticipate.

You have to think not just about your own decisions, but you have to anticipate how other people make decisions too. So, when we set out to create the Science of Decision Making, we wanted to help people improve their own decisions and be better able to predict, understand, anticipate the decisions of others.

“I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them.”

We realized that the only way to do that was to combine a lot of different perspectives, so we recruited experts from economics, psychology, neuroscience, philosophy, biology, and religion. We also brought in cutting-edge research on artificial intelligence and virtual reality and explored conversations about how technology is changing how we make decisions today and how it might support our decision-making in the future.

There’s no single set of answers. There are as many unanswered questions as there are answered questions.

LKS: One of the other things you explore in this course is the role of biases and heuristics. Can you explain the importance of both in decision-making?

BL: When I was a strategy consultant, executives would ask me, “How do I get rid of the biases in my decision-making or my organization’s decision-making?” And my response would be, “Good luck with that. It isn’t going to happen.”

As human beings we make, probably, thousands of decisions every single day. If we had to be actively thinking about each one of those decisions, we wouldn’t get out of our house in the morning, right?

We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So, we’ve evolved in the human brain a set of what we understand to be heuristics or rules of thumb.

And heuristics are great in, say, 95 percent of situations. It’s that five percent, or maybe even one percent, that they’re really not so great. That’s when we have to become aware of them because in some situations they can become biases.

For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest or where you’re making a decision about a strategic investment or even when you’re deciding who to hire.

Let’s take hiring for a moment.

How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely. That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them.

LKS: We seem to be at a time when the boundaries between different disciplines are starting to blend together. How is the advancement of neuroscience helping us become better leaders? What do you see happening next?

BL: Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book, The Undoing Project, which is the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did on the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics.

In the last 10 to 15 years, neuroeconomics has really taken off. Neuroeconomics is the combination of behavioral economics with neuroscience. In behavioral economics, they use economic games and economic choices that have numbers associated with them and have real-world application.

For example, they ask, “How much would you spend to buy A versus B?” Or, “If I offered you X dollars for this thing that you have, would you take it or would you say no?” So, it’s trying to look at human decision-making in a format that’s easy to understand and quantify within a laboratory setting.

Now you bring neuroscience into that. You can have people doing those same kinds of tasks—making those kinds of semi-real-world decisions—in a brain scanner, and we can now start to understand what’s going on in the brain while people are making decisions. You can ask questions like, “Can I look at the signals in someone’s brain and predict what decision they’re going to make?” That can help us build a model of decision-making.
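At its simplest, “predicting a decision from brain signals” is a classification problem: learn which patterns of neural activity precede which choices. A toy sketch with made-up feature vectors and a nearest-centroid classifier (not any real neuroimaging pipeline, which would use fMRI or EEG features and far richer models):

```python
# Toy sketch: decode a binary choice ("buy" vs "pass") from hypothetical
# two-number "neural activation" features using a nearest-centroid classifier.

def centroid(vectors):
    """Element-wise mean of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(samples):
    """samples: list of (features, label) pairs. Returns label -> centroid."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, features):
    """Pick the label whose centroid is nearest to the observed features."""
    return min(model, key=lambda label: distance_sq(model[label], features))

# Hypothetical training trials: (activation features, choice the subject made).
trials = [
    ([0.9, 0.1], "buy"), ([0.8, 0.2], "buy"),
    ([0.1, 0.9], "pass"), ([0.2, 0.8], "pass"),
]
model = train(trials)
print(predict(model, [0.85, 0.15]))  # buy
```

The point of the sketch is the framing, not the method: once choices and brain measurements are paired up trial by trial, standard machine learning tools can ask how much of the decision is readable in the signal before the choice is made.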

I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them. That’s very exciting for a neuroscientist.

Image Credit: Black Salmon /

Category: Transhumanism

If We Could Engineer Animals to Be as Smart as Humans—Should We?

January 16, 2018 - 17:00

Advances in neural implants and genetic engineering suggest that in the not-too-distant future we may be able to boost human intelligence. If that’s true, could we—and should we—bring our animal cousins along for the ride?

Human brain augmentation made headlines last year after several tech firms announced ambitious efforts to build neural implant technology. Duke University neuroscientist Mikhail Lebedev told me in July it could be decades before these devices have applications beyond the strictly medical.

But he said the technology, as well as other pharmacological and genetic engineering approaches, will almost certainly allow us to boost our mental capacities at some point in the next few decades.

Whether this kind of cognitive enhancement is a good idea or not, and how we should regulate it, are matters of heated debate among philosophers, futurists, and bioethicists, but for some it has raised the question of whether we could do the same for animals.

There’s already tantalizing evidence of the idea’s feasibility. As detailed in BBC Future, a group from MIT found that mice that were genetically engineered to express the human FOXP2 gene linked to learning and speech processing picked up maze routes faster. Another group at Wake Forest University studying Alzheimer’s found that neural implants could boost rhesus monkeys’ scores on intelligence tests.

The concept of “animal uplift” is most famously depicted in the Planet of the Apes movie series, whose planet-conquering protagonists are likely to put most people off the idea. But proponents are less pessimistic about the outcomes.

Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.

Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans.

Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research, causing huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.

The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.

In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no such single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some of which are well below our own capabilities and others which are superhuman. He uses the example of the squirrel, which can remember the precise location of thousands of acorns for years.

Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.

There are also fundamental barriers that may make it difficult to achieve human-level cognitive capabilities in animals, no matter how advanced brain augmentation technology gets. In 2013 Swedish researchers selectively bred small fish called guppies for bigger brains. This made them smarter, but growing the energy-intensive organ meant the guppies developed smaller guts and produced fewer offspring to compensate.

This highlights the fact that uplifting animals may require more than just changes to their brains, possibly a complete rewiring of their physiology that could prove far more technically challenging than human brain augmentation.

Our intelligence is intimately tied to our evolutionary history—our brains are bigger than other animals’; opposable thumbs allow us to use tools; our vocal cords make complex communication possible. No matter how much you augment a cow’s brain, it still couldn’t use a screwdriver or talk to you in English because it simply doesn’t have the machinery.

Finally, from a purely selfish point of view, even if it does become possible to create a level playing field between us and other animals, it may not be a smart move for humanity. There’s no reason to assume animals would be any more benevolent than we are, having evolved in the same ‘survival of the fittest’ crucible that we have. And given our already endless capacity to divide ourselves along national, religious, or ethnic lines, conflict between species seems inevitable.

We’re already likely to face considerable competition from smart machines in the coming decades if you believe the hype around AI. So maybe adding a few more intelligent species to the mix isn’t the best idea.

Image Credit: Ron Meijer /

Category: Transhumanism

Are Solar Roads the Highway of the Future, or a Road to Nowhere?

January 15, 2018 - 17:00

By some back-of-the-envelope estimates, around 0.2–0.5 percent of the world’s land surface is covered in roads. This proportion is projected to increase by 60 percent by 2050. It’s a staggering fraction of territory for one species to claim—and it’s for transportation alone. But what if roads doubled as power generators?

In China, one of the world’s first solar highways is taking shape. Could the solar panel superhighway be the power station of the future?

One of the advantages fossil fuels have over renewable sources of energy is energy density. The reason is fairly simple: fossil fuels are renewable energy, plus millions of years of storage time. Oil, coal, and natural gas are all reserves of energy that built up, ultimately, from plants (and the animals that ate those plants) storing the sun’s energy over millennia with photosynthesis. Naturally, then, fossil fuels are more energy-dense than harnessing the sun’s power in real time.

Simply put: fossil fuels require far less land for energy production than solar power.

One of the major obstacles in our efforts to use renewable energy is the amount of physical space needed to harness these energy sources. Our ever-increasing energy consumption makes this a real problem. Primary energy is the total energy input required by humans, from all sources, including fossil fuels and renewables. In 2016, we consumed 478 TW of primary energy, and that figure is growing every year.

For example, if you wanted to supply all of our energy using corn bioethanol, which has a power production density of around 0.2 watts per square meter (one of the worst among biofuels), you’d need over 2 × 10¹⁵ m² of land just to grow the corn. Unfortunately, this is more than four times the surface area of Earth.

Anti-renewables advocates often use this to crow that a renewable energy infrastructure is impossible. But this is an exaggeration; power density for solar panel arrays in deserts can reach 20 W/m² or even more, and suddenly you’re dealing with more manageable fractions of the Earth’s surface. Note also that the energy produced by solar panels is in the form of high-grade electricity.
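The arithmetic behind these comparisons is simple division, using the figures above (the 478 TW demand figure and Earth's total surface of roughly 5.1 × 10¹⁴ m²):

```python
EARTH_SURFACE_M2 = 5.1e14   # Earth's total surface area, land and ocean
PRIMARY_DEMAND_W = 478e12   # the 478 TW primary energy figure for 2016

def area_needed(power_density_w_per_m2):
    """Area (m^2) needed to meet total demand at a given power density."""
    return PRIMARY_DEMAND_W / power_density_w_per_m2

corn = area_needed(0.2)         # corn bioethanol, ~0.2 W/m^2
desert_solar = area_needed(20)  # desert solar arrays, ~20 W/m^2

print(corn / EARTH_SURFACE_M2)          # ~4.7 -- several Earths' worth of corn
print(desert_solar / EARTH_SURFACE_M2)  # ~0.047 -- under 5% of Earth's surface
```

The hundredfold jump in power density is what turns an impossible land requirement (more than four Earths) into a large-but-conceivable one (a few percent of the planet's surface).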

Since going fossil-fuel free means using electricity instead of burning fuels, which is often more efficient, we’d consume less primary energy in a fossil-fuel free world; fossil fuel power plants are not 100 percent efficient, and some lose up to 70 percent of their primary energy converting to electricity. But the massive scale of renewables that could replace them will require a lot of land.

So, it’s only natural that plenty of people have considered the road networks.

Given that the land is already covered, environmental disruption is lower. The power stations don’t suffer from the same remoteness problems you might have in the Sahara; for maintenance and repair, they’re easily accessible…by road. Combine it with LEDs, and you have efficient street-lighting, signage, and road markings. You can even dream that, with wireless power transmission, cars could one day be powered by the very roads they drive on.

The idea might sound far-fetched, but it has a lot of enthusiastic support by governments and companies.

The new Chinese effort, which will sandwich 2 km of solar panels between transparent asphalt and a layer of insulation beneath, is just the latest in a long line of attempts. A more outlandish proposition was shown in Solar Freakin’ Roadways, a viral video with 22 million views that spawned an Indiegogo campaign that raised over $2 million. The promises of Solar Roadways, an Idaho-based startup, bordered on irresponsible, but nevertheless captured the public imagination.

Soon enough, Scott Brusaw, the founder, had a prototype roadway built in his backyard, amid claims that a national network of solar roads could supply half of average US power consumption. But Solar Roadways would need more than a little momentum, and the skeptical voices began pouring in. David Biello, writing in Scientific American, noted that “[The glass for the roadways] must be tempered, self-cleaning, and capable of transmitting light to the PV below under trying conditions, among other characteristics—a type of glass that does not yet exist.”

The Chinese method, which uses a new transparent asphalt instead of glass, might surmount this materials problem, as its builders claim it can withstand 10 times more pressure than the normal asphalt variety. Building solar roads is not a one-man or one-country operation; prototypes have been constructed in the Netherlands with a cycle lane constructed by SolaRoad, and in France, with a project that claimed to be the first solar panel road. These projects have been generating power for some years already, so the idea is not in principle impossible. Unfortunately, there’s a big, dream-filled gap between “not impossible” and “practical.”

For a start, there’s the price. Take Scott Brusaw’s Solar Roadways: using his price estimates, the cost of replacing the US roadways with solar roads would come in at a cool $56 trillion—so the Indiegogo campaign won’t cut it (unless everyone on the planet ran several equally successful campaigns). There’s broad political agreement that the US needs infrastructure investment, but the additional investment required for solar roadways may be a hard sell. China’s solar roadway cost $458 per square meter, compared to Brusaw’s $746—an improvement, but likely not enough.

It’s true that any real solution to our energy crisis needs to be radical and wide-ranging. Cost estimates for similarly radical schemes like Saharan solar panels, or sucking carbon dioxide out of the atmosphere, can also run into trillions of dollars.

But alongside the cost, there’s still a very real question about whether this could work as a solution to the energy crisis. Roads are not always built in the optimal place to put solar panels, and they can’t be at the ideal angle of elevation for solar panels. If cleaning the dust off the Saharan solar panels is a challenge, keeping the roads functioning as roads and power generators at the same time could be a maintenance nightmare. It’s hard to see how placing the panels alongside the roadway doesn’t work out cheaper and better.

Let’s dig into the prototypes, too.

The prototype road in the Netherlands is reported to work “better than expected” by generating “70 kilowatt hours per square meter per year,” according to the spokesman for SolaRoad, comparable to the upper limits of expected production. But 70 kWh is not a huge amount. If you want to use the road to charge your car, 1 square meter gets you around 300 miles a year in a Tesla; given that the average American drives 13,476 miles a year, it’s a drop in the ocean.
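The arithmetic behind that drop-in-the-ocean claim is easy to check. In the sketch below, the 0.24 kWh-per-mile Tesla consumption figure is an illustrative assumption, not a number from the article:

```python
# Annual driving range funded by one square meter of solar road.
yield_kwh_per_m2 = 70          # SolaRoad's reported annual yield per m^2
tesla_kwh_per_mile = 0.24      # assumed Tesla energy use per mile
avg_us_miles_per_year = 13476  # average annual US mileage quoted above

miles_per_m2 = yield_kwh_per_m2 / tesla_kwh_per_mile
m2_per_driver = avg_us_miles_per_year / miles_per_m2

print(round(miles_per_m2))   # ~292 miles per square meter per year
print(round(m2_per_driver))  # ~46 m^2 of road to cover one average driver
```

So a single driver would need the output of roughly 46 square meters of road; the panels beside one driveway won’t do it.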

What about the power density problem? Scaling up the Netherlands prototype, it would have a power density of 8 watts per square meter. If you spent that $56 trillion on solar roadways, you’d cover around 7.5 × 10^10 m² with panels and generate 600 GW of electrical power. Not bad—that’s around the electricity consumed by the US today, although total energy consumption is higher. But for $56 trillion, you could do better. And the prospects of actually funding solar roads, when more cost-effective solar and renewables have not been deployed on anything like this scale, are incredibly slim.
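The scale-up numbers above can be reproduced in a few lines, using the article’s own inputs (Brusaw’s $746 per square meter and the roughly 8 W/m² implied by the Netherlands prototype):

```python
# Reproducing the article's back-of-envelope scale-up.
budget_usd = 56e12            # estimated cost of replacing US roadways
cost_per_m2 = 746             # Brusaw's estimated price per square meter
power_density_w_per_m2 = 8    # implied by the Netherlands prototype

area_m2 = budget_usd / cost_per_m2
power_gw = area_m2 * power_density_w_per_m2 / 1e9

print(f"area: {area_m2:.1e} m^2")   # roughly 7.5e10 m^2
print(f"power: {power_gw:.0f} GW")  # roughly 600 GW
```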

The pursuit of the solar roadway in China is symbolic of that country’s newfound commitment to innovative renewable energy solutions. Who knows—one day, with solution-processed solar panels, solar roads may just get cheap and efficient enough to become more realistic. At worst, this kind of project is a distraction from better solutions. At best, the long and winding road is little more than a symbol of how much further we have to go.

Image Credit: Krivosheev Vitaly /

Category: Transhumanism

This Neural Network Built by Japanese Researchers Can ‘Read Minds’

14 January, 2018 - 17:00

It already seems a little like computers can read our minds; features like Google’s auto-complete, Facebook’s friend suggestions, and the targeted ads that appear while you’re browsing the web sometimes make you wonder, “How did they know?” For better or worse, it seems we’re slowly but surely moving in the direction of computers reading our minds for real, and a new study from researchers in Kyoto, Japan is an unequivocal step in that direction.

A team from Kyoto University used a deep neural network to read and interpret people’s thoughts. Sound crazy? This actually isn’t the first time it’s been done. The difference is that previous methods—and results—were simpler, deconstructing images based on their pixels and basic shapes. The new technique, dubbed “deep image reconstruction,” moves beyond binary pixels, giving researchers the ability to decode images that have multiple layers of color and structure.

“Our brain processes visual information by hierarchically extracting different levels of features or components of different complexities,” said Yukiyasu Kamitani, one of the scientists involved in the study. “These neural networks or AI models can be used as a proxy for the hierarchical structure of the human brain.”

The study lasted 10 months and consisted of three people viewing images of three different categories: natural phenomena (such as animals or people), artificial geometric shapes, and letters of the alphabet for varying lengths of time.

Reconstructions utilizing the DGN. The three reconstructed images correspond to reconstructions from three subjects.

The viewers’ brain activity was measured either while they were looking at the images or afterward. To measure brain activity after people had viewed the images, they were simply asked to think about the images they’d been shown.

Recorded activity was then fed into a neural network that “decoded” the data and used it to generate its own interpretations of people’s thoughts.

In humans (and, in fact, all mammals) the visual cortex is located at the back of the brain, in the occipital lobe, above the cerebellum. Activity in the visual cortex was measured using functional magnetic resonance imaging (fMRI), and the recorded signals were then translated into hierarchical features of a deep neural network.

Starting from a random image, the network repeatedly optimizes that image’s pixel values until the features the neural network extracts from the image become similar to the features decoded from brain activity.
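That optimization loop can be illustrated with a toy example. Here a simple linear map stands in for the deep network’s feature extractor; the real study optimized against deep network features, so everything below is a simplified sketch, not the study’s method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_features = 64, 16

# Stand-ins: a linear "feature extractor" and features decoded from fMRI.
W = rng.normal(size=(n_features, n_pixels))
decoded_features = rng.normal(size=n_features)

# Start from a random image and nudge its pixels so that its features
# approach the decoded features (gradient descent on the squared error).
img = rng.normal(size=n_pixels)
lr = 0.002
for _ in range(2000):
    residual = W @ img - decoded_features
    img -= lr * (2 * W.T @ residual)

print(np.linalg.norm(W @ img - decoded_features))  # residual near zero
```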

Importantly, the team’s model was trained using only natural images (of people or nature), but it was able to reconstruct artificial shapes. This means the model truly ‘generated’ images based on brain activity, as opposed to matching that activity to existing examples.

Not surprisingly, the model did have a harder time trying to decode brain activity when people were asked to remember images, as compared to activity when directly viewing images. Our brains can’t remember every detail of an image we saw, so our recollections tend to be a bit fuzzy.

The reconstructed images from the study retain some resemblance to the original images viewed by participants, but mostly, they look like minimally-detailed blobs. However, the technology’s accuracy is only going to improve, and its applications will increase accordingly.

Imagine “instant art,” where you could produce art just by picturing it in your head. Or what if an AI could record your brain activity as you’re asleep and dreaming, then re-create your dreams in order to analyze them? Last year, completely paralyzed patients were able to communicate with their families for the first time using a brain-computer interface.

There are countless creative and significant ways to use a model like the one in the Kyoto study. But brain-machine interfaces are also one of those technologies we can imagine having eerie, Black Mirror-esque consequences if not handled wisely. Neuroethicists have already outlined four new human rights we would need to implement to keep mind-reading technology from going sorely wrong.

Despite this, the Japanese team certainly isn’t alone in its efforts to advance mind-reading AI. Elon Musk famously founded Neuralink with the purpose of building brain-machine interfaces to connect people and computers. Kernel is working on making chips that can read and write neural code.

Whether it’s to recreate images, mine our deep subconscious, or give us entirely new capabilities, though, it’s in our best interest that mind-reading technology proceeds with caution.

Image Credit: igor kisselev /

Category: Transhumanism

This Week’s Awesome Stories From Around the Web (Through January 13)

13 January, 2018 - 17:00

CES 2018: Phantom Auto Demonstrates First Remote-Controlled Car on Public Roads
Mark Harris | IEEE Spectrum
“But Shukman is not sitting next to me in the driver’s seat of Phantom’s Lincoln MKZ, and he hasn’t felt a drop of rain in weeks. Shukman is remotely controlling the car from Mountain View, Calif., more than 500 miles away. In the first such demonstration on public roads, Phantom Auto hopes to convince skeptical car makers and wary regulators that the best backup for today’s experimental autonomous vehicles (AVs) is nothing more or less than an old-fashioned human driver.”


How Russia Says It Swatted Down a Drone Swarm in Syria
David Axe | Motherboard
“On the night of January 5, a swarm of explosives-laden small drones, apparently controlled by Syrian rebels, attacked two Russian bases in western Syria, the Kremlin confirmed on Thursday… But the threat from small, cheap, numerous, and armed unmanned aerial vehicles isn’t going away. The next swarm could be bigger and more dangerous.”


Gene Therapy Could Make Cancer Care More Unequal, and This Map Shows Why
Emily Mullin | MIT Technology Review
“Not only are they hugely expensive—Kymriah is $475,000 and Yescarta is $373,000 for a one-time treatment—but for now you can get them only in certain urban areas. We mapped those locations below. As you can see, some of the biggest gaps are in rural states, where cancer already kills more people than it does in cities. That’s a problem because both therapies are given as a last resort when traditional cancer drugs have failed.”


Facebook Overhauls News Feed to Focus on What Friends and Family Share
Mike Isaac | The New York Times
“The shift is the most significant overhaul in years to Facebook’s News Feed, the cascading screen of content that people see when they log into the social network. Over the next few weeks, users will begin seeing fewer viral videos and news articles shared by media companies. Instead, Facebook will highlight posts that friends have interacted with—for example, a photo of your dog or a status update that many of them have commented on or liked.”


Spaceflight Startup Rocket Lab Will Try Again This Month to Launch Its Small Rocket to Orbit
Loren Grush | The Verge
“This will be Rocket Lab’s second attempt at this test flight called ‘Still Testing.’ The original plan was to launch ‘Still Testing’ in December during a 10-day launch window, but the mission was delayed multiple times because of less-than-ideal weather and technical glitches. The company got close to launching on December 12th, getting all the way to a final countdown. However, the Electron’s engines ignited and then quickly shut off after computers detected that the rocket’s propellant was getting too warm. As a result, the rocket released a short burst of exhaust plumes but remained on the launch pad.”

Image Credit: Levchenko Ilia /

Category: Transhumanism

There Are Over 1,000 Alternatives to Bitcoin You’ve Never Heard Of

12 January, 2018 - 17:00

Bitcoin gets all the attention, especially since it recently rocketed towards $20,000. But many other cryptocurrencies exist, and more are being created at an accelerating rate. A quick look at a cryptocurrency market tracker shows over 1,400 alternatives to Bitcoin (as of this writing), with a combined value climbing towards $1 trillion. So if Bitcoin is so amazing, why do these alternatives exist? What makes them different?

The easy answer is that many are simply copycats trying to piggyback on Bitcoin’s success. However, a handful have made key improvements on some of Bitcoin’s drawbacks, while others are fundamentally different, allowing them to perform different functions. The far more complicated—and fascinating—answer lies in the nitty-gritty details of blockchain, encryption, and mining.

To understand these other cryptocurrencies, Bitcoin’s shortcomings need to first be understood, as the other currencies aim to pick up where Bitcoin falls short.

The Problems With Bitcoin

Bitcoin’s block size is only 1 MB, drastically limiting the number of transactions each block can hold. With new blocks added at a pre-programmed rate of roughly one every 10 minutes, this gives a theoretical maximum of around 7 transactions per second. Payment networks like Visa and PayPal handle far more transactions per second, so Bitcoin can’t compete, and with Bitcoin’s popularity soaring, the problem is only going to get worse. As of now, around 200,000 transactions are backlogged.
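The 7-transactions-per-second ceiling follows directly from the block size and the block interval. The ~250-byte average transaction size used below is a common estimating assumption, not a figure from the article:

```python
# Estimating Bitcoin's theoretical throughput ceiling.
block_size_bytes = 1_000_000  # 1 MB block size limit
avg_tx_bytes = 250            # assumed average transaction size
block_interval_s = 600        # one block roughly every 10 minutes

tx_per_block = block_size_bytes // avg_tx_bytes
tps = tx_per_block / block_interval_s

print(tx_per_block)   # 4000 transactions per block
print(round(tps, 1))  # 6.7 transactions per second
```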

Bitcoin’s scalability problem is also likely to make mining more difficult and increase mining fees. Adding a block to the blockchain requires an enormous amount of computation to find a hash below the network’s target using the SHA-256 cryptographic hash function, for which the miner is rewarded with a predetermined, geometrically decreasing number of Bitcoins, currently 12.5 per block.
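The geometrically decreasing reward schedule is simple to state: the block subsidy started at 50 BTC and halves every 210,000 blocks. A minimal sketch:

```python
def block_reward(height, initial=50.0, halving_interval=210_000):
    """Block subsidy in BTC at a given block height; the subsidy halves
    every 210,000 blocks, i.e. roughly every four years."""
    return initial / (2 ** (height // halving_interval))

print(block_reward(0))        # 50.0 (2009)
print(block_reward(420_000))  # 12.5 (the reward at the time of writing)
```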

However, the network raises the mining difficulty as more computing power joins it, so mining becomes harder even as the reward shrinks. To help offset this, miners can charge fees, and with profits becoming harder to come by, those fees are only going to go up.

Because of the computing power needed to process each block, it has been estimated that each transaction requires enough electricity to power the average home for nine days. If this is true, and if Bitcoin continues to grow at the same rate, some have predicted its energy consumption will reach an unsustainable level within a decade.

Furthermore, Bitcoin’s blockchain has only one purpose: to handle Bitcoin. Given the complexity of the system, it could be doing much more. Also, Bitcoin is not entirely anonymous. For any given Bitcoin address, the transactions and the balance can be seen, as they are public and stored permanently on the network. The details of the owner can be revealed during a purchase.


Ignoring the copycats, several Bitcoin alternatives—or altcoins—have gained popularity. Some of these are a result of changing the Bitcoin code, which is open-source, effectively creating a hard fork in the blockchain and a new cryptocurrency. Others have their own native blockchains.

Hard forks include Bitcoin Cash, Bitcoin Classic, and Bitcoin XT, all three of which increased the block size. XT changed the block size to 8 MB, allowing for up to 24 transactions per second, whereas Classic only increased it to 2 MB. While those two are now defunct due to a lack of community support, Bitcoin Cash is still going. Its major change was to do away with Segregated Witness, which reduces the size of a transaction by removing the signature data, allowing for more transactions per block.

Another Bitcoin derivative is Litecoin. The major changes from Bitcoin are that the creator, Charlie Lee, reduced the block generation time from 10 minutes to 2.5, and instead of using SHA-256, it uses scrypt, which is considered by some to be a more efficient hashing algorithm.

As far as native blockchains go, there are a lot of altcoins.

One of the most popular—at least by market capitalization—is Ethereum. The key element that distinguishes Ethereum from Bitcoin is that its language is Turing-complete, meaning it can be programmed for just about anything, such as smart contracts, not just its currency, Ether. For example, the United Nations has adopted it to transfer vouchers for food aid to refugees, keep track of carbon outputs, etc.

Monero tackles Bitcoin’s privacy issue. It uses ring signatures, which allow information about the sender to hide among other pieces of data, effectively creating stealth addresses. This makes the Monero blockchain opaque, not transparent like other blockchains. However, programmers have included a “spend” key and a “view” key, which allow for optional transparency if agreed upon for specific transactions.

Dash has avoided Bitcoin’s logjam by splitting the network into two tiers. The first handles block generation done by miners, much like Bitcoin, but the second tier contains masternodes. These handle the new services of PrivateSend and InstantSend, adding a level of privacy and speed not seen in other blockchains. These transactions are confirmed by a consensus of the masternodes rather than through the compute- and time-intensive work of block generation.

IOTA just did away with blocks altogether. It stands for the Internet of Things Application and depends on users to validate transactions instead of relying on miners and their souped-up computers. As a user conducts a transaction, he/she is required to validate two previous transactions, so the rate of validation will always scale with the amount of transactions.
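The "tangle" idea can be sketched in a few lines: every new transaction validates two earlier ones, forming a directed acyclic graph rather than a chain of blocks. This toy model picks the two at random from all earlier transactions, whereas IOTA’s actual tip-selection algorithm biases the choice toward not-yet-validated transactions:

```python
import random

random.seed(0)
approvals = {"genesis": []}  # transaction -> the earlier txs it validates

def add_transaction(tx_id):
    """Each new transaction validates (up to) two earlier transactions."""
    earlier = sorted(approvals)
    approvals[tx_id] = random.sample(earlier, min(2, len(earlier)))

for i in range(6):
    add_transaction(f"tx{i}")

# "Tips" are transactions that no one has validated yet.
validated = {t for chosen in approvals.values() for t in chosen}
tips = set(approvals) - validated
print(tips)  # the newest transaction is always among the tips
```

Because validation work arrives with every transaction, throughput scales with usage instead of being capped by a block size.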

On the other hand, Ripple, which is now one of the top cryptocurrencies by market capitalization, has taken a completely different approach. While other cryptocurrencies are designed to replace the traditional banking system, Ripple attempts to strengthen it by facilitating bank transfers. That is, bank transfers depend on systems like SWIFT, which is expensive and time-consuming, but Ripple’s blockchain can perform the same functions far more efficiently. Over 100 major banking institutions are signed up to implement it.

Bitcoin isn’t going anywhere anytime soon, but budding crypto-enthusiasts should give heed to these competitors and many others, as they may one day replace it as the dominant cryptocurrency.

Image Credit: lmstockwork /

Category: Transhumanism

Low-Cost Soft Robot Muscles Can Lift 200 Times Their Weight and Self-Heal

11 January, 2018 - 17:20

Jerky mechanical robots are staples of science fiction, but to seamlessly integrate into everyday life they’ll need the precise yet powerful motor control of humans. Now scientists have created a new class of artificial muscles that could soon make that a reality.

The advance is the latest breakthrough in the field of soft robotics. Scientists are increasingly designing robots using soft materials that more closely resemble biological systems, which can be more adaptable and better suited to working in close proximity to humans.

One of the main challenges has been creating soft components that match the power and control of the rigid actuators that drive mechanical robots—things like motors and pistons. Now researchers at the University of Colorado Boulder have built a series of low-cost artificial muscles—as little as 10 cents per device—using soft plastic pouches filled with electrically insulating liquids that contract with the force and speed of mammalian skeletal muscles when a voltage is applied to them.

Three different designs of the so-called hydraulically amplified self-healing electrostatic (HASEL) actuators were detailed in two papers in the journals Science and Science Robotics last week. They could carry out a variety of tasks, from gently picking up delicate objects like eggs or raspberries to lifting objects many times their own weight, such as a gallon of water, at rapid repetition rates.

“We draw our inspiration from the astonishing capabilities of biological muscle,” Christoph Keplinger, an assistant professor at CU Boulder and senior author of both papers, said in a press release. “Just like biological muscle, HASEL actuators can reproduce the adaptability of an octopus arm, the speed of a hummingbird and the strength of an elephant.”

The artificial muscles work by applying a voltage to hydrogel electrodes on either side of pouches filled with liquid insulators, which can be as simple as canola oil. This creates an attraction between the two electrodes, pulling them together and displacing the liquid. This causes a change of shape that can push or pull levers, arms or any other articulated component.

The design is essentially a synthesis of two leading approaches to actuating soft robots. Pneumatic and hydraulic actuators that pump fluids around have been popular due to their high forces, easy fabrication and ability to mimic a variety of natural motions. But they tend to be bulky and relatively slow.

Dielectric elastomer actuators apply an electric field across a solid insulating layer to make it flex. These can mimic the responsiveness of biological muscle. But they are not very versatile and can also fail catastrophically, because the high voltages required can cause a bolt of electricity to blast through the insulator, destroying it. The likelihood of this happening increases in line with the size of their electrodes, which makes it hard to scale them up. By combining the two approaches, researchers get the best of both worlds, with the power, versatility and easy fabrication of a fluid-based system and the responsiveness of electrically-powered actuators.

One of the designs holds particular promise for robotics applications, as it behaves a lot like biological muscle. The so-called Peano-HASEL actuators are made up of multiple rectangular pouches connected in series, which allows them to contract linearly, just like real muscle. They can lift more than 200 times their weight, but being electrically powered, they exceed the flexing speed of human muscle.

As the name suggests, the HASEL actuators are also self-healing. They are still prone to the same kind of electrical damage as dielectric elastomer actuators, but the liquid insulator is able to immediately self-heal by redistributing itself and regaining its insulating properties.

The muscles can even monitor the amount of strain they’re under to provide the same kind of feedback biological systems would. The muscle’s capacitance—its ability to store an electric charge—changes as the device stretches, which makes it possible to power the arm while simultaneously measuring what position it’s in.
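A toy model makes the self-sensing idea concrete: treat the actuator as a parallel-plate capacitor whose gap shrinks as it contracts, so a capacitance reading can be inverted back into strain. All of the numbers below are illustrative assumptions, not values from the papers:

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # assumed relative permittivity of the liquid dielectric
AREA = 1e-4        # assumed electrode area, m^2 (1 cm^2)
GAP0 = 1e-3        # assumed resting electrode gap, m

def capacitance(strain):
    """Capacitance after the gap has contracted by `strain` (0..1)."""
    return EPS0 * EPS_R * AREA / (GAP0 * (1 - strain))

def strain_from_capacitance(c):
    """Invert the model: recover strain from a capacitance reading."""
    return 1 - EPS0 * EPS_R * AREA / (c * GAP0)

reading = capacitance(0.2)
print(round(strain_from_capacitance(reading), 3))  # 0.2
```

The real devices are geometrically more complex, but the principle is the same: measure capacitance while driving the actuator, and you get position feedback for free.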

The researchers say this could imbue robots with a similar sense of proprioception or body-awareness to that found in plants and animals. “Self-sensing allows for the development of closed-loop feedback controllers to design highly advanced and precise robots for diverse applications,” Shane Mitchell, a PhD student in Keplinger’s lab and an author on both papers, said in an email.

The researchers say the high voltages required are an ongoing challenge, though they’ve already designed devices in the lab that use a fifth of the voltage of those featured in the recent papers.

In most of their demonstrations, these soft actuators were being used to power rigid arms and levers, pointing to the fact that future robots are likely to combine both rigid and soft components, much like animals do. The potential applications for the technology range from more realistic prosthetics to much more dextrous robots that can work easily alongside humans.

It will take some work before these devices appear in commercial robots. But the combination of high-performance with simple and inexpensive fabrication methods mean other researchers are likely to jump in, so innovation could be rapid.

Image Credit: Keplinger Research Group/University of Colorado

Category: Transhumanism

The Future of Cancer Treatment Is Personalized and Collaborative

11 January, 2018 - 16:00

In an interview at Singularity University’s Exponential Medicine in San Diego, Richard Wender, chief cancer control officer at the American Cancer Society, discussed how technology has changed cancer care and treatment in recent years.

Just a few years ago, microscopes were the primary tool used in cancer diagnoses, but we’ve come a long way since then.

“We still look at a microscope, we still look at what organ the cancer started in,” Wender said. “But increasingly we’re looking at the molecular signature. It’s not just the genomics, and it’s not just the genes. It’s also the cellular environment around that cancer. We’re now targeting our therapies to the mutations that are found in that particular cancer.”

Cancer treatments in the past have been largely reactionary, but they don’t need to be. Cancer is fundamentally a disease of the genome, which means treatment can be preventative. This is one reason why newer cancer treatment techniques search for actionable targets in specific genes before the cancer develops.

When asked how artificial intelligence and machine learning technologies are reshaping clinical trials, Wender acknowledged that how clinical trials have been run in the past won’t work moving forward.

“Our traditional ways of learning about cancer were by finding a particular cancer type and conducting a long clinical trial that took a number of years enrolling patients from around the country. That is not how we’re going to learn to treat individual patients in the future.”

Instead, Wender emphasized the need for gathering as much data as possible, and from as many individual patients as possible. This data should encompass clinical, pathological, and molecular data and should be gathered from a patient all the way through their final outcome. “Literally every person becomes a clinical trial of one,” Wender said.

For the best cancer treatment and diagnostics, Wender says the answer is to make the process collaborative by pulling in resources from organizations and companies that are both established and emerging.

It’s no surprise to hear that the best solutions come from pairing uncommon partners to innovate.

Image Credit: jovan vitanovski /

Category: Transhumanism

Darker Still: Black Mirror’s New Season Envisions Neurotech Gone Wrong

10 January, 2018 - 17:00

The key difference between science fiction and fantasy is that science fiction is grounded in scientific fact, which makes its scenarios at least plausible, while fantasy is not. This is what makes Black Mirror both an entertaining and terrifying work of science fiction. Created by Charlie Brooker, the anthology series tells cautionary tales of emerging technology that could one day be an integral part of our everyday lives.

While watching the often alarming episodes, one can’t help but recognize the eerie similarities to some of the tech tools that are already abundant in our lives today. In fact, many previous Black Mirror predictions are already becoming reality.

The latest season of Black Mirror was arguably darker than ever. This time, Brooker seemed to focus on the ethical implications of one particular area: neurotechnology.

Emerging Neurotechnology

Warning: The remainder of this article may contain spoilers from Season 4 of Black Mirror.

Most of the storylines from season four revolve around neurotechnology and brain-machine interfaces. They are based in a world where people have the power to upload their consciousness onto machines, have fully immersive experiences in virtual reality, merge their minds with other minds, record others’ memories, and even track what others are thinking, feeling, and doing. 

How can all this ever be possible? Well, these capabilities are already being developed by pioneers and researchers globally. Early last year, Elon Musk unveiled Neuralink, a company whose goal is to merge the human mind with AI through a neural lace. We’ve already connected two brains via the internet, allowing one brain to communicate with another. Various research teams have been able to develop mechanisms for “reading minds” or reconstructing memories of individuals via devices. The list goes on.

With many of the technologies we see in Black Mirror it’s not a question of if, but when. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to upload our consciousness onto the cloud via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” While other experts continue to challenge Kurzweil on the exact year we’ll accomplish this feat, with the current exponential growth of our technological capabilities, we’re on track to get there eventually.

Ethical Questions

As always, technology is only half the conversation. Equally fascinating are the many ethical and moral questions this topic raises.

For instance, with the increasing convergence of artificial intelligence and virtual reality, we have to ask ourselves whether our morality from the physical world transfers equally into the virtual world. The first episode of season four, USS Callister, tells the story of a VR pioneer, Robert Daly, who creates breakthrough AI and VR to satisfy his personal frustrations and sexual urges. He uses the DNA of his coworkers (and their children) to re-create them digitally in his virtual world, which he escapes to in order to torture them, while their real-world counterparts remain unaware.

Audiences are left asking themselves: should what happens in the digital world be considered any less “real” than the physical world? How do we know if the individuals in the virtual world (who are ultimately based on algorithms) have true feelings or sentiments? Have they been developed to exhibit characteristics associated with suffering, or can they really feel suffering? Fascinatingly, these questions point to the hard problem of consciousness—the question of if, why, and how a given physical process generates the specific experience it does—which remains a major mystery in neuroscience.

Towards the end of USS Callister, the hostages of Daly’s virtual world attempt to escape through suicide, committing an act that will delete the code that allows them to exist. This raises yet another mind-boggling ethical question: if we “delete” code that constitutes a digital being, should that be considered murder (or suicide, in this case)? Why shouldn’t it? When we murder someone we are, in essence, taking away their capacity to live and to be, without their consent. By unplugging a self-aware AI, wouldn’t we be violating its basic right to live in the same way? Does AI, as code, even have rights?

Brain implants can also have a radical impact on our self-identity and how we define the word “I”. In the episode Black Museum, instead of witnessing just one horror, we get a series of scares in little segments. One of those segments tells the story of a father who attempts to reincarnate the mother of his child by uploading her consciousness into his mind and allowing her to live in his head (essentially giving him multiple personality disorder). In this way, she can experience special moments with their son.

With “no privacy for him, and no agency for her” the good intention slowly goes very wrong. This story raises a critical question: should we be allowed to upload consciousness into limited bodies? Even more, if we are to upload our minds into “the cloud,” at what point do we lose our individuality to become one collective being?

These questions can form the basis of hours of debate, but we’re just getting started. There are no right or wrong answers with many of these moral dilemmas, but we need to start having such discussions.

The Downside of Dystopian Sci-Fi 

Like last season’s San Junipero, one episode of the series, Hang the DJ, had an uplifting ending. Yet the overwhelming majority of the stories in Black Mirror continue to focus on the darkest side of human nature, feeding into the pre-existing paranoia of the general public. There is certainly some value in this; it’s important to be aware of the dangers of technology. After all, what better way to explore these dangers before they occur than through speculative fiction?

A big takeaway from every tale told in the series is that the greatest threat to humanity does not come from technology, but from ourselves. Technology itself is not inherently good or evil; it all comes down to how we choose to use it as a society. So for those of you who are techno-paranoid, beware, for it’s not the technology you should fear, but the humans who get their hands on it.

While we can paint negative visions for the future, though, it is also important to paint positive ones. The kind of visions we set for ourselves have the power to inspire and motivate generations. Many people are inherently pessimistic when thinking about the future, and that pessimism in turn can shape their contributions to humanity.

While utopia may not exist, the future of our species could and should be one of solving global challenges, abundance, prosperity, liberation, and cosmic transcendence. Now that would be a thrilling episode to watch.

Image Credit: Billion Photos /

Category: Transhumanism

Gene Therapy Had a Breakthrough 2017—2018 May Be Even Better

9 January, 2018 - 17:00

Gene therapy had a hell of a 2017. After decades of promises but failed deliveries, last year saw the field hitting a series of astonishing home runs.

The concept of gene therapy is elegant: like bugs in computer code, faulty letters in the human genome can be edited out and replaced with healthy ones.

But despite early enthusiasm, the field has suffered one setback after another. At the turn of the century, the death of an 18-year-old patient with inherited liver disease after an experimental gene therapy treatment put the entire field into a deep freeze.

But no more. Last year marked the birth of gene therapy 2.0, in which the experimental dream finally became a clinical reality. Here’s how the tech grew into its explosive potential—and a sneak peek at what’s on the horizon for 2018.

1. Bad Blood, Meet CAR-T

It sounds like magic: you harvest a patient’s own immune cells, dose them with an injection of extra genetic material, and turn them into living cancer-hunting machines.

But in 2017, the FDA approved a double whammy of CAR-T immunotherapies. The first, green-lighted in August, helps kids and young adults battle an especially nasty form of leukemia called B-cell acute lymphoblastic leukemia. Two months later, a therapy for adults with non-Hodgkin lymphoma hit the scene.

Together, these approvals marked the long-anticipated debut of gene therapy in the US market. Europe had led the charge, approving Glybera, a gene therapy that reduces fatty acid buildup in the bloodstream, back in 2012.

The historic nod of confidence for CAR-T has already sparked widespread interest among academics and drug companies alike at finding new targets for the upgraded immune cells (the “T” in “CAR-T” stands for T-cell, a type of immune cell). CAR-T is especially exciting for the cancer field because it helps people who don’t respond to other classic treatments, such as chemotherapy.

Already in the works are treatments that target multiple myeloma, which causes multiple tumors in the bone or soft tissue, and glioblastoma, an aggressive brain tumor for which there is no cure.

But the technology’s potential is hardly limited to cancer. Last year, a preliminary study in two monkeys showed that genetically engineered stem cells can suppress and even eradicate HIV infections. The study, though small, tantalizingly suggests a whole new way to battle HIV after three decades of fruitless searching for a vaccine. With multiple CAR-T therapies in the pipeline, 2018 may very likely welcome new members onto the gene therapy scene.

2. A new hope for genetic diseases

Just before Christmas, the FDA dropped another bombshell with its approval of Luxturna, the first gene therapy that targets mutated DNA in a specific gene.

Made by Spark Therapeutics, it offers a one-time solution to patients with a rare form of inherited blindness. The company justified its hefty price tag—$850,000, the most expensive on the US market—with the promise of a rebate if certain vision thresholds aren’t met 30 months after the treatment.

Unlike CAR-T, Luxturna works in a classic “find-and-replace” manner. The treatment uses a virus to insert a functional piece of DNA into the eyes to override a defective gene. In this case, it’s the RPE65 gene, and the virus is directly injected into the eyes.

Although marketed as a one-time solution, Luxturna isn’t technically a cure. The therapy has been shown to delay or halt progressive blindness in patients, but the lack of long-term follow-up data makes it difficult to say whether the benefits can last decades.

This therapeutic “shelf-life” problem isn’t limited to Luxturna.

In 2017, a 44-year-old man became the first person to receive a gene-editing therapy that directly modifies his cells. Here, the therapy used an older gene-editing tool called zinc finger nucleases, which corrected a genetic error that throws the body’s metabolism out of whack and slowly destroys its cells. While the therapy worked initially, the benefits didn’t last.

Going forward, scientists will have to figure out a way to make the treatment stick. One potential solution is to engineer better carriers, so that components in those carriers will keep spurring the body to express the healthy gene.

That said, even temporary relief may save lives.

A study from last November showed that all fifteen children with spinal muscular atrophy who were treated with gene therapy—a single injection into the vein—survived the disease. Scientists weren’t just blown away by the dramatic results. The study introduced a new virus that could carry the payload safely and directly into the brain through the bloodstream—something long sought-after.

In another dramatic case published a few days later, scientists helped a seven-year-old boy regain most of his skin, which had peeled off due to an inherited disease called epidermolysis bullosa (EB).

The team replaced a defective gene with a healthy copy in the boy’s skin stem cells, then grew those cells into large sheets of skin, which were later grafted onto the boy. This was the second attempt in which the treatment worked—and more are slated to come.

3. CRISPR coming to a human near you

In its current form, CRISPR isn’t technically gene therapy. Rather than replacing a diseased gene with a good one, it goes into the nucleus and directly cuts out faulty genes.

But the hotshot gene editor could potentially overhaul gene therapy as we know it—and 2018 may kick off its reign.

CRISPR Therapeutics, based in Cambridge, has already sought approval from European regulatory agencies to begin a trial to fix a genetic defect that causes beta thalassemia, an inherited blood disorder. Also on their agenda is sickle-cell disease; the company is gearing up to seek FDA approval in early 2018 to conduct CRISPR-based trials in the US.

Hot on its heels is Stanford University. Like CRISPR Therapeutics, the school seeks to start a human trial for sickle-cell disease in 2018. Stanford’s approach is slightly different from that of the company: rather than fixing the faulty gene outside the body, Stanford plans on making edits directly inside patients.

That’s not all. A wealth of preclinical work in 2017 suggests that CRISPR shows promise for a myriad of inherited diseases. In mice, it alleviates genetic hearing loss and extends lifespan in animals modeling Lou Gehrig’s disease. Clinical studies using the technology for multiple types of cancer are amped up and ready to go.

But the best is yet to come.

Last year saw three incredible improvements to CRISPR 1.0. In one study, researchers modified the tool to target a single DNA typo instead of a gene in human cells. This opens the door to treatments for thousands of diseases: mistakes in a single base pair account for roughly half of the 32,000 mutations linked to human disease.

The tool further broadened in scope when it was redesigned to tinker with epigenetics, the factors that alter the activity of a gene rather than the gene itself. CRISPR may introduce new harmful mutations when it cuts DNA, a long-standing concern that has been tough to address. Because this revamped version keeps DNA intact, it sidesteps that “major bottleneck” of a problem.

Finally, CRISPR was also adapted to cut RNA—the molecule that helps DNA make proteins. Killing the messenger has advantages: because RNA is constantly being made, the edits aren’t permanent.

RNA is a long-time favorite target for scientists trying to delay or halt neurodegeneration—diseases that kill off brain cells, including Huntington’s disease. An RNA-targeting CRISPR will likely prove to be an invaluable new tool.

Without doubt, 2017 has been a great year for gene therapy. But the field isn’t without problems. Many words have been spilled on its mind-boggling price tag, its reliance on virus carriers, and its inability to target deeper organs such as the heart and brain.

Recently, a surprising pre-print study revealed that some humans may have pre-existing immunity to the CRISPR system’s enzyme, Cas9. The study hasn’t yet been peer-reviewed; however, if the results hold up to scrutiny, scientists may have a few more barriers to cross on the road to CRISPR therapeutics.

But these recent successes paint an overwhelmingly rosy picture. Now’s the time to be optimistic—and who knows what more 2018 will bring?

Image Credit: Rost9 /


Business Design Is a Powerful Tool for Breaking Down Bureaucracy

9 January, 2018 - 16:00

Designing the right business model and value proposition is fundamental to the success of any endeavor—yet rarely have governments used these practices.

We caught up with Michael Eales, a business designer and partner at Business Models Inc., a global innovation firm. Based in Brisbane, Australia, Michael has been building bridges between government, industry, and entrepreneurial communities to create new approaches to innovation that benefit the broader ecosystem. Michael and his team recently won the inaugural Design Pioneer Award for their work in democratizing innovation and design-driven strategy tools for all changemakers around the world.

Lisa Kay Solomon: What does it mean to be a business designer?

Michael Eales: A business designer is the fusion between the disciplines of business and design. It’s about being flexible and adaptable to the way we tackle challenges and problems in the world, particularly when there are high degrees of uncertainty. When you overlay a discipline of business planning and an operational approach to sorting out uncertain challenges, you can translate that into executable ways of creating value for customers and broader stakeholders.

LKS: You’ve been doing a lot of work lately with government agencies. What are they coming to you for? What kind of help are they looking to get from you?

ME: We’ve found that today there is an awareness that the problems we’ve been trying to solve for a long time aren’t being solved using the ways that have got us to where we are.

This is particularly true in government, which is good at regulating what is known, not exploring the unknown. We help them approach the problem with a beginner’s mindset—not assuming we have a solution that’s linear from the pathway they’ve been on, but rather embarking on a new approach that will likely feel a bit uncomfortable. It is very helpful to explain to these leaders that, while this process may seem messy, there is a discipline and rigor behind it.

“We’re seeing an absolute step change in the way many governments are now talking about policy design.”

One essential part of this new approach is to put the citizen at the center of the challenge—to actually have staff from government agencies watch citizens in the field experience their services. This often prompts some foundational questioning of the bureaucratic barriers maintaining the status quo.

We’re seeing an absolute step change in the way many governments are now talking about policy design. And when we look at the role of a department of government, you’re seeing now the leaders in these areas giving themselves permission to experiment, to fail, and in many ways, that’s a significant mindset shift.

LKS: That seems pretty revolutionary, having government agencies spend time with the citizens trying to understand their needs. Can you give us an example of an experience?

ME: We did some exciting work with the Department of Industry in Australia. Significant budget pressures were forcing two areas of the department to merge into a single unit. One area was the funding arm that focused on financing businesses in Australia. The other group, known as the enterprise connect group, was helping business grow and thrive.

The cultural differences between them were vast.

One area of the department was very much focused on saying yes to helping businesses, while the other was often saying no because of money constraints. Bringing these two teams together, we saw that the front-line service mindset of the enterprise connect team was about unpacking and understanding the problems of a business. If you’re providing funding to a business, you’ve got to go a lot deeper into understanding the mechanics of that business.

In a pivotal moment, we brought the teams together to hear the needs of business owners through their own personal stories.

It was an opportunity for both sides to hear about the pressures and obstacles business owners experience when they have to jump through one set of hoops to secure funding, while, at the same time, also trying to secure success by leveraging the other resources and networks across the government agency. This became a particularly enlightening exercise when we did some role-playing with the government staff as well.

In the end, this particular department decided to emphasize providing a service to businesses before talking about funding. Now, in Australia, a large program called the Entrepreneur’s Infrastructure Programme talks about this concept of infrastructure as an enabler through a service delivery model. It’s the “one-stop shop” version of government services.

The grants and various contribution schemes in Australia now offer a service delivery model as a first point of contact. We’re seeing the funding being almost a secondary conversation to understanding the business value.

LKS: It sounds like these groups started off competing, with many barriers to engagement. But by putting them together, by exposing them to the actual customers—the small businesses—and hearing their stories about getting funding and support from the government, it created a breakthrough moment.

ME: Yes, absolutely. Bringing together 200 public offices to have a conversation about the common purpose behind their mandate helped break through the complexity to reach a shared vision of how to serve the Australian business community in a much bolder and unified way.

Image Credit: Peshkova /


New Research Suggests Immunity to CRISPR Gene Editing Poses a Challenge

8 January, 2018 - 20:00

CRISPR-Cas9 is the talk of the town in biotechnology. There is a huge amount of public interest in the possibilities provided by this new genome editing technology, and many are hoping CRISPR could eventually cure most genetic disease, with positive impact for millions.

But a bioRxiv preprint of a new study has a potentially disturbing result: many people may already be immune to the most widely used forms of CRISPR.

Instead of successfully modifying the genome, when used therapeutically, the tool could trigger an adaptive immune response. Although this is a preliminary result, it will be an important consideration for clinical trials of CRISPR in humans, which are expected to begin shortly.

The proteins that can efficiently be used to edit the genome are a bacterial self-defense mechanism. CRISPR, which stands for “clustered regularly interspaced short palindromic repeats,” is a bacterial immune system. When a bacterium is invaded by a virus, it can snip out a piece of the virus’s genetic material and incorporate it into its own genome, allowing the bacterium to recognize that virus later and destroy it with those same molecular scissors. These tools, which can efficiently scan through DNA for particular sequences and then remove or even replace them, are nearly ideal biotech tools for genome editing.
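As a loose programming analogy only (the real Cas9 chemistry is far more involved), the scan-and-cut behavior resembles a string search: find a guide sequence sitting next to a PAM motif, excise it, and optionally splice in a replacement. The sequences and function below are invented for illustration.

```python
import re

def crispr_edit(genome, guide, replacement=""):
    """Toy model of CRISPR-Cas9: find the guide sequence immediately
    followed by an NGG PAM motif, cut the guide out, and optionally
    splice in a replacement. Purely illustrative string surgery."""
    pattern = re.escape(guide) + r"(?=[ACGT]GG)"  # guide must sit next to a PAM
    match = re.search(pattern, genome)
    if match is None:
        return genome  # no target found; genome left untouched
    start, end = match.span()
    return genome[:start] + replacement + genome[end:]

faulty = "TTACGGATTTCCAGGAACCT"
# Swap the "faulty" stretch GGATTTCC (followed by PAM "AGG") for a "healthy" one.
edited = crispr_edit(faulty, guide="GGATTTCC", replacement="GGCTTTCC")
print(edited)  # TTACGGCTTTCCAGGAACCT
```

The lookahead assertion mimics the PAM requirement: without an adjacent NGG motif, Cas9 will not cut even at a matching sequence, which is why no edit happens when the pattern fails to match.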

The issue arises in how the Cas9 proteins researchers use are manufactured.

The authors of the study noted that the most widely used versions of the protein are extracted from the bacteria Staphylococcus aureus (S. aureus) and Streptococcus pyogenes (S. pyogenes). In bioengineering, particular bacteria, usually selected for being widely available and easy to cultivate, are often used to synthesize particular proteins. For example, most of our medical insulin is made from genetically modified E. coli. The researchers questioned whether, because S. aureus and S. pyogenes regularly infect humans, we might already have built up some resistance to their proteins. The infections they cause, commonly known as staph and strep, are widespread enough that most people can be expected to have been exposed to these bacteria.

The study discovered that in many cases, human immune systems produced T-cells that specifically responded to the Cas9 protein from Staphylococcus aureus. This suggests that attempts at therapeutic use could trigger an adaptive immune response that might render the therapy ineffective. Indeed, the authors of the paper wrote that use of these proteins in people who have been exposed to these bacteria could be harmful. “It may even result in significant toxicity to patients,” according to Stanford University’s Matthew Porteus, a senior author of the paper. A paper by Wei Leong Chew suggests, “If left unchecked…it could lead to mortality.”

The authors of the paper wanted to ensure immune issues weren’t overlooked. “Like any new technology, you want to identify potential problems and engineer solutions for them,” according to Porteus. “And I think that’s where we’re at. This is an issue that should be addressed.”

But this is far from the end for CRISPR.

A popular use that’s envisioned is on human embryos, to edit out specific genes known to cause hereditary diseases. It seems unlikely that embryos would already have the same immune response as an adult who has been infected several times before. Alternatively, in some cases, adult cells could be removed from the body, modified, and reintroduced.

What’s more, there’s always the possibility that Cas9 could be sourced from other bacteria; species humans never come into contact with, such as those living near deep-sea hydrothermal vents, may carry the same kind of bacterial immune system. Cas9 could also be modified in some way to evade human immunity. And if it’s produced entirely synthetically, the antigens from the host bacteria could potentially be removed from the protein.

The prospects for the use of CRISPR are exciting, and the technology arises alongside a more advanced understanding of the human genome. The Human Genome Project started in 1990, an ambitious attempt to sequence all of the three billion base pairs in our DNA. It cost $3 billion and took 13 years for the first genome to be sequenced; over a thousand scientists worked together on the project. Now, you can get your genome sequenced for around $1,000, and some ambitious startups are even claiming $100.

With cheap genome sequencing, we can now easily read our own biological code. With CRISPR-Cas9, we may soon have a cheap method of editing that code. But that doesn’t instantly translate into being able to change traits or cure diseases. As far as designer humans go, we’re a long way off.

A sci-fi trope of “genetically perfect” humans who are more intelligent, stronger, and conform to some arbitrary standard of human beauty is out there, sure. But even something as simple as eye color, often used as a textbook example, is not a monogenic trait; it relies on several genes and the interactions between them. For more complex traits such as intelligence, we’re a long way from understanding the genetic links, let alone being able to engineer them.

This is why most experts think CRISPR-Cas9’s most promising use, at least initially, will be in curing monogenic diseases.

There has been promising progress here: experiments have knocked out Huntington’s disease in mice, while new therapies have cured genetic kidney defects and muscular dystrophy in the same animals. But we’re still far from reliably identifying the genes involved in humans. For example, when whole genome sequencing was deployed in hospitals, doctors noticed markers for 11 out of 50 monogenic diseases in patients, yet none of the patients displayed symptoms of those illnesses. If mishandled, could these monogenic markers lead to unnecessary treatments, or to patients fearing rare genetic conditions they don’t have?

CRISPR-Cas9 is at a stage where our ability to act exceeds our understanding of what we could do, and what we should do. We have access to our own source code, but we don’t know the programming language. We have tools that, in theory, could change the world; terrible genetic conditions could be a thing of the past. But before they can be used, we need as much information as possible. It’s difficult to see how we can be sure about the impacts of CRISPR without human trials, but, as this latest immune system response experiment shows, even the path in this direction is not straightforward.

Yet the possibilities range from curing some kinds of cancer to eliminating malaria—perennial foes of the human race. There may be setbacks, but it still seems likely that gene editing technology will change the world.

Image Credit: Meletio Verras /


The World Needs More Scientists: How We Can Train Millions in Virtual Labs

8 January, 2018 - 17:00

California wildfires, melting Arctic ice caps, and record-setting temperatures suggest our climate is changing dramatically, with potentially catastrophic outcomes for our species. Add to that an increased threat of infectious disease, disruptions to our planet’s food production, and water scarcity that some warn could lead to armed conflict in the future.

We face many very real problems, and science and technology can contribute very real solutions. But according to Michael Bodekaer, a Danish education professional and entrepreneur, there are not nearly enough trained scientists to address these dangers.

In a recent conversation with Singularity Hub, Bodekaer pointed out that in the US alone, almost 60% of students in STEM fields drop out of their studies (a claim supported by government-reported attrition rates).

“We are on a mission to help solve global challenges by educating more scientists—and we need more scientists if we are to solve the complex challenges the world is now facing,” he told me.

To do that, he’s co-founded Labster, a Danish company that develops virtual reality science laboratories that offer users virtualized lab equipment in a software simulation.

“Think of Labster as like a flight simulator that trains pilots, only we train scientists in a laboratory,” Bodekaer says. In Labster, users can handle pipettes to conduct biology experiments, run PCR machines to manipulate DNA, operate gene sequencers and electron microscopes, watch training and education video tutorials, and work in several types of science lab.

Lab training is a scarce resource in the physical world. According to Bodekaer, even a top university, which can afford expensive equipment, can only provide limited access to the various groups of students who might require it. Many schools can’t afford a lab at all. Labster aims to address these cost and accessibility issues by making the machinery virtual and by simulating various science experiments as a series of stored mathematical equations.

“There is a huge amount of research that simulates real-world chemical and biological reactions using math equations. What we do is take those equations and store them in our simulation engine. Based on these equations, we can simulate what would happen in the real world if you conducted these experiments,” he says.

It’s true that no Labster student will ever discover a novel or breakthrough approach to science, given these simulations rely on a fixed set of rules and equations. But students can learn to run incredibly complex experiments and save time by testing their hypotheses before moving research into the real world.

“A good example of this is our fermentation simulator, where you can change several parameters like temperature, airflow, pH/acidity, and really adjust the things that affect cell growth. Students can receive billions of different outcomes as a result of a mix of the different input parameters,” he pointed out.
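The idea of storing reactions as equations can be illustrated with a minimal sketch, not Labster’s actual engine: a standard logistic cell-growth model whose growth rate is scaled down as temperature and pH drift from their optima. All constants below are hypothetical.

```python
import math

def growth_rate(temp_c, ph, r_max=0.6, opt_temp=30.0, opt_ph=5.0):
    """Toy growth rate: r_max is scaled down by Gaussian penalties as
    temperature and pH drift from their optima. Constants are made up."""
    temp_penalty = math.exp(-((temp_c - opt_temp) / 8.0) ** 2)
    ph_penalty = math.exp(-((ph - opt_ph) / 1.5) ** 2)
    return r_max * temp_penalty * ph_penalty

def simulate_fermentation(temp_c, ph, hours=48, x0=0.1, capacity=10.0):
    """Integrate logistic growth dX/dt = r*X*(1 - X/K) with Euler steps
    and return the final biomass after the given number of hours."""
    x, dt = x0, 0.1
    r = growth_rate(temp_c, ph)
    for _ in range(int(hours / dt)):
        x += dt * r * x * (1 - x / capacity)
    return x

# Near-optimal conditions approach carrying capacity; stressed ones barely grow.
print(simulate_fermentation(30.0, 5.0))
print(simulate_fermentation(45.0, 8.0))
```

Because every input combination flows through the same equations, varying the parameters continuously yields the “billions of different outcomes” Bodekaer describes, without any wet-lab risk or cost.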

Evidence suggests that learning in virtual training environments like these may be more effective than traditional methods of learning.

When a group of learning psychologists at Stanford and the Technical University of Denmark measured the effectiveness of Labster, they saw a 76% increase in the scores of students who used Labster in place of traditional teaching methods. Even more impressive, scores jumped 101% for students who combined virtual learning with teacher-led coaching and mentoring.

Virtual reality also provides other added benefits beyond learning effectiveness.

“Because the pace of innovation in science and engineering is happening so fast, many universities cannot keep up to buy the latest machinery and equipment that is needed. In VR, they can access these machines with a software update,” Bodekaer said.

Additionally, Labster gives students a risk-free way to experiment with highly controlled substances. Salmonella, for example, is a dangerous bacterium that requires a high level of security clearance to work with, something most universities don’t have.

Today, over 90% of Labster users access the simulators on an internet browser, but as VR headsets are becoming rapidly cheaper, Bodekaer has noticed a shift to immersive virtual reality. This year, several companies will be releasing standalone VR headsets that will cost less than $200, including Facebook’s Oculus Go. The Labster simulations available in the browser experience also work in VR headsets. Bodekaer hopes cheaper VR will mean more access for students around the world next year.

“We want to bring Labster to millions of students all over the world, even the most remote parts of Africa. With the launch of these $200 headsets—and eventually one day they will be essentially free—that means a child living there can install Labster and have access to an experience that would cost millions of dollars in the real world.”

Half a century ago, computer science students faced similar challenges with lab costs and accessibility. Only those who attended a well-funded university with a dedicated computer lab could learn to use the machines. In the past 50 years, Moore’s Law has made a world in which almost half of all humans now carry a smartphone in their pocket, a computer exponentially more powerful than anything a university had 50 years ago.

Following on that personal computing revolution, software is now digitizing much of the world and providing access to tools and resources never before available to many. With concepts like Labster, and other virtual reality projects, now even a science laboratory with the latest tech and machinery—something only the wealthiest and best-funded companies and universities currently have access to—will soon be available in your pocket.

Image Credit: Gorbash Varvara /


If Work Dominated Your Every Moment Would Life Be Worth Living?

7 January, 2018 - 17:00

Imagine that work had taken over the world. It would be the centre around which the rest of life turned. Then all else would come to be subservient to work. Then slowly, almost imperceptibly, anything else—the games once played, the songs hitherto sung, the loves fulfilled, the festivals celebrated—would come to resemble, and ultimately become, work. And then there would come a time, itself largely unobserved, when the many worlds that had once existed before work took over the world would vanish completely from the cultural record, having fallen into oblivion.

And how, in this world of total work, would people think and sound and act? Everywhere they looked, they would see the pre-employed, employed, post-employed, underemployed and unemployed, and there would be no one uncounted in this census. Everywhere they would laud and love work, wishing each other the very best for a productive day, opening their eyes to tasks and closing them only to sleep. Everywhere an ethos of hard work would be championed as the means by which success is to be achieved, laziness being deemed the gravest sin. Everywhere among content-providers, knowledge-brokers, collaboration architects and heads of new divisions would be heard ceaseless chatter about workflows and deltas, about plans and benchmarks, about scaling up, monetization and growth.

In this world, eating, excreting, resting, having sex, exercising, meditating and commuting—closely monitored and ever-optimized—would all be conducive to good health, which would, in turn, be put in the service of being more and more productive. No one would drink too much, some would microdose on psychedelics to enhance their work performance, and everyone would live indefinitely long. Off in corners, rumors would occasionally circulate about death or suicide from overwork, but such faintly sweet susurrus would rightly be regarded as no more than local manifestations of the spirit of total work, for some even as a praiseworthy way of taking work to its logical limit in ultimate sacrifice. In all corners of the world, therefore, people would act in order to complete total work’s deepest longing: to see itself fully manifest.

This world, it turns out, is not a work of science fiction; it is unmistakably close to our own.

‘Total work’, a term coined by the German philosopher Josef Pieper just after the Second World War in his book Leisure: The Basis of Culture (1948), is the process by which human beings are transformed into workers and nothing else. By this means, work will ultimately become total, I argue, when it is the centre around which all of human life turns; when everything else is put in its service; when leisure, festivity and play come to resemble and then become work; when there remains no further dimension to life beyond work; when humans fully believe that we were born only to work; and when other ways of life, existing before total work won out, disappear completely from cultural memory.

We are on the verge of total work’s realization. Each day I speak with people for whom work has come to control their lives, making their world into a task, their thoughts an unspoken burden.

For unlike someone devoted to the life of contemplation, a total worker takes herself to be primordially an agent standing before the world, which is construed as an endless set of tasks extending into the indeterminate future. Following this taskification of the world, she sees time as a scarce resource to be used prudently, is always concerned with what is to be done, and is often anxious both about whether this is the right thing to do now and about there always being more to do. Crucially, the attitude of the total worker is best grasped not in cases of overwork, but rather in the everyday way in which he is single-mindedly focused on tasks to be completed, with productivity, effectiveness and efficiency to be enhanced. How? Through the modes of effective planning, skillful prioritizing and timely delegation. The total worker, in brief, is a figure of ceaseless, tensed, busied activity: a figure whose main affliction is a deep existential restlessness fixated on producing the useful.

What is so disturbing about total work is not just that it causes needless human suffering but also that it eradicates the forms of playful contemplation concerned with our asking, pondering and answering the most basic questions of existence. To see how it causes needless human suffering, consider the illuminating phenomenology of total work as it shows up in the daily awareness of two imaginary conversation partners. There is, to begin with, constant tension, an overarching sense of pressure associated with the thought that there’s something that needs to be done, always something I’m supposed to be doing right now. As the second conversation partner puts it, there is concomitantly the looming question: Is this the best use of my time? Time, an enemy, a scarcity, reveals the agent’s limited powers of action, the pain of harrying, unanswerable opportunity costs.

Together, thoughts of the not yet but supposed to be done, the should have been done already, the could be something more productive I should be doing, and the ever-awaiting next thing to do conspire as enemies to harass the agent who is, by default, always behind in the incomplete now. Secondly, one feels guilt whenever he is not as productive as possible. Guilt, in this case, is an expression of a failure to keep up or keep on top of things, with tasks overflowing because of presumed neglect or relative idleness. Finally, the constant, haranguing impulse to get things done implies that it’s empirically impossible, from within this mode of being, to experience things completely. ‘My being,’ the first man concludes, ‘is an onus,’ which is to say an endless cycle of unsatisfactoriness.

The burden character of total work, then, is defined by ceaseless, restless, agitated activity, anxiety about the future, a sense of life being overwhelming, nagging thoughts about missed opportunities, and guilt connected to the possibility of laziness. Hence, the taskification of the world is correlated with the burden character of total work. In short, total work necessarily causes dukkha, a Buddhist term referring to the unsatisfactory nature of a life filled with suffering.

In addition to causing dukkha, total work bars access to higher levels of reality. For what is lost in the world of total work is art’s revelation of the beautiful, religion’s glimpse of eternity, love’s unalloyed joy, and philosophy’s sense of wonderment. All of these require silence, stillness, a wholehearted willingness to simply apprehend. If meaning, understood as the ludic interaction of finitude and infinity, is precisely what transcends, here and now, the ken of our preoccupations and mundane tasks, enabling us to have a direct experience with what is greater than ourselves, then what is lost in a world of total work is the very possibility of our experiencing meaning. What is lost is seeking why we’re here.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Annette Shaff /

Category: Transhumanism

This Week’s Awesome Stories From Around the Web (Through January 6)

January 6, 2018 - 17:00

The Year Robots Backflipped Their Way Into Our Hearts
Will Knight | MIT Technology Review
“Legged robots remain very expensive and difficult to commercialize. So it came as little surprise when Google sold Boston Dynamics to Japan’s Softbank earlier in the year….Even though robots are still relatively stupid, their expanding abilities also raise questions about how our relationship with these machines may evolve.”


Alexa Is Coming to Wearable Devices, Including Headphones, Smartwatches and Fitness Trackers
Sarah Perez | TechCrunch
“Amazon wants to bring Alexa to more devices than smart speakers, Fire TV and various other consumer electronics for the home, like alarm clocks. The company yesterday announced developer tools that would allow Alexa to be used in microwave ovens, for example – so you could just tell the oven what to do. Today, Amazon is rolling out a new set of developer tools, including one called the ‘Alexa Mobile Accessory Kit,’ that would allow Alexa to work with Bluetooth products in the wearable space, like headphones, smartwatches, fitness trackers, other audio devices, and more.”


The Clever Engineering Behind Intel’s Chipocalypse
Michael Byrne | Motherboard
“The catch is that when a processor guesses a branch, it’s bypassing an access control check and, for a moment, exposing the protected kernel space to the user space. A clever hacker could theoretically take advantage of this exposure to peek at passwords and keys and other protected resources… Google found three possible exploits that take advantage of speculative execution and they aren’t exclusive to Intel.”


Rise of Bitcoin Competitor Ripple Creates Wealth to Rival Zuckerberg
Nathaniel Popper | The New York Times
“At one point on Thursday, Chris Larsen, a Ripple co-founder who is also the largest holder of Ripple tokens, was worth more than $59 billion, according to figures from Forbes. That would have briefly vaulted Mr. Larsen ahead of Facebook chief executive Mark Zuckerberg into fifth place on the Forbes list of the world’s richest people.”


It’s Not Just Logan Paul and YouTube—The Moral Compass of Social Media Is Broken
Katherine Cross | The Verge
“The idealistic dream these services sell to users — that anyone can be famous with a mic, a keyboard, a webcam, and a bit of elbow grease — sounds like the culmination of early cyber utopianism. But in practice, it often means elevating people to fame when they are wildly unprepared for the ethical responsibilities or consequences of broadcasting their content to millions of fans (including children) around the world.”

Image Credit: Chaikom /


Why the World Is Still Getting Better—and That’s Likely to Continue

January 5, 2018 - 17:00

If you read or watch the news, you’ll likely think the world is falling to pieces. Trends like terrorism, climate change, and a growing population straining the planet’s finite resources can easily lead you to think our world is in crisis.

But there’s another story, a story the news doesn’t often report. This story is backed by data, and it says we’re actually living in the most peaceful, abundant time in history, and things are likely to continue getting better.

The News vs. the Data

The reality that’s often clouded by a constant stream of bad news is that we’re actually seeing a massive drop in poverty and fewer deaths from violent crime and preventable disease. On top of that, we’re the most educated populace ever to walk the planet.

“Violence has been in decline for thousands of years, and today we may be living in the most peaceful era in the existence of our species.” –Steven Pinker

In the last hundred years, we’ve seen the average human life expectancy nearly double, the global GDP per capita rise exponentially, and childhood mortality drop 10-fold.

That’s pretty good progress! Maybe the world isn’t all gloom and doom.

If you’re still not convinced the world is getting better, check out the charts in this article from Vox and on Peter Diamandis’ website for a lot more data.

Abundance for All Is Possible  

So now that you know the world isn’t so bad after all, here’s another thing to think about: it can get much better, very soon.

In their book Abundance: The Future Is Better Than You Think, Steven Kotler and Peter Diamandis suggest it may be possible for us to meet and even exceed the basic needs of all the people living on the planet today.

“In the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.”

This means making sure every single person in the world has adequate food, water and shelter, as well as a good education, access to healthcare, and personal freedom.

This might seem unimaginable, especially if you tend to think the world is only getting worse. But given how much progress we’ve already made in the last few hundred years, coupled with the recent explosion of information sharing and new, powerful technologies, abundance for all is not as out of reach as you might believe.

Throughout history, we’ve seen that in the hands of smart and driven innovators, science and technology take things which were once scarce and make them abundant and accessible to all.

Napoleon III

In Abundance, Diamandis and Kotler tell the story of how aluminum went from being one of the rarest metals on the planet to being one of the most abundant…

In the 1800s, aluminum was more valuable than silver and gold because it was rarer. So when Napoleon III entertained the King of Siam, the king and his guests were honored by being given aluminum utensils, while the rest of the dinner party ate with gold.

But aluminum is not really rare.

In fact, aluminum is the third most abundant element in the Earth’s crust, making up 8.3% of the crust’s weight. But it wasn’t until chemists Charles Martin Hall and Paul Héroult discovered how to use electrolysis to cheaply separate aluminum from surrounding materials that the element became suddenly abundant.

The problems keeping us from achieving a world where everyone’s basic needs are met may seem like resource problems — when in reality, many are accessibility problems.

The Engine Driving Us Toward Abundance: Exponential Technology

History is full of examples like the aluminum story.  The most powerful one of the last few decades is information technology. Think about all the things that computers and the internet made abundant that were previously far less accessible because of cost or availability …

Here are just a few examples:

  • Easy access to the world’s information
  • Ability to share information freely with anyone and everyone
  • Free/cheap long-distance communication
  • Buying and selling goods/services regardless of location

Less than two decades ago, someone who had reached a certain level of economic stability might spend somewhere around $10,000 on stereos, cameras, entertainment systems, and the like. Today, we have all of that equipment in the palm of our hand.

Now, there is a new generation of technologies heavily dependent on information technology and, therefore, similarly riding the wave of exponential growth. When put to the right use, emerging technologies like artificial intelligence, robotics, digital manufacturing, nanomaterials, and digital biology make it possible for us to drastically raise the standard of living for every person on the planet.

These are just some of the innovations which are unlocking currently scarce resources:

  • IBM’s Watson Health is being trained and used in medical facilities like the Cleveland Clinic to help doctors diagnose disease. In the future, it’s likely we’ll trust AI just as much as, if not more than, humans to diagnose disease, allowing people all over the world to have access to great diagnostic tools regardless of whether there is a well-trained doctor near them.
  • Self-driving cars are already on the roads of several American cities and will be coming to a road near you in the next couple of years. Considering the average American spends nearly two hours driving every day, not having to drive would free up an increasingly scarce resource: time.

The Change-Makers

Today’s innovators can create enormous change because they have these incredible tools—which would have once been available only to big organizations—at their fingertips. And, as a result of our hyper-connected world, there is an unprecedented ability for people across the planet to work together to create solutions to some of our most pressing problems today.

“In today’s hyperlinked world, solving problems anywhere, solves problems everywhere.” –Peter Diamandis and Steven Kotler, Abundance

According to Diamandis and Kotler, there are three groups of people accelerating positive change.

  1. DIY Innovators

    In the 1970s and 1980s, the Homebrew Computer Club was a meeting place of “do-it-yourself” computer enthusiasts who shared ideas and spare parts. By the 1990s and 2000s, that little club became known as an inception point for the personal computer industry — dozens of companies, including Apple Computer, can directly trace their origins back to Homebrew.

    Since then, we’ve seen the rise of the social entrepreneur, the Maker Movement, and the DIY Bio movement, which have similar ambitions to democratize social reform, manufacturing, and biology, the way Homebrew democratized computers. These are the people who look for new opportunities and aren’t afraid to take risks to create something new that will change the status quo.

  2. Techno-Philanthropists

    Unlike the robber barons of the 19th and early 20th centuries, today’s “techno-philanthropists” are not just giving away some of their wealth for a new museum, they are using their wealth to solve global problems and investing in social entrepreneurs aiming to do the same.

    The Bill and Melinda Gates Foundation has given away at least $28 billion, with a strong focus on ending diseases like polio, malaria, and measles for good. Jeff Skoll, after cashing out of eBay with $2 billion in 1998, went on to create the Skoll Foundation, which funds social entrepreneurs across the world. And last year, Mark Zuckerberg and Priscilla Chan pledged to give away 99% of their $46 billion in Facebook stock during their lifetimes.

  3. The Rising Billion

    Cisco estimates that by 2020, there will be 4.1 billion people connected to the internet, up from 3 billion in 2015. This number might even be higher, given the efforts of companies like Facebook, Google, Virgin Group, and SpaceX to bring internet access to the world. That’s a billion new people in the next several years who will be connected to the global conversation, looking to learn, create, and better their own lives and communities.

    In his book The Fortune at the Bottom of the Pyramid, C.K. Prahalad writes that finding co-creative ways to serve this rising market can help lift people out of poverty while creating viable businesses for inventive companies.

The Path to Abundance

Eager to create change, innovators armed with powerful technologies can accomplish incredible feats. Kotler and Diamandis imagine that the path to abundance occurs in three tiers:

  • Basic Needs (food, water, shelter)
  • Tools of Growth (energy, education, access to information)
  • Ideal Health and Freedom

Of course, progress doesn’t always happen in a straight, logical way, but having a framework to visualize the needs is helpful.

Many people don’t believe it’s possible to end the persistent global problems we’re facing. However, looking at history, we can see many examples where technological tools have unlocked resources that previously seemed scarce.

Technological solutions are not always the answer, and we need social change and policy solutions as much as we need technology solutions. But we have seen time and time again, that powerful tools in the hands of innovative, driven change-makers can make the seemingly impossible happen.

You can download the full “Path to Abundance” infographic here. It was created under a CC BY-NC-ND license. If you share, please attribute to Singularity University.

Image Credit: janez volmajer /


This Unbelievable Research on Human Hibernation Could Get Us to Mars

January 4, 2018 - 17:00

Journeying to Mars is seldom out of the news these days. From Elon Musk releasing plans for his new rocket to allow SpaceX to colonize Mars, to NASA announcing another rover as part of the Mars 2020 mission, both private and public organizations are racing to the red planet.

But human spaceflight is an exponentially bigger task than sending robots and experiments beyond Earth. Not only do you have to get the engineering of the rocket, the calculations of the launch, the plans for zero-gravity travel and the remotely operated Martian landing perfect, but you’d also have to keep a crew of humans alive for six months without any outside help.

There are questions around how to pack enough food and water to sustain the crew without making the rocket too heavy and around how much physical space would be left for the crew to live in. There are questions about what happens if someone gets dangerously ill and about what a claustrophobic half-year in these circumstances would do to the mental health of the Martian explorers.

Enter John Bradford of Atlanta-based SpaceWorks Enterprises.

Using a $500,000 grant from NASA, Bradford’s team has been working on an adaptation of a promising medical procedure that could alleviate many of the human-related limitations of space travel.

Presenting at the annual Hello Tomorrow Summit in Paris, Bradford shared his team’s concept of placing the crew in what’s called a “low-metabolic torpor state” for select phases during space travel—in other words, hibernating the crew.

The idea stems from a current medical practice called therapeutic hypothermia, or targeted temperature management. It is used in cases of cardiac arrest and neonatal encephalopathy. Patients are cooled to around 33°C for 48 hours to prevent injury to tissue following lack of blood flow. Sedatives are then administered to induce sleep. Ex-Formula 1 driver Michael Schumacher was famously held in this state following his ski accident in 2013.

Adapting the procedure for spaceflight, the crew would be fed and hydrated directly into the stomach using what’s called a percutaneous endoscopic gastrostomy tube, removing the need for eating and standard digestion, and their muscles would be activated with whole-body electrical stimulation to avoid atrophy.

Bradford’s team found that while in this torpor state, the body needs over a third less food and water to sustain itself, greatly reducing the payload weight estimates for Mars missions.

A large part of the concept is the rotational element of who is awake and who is in stasis. Current medical procedures only last two to three days, so the plan is to extend the time each person is in torpor state to around eight days. Adding in a two-day wake period, a schedule can be drawn up so that a different member of the crew acts as the caretaker for the others, each in cycles of eight days of torpor and two days awake.
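As a rough illustration, a staggered rotation like the one described can be sketched in a few lines of code. The crew size, cycle lengths, and day indexing here are assumptions for illustration, not SpaceWorks’ actual protocol:

```python
def caretaker(day, crew_size=5, torpor_days=8, wake_days=2):
    """Return the index of the crew member who is awake (the caretaker)
    on a given mission day, assuming staggered cycles in which each
    person spends torpor_days in stasis followed by wake_days awake."""
    cycle = torpor_days + wake_days
    # The wake windows must tile the cycle exactly so that exactly one
    # crew member is awake at any given time.
    assert crew_size * wake_days == cycle, "wake windows must cover the cycle"
    return (day % cycle) // wake_days
```

With a five-person crew, crew member 0 is awake on days 0 and 1, crew member 1 on days 2 and 3, and so on, with the whole rotation repeating every ten days.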

This means humans won’t be asleep for the whole journey, but with these torpor periods making up the majority of their trip, the physical and mental pressure put on the crew and the weight of resources on board would be greatly reduced. The plan for the research, however, is to get these periods up from days to weeks.

It’s not just SpaceWorks that’s looking into the idea of human hibernation for space travel. The European Space Agency has part of its Advanced Concepts team dedicated to this research as well. But its last paper was published in 2004, which suggests Bradford and his crew have made the most promising progress.

Naysayers tend to question the ability of the human body to effectively and safely “wake up” from these long periods of stasis, and have concerns about whether our bodies can truly adapt to running healthily at a lower temperature. We evolved to run at a fairly precise temperature, and long-term body temperature changes in humans have not yet been studied.

But the SpaceWorks team’s research has both short and long-term prospects. The advances being made in our understanding and implementation of the torpor state can likely be adapted for use in organ transplants and critical care in extreme environments.

Of course, it’s the long term that excites Bradford. He estimates they could possibly achieve this capability for manned missions as soon as the 2030s. And with Elon Musk aiming for the first manned flights of his new rocket in 2024, it seems this pair might have the ingredients for a Martian future for Earthlings sooner than we expect.

Image Credit: Danomyte /


AI Uses Titan Supercomputer to Create Deep Neural Nets in Less Than a Day

January 3, 2018 - 17:00

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.

The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.

The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human in less than a day.

It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. The system is modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.

Computing Power

Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.

The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.

That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.

“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”

AI for Science

One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.

The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.

In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.

“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.

What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.

“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
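The evolutionary approach behind this can be illustrated with a toy sketch: evaluate a population of candidate hyperparameter settings, keep the fittest, and mutate them to form the next generation. The search space, selection scheme, and fitness function below are invented for illustration; this is not MENNDL’s actual code:

```python
import random

# A toy search space of network hyperparameters (illustrative values).
SEARCH_SPACE = {
    "layers": [2, 3, 4, 5],
    "width": [16, 32, 64, 128],
    "learning_rate": [1e-4, 1e-3, 1e-2],
}

def random_candidate(rng):
    return {k: rng.choice(opts) for k, opts in SEARCH_SPACE.items()}

def mutate(candidate, rng):
    child = dict(candidate)
    key = rng.choice(list(SEARCH_SPACE))        # pick one hyperparameter
    child[key] = rng.choice(SEARCH_SPACE[key])  # and resample it
    return child

def evolve(fitness, generations=25, pop_size=12, seed=0):
    rng = random.Random(seed)
    population = [random_candidate(rng) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]   # truncation selection
        children = [mutate(rng.choice(parents), rng)
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```

In MENNDL’s case the fitness evaluation is the expensive step, since each candidate is a full neural network trained on real data, which is why thousands of GPU nodes working in parallel make such a difference.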

A Virtual Data Scientist

That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.

“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”

The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.

“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”

Inside the Black Box

Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.

“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.

Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.

The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
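A single step of that kind of automatic feature engineering might look like the following sketch, which derives a per-zip-code churn rate as a new input feature. The data layout and function are hypothetical; this is not Driverless AI’s API:

```python
from collections import defaultdict

def zip_churn_rate_feature(records):
    """Given records as dicts with 'zip' and 'churned' (0/1) keys,
    return one derived feature per record: the historical churn rate
    of that record's zip code."""
    totals = defaultdict(lambda: [0, 0])  # zip -> [churned_count, total_count]
    for r in records:
        totals[r["zip"]][0] += r["churned"]
        totals[r["zip"]][1] += 1
    return [totals[r["zip"]][0] / totals[r["zip"]][1] for r in records]
```

A system doing this automatically would generate many such group aggregates, keep the ones that improve model accuracy, and report them back to the user in plain language.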

“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”

Moving Forward

Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.

“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”

The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.

“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.

It’s all in a day’s work.

Image Credit: Gennady Danilkin /
