Singularity Hub

News and Insights on Technology, Science, and the Future from Singularity University

How Giving Robots a Hybrid, Human-Like ‘Brain’ Can Make Them Smarter

26 October, 2020 - 15:05

Squeezing a lot of computing power into robots without using up too much space or energy is a constant battle for their designers. But a new approach that mimics the structure of the human brain could provide a workaround.

The capabilities of most of today’s mobile robots are fairly rudimentary, but giving them the smarts to do their jobs is still a serious challenge. Controlling a body in a dynamic environment takes a surprising amount of processing power, which requires both real estate for chips and considerable amounts of energy to power them.

As robots get more complex and capable, those demands are only going to increase. Today’s most powerful AI systems run in massive data centers across far more chips than can realistically fit inside a machine on the move. And the slow death of Moore’s Law suggests we can’t rely on conventional processors getting significantly more efficient or compact anytime soon.

That prompted a team from the University of Southern California to resurrect an idea from more than 40 years ago: mimicking the human brain’s division of labor between two complementary structures. While the cerebrum is responsible for higher cognitive functions like vision, hearing, and thinking, the cerebellum integrates sensory data and governs movement, balance, and posture.

When the idea was first proposed, the technology didn’t exist to make it a reality. But in a paper recently published in Science Robotics, the researchers describe a hybrid system for an inverted pendulum robot that combines analog circuits to control motion with digital circuits to govern perception and decision-making.

“Through this cooperation of the cerebrum and the cerebellum, the robot can conduct multiple tasks simultaneously with a much shorter latency and lower power consumption,” write the researchers.

The type of robot the researchers were experimenting with looks essentially like a pole balancing on a pair of wheels. Such robots have a broad range of applications, from hoverboards to warehouse logistics—Boston Dynamics’ recently unveiled Handle robot operates on the same principles. Keeping them stable is notoriously tough, but the new approach significantly outperformed all-digital control approaches by radically improving the speed and efficiency of computations.

Key to bringing the idea to life was the recent emergence of memristors—electrical components whose resistance depends on past inputs, which allows them to combine computing and memory in one place in a way similar to how biological neurons operate.

The researchers used memristors to build an analog circuit that runs an algorithm responsible for integrating data from the robot’s accelerometer and gyroscope, which is crucial for detecting the angle and velocity of its body, and another that controls its motion. One key advantage of this setup is that the signals from the sensors are analog, so it does away with the need for extra circuitry to convert them into digital signals, saving both space and power.
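The paper implements this pipeline in analog hardware, but the two jobs involved are easier to see in software. Below is a minimal digital sketch of the same division of labor: fusing gyroscope and accelerometer readings into a tilt estimate, then turning that estimate into a wheel command. This is an illustration, not the authors’ design; the complementary filter, the PD control law, and every constant here are assumptions.

```python
import math

# Assumed constants; none of these come from the paper.
ALPHA = 0.98        # complementary-filter blend between gyro and accelerometer
DT = 0.005          # control-loop period in seconds (200 Hz, assumed)
KP, KD = 25.0, 1.2  # proportional and derivative balance gains (assumed)

angle = 0.0  # running estimate of the body's tilt, in radians

def fuse(gyro_rate, accel_x, accel_z):
    """Sensor fusion: integrate the fast-but-drifting gyro, then pull the
    estimate toward the slow-but-absolute tilt implied by gravity."""
    global angle
    accel_angle = math.atan2(accel_x, accel_z)  # tilt seen by the accelerometer
    angle = ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_angle
    return angle

def motor_command(tilt, gyro_rate):
    """Motion control: a PD law that drives the wheels toward the fall so
    they stay under the robot's center of mass."""
    return KP * tilt + KD * gyro_rate
```

In the robot itself, both stages run continuously in memristor-based analog circuitry rather than in a sampled software loop like this one.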

More importantly, though, the analog system is an order of magnitude faster and more energy-efficient than a standard all-digital system, the authors report. This not only slashes the power requirements, but also cuts the processing loop from 3,000 microseconds to just 6. That significantly improves the robot’s stability: it takes just one second to settle into a steady state, compared to more than three seconds on the digital-only platform.

At the minute this is just a proof of concept. The robot the researchers have built is small and rudimentary, and the algorithms being run on the analog circuit are fairly basic. But the principle is a promising one, and there is currently a huge amount of R&D going into neuromorphic and memristor-based analog computing hardware.

As often turns out to be the case, it seems like we can’t go too far wrong by mimicking the best model of computation we have found so far: our own brains.

Image Credit: Photos Hobby / Unsplash


Impossible Foods Wants to Make Milk That’s Creamy, Tasty, and Totally Cow-Free

25 October, 2020 - 15:00

Animal-free foods seem to be steadily growing in popularity. Whether for health reasons or as part of a commitment to the environment, more people are opting to go vegetarian or at least to be mindful about where the meat they eat comes from.

Companies are jumping on board, with the list of animal-free food products in development ever expanding: it started with beef, and has since grown to include steak, pork, and fish, among others (I should note that these products, while “animal-free” in the sense that no living animals are killed or harmed to make them, are produced using real animal cells and are therefore not vegetarian). A French company is even building a new factory to mass-produce beetles as a substitute protein source in pet food and fish feed (sheesh—we’re getting really desperate here).

But it’s not just meat that people are quitting. How many times have you sat down to a meal with a friend and suggested ordering, oh, I don’t know—say a gooey, delicious pizza—only to be told by your dining companion that he or she is “trying to eat less dairy.” Sigh.

In keeping with the animal-free foods trend, though, we may soon have a viable substitute for traditional dairy products. California-based food company Impossible Foods (which you may be familiar with thanks to the Impossible burger, now available in Whopper form at your nearest Burger King) announced this week that it’s diversifying its product line with plant-based milk.

But, you may argue, there are already so many plant-based milks! Oat milk. Soy milk. Coconut milk. Almond milk. Cashew milk. Does the world really need to add to this list?

Impossible Foods’ product, though, will be plant-based only in makeup. The company’s scientists will aim to mimic the success of the Impossible burger by making a synthetic milk that looks, tastes, and feels like milk from a cow—no nutty undertones or watery textures here.

The initiative comes at a good time; nixing milk (and all of its delicious permutations) isn’t just for the lactose-intolerant anymore. More and more people are cutting dairy out of their diets because of its inflammatory properties, and it’s off-limits for vegans too.

But can we pause to acknowledge what an enormous loss this entails in terms of the enjoyment of food? Forget milk—what about butter? Ice cream? Cheese?! What sort of life can one live without the joy of cheese?

Yes, there are substitutes. But if you’ve ever had a pizza topped with something called “almond cheese”—yes, it’s what it sounds like: a poor, texture-less imitation of cheese made from almond milk—you know they’re not even in the same ballpark as the real thing.

So we’d welcome a plant-based substitute that manages to duplicate the rich, creamy, melt-in-your-mouth properties of real dairy products. Impossible Foods has already got some competition in this arena, as it were. California-based startup Perfect Day has been working on lab-grown dairy for a few years now, and has made some notable progress.

Namely, the company’s scientists were able to recreate the proteins found in conventional cow’s milk, called casein and whey, through fermentation of a genetically modified microflora. The process is similar to using yeast to make alcohol, with the proteins resulting as a byproduct of the fermentation process.

But these proteins, while crucial, are just a small part of what makes milk taste and feel like milk. Another important piece—and one that will be harder to artificially recreate—is milk fat. How do you integrate milk fat into a synthetic dairy product that can only have plant-based ingredients?

Impossible Foods is a good candidate to rise to this challenge, and the science behind its Impossible burger is proof. As tasty as soy or bean-based burgers can be, they never quite mimic beef burgers in a way that’s even remotely convincing. Impossible Foods changed this; they made their plant-based burger taste and feel like real meat by adding a protein from soybeans called leghemoglobin. Leghemoglobin is chemically bound to a non-protein molecule called heme, an iron-containing molecule that gives red meat its color. By teasing out this key ingredient and figuring out how to derive it from plants, Impossible Foods made a truly unique product.

And the company will be well-equipped to do so again: this week they announced plans to double the size of their research and development team in the next year. They also launched a project they’re calling “Impossible Investigator” to attract top scientists from other countries, companies, and academia; applicants can propose anything from “short-term strategies to accelerate the optimization of plant-based milk or steak or fish, to longer-term ideas for a vastly improved supply chain of plant proteins and other ingredients, including novel crops and agricultural practices.”

During a demonstration of Impossible Milk, samples of the product were placed alongside various plant-based milks with the aim of showing how much more the Impossible version looks like cow’s milk. When an employee mixed the milk into a cup of hot coffee, it didn’t curdle.

Image Credit: Impossible Foods

This seems like a promising start. If all goes as planned, it may not be long before you suggest pizza for dinner and instead of shooting down this great idea, your dining partner says, “Sure! Let’s get it with that great Impossible cheese—it’s just like real mozzarella.”

Image Credit: Myriam Zilles from Pixabay


This Week’s Awesome Tech Stories From Around the Web (Through October 24)

24 October, 2020 - 15:00
AUGMENTED REALITY

Forget AR Glasses. Augmented Reality Is Headed to Your Windshield
Luke Dormehl | Digital Trends
“…What Envisics has developed is a headset-free, in-car holography system that aims to transform the way we view the road. How? By giving your car an AR overhaul more in line with the kind of HUD technology you’d ordinarily find in a fighter jet or a commercial aircraft worth many millions of dollars.”

ROBOTICS

Boston Dynamics Will Start Selling Arms for Its Robodog Spot Next Year
Andrew Tarantola | Engadget
“‘Once you have an arm on a robot, it becomes a mobile manipulation system. It really opens up just vast horizons on things robots can do,’ [Boston Dynamics founder Marc Raibert said in June.] ‘I believe that the mobility of the robot will contribute to the dexterity of the robot in ways that we just don’t get with current fixed factory automation.’”

SPACE

NASA Is Building 4G Internet on the Moon, So Future Astronauts Can Call Each Other
Katharine Schwab | Fast Company
“The project, for which Nokia will be paid $14.1 million, aims to create the lunar communications infrastructure necessary for voice and video calls, data transmission, robotic controls, and real-time navigation—think Google Maps, but for astronauts.”

SECURITY

How 30 Lines of Code Blew Up a 27-Ton Generator
Andy Greenberg | Wired
“Assante and his fellow INL researchers had bought the generator for $300,000 from an oil field in Alaska. …Now, if Assante had done his job properly, they were going to destroy it. And the assembled researchers planned to kill that very expensive and resilient piece of machinery not with any physical tool or weapon but with about 140 kilobytes of data, a file smaller than the average cat GIF shared today on Twitter.”

SENSORS

Scientists Capture World’s First 3,200-Megapixel Photos
Andy Altman | CNET
“Scientists at the Menlo Park, California-based SLAC National Accelerator Laboratory have taken the world’s first 3,200-megapixel digital photos, using an advanced imaging device that’s built to explore the universe. …That observatory is where the world’s largest digital camera will become the centerpiece of a monumental effort to map the night sky. The camera will spend 10 years capturing the most detailed images of the universe ever taken.”

FUTURE

Don’t Worry, the Earth Is Doomed
Tate Ryan-Mosley | MIT Technology Review
“Catastrophic risks are events that threaten human livelihood on a, well, catastrophic scale. Most are interconnected, meaning that one event—such as a nuclear detonation—is likely to trigger others, like water and food crises, economic depression, and world war. The intricate interdependence of our physical, social, and political systems has left humans vulnerable, something that covid-19 has highlighted.”

TECHNOLOGY

25 Moments in Tech That Defined the Last 25 Years
Editorial Staff | Fast Company
“As Fast Company celebrates our 25th anniversary, we’ve compiled a list of 25 moments that have defined the tech industry since our first issue hit the stands with a cover date of November 1995. …For better or worse—and sometimes both at the same time—these events have had lasting impact. If there’s some alternate universe where they never happened, it’s a different place indeed.”

Image credit: Ian Parker / Unsplash


OpenAI’s GPT-3 Wrote This Short Film—Even the Twist at the End

23 October, 2020 - 15:00

OpenAI’s text-generating AI has gotten a lot of buzz since its release in June. It’s been used to post comments on Reddit, write a poem roasting Elon Musk, and even write an entire article in The Guardian (which editors admitted they worked on and tweaked just as they would a human-written op-ed).

When the system learned to autocomplete images without having been specifically trained to do so (as well as write code, translate between languages, and do math), it even got people speculating about whether GPT-3 might be the gateway to artificial general intelligence (it’s probably not).

Now there’s another feat to add to GPT-3’s list: it wrote a screenplay.

It’s short, and weird, and honestly not that good. But… it’s also not all that bad, especially given that it was written by a machine.

The three-and-a-half-minute short film shows a man knocking on a woman’s door and sharing a story about an accident he was in. It’s hard to tell where the storyline is going, but it surprises viewers with what could be considered a twist ending.

The students who created the film used a tool derived from GPT-3, called Shortly Read, to write the screenplay. They wrote the first several lines, then let the AI take over. The lines the students wrote simply set the scene (“Barb’s reading a book. A knock on the door. She stands and opens it. Rudy, goofy-looking, stands on the other side.”) and provide the first two lines of dialogue.
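Shortly Read runs on GPT-3, which sits behind OpenAI’s private API, but the prompt-then-continue workflow is easy to reproduce with an open model. Here is a rough sketch using the Hugging Face transformers library with GPT-2 as a stand-in; the prompt paraphrases the students’ scene-setting lines, and the generation settings are arbitrary choices, not anything Shortly Read documents.

```python
from transformers import pipeline

# GPT-2 as an open stand-in for GPT-3; expect far weaker output.
generator = pipeline("text-generation", model="gpt2")

# Human-written setup, in the spirit of the students' prompt.
prompt = (
    "Barb's reading a book. A knock on the door. She stands and opens it. "
    "Rudy, goofy-looking, stands on the other side.\n"
    "RUDY: Hi, Barb.\n"
    "BARB:"
)

# Let the model take over from here, as Shortly Read did for the film.
result = generator(prompt, max_new_tokens=120, num_return_sequences=1)
print(result[0]["generated_text"])
```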

Everything that follows is based on GPT-3’s 175 billion parameters—the associations the algorithm draws between words or phrases based on its training data.

The story is a little odd, and maybe not the most compelling or character-driven, but there are definitely worse short films out there (written by humans).

Shortly Read’s tagline is “Never experience writer’s block, ever again.” The tool is meant to inspire stuck writers and perhaps nudge them to take stories or other pieces of content in a direction they wouldn’t have thought of on their own. That sounds well and good, but… how much help can a writer get from a tool like this before it becomes something like cheating?

This isn’t the first time screenwriters have gotten help from a computer, nor even the first time an AI has written an entire movie. It may be the first time an AI that wasn’t specifically trained to write screenplays has done so, though. And if GPT-3 can write a reasonably convincing screenplay, what can’t it write? Legal briefs, news articles, political analyses, letters to your great-aunt… It’s this plethora of use cases that made the algorithm’s creators question whether to release the first version of it at all.

According to media futurist and algorithmic filmmaker Alexis Kirke, we should get used to the idea of computers having a hand in our creative endeavors, especially when it comes to writing movie scripts. “A huge amount of experience has been codified by writers, producers, directors, script editors, and so forth,” he told Digital Trends. “Want to reduce the number of adverbs and adjectives in your script? There’s an algorithm for that. Want to ensure your characters’ dialog all sound[s] different from each other? There’s an algorithm for that. Want to generate alternative, less clichéd, rewrites of a page that keep its general meaning? There’s an algorithm for that.”

It seems there are algorithms for everything these days, and GPT-3 is among the most impressive of them. Be on the lookout for its feature film, because it’s probably coming soon to a theater (or a home streaming platform) near you.

Image Credit: Calamity Ai


How Future AI Could Recognize a Kangaroo Without Ever Having Seen One

22 October, 2020 - 15:00

AI is continuously taking on new challenges, from detecting deepfakes (which, incidentally, are also made using AI) to winning at poker to giving synthetic biology experiments a boost. These impressive feats result partly from the huge datasets the systems are trained on. That training is costly and time-consuming, and it yields AIs that can really only do one thing well.

For example, to train an AI to differentiate between a picture of a dog and one of a cat, it’s fed thousands—if not millions—of labeled images of dogs and cats. A child, on the other hand, can see a dog or cat just once or twice and remember which is which. How can we make AIs learn more like children do?

A team at the University of Waterloo in Ontario has an answer: change the way AIs are trained.

Here’s the thing about the datasets normally used to train AI—besides being huge, they’re highly specific. A picture of a dog can only be a picture of a dog, right? But what about a really small dog with a long-ish tail? That sort of dog, while still being a dog, looks more like a cat than, say, a fully-grown Golden Retriever.

It’s this concept that the Waterloo team’s methodology is based on. They described their work in a paper published on the pre-print (or non-peer-reviewed) server arXiv last month. Teaching an AI system to identify a new class of objects using just one example is what they call “one-shot learning.” But they take it a step further, focusing on “less than one shot learning,” or LO-shot learning for short.

LO-shot learning consists of a system learning to classify various categories based on a number of examples that’s smaller than the number of categories. That’s not the most straightforward concept to wrap your head around, so let’s go back to the dogs and cats example. Say you want to teach an AI to identify dogs, cats, and kangaroos. How could that possibly be done without several clear examples of each animal?

The key, the Waterloo team says, is in what they call soft labels. Unlike hard labels, which label a data point as belonging to one specific class, soft labels tease out the relationship or degree of similarity between that data point and multiple classes. In the case of an AI trained on only dogs and cats, a third class of objects, say, kangaroos, might be described as 60 percent like a dog and 40 percent like a cat (I know—kangaroos probably aren’t the best animal to have thrown in as a third category).

“Soft labels can be used to represent training sets using fewer prototypes than there are classes, achieving large increases in sample efficiency over regular (hard-label) prototypes,” the paper says. Translation? Tell an AI a kangaroo is some fraction cat and some fraction dog—both of which it’s seen and knows well—and it’ll be able to identify a kangaroo without ever having seen one.

If the soft labels are nuanced enough, you could theoretically teach an AI to identify a large number of categories based on a much smaller number of training examples.

The paper’s authors use a simple machine learning algorithm called k-nearest neighbors (kNN) to explore this idea in more depth. The algorithm operates under the assumption that similar things are most likely to exist near each other: if you go to a dog park, there will be lots of dogs but no cats or kangaroos. Go to the Australian grasslands and there’ll be kangaroos but no cats or dogs. And so on.

To train a kNN algorithm to differentiate between categories, you choose specific features to represent each category (for animals, for example, you could use weight and size as features). With one feature on the x-axis and the other on the y-axis, the algorithm creates a graph where data points that are similar to each other cluster near each other. A line down the middle divides the categories, and it’s pretty straightforward for the algorithm to discern which side of the line new data points should fall on.
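To make this concrete, here is a minimal soft-label kNN sketch in Python. It is not the paper’s code: the two prototype points, their soft labels, and the inverse-distance weighting are all invented for illustration. The point is that two training examples carry label information about three classes, so “kangaroo” can win even though no kangaroo prototype exists.

```python
import numpy as np

# Two prototypes with soft labels over THREE classes (values invented).
prototypes = np.array([[30.0, 60.0],    # roughly "dog" (weight kg, height cm)
                       [ 4.0, 25.0]])   # roughly "cat"
classes = ["dog", "cat", "kangaroo"]
soft_labels = np.array([[0.55, 0.00, 0.45],   # mostly dog, partly kangaroo
                        [0.00, 0.55, 0.45]])  # mostly cat, partly kangaroo

def classify(x, k=2):
    """Sum the soft labels of the k nearest prototypes, weighted by
    inverse distance, and return the highest-scoring class."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argsort(dists)[:k]
    scores = (soft_labels[nearest] / (dists[nearest, None] + 1e-9)).sum(axis=0)
    return classes[int(np.argmax(scores))]

print(classify(np.array([32.0, 58.0])))  # near the dog prototype -> "dog"
print(classify(np.array([ 5.0, 27.0])))  # near the cat prototype -> "cat"
print(classify(np.array([17.0, 42.0])))  # between the two -> "kangaroo"
```

With more carefully placed prototypes and richer soft labels, the same trick can carve a plane into many more classes than there are training points.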

The Waterloo team kept it simple and used plots of color on a 2D graph. Using the colors and their locations on the graphs, the team created synthetic datasets and accompanying soft labels. One of the simpler graphs is pictured below, along with soft labels in the form of pie charts.

Image Credit: Ilia Sucholutsky & Matthias Schonlau

When the team had the algorithm plot the boundary lines of the different colors based on these soft labels, it was able to split the plot up into more colors than the number of data points it was given in the soft labels.

While the results are encouraging, the team acknowledges that they’re just the first step, and there’s much more exploration of this concept yet to be done. The kNN algorithm is one of the least complex models out there; what might happen when LO-shot learning is applied to a far more complex algorithm? Also, to apply it, you still need to distill a larger dataset down into soft labels.

One idea the team is already working on is having other algorithms generate the soft labels for the algorithm that’s going to be trained using LO-shot; manually designing soft labels won’t always be as easy as splitting up some pie charts into different colors.

LO-shot’s potential for reducing the amount of training data needed to yield working AI systems is promising. Besides reducing the cost and the time required to train new models, the method could also make AI more accessible to industries, companies, or individuals who don’t have access to large datasets—an important step for democratization of AI.

Image Credit: pen_ash from Pixabay


Hey Google … What Movie Should I Watch Today? How AI Can Affect Our Decisions

21 October, 2020 - 15:00

Have you ever used Google Assistant, Apple’s Siri, or Amazon Alexa to make decisions for you? Perhaps you asked it what new movies have good reviews, or to recommend a cool restaurant in your neighborhood.

Artificial intelligence and virtual assistants are constantly being refined, and may soon be making appointments for you, offering medical advice, or trying to sell you a bottle of wine.

Although AI technology has miles to go to develop social skills on par with ours, some AI has shown impressive language understanding and can complete relatively complex interactive tasks.

In several 2018 demonstrations, Google’s AI booked haircut appointments and restaurant reservations without the receptionists realizing they were talking with a non-human.

It’s likely that the AI systems developed by tech giants such as Amazon and Google will only grow more capable of influencing us in the future.

But What Do We Actually Find Persuasive?

My colleague Adam Duhachek and I found AI messages are more persuasive when they highlight “how” an action should be performed, rather than “why.” For example, people were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen.

We found people generally don’t believe a machine can understand human goals and desires. Take Google’s AlphaGo, an algorithm designed to play the board game Go. Few people would say the algorithm can understand why playing Go is fun, or why it’s meaningful to become a Go champion. Rather, it just follows a pre-programmed algorithm telling it how to move on the game board.

Our research suggests people find AI’s recommendations more persuasive in situations where AI shows easy steps on how to build personalized health insurance, how to avoid a lemon car, or how to choose the right tennis racket for you, rather than why any of these are important to do in a human sense.

Does AI Have Free Will?

Most of us believe humans have free will. We compliment someone who helps others because we think they do it freely, and we penalize those who harm others. What’s more, we are willing to lessen the criminal penalty if the person was deprived of free will, for instance if they were in the grip of a schizophrenic delusion.

But do people think AI has free will? We did an experiment to find out.

Someone is given $100 and offers to split it with you. They’ll get $80 and you’ll get $20. If you reject this offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair. Surely we should get $50, right?

But what if the proposer is an AI? In a research project yet to be published, my colleagues and I found the rejection ratio drops significantly. In other words, people are much more likely to accept this “unfair” offer if proposed by an AI.

This is because we don’t think an AI developed to serve humans has a malicious intent to exploit us—it’s just an algorithm, it doesn’t have free will, so we might as well just accept the $20.

The fact that people could accept unfair offers from AI concerns me, because it might mean this phenomenon could be used maliciously. For example, a mortgage loan company might try to charge unfairly high interest rates by framing the decision as being calculated by an algorithm. Or a manufacturing company might manipulate workers into accepting unfair wages by saying it was a decision made by a computer.

To protect consumers, we need to understand when people are vulnerable to manipulation by AI. Governments should take this into account when considering regulation of AI.

We’re Surprisingly Willing to Divulge to AI

In other work yet to be published, my colleagues and I found people tend to disclose their personal information and embarrassing experiences more willingly to an AI than a human.

We told participants to imagine they were at the doctor’s office for a urinary tract infection. We split the participants, so half spoke to a human doctor and half to an AI doctor. We told them the doctor would ask a few questions to find the best treatment, and that it was up to them how much personal information they provided.

Participants disclosed more personal information to the AI doctor than the human one when it came to potentially embarrassing questions about the use of sex toys, condoms, or other sexual activities. We found this was because people don’t think AI judges our behavior, whereas humans do. Indeed, we asked participants how concerned they were about being negatively judged, and found that this concern about judgment was the underlying mechanism determining how much they divulged.

It seems we feel less embarrassed when talking to AI. This is interesting because many people have grave concerns about AI and privacy, and yet we may be more willing to share our personal details with AI.

But What if AI Does Have Free Will?

We also studied the flipside: what happens when people start to believe AI does have free will? We found giving AI human-like features or a human name could mean people are more likely to believe an AI has free will.

This has several implications:

  • AI can then better persuade people on questions of “why,” because people think the human-like AI may be able to understand human goals and motivations
  • AI’s unfair offer is less likely to be accepted because the human-looking AI may be seen as having its own intentions, which could be exploitative
  • People start feeling judged by the human-like AI and feel embarrassed, and disclose less personal information
  • People start feeling guilty when harming a human-looking AI, and so act more benignly to the AI

We are likely to see more and different types of AI and robots in the future. They might cook, serve, sell us cars, tend to us at the hospital, and even sit at the dining table as a dating partner. It’s important to understand how AI influences our decisions so we can regulate AI to protect ourselves from possible harms.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: hamburgfinn from Pixabay


Can We Trust AI Doctors? Google Health and Academics Battle It Out

20 October, 2020 - 15:00

Machine learning is taking medical diagnosis by storm. From eye disease, breast and other cancers, to more amorphous neurological disorders, AI is routinely matching physicians’ performance, if not beating them outright.

Yet how much can we take those results at face value? When it comes to life and death decisions, when can we put our full trust in enigmatic algorithms—“black boxes” that even their creators cannot fully explain or understand? The problem gets more complex as medical AI crosses multiple disciplines and developers, including both academic and industry powerhouses such as Google, Amazon, or Apple, with disparate incentives.

This week, the two sides battled it out in a heated duel in one of the most prestigious science journals, Nature. On one side are prominent AI researchers at the Princess Margaret Cancer Centre, University of Toronto, Stanford University, Johns Hopkins, Harvard, MIT, and others. On the other side is the titan Google Health.

The trigger was an explosive study by Google Health on breast cancer screening, published in January of this year. The study claimed to have developed an AI system that vastly outperformed radiologists at diagnosing breast cancer and could generalize to populations beyond those used for training—a holy grail of sorts that’s incredibly difficult to achieve due to the lack of large medical imaging datasets. The study made waves across the media landscape, and created a buzz in the public sphere for medical AI’s “coming of age.”

The problem, the academics argued, is that the study lacked sufficient descriptions of the code and model for others to replicate. In other words, we can only trust the study at its word—something that’s just not done in scientific research. Google Health, in turn, penned a polite, nuanced but assertive rebuttal arguing for their need to protect patient information and prevent the AI from malicious attacks.

Academic exchanges like these form the bedrock of science, and may seem incredibly nerdy and outdated—especially because, rather than using online channels, the two sides resorted to a centuries-old pen-and-paper discussion. By doing so, however, they elevated a necessary debate to a broad worldwide audience, each side landing solid punches that, in turn, could lay the basis of a framework for trust and transparency in medical AI—to the benefit of all. Now if they could only rap their arguments in the vein of Hamilton and Jefferson’s Cabinet Battles in Hamilton.

Academics, You Have the Floor

It’s easy to see where the academics’ arguments come from. Science is often painted as a holy endeavor embodying objectivity and truth. But like any discipline touched by people, it’s prone to errors, poor designs, unintentional biases, or—in very small numbers—conscious manipulation to skew the results. Because of this, when publishing results, scientists carefully describe their methodology so others can replicate the findings. If a conclusion—say, that a vaccine protects against Covid-19—holds up in nearly every lab regardless of the scientist, the material, or the subjects, then we have stronger proof that the vaccine actually works. If not, it means the initial study may be wrong—and scientists can then delineate why and move on. Replication is critical to healthy scientific evolution.

But AI research is shredding the dogma.

“In computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress,” said author Dr. Benjamin Haibe-Kains at Princess Margaret Cancer Centre. For example, nuances in computer code or training samples and parameters could dramatically change training and evaluation of results—aspects that can’t be easily described using text alone, as is the norm. The consequence, said the team, is that it makes trying to verify the complex computational pipeline “not possible.” (For academics, that’s the equivalent of gloves off.)

Although the academics took Google Health’s breast cancer study as an example, they acknowledged the problem is far more widespread. By examining the shortfalls of the Google Health study in terms of transparency, the team said, “we provide potential solutions with implications for the broader field.” It’s not an impossible problem. Online repositories such as GitHub, Bitbucket, and others already allow the sharing of code. Others, such as ModelHub.ai, allow the sharing of deep learning models, with support for frameworks such as TensorFlow, which the Google Health team used.

The ins and outs of AI models aside, there’s also the question of sharing the data those models were trained on. It’s a particularly thorny problem for medical AI, because many of those datasets are under license and sharing them can generate privacy concerns. Yet it’s not unheard of. For example, genomics has leveraged patient datasets—essentially each person’s genetic “base code”—for decades, and extensive guidelines exist to protect patient privacy. If you’ve ever used a 23andMe ancestry spit kit and provided consent for your data to be used in large genomic studies, you’ve benefited from those guidelines. Setting up something similar for medical AI isn’t impossible.

In the end, a higher bar for transparency for medical AI will benefit the entire field, including doctors and patients. “In addition to improving accessibility and transparency, such resources can considerably accelerate model development, validation and transition into production and clinical implementation,” the authors wrote.

Google Health, Your Response

Led by Dr. Scott McKinney, Google Health did not mince words. Their general argument: “No doubt the commenters are motivated by protecting future patients as much as scientific principle. We share that sentiment.” But, they argued, under current regulatory frameworks their hands are tied when it comes to open sharing.

For example, when it comes to releasing a version of their model for others to test on different sets of medical images, the team said they simply can’t because their AI system may be classified as “medical device software,” which is subject to oversight. Unrestricted release may lead to liability issues that place patients, providers, and developers at risk.

As for sharing datasets, Google Health argued that their largest source is available online, with access granted upon application (and, with just a hint of sass, they noted that their organization helped fund the resource). Other datasets simply cannot be shared, due to restrictions from ethical review boards.

Finally, the team argued that sharing a model’s “learned parameters”—that is, the bread-and-butter of how they’re constructed—can inadvertently expose the training dataset and model to malicious attack or misuse. It’s certainly a concern: you may have previously heard of GPT-3, the OpenAI algorithm that writes unnervingly like a human—enough to fool Redditors for a week. But it would take a really sick individual to bastardize a breast cancer detection tool for some twisted gratification.

The Room Where It Happens

The academic-Google Health debate is just a small corner of a worldwide reckoning for medical AI. In September 2020, an international consortium of medical experts introduced a set of official standards for clinical trials that deploy AI in medicine, with the goal of plucking out AI snake oil from trustworthy algorithms. One point may sound familiar: how reliably a medical AI functions in the real world, away from favorable training sets or conditions in the lab. The guidelines are among the first of their kind for medical AI, but they won’t be the last.

If this all seems abstract and high up in the ivory tower, think of it another way: you’re now witnessing the room where it happens. By publishing negotiations and discourse publicly, AI developers are inviting additional stakeholders to join in on the conversation. Like self-driving cars, medical AI seems like an inevitability. The question is how to judge and deploy it in a safe, equal manner—while inviting a hefty dose of public trust.

Image Credit: Marc Manhart from Pixabay


Scientists Just Achieved Room Temperature Superconductivity for the First Time

19 October, 2020 - 15:16

Superconductivity could be the key to groundbreaking new technologies in energy, computing, and transportation, but so far it has only occurred in materials chilled to extremely low temperatures. Now researchers have created the first ever room-temperature superconductor.

As a current passes through a conductor it experiences resistance, which saps away useful energy into waste heat and limits the efficiency of all of the modern world’s electronics. But in 1911, Dutch physicist Heike Kamerlingh Onnes discovered that this doesn’t have to be the case.

When he cooled mercury wire to just above absolute zero, the resistance abruptly disappeared. Over the next few decades superconductivity was found in other super-cooled materials, and in 1933 researchers discovered that superconductors also expel magnetic fields. That means that external magnetic fields, which normally pass through just about anything, can’t penetrate the superconductor’s interior and remain at its surface.

These two qualities open up a whole host of possibilities, including lossless power lines and electronic circuits, ultra-sensitive sensors, and incredibly powerful magnets that could be used to levitate trains or make super-efficient turbines. Superconductors are at the heart of some of today’s most cutting-edge technologies, from quantum computers to MRI scanners and the Large Hadron Collider.

The only problem is that they require bulky, costly, and energy-sapping cooling equipment that severely limits where they can be used. But now researchers from the University of Rochester have demonstrated superconductivity at the comparatively balmy temperature of 15 degrees Celsius.

“Because of the limits of low temperature, materials with such extraordinary properties have not quite transformed the world in the way that many might have imagined,” said lead researcher Ranga Dias in a press release. “Our discovery will break down these barriers and open the door to many potential applications.”

The breakthrough, described in a paper in Nature, comes with some substantial caveats, though. The team was only able to create a tiny amount of the material, roughly the same volume as a single droplet from an inkjet printer. And to get it to superconduct they had to squeeze it between two diamonds to create pressures equivalent to three-quarters of those found at the center of the Earth.

The researchers are also still unclear about the exact nature of the material they have made. They combined a mixture of hydrogen, carbon, and sulfur then fired a laser at it to trigger a chemical reaction and create a crystal. But because all these elements have very small atoms, it’s not been possible to work out how they are arranged or what the material’s chemical formula might be.

Nonetheless, the result is a major leap forward for high-temperature superconductors. It follows a string of advances built on Cornell University physicist Neil Ashcroft’s prediction that hydrogen-rich materials are a promising route to room-temperature superconductivity, and it blows the previous record of -13 degrees Celsius out of the water.

For the discovery to ever have practical applications though, the researchers will have to find a way to reduce the pressure required to achieve superconductivity. That will require a better understanding of the properties of the material they’ve created, but they suggest there is lots of scope for tuning their recipe to get closer to ambient pressures.

How soon that could happen is anyone’s guess, but the researchers seem confident and have created a startup called Unearthly Materials to commercialize their work. If they get their way, electrical resistance may soon be a thing of the past.

Image Credit: Gerd Altmann from Pixabay


When Did We Become Fully Human? What Fossils and DNA Tell Us About the Evolution of Modern Intelligence

18 October, 2020 - 18:00

When did something like us first appear on the planet? It turns out there’s remarkably little agreement on this question. Fossils and DNA suggest that people looking like us, anatomically modern Homo sapiens, evolved around 300,000 years ago. Surprisingly, the archaeology—tools, artifacts, cave art—suggests that complex technology and cultures, “behavioral modernity,” evolved more recently: 50,000 to 65,000 years ago.

Some scientists interpret this as suggesting the earliest Homo sapiens weren’t entirely modern. Yet the different kinds of data track different things. Skulls and genes tell us about brains, artifacts about culture. Our brains probably became modern before our cultures.

Key physical and cultural milestones in modern human evolution, including genetic divergence of ethnic groups. Image credit: Nick Longrich / author provided

The “Great Leap”

For 200,000 to 300,000 years after Homo sapiens first appeared, tools and artifacts remained surprisingly simple, little better than Neanderthal technology, and simpler than those of modern hunter-gatherers such as certain indigenous Americans. Starting about 65,000 to 50,000 years ago, more advanced technology started appearing: complex projectile weapons such as bows and spear-throwers, fishhooks, ceramics, sewing needles.

People made representational art—cave paintings of horses, ivory goddesses, lion-headed idols, showing artistic flair and imagination. A bird-bone flute hints at music. Meanwhile, arrival of humans in Australia 65,000 years ago shows we’d mastered seafaring.

The Venus of Brassempouy, 25,000 years old. Image credit: Wikimedia Commons

This sudden flourishing of technology is called the “great leap forward,” supposedly reflecting the evolution of a fully modern human brain. But fossils and DNA suggest that human intelligence became modern far earlier.

Anatomical Modernity

Bones of primitive Homo sapiens first appear 300,000 years ago in Africa, with brains as large or larger than ours. They’re followed by anatomically modern Homo sapiens at least 200,000 years ago, and brain shape became essentially modern by at least 100,000 years ago. At this point, humans had braincases similar in size and shape to ours.

Assuming the brain was as modern as the box that held it, our African ancestors theoretically could have discovered relativity, built space telescopes, written novels and love songs. Their bones say they were just as human as we are.

A 300,000-year-old skull from Morocco. Image credit: NHM

Because the fossil record is so patchy, fossils provide only minimum dates. Human DNA suggests even earlier origins for modernity. By comparing the genetic differences between the DNA of modern people and that of ancient Africans, researchers estimate that our shared ancestors lived 260,000 to 350,000 years ago. All living humans descend from those people, suggesting that we inherited the fundamental commonalities of our species, our humanity, from them.

All their descendants—Bantu, Berber, Aztec, Aboriginal, Tamil, San, Han, Maori, Inuit, Irish—share certain peculiar behaviors absent in other great apes. All human cultures form long-term pair bonds between men and women to care for children. We sing and dance. We make art. We preen our hair, adorn our bodies with ornaments, tattoos and makeup.

We craft shelters. We wield fire and complex tools. We form large, multigenerational social groups with dozens to thousands of people. We cooperate to wage war and help each other. We teach, tell stories, trade. We have morals, laws. We contemplate the stars, our place in the cosmos, life’s meaning, what follows death.

The details of our tools, fashions, families, morals and mythologies vary from tribe to tribe and culture to culture, but all living humans show these behaviors. That suggests these behaviors—or at least, the capacity for them—are innate. These shared behaviors unite all people. They’re the human condition, what it means to be human, and they result from shared ancestry.

We inherited our humanity from peoples in southern Africa 300,000 years ago. The alternative—that everyone, everywhere coincidentally became fully human in the same way at the same time, starting 65,000 years ago—isn’t impossible, but a single origin is more likely.

The Network Effect

Archaeology and biology may seem to disagree, but they actually tell different parts of the human story. Bones and DNA tell us about brain evolution, our hardware. Tools reflect brainpower, but also culture, our hardware and software.

Just as you can upgrade your old computer’s operating system, culture can evolve even if intelligence doesn’t. Humans in ancient times lacked smartphones and spaceflight, but we know from studying philosophers such as Buddha and Aristotle that they were just as clever. Our brains didn’t change, our culture did.

That creates a puzzle. If Pleistocene hunter-gatherers were as smart as us, why did culture remain so primitive for so long? Why did we need hundreds of millennia to invent bows, sewing needles, boats? And what changed? Probably several things.

First, we journeyed out of Africa, occupying more of the planet. There were then simply more humans around to invent things, increasing the odds of a prehistoric Steve Jobs or Leonardo da Vinci. We also faced new environments in the Middle East, the Arctic, India, and Indonesia, with unique climates, foods, and dangers, including other human species. Survival demanded innovation.

Many of these new lands were far more habitable than the Kalahari or the Congo. Climates were milder, but Homo sapiens also left behind African diseases and parasites. That let tribes grow larger, and larger tribes meant more heads to innovate and remember ideas, more manpower, and better ability to specialize. Population drove innovation.

Beijing from space. Image credit: NASA

This triggered feedback cycles. As new technologies appeared and spread—better weapons, clothing, shelters—human numbers could increase further, accelerating cultural evolution again.

Numbers drove culture, culture increased numbers, accelerating cultural evolution, on and on, ultimately pushing human populations to outstrip their ecosystems, devastating the megafauna and forcing the evolution of farming. Finally, agriculture caused an explosive population increase, culminating in civilizations of millions of people. Now, cultural evolution kicked into hyperdrive.

Artifacts reflect culture, and cultural complexity is an emergent property. That is, it’s not just individual-level intelligence that makes cultures sophisticated, but interactions between individuals in groups, and between groups. Like networking millions of processors to make a supercomputer, we increased cultural complexity by increasing the number of people and the links between them.

So our societies and world evolved rapidly in the past 300,000 years, while our brains evolved slowly. We expanded our numbers to almost eight billion, spread across the globe, reshaped the planet. We did it not by adapting our brains but by changing our cultures. And much of the difference between our ancient, simple hunter-gatherer societies and modern societies just reflects the fact that there are lots more of us and more connections between us.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: Wikimedia Commons


This Week’s Awesome Tech Stories From Around the Web (Through October 17)

17 October, 2020 - 15:00
ARTIFICIAL INTELLIGENCE

A Radical New Technique Lets AI Learn With Practically No Data
Karen Hao | MIT Technology Review
“Shown photos of a horse and a rhino, and told a unicorn is something in between, [children] can recognize the mythical creature in a picture book the first time they see it. …Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call ‘less than one’-shot, or LO-shot, learning.”

FUTURE

Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

HEALTH

The Race for a Super-Antibody Against the Coronavirus
Apoorva Mandavilli | The New York Times
“Dozens of companies and academic groups are racing to develop antibody therapies. …But some scientists are betting on a dark horse: Prometheus, a ragtag group of scientists who are months behind in the competition—and yet may ultimately deliver the most powerful antibody.”

SPACE

How to Build a Spacecraft to Save the World
Daniel Oberhaus | Wired
“The goal of the Double Asteroid Redirection Test, or DART, is to slam the [spacecraft] into a small asteroid orbiting a larger asteroid 7 million miles from Earth. …It should be able to change the asteroid’s orbit just enough to be detectable from Earth, demonstrating that this kind of strike could nudge an oncoming threat out of Earth’s way. Beyond that, everything is just an educated guess, which is exactly why NASA needs to punch an asteroid with a robot.”

TRANSPORTATION

Inside Gravity’s Daring Mission to Make Jetpacks a Reality
Oliver Franklin-Wallis | Wired
“The first time someone flies a jetpack, a curious thing happens: just as their body leaves the ground, their legs start to flail. …It’s as if the vestibular system can’t quite believe what’s happening. This isn’t natural. Then suddenly, thrust exceeds weight, and—they’re aloft. …It’s that moment, lift-off, that has given jetpacks an enduring appeal for over a century.”

FUTURE OF FOOD

Inside Singapore’s Huge Bet on Vertical Farming
Megan Tatum | MIT Technology Review
“…to cram all [of Singapore’s] gleaming towers and nearly 6 million people into a land mass half the size of Los Angeles, it has sacrificed many things, including food production. Farms make up no more than 1% of its total land (in the United States it’s 40%), forcing the small city-state to shell out around $10 billion each year importing 90% of its food. Here was an example of technology that could change all that.”

COMPUTING

The Effort to Build the Mathematical Library of the Future
Kevin Hartnett | Quanta
“Digitizing mathematics is a longtime dream. The expected benefits range from the mundane—computers grading students’ homework—to the transcendent: using artificial intelligence to discover new mathematics and find new solutions to old problems.”

Image credit: Kevin Mueller / Unsplash


NASA’s About to Try Grabbing a Chunk of Asteroid to Bring to Earth—and You Can Watch

16 October, 2020 - 14:00

If you’ve seen the movie The Martian, you no doubt remember the rescue scene, in which (spoiler alert!) Matt Damon launches himself off Mars in a stripped-down rocket in hopes of his carefully-calculated trajectory taking him just close enough to his crew for them to pluck him from the void of outer space and bring him safely home to Earth. There’s a multitude of complex physics involved, and who knows how true-to-science the scene is, but getting the details right to successfully grab something in space certainly isn’t easy.

So it will be fascinating to watch NASA aim to do just that, as its OSIRIS-REx spacecraft attempts to pocket a fistful of rock and dust from an asteroid called Bennu then ferry it back to Earth—with the whole endeavor broadcast live on NASA’s website starting Tuesday, October 20 at 5pm Eastern time. Here are some details to know in advance.

The Asteroid

Bennu’s full name is 101955 Bennu, and it’s close enough to Earth to be classified as a near-Earth object, or NEO—that means it orbits within 1.3 AU of the sun. An AU is equivalent to the distance between Earth and the sun, which is about 93 million miles. The asteroid orbits the sun at an average distance of 105 million miles, which is just (“just” being a relative term here!) 12 million miles farther than Earth’s average orbital distance from the sun.

Every six years, Bennu comes closer to Earth, getting to within 0.002 AU. Scientists say this means there’s a high likelihood the asteroid could impact Earth sometime in the late 22nd century. Luckily, an international team is already on the case (plus, due to Bennu’s size and composition, it likely wouldn’t do any harm).

Bennu isn’t solid, but rather a loose clump of rock and dust whose density varies from place to place (in fact, up to 40 percent of it might just be empty space!). Its shape is more similar to a spinning top than a basketball or other orb, and it’s not very big—about a third of a mile wide at its widest point. Since it’s small, it spins pretty fast, doing a full rotation on its axis in less than four and a half hours. That fast spinning also means it’s likely to eject material once in a while, with chunks of rock and other regolith dislodging and being flung into space.

The Spacecraft

OSIRIS-REx stands for Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer. Yeah—that’s a lot. It’s the size of a large van (bigger than a minivan, smaller than a bus), and looks sort of like a box with wings and one long arm. It’s been orbiting Bennu for about two years (since 2018) after taking two years to get there (it was launched in 2016).

The spacecraft’s “arm” is called TAGSAM, which stands for Touch-And-Go Sample Acquisition Mechanism. It’s 11 feet long and has a round collection chamber attached to its end.

OSIRIS-REx doesn’t have any legs to land on, but that’s for a good reason: landing isn’t part of the plan. Which brings us to…

The Plan

As far as plans go, this one is pretty cool. The spacecraft will approach the asteroid, and its arm will reach out to tap the surface. A pressurized canister will shoot out some nitrogen gas to try to dislodge some dust and rock from Bennu, and the collection chamber on the spacecraft’s arm will open up to grab whatever it can; scientists are hoping to get at least 60 grams’ worth of material (that’s only 4 tablespoons! It’s less than the cup of yogurt you eat in the morning!).

And that’s not even the wildest detail; if the mission goes as planned and OSIRIS-REx scoops up those four tablespoons of precious cargo, scientists on Earth still won’t see them for almost three more years; the spacecraft is scheduled for a parachute landing in the Utah desert on September 24, 2023.

The NASA team working on this project thinks it’s likely they’ll find organic material in the sample collection, and it may even give them clues to the origins of life on Earth.

Does the mission have better odds of success than Matt Damon’s rescue in The Martian? Tune in on Tuesday to see for yourself.

Image Credit: NASA


Estonia Is a ‘Digital Republic’—What That Means and Why It May Be Everyone’s Future

15 October, 2020 - 15:00

People around the globe have been watching the buildup to the US election with disbelief. Particularly confusing to many is the furor over postal ballots, which the US president, Donald Trump, is insisting will lead to large-scale voter fraud—despite a complete lack of evidence to back this. And yet this issue has become a central feature of the debate.

Citizens of Estonia, a small nation in the Baltic region, will perhaps be particularly perplexed: since 2005, Estonians have been able to vote online from anywhere in the world. Estonians log on with their digital ID card and can vote as many times as they want during the pre-voting period, with each vote cancelling the last. This unique technological solution has safeguarded Estonian voters against fraud, use of force, and other manipulations of remote voting that many American voters are apprehensive about in the 2020 US election.

Voting online is just the start. Estonia offers the most comprehensive governmental online services in the world. In the US, it takes an average taxpayer with no business income eight hours to file a tax return. In Estonia, it takes just five minutes. In the UK, billions of pounds have been spent on IT, yet the NHS still struggles to make patient data accessible across different health boards. In Estonia, despite having multiple private health service providers, doctors can collate and visualize patient records whenever and wherever necessary, with consent from patients—a real boon in the country’s fight against coronavirus.

Branding itself the first “digital republic” in the world, Estonia has digitized 99 percent of its public services. And, in an era when trust in public services is declining across the globe, Estonia persistently achieves one of the highest ratings of trust in government in the EU. The Estonian government claims that this digitization of public services saves more than 1,400 years of working time and 2 percent of its GDP annually.

The Tiger Leap

The foundation of this digital republic dates back to 1997, a time when only 1.7 percent of the world’s population had internet access, a startup called Google had just registered its domain name, and British prime minister John Major was celebrating the launch of 10 Downing Street’s official website.

Meanwhile, the government of the newly-formed state of Estonia envisaged the creation of a digital society where all citizens would be technologically literate and governance would be paperless, decentralized, transparent, efficient, and equitable. The young post-Soviet government decided to ditch all communist-era legacy technologies and inefficient public service structure.

In a radical move, the government—which had an average age of 35—also decided not to simply adopt Western technologies. When neighboring Finland offered an analogue telephone exchange as a gift, the Estonian government declined, envisaging communication over the internet rather than analogue telephone lines.

The government of Estonia launched a project called Tiigrihüpe (Tiger Leap) in 1997, investing heavily in development and the expansion of internet networks and computer literacy. Within a year of its inception almost all (97 percent) of Estonian schools had internet access and by 2000, Estonia was the first country to pass legislation declaring access to the internet a basic human right. Free wi-fi hotspots started being built in 2001, and now cover almost all populated areas of the country.

The government also understood that in order to create a knowledge-based society, information needs to be shared efficiently while maintaining privacy. This was a radical insight then, and it remains one today, when data sharing among different organizations’ databases is still limited in most countries. It is predicted that by 2022, 93 percent of the world’s total data collected or stored will be such “dark” or siloed data.

Two decades ago, in 2001, Estonia created an anti-silo data management system called X-Road through which public and private organizations can share data securely while maintaining data privacy through cryptography. Initially developed by Estonia, the project is now a joint collaboration between Estonia and Finland.
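
As an illustration of the underlying principle (not X-Road’s actual protocol, which uses full public-key infrastructure, timestamping, and dedicated security servers), here is a minimal Python sketch in which a message between two agencies carries a cryptographic tag, letting the receiver reject anything tampered with in transit:

```python
# Minimal stand-in for X-Road-style verified data exchange, using a
# shared-secret HMAC from Python's standard library. The real X-Road
# relies on public-key signatures and timestamping instead.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # assumption for the sketch

def send(payload: dict) -> tuple:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body, tag

def receive(body: bytes, tag: str) -> dict:
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("message rejected: integrity check failed")
    return json.loads(body)

body, tag = send({"agency": "health", "record": "patient consent granted"})
print(receive(body, tag))  # altering body or tag raises ValueError
```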

A large number of Estonian government and financial institutions using X-Road came under cyber-attack from Russian IP addresses in 2007. This attack made clear how vulnerable centralized data management systems are, and so Estonia required a distributed technology that is resistant to cyber-attack. Addressing this need, in 2012 Estonia became the first country to use blockchain technology for governance.

Blockchain Governance

Distributed ledger technology, commonly known as blockchain, is the underpinning technology of the cryptocurrency Bitcoin. The technology has moved on significantly since its inception in 2009 and is now used for a variety of applications, from supply chains to fighting injustice.

Blockchain is an open-source distributed ledger or database system in which an updated copy of the records is available to all stakeholders at all times. Due to this distributed nature, it is almost impossible for a single person or company to hack everybody’s ledger, ensuring security against cyberattacks.
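
Here is a minimal sketch of that idea, using only Python’s standard library: each block stores the hash of its predecessor, so quietly editing an old record breaks the chain and is caught by a routine validity check. (A real deployment adds distribution, consensus, and signatures on top.)

```python
# A minimal hash chain, the core data structure behind a blockchain:
# each block commits to the previous block's hash, so altering any
# past record is detectable by re-checking the chain.
import hashlib
import json

def block_hash(data: str, prev_hash: str) -> str:
    body = json.dumps({"data": data, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash,
            "hash": block_hash(data, prev_hash)}

def chain_is_valid(chain: list) -> bool:
    for prev, blk in zip(chain, chain[1:]):
        if blk["prev_hash"] != prev["hash"]:          # broken link
            return False
        if blk["hash"] != block_hash(blk["data"], blk["prev_hash"]):
            return False                              # edited record
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("citizen record A", chain[-1]["hash"]))
chain.append(make_block("citizen record B", chain[-1]["hash"]))

print(chain_is_valid(chain))   # True
chain[1]["data"] = "forged record"
print(chain_is_valid(chain))   # False: tampering breaks the hash
```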

Deploying blockchain technology not only strengthens protection against future attacks, but also offers many other benefits to Estonians. For example, in most countries citizens have to fill in many different forms with the same personal information (name, address) when they need to access public services from different government agencies. In Estonia, citizens only need to input their personal information once: the blockchain system enables the relevant data to be immediately accessible to the required department.

This might scare people worried about data privacy. But citizens, not the government, own their personal data in Estonia. Citizens have a digital ID card and approve which part of their information can be reused by which public service. Estonians know that even government officials can’t access their personal data beyond what is approved by them for the required public service. Any unauthorized attempt to access personal data will be identified as invalid: indeed, it is a criminal offense in Estonia for officials to gain unauthorized access to personal data. This transfer of ownership and control of personal data to individuals is facilitated by blockchain technology.

This should be an inspiration for the rest of the world. It is true that most countries do not have similar circumstances to post-Soviet Estonia when the Tiger Leap was introduced. But the same futuristic mindset is required to address the challenge of declining trust.

Minor amendments were made to this article on October 12 to make some of the context behind X-Road clearer.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: crew2139 from Pixabay

Kategorie: Transhumanismus

Alphabet’s New Moonshot Is to Transform How We Grow Food

14 Říjen, 2020 - 15:00

In the 1940s, agronomist Norman Borlaug was sent to Mexico as part of a program backed by the Rockefeller Foundation and the Mexican government to improve the yield of wheat plants there. The thinking was that if America’s southern neighbor had better food security, relations between the two countries would improve, and fewer migrants would cross the US border.

At that time, a plant disease called stem rust was ravaging crops in Mexico and parts of the US, depleting harvests and causing panic among farmers. Borlaug started crossbreeding seeds in hopes of stumbling upon a genetic combination that was resistant to stem rust and produced a high yield. Over the course of three years, Borlaug and his assistants pollinated and inspected hundreds of thousands of plants by hand: 110,000 in just one growing season.

Their work paid off; the resulting wheat seeds produced three times more yield on the same amount of land. Borlaug is known as the father of the Green Revolution, and was later awarded the Nobel Peace Prize.

With the global population growing while climate change begins to impact our ability to produce food, many are calling for a 21st-century Green Revolution. In short, we need to figure out better ways to grow food, and fast.

This week a tech powerhouse joined the effort. Google parent company Alphabet’s X division—internally called “the moonshot factory”—announced a project called Mineral, launched to develop technologies for a more sustainable, resilient, and productive food system.

The way we grow crops now, the project page explains, works pretty well, but it’s not ideal. Dozens or hundreds of acres of a given crop are treated the same across the board, fertilized and sprayed with various chemicals to kill pests and weeds. We get the yields we need with this method, but at the same time we’re progressively depleting the soil by pumping it full of the same chemicals year after year, and in the process we’re making our own food less nutrient-rich. It’s kind of a catch-22: this is the best way to grow the most food, but the quality of that food is getting worse.

But maybe there’s a better way—and Mineral wants to find it.

Like many things nowadays, the key to building something better is data. Genetic data, weather pattern data, soil composition and erosion data, satellite data… The list goes on. As part of the massive data-gathering that will need to be done, X introduced what it’s calling a “plant buggy” (if the term makes you picture a sort of baby stroller for plants, you’re not alone…).

It is in fact not a stroller, though. It looks more like a platform on wheels, topped with solar panels and stuffed with cameras, sensors, and software. It comes in different sizes and shapes so that it can be used on multiple types of crops (inspecting tall, thin stalks of corn, for example, requires a different setup than short, bushy soybean plants). The buggy will collect info about plants’ height, leaf area, and fruit size, then consider it alongside soil, weather, and other data.

Having this type of granular information, Mineral hopes, will allow farmers to treat different areas of their fields or even specific plants individually rather than using blanket solutions that may be good for some plants, but bad for others.
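
As a purely hypothetical sketch of what that might look like in software (every field name and threshold below is invented for illustration, not taken from Mineral), per-plant data could drive per-plant decisions roughly like this:

```python
# Hypothetical per-plant "prescription" logic of the kind Mineral
# describes: fuse buggy measurements with soil data and treat each
# plant individually instead of spraying a whole field uniformly.
plants = [
    {"id": 1, "leaf_area_cm2": 310, "soil_nitrogen_ppm": 14},
    {"id": 2, "leaf_area_cm2": 120, "soil_nitrogen_ppm": 6},
    {"id": 3, "leaf_area_cm2": 280, "soil_nitrogen_ppm": 22},
]

def recommend(plant: dict) -> str:
    if plant["soil_nitrogen_ppm"] < 10:          # invented threshold
        return "apply fertilizer at this plant only"
    if plant["leaf_area_cm2"] < 150:             # invented threshold
        return "flag for inspection (stunted growth)"
    return "no treatment needed"

for p in plants:
    print(p["id"], recommend(p))
```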

It’s sort of like the “quantified self” trend in healthcare: all of our bodies are different, as are our genomes and the factors likely to make us ill. By gathering as much data as possible about ourselves and monitoring our bodies’ various systems, we can customize our diets, medications, exercise, and lifestyles to what works best for us, rather than for the average person.

In a blog post about Mineral, project lead Elliott Grant asks, “What if every single plant could be monitored and given exactly the nutrition it needed? What if we could untangle the genetic and environmental drivers of crop yield? What if we could measure the subtle ways a plant responds to its environment?” He and his team hope that tools like those being developed as part of Mineral will help the agriculture industry transform how food is grown.

There are all sorts of projects—all over the world—devoted to the future of food, from cultured meat and fish to nanoparticles that help plants grow in the desert to factories raising millions of bugs for protein. Google X has taken on some ambitious goals and hasn’t disappointed, so with Mineral joining the effort, we may see another Green Revolution in the not-too-distant future.

Image Credit: Mineral/X

Kategorie: Transhumanismus

Scientists Found a New Way to Control the Brain With Light—No Surgery Required

13 Říjen, 2020 - 15:00

If I had to place money on a neurotech that will win the Nobel Prize, it’s optogenetics.

The technology uses light of different frequencies to control the brain. It’s a brilliant mind-meld of basic neurobiology and engineering that hijacks the mechanism behind how neurons naturally activate—or are silenced—in the brain.

Thanks to optogenetics, in just ten years we’ve been able to artificially incept memories in mice, decipher brain signals that lead to pain, untangle the neural code for addiction, reverse depression, restore rudimentary sight in blinded mice, and overwrite terrible memories with happy ones. Optogenetics is akin to a universal programming language for the brain.

But it’s got two serious drawbacks: it requires gene therapy, and it needs brain surgery to implant optical fibers into the brain.

This week, the original mind behind optogenetics is back with an update that cuts the cord. Dr. Karl Deisseroth’s team at Stanford University, in collaboration with the University of Minnesota, unveiled an upgraded version of optogenetics that controls behavior without the need for surgery. Rather, the system shines light through the skulls of mice, and it penetrates deep into the brain. With light pulses, the team was able to change how likely a mouse was to have seizures, or reprogram its brain so it preferred social company.

To be clear: we’re far off from scientists controlling your brain with flashlights. The key to optogenetics is genetic engineering—without it, neurons (including yours) don’t naturally respond to light.

However, looking ahead, the study is a sure-footed step towards transforming a powerful research technology into a clinical therapy that could potentially help people with neurological problems, such as depression or epilepsy. We are still far from that vision—but the study suggests it’s science fiction potentially within reach.

Opto-What?

To understand optogenetics, we need to dig a little deeper into how brains work.

Essentially, neurons operate on electricity with an additional dash of chemistry. A brain cell is like a living storage container with doors—called ion channels—that separate its internal environment from the outside. When a neuron receives input and that input is sufficiently strong, the cells open their doors. This process generates an electrical current, which then gallops down a neuron’s output branch—a biological highway of sorts. At the terminal, the electrical data transforms into dozens of chemical “ships,” which float across a gap between neurons to deliver the message to its neighbors. This is how neurons in a network communicate, and how that network in turn produces memories, emotions, and behaviors.

Optogenetics hijacks this process.

Using viruses, scientists can add a gene for opsins, a special family of proteins from algae, into living neurons. Opsins are specialized “doors” that open under certain frequencies of light pulses, something mammalian brain cells can’t do. Adding opsins into mouse neurons (or ours) essentially gives them the superpower to respond to light. In classic optogenetics, scientists implant optical fibers near opsin-dotted neurons to deliver the light stimulation. Computer-programmed light pulses can then target these newly light-sensitive neurons in a particular region of the brain and control their activity like puppets on a string.
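
For a feel of the principle, here is a toy simulation, a crude leaky integrate-and-fire neuron rather than real biophysics, in which an extra “opsin” current flows only while the light is on, making the cell spike on command:

```python
# Cartoon of optogenetic control: a leaky integrate-and-fire neuron
# receives an extra inward current whenever the "light" is on, standing
# in for current through light-gated opsin channels. Toy model only.
dt, steps = 0.1, 3000                        # 0.1 ms per step, 300 ms total
v, v_rest, v_thresh = -70.0, -70.0, -55.0    # membrane potential (mV)
tau, opsin_drive = 20.0, 2.0                 # leak time constant (ms), light drive

spike_times = []
for i in range(steps):
    t = i * dt
    light_on = (t % 100) < 20                # 20 ms light pulse every 100 ms
    drive = opsin_drive if light_on else 0.0
    v += dt * (-(v - v_rest) / tau + drive)  # leak plus opsin current
    if v >= v_thresh:                        # threshold crossed: neuron fires
        spike_times.append(round(t, 1))
        v = v_rest                           # reset after the spike

print("spike times (ms):", spike_times)      # spikes occur only during pulses
```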

It gets cooler. Using genetic engineering, scientists can also fine-tune which populations of neurons get that extra power—for example, only those that encode a recent memory, or those involved in depression or epilepsy. This makes it possible to play with those neural circuits using light, while the rest of the brain hums along.

This selectivity is partially why optogenetics is so powerful. But it’s not all ponies and rainbows. As you can imagine, mice don’t particularly enjoy being tethered by optical fibers sprouting from their brains. Humans don’t either, hence the hiccup in adopting the tool for clinical use. Since its introduction, a main goal for next-generation optogenetics has been to cut the cord.

Goodbye Surgery

In the new study, the Deisseroth team started with a main goal: let’s ditch the need for surgical implants altogether. Immediately, this presents a tough problem. It means that bioengineered neurons, inside a brain, need to have a sensitive and powerful enough opsin “door” that responds to light—even when light pulses are diffused by the skull and brain tissue. It’s like a game of telephone where one person yells a message from ten blocks away, through multiple walls and city noise, yet you still have to be able to decipher it and pass it on.

Luckily, the team already had a candidate, one so good it’s a ChRmine (bad joke cringe). Developed last year, ChRmine stands out in its shockingly fast reaction times to light and its ability to generate a large electrical current in neurons—about a 100-fold improvement over any of its predecessors. Because it’s so sensitive, it means that even a spark of light, at its preferred wavelength, can cause it to open its “doors” and in turn control neural activity. What’s more, ChRmine rapidly shuts down after it opens, meaning that it doesn’t overstimulate neurons but rather follows their natural activation trajectory.

As a first test, the team used viruses to add ChRmine to an area deep inside the brain—the ventral tegmental area (VTA), which is critical to how we process reward and addiction, and is also implicated in depression. As of now, the only way to reach the area in a clinical setting is with an implanted electrode. With ChRmine, however, the team found that a light source, placed right outside the mice’s scalp, was able to reliably spark neural activity in the region.

Randomly activating neurons with light, while impressive, may not be all that useful. The next test is whether it’s possible to control a mouse’s behavior using light from outside the brain. Here, the team added ChRmine to dopamine neurons in a mouse; activating these cells produces a feeling of pleasure. Compared to their peers, the light-enhanced mice were far more eager to press a lever to deliver light to their scalps—meaning the light stimulated the neurons enough for the mice to feel pleasure and work for it.

As a more complicated test, the team then used light to control a population of brain cells, called serotonergic cells, in the base of the brain, called the brainstem. These cells are known to influence social behavior—that is, how much an individual enjoys social interaction. It gets slightly disturbing: mice with ChRmine-enhanced cells, specifically in the brainstem, preferred spending time in their test chamber’s “social zone” versus their siblings who didn’t have ChRmine. In other words, without any open-brain surgery and just a few light beams, the team was able to change a socially ambivalent mouse into a friendship-craving social butterfly.

Brain Control From Afar

If you’re thinking “creepy,” you’re not alone. The study suggests that with an injection of a virus carrying the ChRmine gene—either through the eye socket or through veins—it’s potentially possible to control something as integral to a personality as sociability with nothing but light.

To stress my point: this is only possible in mice for now. Our brains are far larger, which means light scattering through the skull and penetrating sufficiently deep becomes far more complicated. And again, our brain cells don’t normally respond to light. You’d have to volunteer for what amounts to gene therapy—which comes with its own slew of problems—before this could potentially work. So keep those tin-foil hats off; scientists can’t yet change an introvert (like me) into an extrovert with lasers.

But for unraveling the inner workings of the brain, it’s an amazing leap into the future. So far, efforts at cutting the optical cord for optogenetics have come with the knee-capped ability to go deep into the brain, limiting control to only surface brain regions such as the cortex. Other methods overheat sensitive brain tissue and culminate in damage. Yet others act as 1990s DOS systems, with significant delay between a command (activate!) and the neuron’s response.

This brain-control OS, though not yet perfect, resolves those problems. Unlike Neuralink and other neural implants, the study suggests it’s possible to control the brain without surgery or implants. All you need is light.

Image Credit: othebo from Pixabay

Kategorie: Transhumanismus

Space Mining Should Be a Global Project—But It’s Not Starting Off That Way

12 Říjen, 2020 - 15:00

Exploiting the resources of outer space might be key to the future expansion of the human species. But researchers argue that the US is trying to skew the game in its favor, with potentially disastrous consequences.

The enormous cost of lifting material into space means that any serious effort to colonize the solar system will require us to rely on resources beyond our atmosphere. Water will be the new gold thanks to its crucial role in sustaining life, as well as the fact it can be split into hydrogen fuel and oxygen for breathing.

Regolith found on the surface of rocky bodies like the moon and Mars will be a crucial building material, while some companies think it will eventually be profitable to extract precious metals and rare earth elements from asteroids and return them to Earth. But so far, there’s little in the way of regulation designed to govern how these activities should be managed.

Now two Canadian researchers argue in a paper in Science that recent policy moves by the US are part of a concerted effort to refocus international space cooperation towards short-term commercial interests, which could precipitate a “race to the bottom” that sabotages efforts to safely manage the development of space.

Aaron Boley and Michael Byers at the University of British Columbia trace back the start of this push to the 2015 Commercial Space Launch Competitiveness Act, which gave US citizens and companies the right to own and sell space resources under US law. In April this year, President Trump doubled down with an executive order affirming the right to commercial space mining and explicitly rejecting the idea that space is a “global commons,” flying in the face of established international norms.

Since then, NASA has announced that any countries wishing to partner on its forthcoming Artemis missions designed to establish a permanent human presence on the moon will have to sign bilateral agreements known as Artemis Accords. These agreements will enshrine the idea that commercial space mining will be governed by national laws rather than international ones, the authors write, and that companies can declare “safety zones” around their operations to exclude others.

Speaking to Space.com, Mike Gold, the acting associate administrator for NASA’s Office of International and Interagency Relations, disputes the authors’ characterization of the accords and says they are based on the internationally recognized Outer Space Treaty. He says they don’t include agreement on national regulation of mining or companies’ rights to establish safety zones, though they do assert the right to extract and use space resources.

But given that they’ve yet to be released or even finalized, it’s not clear how far these rights extend or how they are enshrined in the agreements. And the authors point out that the fact that they are being negotiated bilaterally means the US will be able to use its dominant position to push its interpretation of international law and its overtly commercial goals for space development.

Space policy designed around the exploitation of resources holds many dangers, say the paper authors. For a start, loosely-regulated space mining could result in the destruction of deposits that could hold invaluable scientific information. It could also kick up dangerous amounts of lunar dust that can cause serious damage to space vehicles, increase the amount of space debris, or in a worst-case scenario, create meteorites that could threaten satellites or even impact Earth.

By eschewing a multilateral approach to setting space policy, the US also opens the door to a free-for-all where every country makes up its own rules. Russia is highly critical of the Artemis Accords process and China appears to be frozen out of it, suggesting that two major space powers will not be bound by the new rules. That potentially sets the scene for a race to the bottom, where countries compete to set the laxest rules for space mining to attract investment.

The authors call on other nations to speak up and attempt to set rules through the UN Committee on the Peaceful Uses of Outer Space. Writing in The Conversation, Scott Shackelford from Indiana University suggests a good model could be the 1959 Antarctic Treaty, which froze territorial claims and reserved the continent for “peaceful purposes” and “scientific investigation.”

But the momentum behind the US’ push might be difficult to counter. Last month, NASA announced it would pay companies to excavate small amounts of regolith on the moon. Boley and Byers admit that if this went ahead and was not protested by other nations, it could set a precedent in international law that would be hard to overcome.

For better or worse, it seems that US dominance in space exploration means it’s in the driver’s seat when it comes to setting the rules. As they say, to the victor go the spoils.

Image Credit: NASA

Kategorie: Transhumanismus

Watch a Jet Suit Pilot Glide Up a Mountain in a Test for Wilderness Paramedics

11 Říjen, 2020 - 15:00

A few years ago, I saw a guy in a jet suit take off in San Francisco’s Golden Gate Park. The roar was deafening, the smell of fuel overwhelming. Over the span of a few minutes, he hovered above the ground and moved about a bit. The jet suit’s inventor, Richard Browning, had left behind a career in the energy industry and a stint in the Royal Marines to go after a childhood dream.

Amazingly, he’d succeeded.

But the jet suit seemed a bespoke, one-off kind of thing. It didn’t appear poised to revolutionize office commutes (remember those?) or even weekend recreation. Not yet. Since then, however, Browning’s dialed in his invention, and in addition to a barnstorming tour, his company, Gravity Industries, has begun exploring ways his jet suit could help people.

Which explains why, not too long ago, you’d have found Browning gliding up a mountainside to the aid of an “injured” hiker in England’s Lake District. It was a trial, in partnership with the Great North Air Ambulance Service (GNAAS), to see if a personal jet suit might be a new tool for emergency responders in wilderness areas.

The idea isn’t to replace emergency personnel on foot or helicopters to airlift serious cases. Rather, the main motivation is getting a first responder on site as fast as possible. Whereas it would have taken emergency responders 25 minutes to reach the hiker on foot, Browning and his jet suit were on location in a mere 90 seconds. A clear advantage.

The Lake District has dozens of patients in need of support every month, according to GNAAS director of operations and paramedic Andy Mawson. The first paramedic to reach a patient can assess the situation, communicate what’s needed to the team, and stabilize the patient.

“We think this technology could enable our team to reach some patients much quicker than ever before. In many cases, this would ease the patient’s suffering. In some cases, it would save lives,” Mawson said in a press release.

The test certainly demonstrated the jet suit’s speed. That said, it may not be useful in every situation. For example, the suit is limited to locations within 5 to 10 minutes flight time (one way). This is one reason the GNAAS chose the Lake District, which has a high volume of calls in a fairly compact geographic footprint. Also, Browning says he typically operates the jet suit near the ground for safety reasons. Extra steep or cliffy terrain might prove an impediment (though you could still fly to the base of any such features).

And what about training, you may ask?

Browning makes it look easy, but he invented the thing and has logged many hours flying it. According to a Red Bull interview from last year, it was no walk in the park to fly early on, requiring great balance and strength.

But Browning has since refined the suit, including the addition of a rear jet for stability, goggles with a head-up display, and computer-automated thrust to compensate for the suit losing weight as it burns through fuel. These days, according to Browning, it’s a much more intuitive experience.

“It’s a bit like riding a bicycle or skiing or one of those things where it’s just about you thinking about where you want to go and your body intuitively going there,” Browning told Digital Trends in a recent profile. “You’re not steering some joystick or a steering wheel. We’ve had people learn to do this in four or five goes—with each go just lasting around 90 seconds. All credit to the human subconscious—it’s just this floating, dreamlike state.”

The biggest near-term barrier may actually be cost. According to the Red Bull article, at least one suit has sold for over $400,000. Depending on the customer and use case, then, that price tag might be a bit steep. But it needn’t stay that high forever. With enough demand, one could imagine a standardized manufacturing process bringing the cost down.

Still, the test certainly impressed all participants, and GNAAS and Gravity Industries plan to continue exploring next steps. “We could see the need. What we didn’t know for sure is how this would work in practice,” Mawson said. “Well, we’ve seen it now, and it is, quite honestly, awesome.”

Yep. It’s a guy in a jet suit after all, which is pretty freaking cool. And if their work brings about a new cadre of jet-suit-equipped wilderness paramedics—all the cooler.

Image Credit: Gravity Industries

Kategorie: Transhumanismus

This Week’s Awesome Tech Stories From Around the Web (Through October 10)

10 Říjen, 2020 - 15:00
ARTIFICIAL INTELLIGENCE

GPT-3 Bot Spends a Week Replying on Reddit, Starts Talking About the Illuminati
Rhett Jones | Gizmodo
“…the length of the replies was especially unusual in that they were sometimes coming within a minute of the question first being asked. After an impressive run, the user was revealed to be a bot using OpenAI’s remarkable language model GPT-3. And now we’re looking at every ‘person’ online with an extra level of skepticism.”

TECHNOLOGY

The Quantum Internet Will Blow Your Mind. Here’s What It Will Look Like
Dan Hurley | Discover
“Fifty or so miles east of New York City, on the campus of Brookhaven National Laboratory, Eden Figueroa is one of the world’s pioneering gardeners planting the seeds of a quantum internet. Capable of sending enormous amounts of data over vast distances, it would work not just faster than the current internet but faster than the speed of light—instantaneously, in fact, like the teleportation of Mr. Spock and Captain Kirk in Star Trek.”

ROBOTICS

This Robot Fry Chef on Rails Can Be Yours for $30,000
James Vincent | The Verge
“Like Flippy before it, Flippy ROAR is designed to automate simple food prep, specifically anything involving fryers and grills. The robot uses machine learning to identify foodstuff and a camera array (which includes a 3D depth-sensing cam from Intel and a thermal camera) to navigate its environment. A robotic arm then wields a spatula and grabs baskets full of food to fry.”

AUGMENTED REALITY

Snapchat Has Turned London Into an Augmented Reality Experiment
Will Bedingfield | Wired UK
“The dream of graffiti artists everywhere is now a reality—vandals have daubed the whole of Carnaby Street in red and blue paint. Luckily, this vandalism is easy to clean and totally invisible. Today, Snapchat has launched Local Lenses—a new feature that is one of the first persistent, large scale, collaborative uses of augmented reality.”

SPACE

SpaceX Has Launched Enough Satellites for Starlink’s Upcoming Public Beta
Jon Brodkin | Ars Technica
“After [Tuesday’s] launch of 60 Starlink satellites, SpaceX CEO Elon Musk wrote on Twitter that ‘[o]nce these satellites reach their target position, we will be able to roll out a fairly wide public beta in northern US & hopefully southern Canada. Other countries to follow as soon as we receive regulatory approval.’ …SpaceX has over 700 satellites in orbit after yesterday’s launch.”

GOVERNANCE

Congress Unveils Its Plan to Curb Big Tech’s Power
Gilad Edelman | Wired
“The implications of that agreement go beyond regulating Big Tech. …That’s the result of two generations of letting monopolists monopolize. If the Big Tech investigation convinces Democrats and Republicans to tip the scales back in the little guy’s favor, it could reshape the US economy in ways that ripple far outside the borders of Silicon Valley.”

INNOVATION

Let’s Make the Post-Pandemic Era a New Golden Age for Invention
Todd Rovak | Fast Company
“The post-pandemic era could be the next golden age for inventors. All new concepts are born out of necessity and pressure—and there has never been another time with as much pressure as we are experiencing right now. Here are ways companies in several industries can lead the charge on this new approach.”

Image Credit: Kevin Mueller / Unsplash

Kategorie: Transhumanismus

A Ridiculously Huge New Solar Farm Just Came Online in China

9 Říjen, 2020 - 15:00

The Chinese economy has suffered as a result of the pandemic, but one sector that’s forging full-steam ahead is energy. Last week saw the opening of a massive new solar farm—the second-largest in the world—in the northwest province of Qinghai.

The project is a collaboration between Chinese company Sungrow, which specializes in inverters for renewable energy sources, and the state-owned utility Huanghe Hydropower Development. Its 2.2 gigawatt capacity makes it second only to India’s Bhadla solar park, which opened late last year and has a capacity of 2.5 gigawatts.

Both dwarf the US leader in solar power generation, currently the Solar Star farm outside Los Angeles, which uses more than 1.7 million solar panels to generate 579 megawatts of power, enough to supply 255,000 homes.

The Qinghai plant—located in the desert outside a city of 1.4 million people called Xining—will not only generate almost four times as much electricity as the LA plant, it will be able to store a lot of it too, with 202.8 megawatt-hours of storage capacity.

So why would a relatively small (for China) city need so much energy?

The solar farm wasn’t built with the intent of supplying power to nearby communities. Rather, the site is connected to an 800-kilovolt power line that will run 1,587 kilometers (986 miles) east across Qinghai, Gansu, Shaanxi, and Henan provinces. The ultrahigh-voltage direct-current line minimizes the power lost in transit: higher voltage means lower current for the same delivered power, and direct current also flows through conductors more uniformly than alternating current.
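
The physics is straightforward: for a fixed power delivery, line current falls as voltage rises (I = P / V), and resistive loss scales with the square of the current (I²R). A quick back-of-the-envelope calculation, using an assumed line resistance rather than the actual line’s parameters, shows why 800 kilovolts pays off:

```python
# Why ultrahigh voltage matters: for fixed delivered power P, current
# is I = P / V and resistive loss is I**2 * R, so doubling the voltage
# quarters the loss. R below is an assumption for illustration only.
def loss_fraction(power_w: float, volts: float, resistance_ohm: float) -> float:
    current = power_w / volts              # I = P / V
    return current**2 * resistance_ohm / power_w

P = 2.2e9   # 2.2 GW, the plant's capacity
R = 10.0    # assumed total line resistance (ohms)

for kv in (200, 400, 800):
    print(f"{kv} kV: {loss_fraction(P, kv * 1e3, R):.1%} lost in the line")
# Loss falls quadratically: roughly 55% -> 14% -> 3.4%
```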

If you look at a map of China, it’s clear that the bulk of the country’s population resides in cities scattered across the east and south. Several are mega-cities with more than 10 million people—Shanghai, Beijing, Shenzhen, and Chengdu, to name a few—and beyond these giants, there are a whopping 113 cities with more than a million people. For comparison’s sake, only 10 cities in the US have more than a million people.

The Chinese Communist Party has an ambitious plan to build the world’s largest supergrid, which would connect six different regional grids and carry power from renewable sources from the west to the east. And last month, President Xi Jinping pledged (virtually, of course) before the UN General Assembly that China would reach carbon neutrality—meaning it will sequester more carbon than it emits—by 2060. Xi also said the country would reach its peak carbon dioxide emissions in just 10 years, by 2030.

China currently emits more carbon than any other nation in the world, so these are no small goals. Experts estimate that meeting the targets will entail more than $5 trillion of investment. If China were able to fulfill these promises, though, the impact would be by far the biggest of any carbon-neutrality commitment made by a country or company to date.

Opening the world’s second-largest solar power plant may be just a drop in the bucket, but it’s not a bad place to start.

Image Credit: Sungrow

Kategorie: Transhumanismus

A New Factory in France Will Mass-Produce Bugs as Food

8 Říjen, 2020 - 15:00

Though the world’s population is no longer predicted to grow as much as we thought by the end of this century, there are still going to be a lot more people on Earth in 30, 50, and 80 years than there are now. And those people are going to need healthy food that comes from a sustainable source. Technologies like cultured meat and fish, vertical farming, and genetic engineering of crops are all working to feed more people while leaving a smaller environmental footprint.

A new facility in northern France aims to help solve the future of food problem in a new, unexpected, and kind of cringe-inducing way: by manufacturing a huge volume of bugs—for eating.

Before you gag and close the page, though, wait; these particular bugs aren’t intended for human consumption, at least not directly.

Our food system and consumption patterns are problematic not just because of the food we eat, but because of the food our food eats. Factory farming uses up a ton of land and resources; a 2018 study found that while meat and dairy provide just 18 percent of the calories people consume, they use 83 percent of our total farmland and produce 60 percent of agriculture’s greenhouse gas emissions. That farmland is partly taken up by the animals themselves, but it’s also used to grow crops like corn and soy exclusively for animal consumption.

And we’re not just talking cows and pigs. Seafood is part of the problem, too. Farm-raised salmon, for example, are fed not just smaller fish (which depletes ecosystems), but also soy that’s grown on land.

Enter the insects. Or, more appropriately in this case, enter Ÿnsect, the French company with big ambitions to help change the way the world eats. Ÿnsect raised $125 million in Series C funding in early 2019, and at the time already had $70 million worth of aggregated orders to fill. Now they’re building a bug-farming plant to churn out tiny critters in record numbers.

You’ve probably heard of vertical farms in the context of plants; most existing vertical farms use LED lights and a precise mixture of nutrients and water to grow leafy greens or other produce indoors. They maximize the growing area by stacking several layers of plants on top of one another; the method doesn’t offer as much space as outdoor fields, but it can yield a lot more than you might think.

Ÿnsect’s new plant will use layered trays too, except they’ll be cultivating beetle larvae instead of plants. The ceilings of the facility are 130 feet high—that’s a lot of vertical space to grow bugs in. Those of us who are grossed out by the thought will be glad to know that the whole operation will be highly automated; robots will tend to and harvest the beetles, and AI will be employed to keep tabs on important growing conditions like temperature and humidity.

The plant will initially be able to produce 20,000 tons of insect protein a year, and Ÿnsect is already working with the biggest fish feed company in the world, though production at the new facility isn’t slated to start until 2022.

Besides fish feed, Ÿnsect is also marketing its product for use in fertilizer and pet food. It’s uncertain how realistic the pet food angle is, as I’d imagine most of us love our pets too much to feed them bugs. But who knows—there’s plenty of hypothesizing that insects will be a central source of protein for people in the future, as they’re not only more sustainable than meat, but in some cases more nutritious too.

We’ll just have to come up with some really creative recipes.

Image Credit: Ÿnsect

Kategorie: Transhumanismus

2020 Nobel Prize in Physics Awarded for Work on Black Holes—an Astrophysicist Explains the Trailblazing Discoveries

7 Říjen, 2020 - 15:00

Black holes are perhaps the most mysterious objects in nature. They warp space and time in extreme ways and contain a mathematical impossibility, a singularity—an infinitely hot and dense object within. But if black holes exist and are truly black, how exactly would we ever be able to make an observation?

Roger Penrose is a theoretical physicist who works on black holes, and his work has influenced not just me but my entire generation through his series of popular books, which are loaded with his exquisite hand-drawn illustrations of deep physical concepts. Yesterday the Nobel Committee announced that the 2020 Nobel Prize in physics will be awarded to three scientists—Sir Roger Penrose, Reinhard Genzel, and Andrea Ghez—who helped discover the answers to such profound questions. Andrea Ghez is only the fourth woman to win the Nobel Prize in physics.

Roger Penrose was famous for his detailed illustrations. This is one of his diagrams of an empty universe. Roger Penrose via Wikimedia, CC BY-ND

As a graduate student in the 1990s at Penn State, where Penrose holds a visiting position, I had many opportunities to interact with him. For many years I was intimidated by this giant in my field, only stealing glimpses of him working in his office, sketching strange-looking scientific drawings on his blackboard. Later, when I finally got the courage to speak with him, I quickly realized that he is among the most approachable people around.

Dying Stars Form Black Holes

Sir Roger Penrose won half the prize for his seminal work in 1965, which proved, using a series of mathematical arguments, that under very general conditions, collapsing matter would trigger the formation of a black hole.

This rigorous result opened up the possibility that the astrophysical process of gravitational collapse, which occurs when a star runs out of its nuclear fuel, would lead to the formation of black holes in nature. He was also able to show that at the heart of a black hole must lie a physical singularity—an object with infinite density, where the laws of physics simply break down. At the singularity, our very conceptions of space, time, and matter fall apart and resolving this issue is perhaps the biggest open problem in theoretical physics today.

Penrose invented new mathematical concepts and techniques while developing this proof. Those equations that Penrose derived in 1965 have been used by physicists studying black holes ever since. In fact, just a few years later, Stephen Hawking, alongside Penrose, used the same mathematical tools to prove that the Big Bang cosmological model—our current best model for how the entire universe came into existence—had a singularity at the very initial moment. These are results from the celebrated Penrose-Hawking Singularity Theorem.

The fact that mathematics demonstrated astrophysical black holes can actually exist in nature is exactly what energized the quest to search for them using astronomical techniques. Indeed, since Penrose’s work in the 1960s, numerous black holes have been identified.

Black Holes Play Yo-Yo With Stars

The remaining half of the prize was shared between astronomers Reinhard Genzel and Andrea Ghez, who each led a team that discovered the presence of a supermassive black hole, four million times more massive than the sun, at the center of our Milky Way galaxy.

Genzel is an astrophysicist at the Max Planck Institute for Extraterrestrial Physics, Germany and the University of California, Berkeley. Ghez is an astronomer at the University of California, Los Angeles.

The location of the black hole in the Milky Way galaxy relative to our solar system. Johan Jarnestad/The Royal Swedish Academy of Sciences, CC BY-NC

Genzel and Ghez used the world’s largest telescopes (the Keck Observatory and the Very Large Telescope) to study the movement of stars around Sagittarius A* at the center of our galaxy. They each independently discovered that an extremely massive—four million times more massive than our sun—invisible object is pulling on these stars, making them move in very unusual ways. This is considered the most convincing evidence of a black hole at the center of our galaxy.
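
The underlying method is classical celestial mechanics: apply Kepler’s third law to a star’s orbit. Using approximate published values for the star S2, one of the stars both teams tracked for decades, the arithmetic lands right at the reported mass:

```python
# Weighing the black hole with Kepler's third law: in units of AU,
# years, and solar masses, M = a**3 / T**2. Orbital values for the
# star S2 below are rounded from published measurements.
a_au = 970.0    # semi-major axis of S2's orbit, roughly 970 AU
T_yr = 16.0     # orbital period, roughly 16 years

mass_solar = a_au**3 / T_yr**2
print(f"inferred mass: {mass_solar:.2e} solar masses")  # ~3.6e6
```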

This 2020 Nobel Prize, which follows on the heels of the 2017 Nobel Prize for the discovery of gravitational waves from black holes and other recent stunning discoveries in the field—such as the 2019 image of a black hole’s event horizon by the Event Horizon Telescope—serves as great recognition and inspiration for all humankind, especially for those of us in the relativity and gravitation community who follow in the footsteps of Albert Einstein himself.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Gerd Altmann from Pixabay

Kategorie: Transhumanismus