Futurism - Robot Intelligence
Recent advances in artificial intelligence have made it clear that our computers need to have a moral code. Disagree? Consider this: A car is driving down the road when a child on a bicycle suddenly swerves in front of it. Does the car swerve into an oncoming lane, hitting another car that is already there? Does the car swerve off the road and hit a tree? Does it continue forward and hit the child?
Each option comes with the same problem: it could result in death.
It’s an unfortunate scenario, but humans face such scenarios every day, and if an autonomous car is the one in control, it needs to be able to make this choice. And that means that we need to figure out how to program morality into our computers.
Vincent Conitzer, a professor of computer science at Duke University, recently received a grant from the Future of Life Institute to figure out how we can make an advanced AI that is able to make moral judgments…and act on them.

Making Morality
At first glance, the goal seems simple enough—make an AI that behaves in a way that is ethically responsible; however, it’s far more complicated than it initially seems, as an enormous number of factors come into play. As Conitzer’s project outlines, “moral judgments are affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems.”
That’s what we’re trying to do now.
In a recent interview with Futurism, Conitzer clarified that, while the public may be concerned about ensuring that rogue AI don’t decide to wipe out humanity, such a thing really isn’t a viable threat at the present time (and it won’t be for a long, long time). As a result, his team isn’t concerned with preventing a global robotic apocalypse by making selfless AI that adore humanity. Rather, on a much more basic level, they are focused on ensuring that our artificial intelligence systems are able to make the hard, moral choices that humans make on a daily basis.
So, how do you make an AI that is able to make a difficult moral decision?
Conitzer explains that, to reach this goal, the team is following a two-part process: having people make ethical choices in order to find patterns, and then figuring out how those patterns can be translated into an artificial intelligence. He clarifies, “what we’re working on right now is actually having people make ethical decisions, or state what decision they would make in a given situation, and then we use machine learning to try to identify what the general pattern is and determine the extent to which we could reproduce those kinds of decisions.”
In short, the team is trying to find the patterns in our moral choices and translate those patterns into AI systems. Conitzer notes that, on a basic level, it’s all about predicting what a human would do in a given situation: “if we can become very good at predicting what kind of decisions people make in these kind of ethical circumstances, well then, we could make those decisions ourselves in the form of the computer program.”
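Conitzer’s team hasn’t published code, but the pattern-learning idea can be sketched in miniature: collect human judgments on scenarios described by morally relevant features, then predict the decision a human would make in a new scenario. Everything below (the feature names, the data, and the nearest-neighbour rule) is an invented illustration, not the project’s actual method.

```python
# Toy sketch of learning moral-decision patterns from human judgments.
# Features and data are hypothetical; the real project's representation
# of rights, roles, and promises is far richer.

# Each scenario: (harm_to_child, harm_to_others, promise_broken) -> human choice
judgments = [
    ((1, 0, 0), "swerve"),
    ((1, 1, 0), "swerve"),
    ((0, 1, 0), "stay"),
    ((0, 0, 1), "stay"),
    ((1, 0, 1), "swerve"),
]

def predict(features):
    """1-nearest-neighbour on Hamming distance: mimic the closest past judgment."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    nearest = min(judgments, key=lambda item: distance(item[0], features))
    return nearest[1]

print(predict((1, 1, 1)))  # → swerve (closest labeled scenario decides)
```

Real systems would use far richer features and models, but the shape of the task is the same: human choices in, predicted choices out.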
However, one major problem with this is, of course, that morality is not objective—it’s neither timeless nor universal.
Conitzer articulates the problem by looking to previous decades: “if we did the same ethical tests a hundred years ago, the decisions that we would get from people would be much more racist, sexist, and all kinds of other things that we wouldn’t see as ‘good’ now. Similarly, right now, maybe our moral development hasn’t come to its apex, and a hundred years from now people might feel that some of the things we do right now, like how we treat animals, are completely immoral. So there’s a risk of bias, of getting stuck at whatever our current level of moral development is.”
And of course, there is the aforementioned problem regarding how complex morality is. “Pure altruism, that’s very easy to address in game theory, but maybe you feel like you owe me something based on previous actions. That’s missing from the game theory literature, and so that’s something that we’re also thinking about a lot—how can you make this, what game theory calls a ‘solution concept,’ sensible? How can you compute these things?”
To solve these problems, and to help figure out exactly how morality functions and can (hopefully) be programmed into an AI, the team is combining methods from computer science, philosophy, and psychology. “That’s, in a nutshell, what our project is about,” Conitzer asserts.
But what about sentient AI? When will we need to start worrying about them and discussing how they should be regulated?

The Human-Like AI
According to Conitzer, human-like artificial intelligence won’t be around for some time yet (so yay! No Terminator-styled apocalypse…at least for the next few years).
“Recently, there have been a number of steps towards such a system, and I think there have been a lot of surprising advances….but I think having something like a ‘true AI,’ one that’s really as flexible, able to abstract, and do all these things that humans do so easily, I think we’re still quite far away from that,” Conitzer asserts.
True, we can program systems to do a lot of things that humans do well, but there are some things that are exceedingly complex and hard to translate into a pattern that computers can recognize and learn from (which is ultimately the basis of all AI).
“What came out of early AI research, the first couple decades of AI research, was the fact that certain things that we had thought of as being real benchmarks for intelligence, like being able to play chess well, were actually quite accessible to computers. It was not easy to write and create a chess-playing program, but it was doable.”
But Conitzer clarifies that, as it turns out, playing games isn’t exactly a good measure of human-like intelligence. Or at least, there is a lot more to the human mind. “Meanwhile, we learned that other problems that were very simple for people were actually quite hard for computers, or to program computers to do. For example, recognizing your grandmother in a crowd. You could do that quite easily, but it’s actually very difficult to program a computer to recognize things that well.”
Since the early days of AI research, we have made computers that are able to recognize and identify specific images. But the main point stands: it is remarkably difficult to program a system that can do all of the things humans can do, which is why it will be some time before we have a ‘true AI.’
Yet, Conitzer asserts that now is the time to start considering the rules we will use to govern such intelligences. “It may be quite a bit further out, but to computer scientists, that means maybe just on the order of decades, and it definitely makes sense to try to think about these things a little bit ahead.” And he notes that, even though we don’t have any human-like robots just yet, our intelligence systems are already making moral choices and could, potentially, save or end lives.
“Very often, many of these decisions that they make do impact people, and we may need to make decisions that would typically be considered morally loaded. A standard example is a self-driving car that has to decide to either go straight and crash into the car ahead of it or veer off and maybe hurt some pedestrian. How do you make those trade-offs? And that I think is something we can really make some progress on. This doesn’t require superintelligent AI; this can just be programs that make these kinds of trade-offs in various ways.”
But of course, knowing what decision to make will first require knowing exactly how our morality operates (or at least having a fairly good idea). From there, we can begin to program it, and that’s what Conitzer and his team are hoping to do.
So welcome to the dawn of moral robots.
This interview has been edited for brevity and clarity.
Sony has announced plans to develop a robot “capable of forming an emotional bond with customers.”
Sony’s chief executive Kazuo Hirai did not disclose specific details about the robots, but said the company will propose new business models that integrate hardware and services to provide emotionally compelling experiences.
Sony is re-entering the consumer robotics game after increased competition in Asian markets led to massive cost cutting in 2006. The company had previously launched AIBO, its canine-modelled artificial intelligence robot. Alongside its popularity as a consumer robot, AIBO units were used by researchers in a number of areas, including a robotic football tournament in 2005.

Following AIBO’s Pawsteps
A decade later, it seems that a lot of tech companies are finding ways to make human-robot interaction as warm and fuzzy as possible. Japanese telecoms company SoftBank made similar “emotional” claims about its Pepper companion robot, while Boston Dynamics earlier unveiled the SpotMini, a robot with a sense of humor.
It’s not surprising that Sony would want to get back in the game; it made great strides a decade ago with the AIBO dogs, for which some users have gone as far as to hold funerals.
Hirai also announced that virtual reality will be another future area of growth for Sony. The PlayStation VR system is set to launch in October, and Sony believes it’s well-placed to take advantage of the technology in areas like entertainment and digital imaging as well as gaming. The company also says it’s considering “cultivating [VR] as a new business domain.”
All in all, Sony has a lot in store for the newest generation of techies.
The post Sony To Create Robot That Can Form Emotional Bond With People appeared first on Futurism.
This year marks the 20th Robocup, featuring 3,500 participants: 500 teams from 40 countries. The cup takes place from June 30 to July 4 in Leipzig, Germany.
The event is a big one. The competition allots 70 playing fields, ranging from six to 170 square meters in size and built precisely to international standards and regulations. In fact, the soccer competitions alone have been allocated 2,200 square meters of playing fields.
Robocup is a truly international event. The Leipziger Messe (Leipzig Trade Fair) has issued more than 800 visa invitation letters specifically for the cup.

The FIFA Challenge And More
According to the website, “ever since 1997, the RoboCup Federation has been pursuing its objective of developing intelligent humanoid soccer-playing robots which by 2050 will be able to beat the current FIFA champions.” This does sound ambitious, but Bonn-Rhein-Sieg University of Applied Sciences professor, Gerhard Kraetzschmar, echoes the same message in his welcome note for the Robocup this year: “By 2050, a team of fully autonomous humanoid robot soccer players shall win a soccer game, played according to official rules of FIFA, against the winner of the most recent FIFA World Cup.”
The competition has broadened its categories far beyond the field of soccer through the years. The website boasts, “Additional application disciplines addressing diverse societal needs such as intelligent robots as assistants for rescue missions, in households and in industrial production have been added during the last few years.” These additions include robot-based elderly care, autonomous vehicles, and disaster response.
For those who are interested in attending the event, you can buy your tickets here. Additionally, for the aspiring next generation of scientists and engineers, there is also RobocupJunior to help kick-start inventive thinking and discovery in young people.
The post Battle of the Machines: Robocup 2016 Starts Tomorrow! appeared first on Futurism.
Retired United States Air Force Colonel Gene Lee recently went up against ALPHA, an artificial intelligence developed by a University of Cincinnati doctoral graduate. The contest? A high-fidelity air combat simulator.
And the Colonel lost.
In fact, all the other AIs that the Air Force Research Lab had in their possession also lost to ALPHA…and so did all of the other human experts who tried their skills against ALPHA’s superior algorithms.
And did we mention ALPHA achieves superiority while running on a $35 Raspberry Pi?
Saying that Lee is experienced when it comes to aerial combat is a remarkable understatement. He is an instructor who has trained thousands of U.S. Air Force pilots. He is also an Air Battle Manager who has been fighting against AI opponents in air combat simulations since the 1980s.
Yet, he was not successful in winning against ALPHA. Not even once. Indeed, not even when the researchers deliberately handicapped ALPHA’s aircraft, impeding it in terms of speed, turning, missile capability, and sensor use.
“I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed,” Lee said.
ALPHA makes decisions using a genetic fuzzy tree system, a subtype of fuzzy logic algorithms. It can calculate strategies based on its opponent’s movements 250 times faster than a person can blink—a speed that gives it an undeniable advantage in an arena that demands a mix of advanced skills in aerospace physics and intuition.

The Future of Air Combat
The development team says ALPHA would be a valuable asset to team with a fleet of human pilots, as it can quickly map out accurate strategies and coordinate with a team of aircraft.
UC aerospace professor Kelly Cohen said: “ALPHA could continuously determine the optimal ways to perform tasks commanded by its manned wingman, as well as provide tactical and situational advice to the rest of its flight.”
This raises some concerns, as it may be ushering in an era of autonomy in battle aircraft. Eventually, a team of completely Unmanned Combat Aerial Vehicles (UCAVs) could be deployed to accomplish missions, further eliminating the chances of human error, but also operating without any human input.
Nick Ernest, who founded the company Psibernetix to develop ALPHA, says they intend to develop ALPHA further. “ALPHA is already a deadly opponent to face in these simulated environments. The goal is to continue developing ALPHA, to push and extend its capabilities, and perform additional testing against other trained pilots. Fidelity also needs to be increased, which will come in the form of even more realistic aerodynamic and sensor models. ALPHA is fully able to accommodate these additions, and we at Psibernetix look forward to continuing development.”
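Psibernetix has not released ALPHA’s genetic fuzzy tree, but the fuzzy-logic style it builds on can be illustrated with a toy rule: map crisp sensor readings onto graded memberships, combine them with fuzzy AND, and output a strength. The membership functions, thresholds, and inputs below are invented for illustration only.

```python
# Minimal fuzzy-inference sketch in the spirit of ALPHA's fuzzy-logic core,
# NOT Psibernetix's actual genetic fuzzy tree. All numbers are hypothetical.

def close(dist_km):
    """Membership in [0, 1]: how 'close' the opponent is."""
    return max(0.0, min(1.0, (10 - dist_km) / 10))

def threatening(closing_speed):
    """Membership in [0, 1]: how 'threatening' the closure rate is."""
    return max(0.0, min(1.0, closing_speed / 600))

def evade_strength(dist_km, closing_speed):
    """Rule: IF close AND threatening THEN evade. Fuzzy AND = min."""
    return min(close(dist_km), threatening(closing_speed))

print(evade_strength(2.0, 450))  # → 0.75: strongly favour evading
```

A genetic algorithm, as the name of ALPHA’s method suggests, would then evolve the shapes of such membership functions and the rule structure rather than hand-tuning them as done here.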
The post An AI Just Defeated Human Fighter Pilots in An Air Combat Simulator appeared first on Futurism.
While it may seem that Rolls-Royce is purely a luxury car company, it has been cooking up something completely unexpected: remote-controlled cargo ships. At the Autonomous Ship Technology Symposium 2016 in Amsterdam, the Rolls-Royce-led Advanced Autonomous Waterborne Applications Initiative (AAWA) presented its vision of how remote and autonomous shipping can become a reality.
The company is working on virtual decks from which land-based crews would control every aspect of a ship, supplemented by VR camera views and monitoring drones to spot issues humans cannot. As a result, a single operator could steer several ships.
Autonomy promises many advantages, such as mitigating the human factor in both crew error and the threat of piracy. The removal of crew quarters also frees up more space for goods.
Part of the initiative has already arrived. One ship, the Stril Luna, has a smart Unified Bridge system in place for coordinating all its equipment. Rolls-Royce is aiming to launch the first remote-controlled cargo ships by 2020, then to have autonomous fleets within two decades.
The post Rolls-Royce To Launch Robot Cargo Ships By “End of the Decade” appeared first on Futurism.
Toyota has been exploring robotic applications for quite some time– even before its president, Akio Toyoda, hired Gill Pratt last year as chief executive officer of the Toyota Research Institute. Toyota is funding the institute with $1 billion over five years. The company also wants to apply its own production system to make robots and AI more accessible and affordable.
This move is aligned with Prime Minister Shinzo Abe’s push for a “robot revolution” in Japan, which aims to more than quadruple the country’s robotics industry sales to ¥2.4 trillion ($23 billion) by 2020.
Toyota has been developing motorized wheelchairs that can scale stairs, along with other helper devices aimed at assisting patients with visual impairments and those who are bedridden.
Toyota Research Institute is also among the prospective buyers of Boston Dynamics, which Alphabet Inc., Google’s parent company, put up for sale after deciding the robotics firm was unlikely to produce a profitable product within the next few years. In this field, Toyota now forges ahead of Google.
The post Toyota is Spending $1 Billion to Make Helper-Robots For the Elderly appeared first on Futurism.
Scientists have developed a structure-mapping engine (SME) model that could give computers the ability to reason more like humans and even make moral decisions. The SME is capable of analogical problem solving, including the kind of analogies humans spontaneously use to resolve moral dilemmas.
“In terms of thinking like humans, analogies are where it’s at,” said Ken Forbus, Walter P. Murphy Professor of Electrical Engineering and Computer Science in Northwestern’s McCormick School of Engineering. “Humans use relational statements fluidly to describe things, solve problems, indicate causality, and weigh moral dilemmas.”
The new model builds on psychologist Dedre Gentner’s structure-mapping theory of analogy and similarity, which has been used to explain and predict many psychological phenomena. The idea is that analogy and similarity involve comparisons between relational representations, which connect entities and ideas.

From Complex to Simple
Scientists are trying out a range of analogies, from the complex (electricity flows like water) to the simple (his new cell phone is very similar to his old phone). In past iterations of the SME, researchers were not able to scale it to the size of representations that people tend to use. The new version can handle the size and complexity of relational representations needed for visual reasoning, cracking textbook problems, and solving moral dilemmas, scientists say.
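SME itself is far richer, but its core intuition (score an analogy by aligning relations between entities rather than surface attributes) can be sketched in a few lines. The relational representations and the scoring rule below are toy stand-ins, not the actual algorithm.

```python
# Toy illustration of structure mapping: the water/electricity analogy works
# because the *relations* line up, even though the entities differ entirely.

water = {("flows_from", "high_pressure", "low_pressure"),
         ("carried_by", "water", "pipe")}
current = {("flows_from", "high_voltage", "low_voltage"),
           ("carried_by", "current", "wire")}

def analogy_score(base, target):
    """Count relations whose predicate aligns across the two descriptions,
    ignoring the entities the relations connect."""
    base_preds = {rel[0] for rel in base}
    target_preds = {rel[0] for rel in target}
    return len(base_preds & target_preds)

print(analogy_score(water, current))  # → 2: both relational predicates align
```

The real SME also enforces structural consistency (each entity maps to exactly one counterpart) and prefers deeply nested relational matches, which is what lets it scale to the moral-dilemma representations described above.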
Apart from solving everyday moral dilemmas, the SME has also been used to learn to solve physics problems from the Advanced Placement test, with a program being trained and tested by the Educational Testing Service. It shows the model can be used for multiple visual problem-solving tasks.
The post Structure-Mapping Engines Bring Computers One Step Closer to Solving Moral Dilemmas appeared first on Futurism.
Unmanned Aerial Vehicles (UAVs) have proven to be beneficial in the farming and energy production fields. One disadvantage, though, is that these machines still need humans to control them.
Airobotics, a company based in Tel Aviv, has resolved this issue with a completely automated patrol drone system of the same name. The autonomous drone system can operate with virtually no human intervention, and it can also maintain itself.
The system is made up of three parts: the drone, the “Airbase” base station, and the command software. It utilizes a UAV that can carry a 1-kilogram payload for 30 minutes. After finishing a patrol, the drone lands at the base station, where a robotic arm automatically changes its battery and payload. All of these actions are controlled by integrated software that users can pre-program and that provides real-time video and data feeds.
Airobotics will most likely be useful in the mining and oil and gas industries.
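Airobotics has not published its control interface, so as a rough illustration only, the patrol-and-service loop described above can be sketched as a hypothetical state cycle; every name and threshold below is invented.

```python
# Hypothetical patrol loop for a self-servicing drone system like the one
# described: fly, land at the base station, get serviced by the arm, repeat.
# Function names, states, and the battery threshold are all made up.

def mission_cycle(battery_pct, payload_ok):
    """Return the sequence of states for one pass through the patrol loop."""
    steps = ["patrol"]                       # up to 30 min with a 1 kg payload
    steps.append("land_at_airbase")
    if battery_pct < 30 or not payload_ok:   # robotic arm services the drone
        steps.append("swap_battery_and_payload")
    steps.append("ready_for_next_patrol")
    return steps

print(mission_cycle(battery_pct=15, payload_ok=True))
```

The point of the sketch is the closed loop: no state in the cycle requires a human operator, which is what distinguishes this system from conventional UAVs.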
The post This Autonomous Patrol Drone System Can Even Change its own Batteries appeared first on Futurism.
In the midst of fears of automation phasing out human employees, the robotics industry seems to be facing another hurdle.
The European parliament’s committee on legal affairs submitted a draft on May 31st, 2016 proposing that robots be categorized as “electronic persons” and companies employing them as workers should be required to pay taxes and social security for them.
The draft mentions “that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations.”
The draft also proposes a registry for smart autonomous robots to ensure they are legally accounted for. On top of this, it states that companies must declare how much money they save in social security contributions by replacing manpower with robots.

Revenues Under Threat
It seems the EU is starting to worry that automation could displace enough workers to make a serious dent in revenue from income taxes and social security contributions.
Obviously, this is a proposition that does not sit well with everyone. German organization VDMA, which speaks for robot manufacturers like Siemens and Kuka, says it’s too early in the evolution stage of robotics to require something that is likely to hinder the development of the industry.
“That we would create a legal framework with electronic persons – that’s something that could happen in 50 years, but not in 10 years,” said Patrick Schwarzkopf, managing director of the VDMA’s robotic and automation department. “We think it would be very bureaucratic and would stunt the development of robotics.”
In addition, Schwarzkopf argues that the correlation between automation and unemployment is not definitively proven. He points out that between 2010 and 2015, the supply of industrial robots in the automotive industry rose by 17%, while the number of employees rose by 13%. This, of course, is a point that is highly debated, with a number of predictions and recent events suggesting otherwise.
Now, while this is an unsettling proposition for robot manufacturers and companies gearing toward automation, the draft is a long way from materializing. Even with full support, the Parliament does not have the authority to propose legislation; in the EU, that power rests with the European Commission.
Given the advancement of automation and its inevitable substitution of human workers, it is, however, a topic worth debating now.
A team of physicists has just created an algorithm that studies the behavior of pro-ISIS online groups and predicts their future actions, including when they may become real-world threats.
The team focused on a Russia-based social platform called VKontakte and identified 196 pro-ISIS online groups, singled out for propagating actual ISIS content, pledging support for ISIS, or calling for jihad in the name of ISIS.
The study, published in Science, found that the number of these online groups—which offer a form of anonymity and freedom of speech for individuals—dramatically increases in the lead-up to real-world events. This allowed the team to create an algorithm that leverages this information to predict when things will occur. (A figure in the study shows the number of pro-ISIS groups prior to an ISIS attack on Kobane: the attack happened on the red line, and each horizontal bar is a pro-ISIS group. Credit: Science.)

Surprising Solutions
Eliminating terrorist activity on social media presents a challenge—often shutdowns must come from the platform itself, which has to navigate the line between public safety and free speech. But even with efforts from platforms, governments, and even hacktivist groups (Anonymous removed 20,000 Twitter accounts with ties to ISIS last year), the challenge remains.
This is because, as hackers or online moderators shut down groups on sites like VKontakte, the members split into, or join, smaller groups. These eventually coalesce into larger groups, which get shut down, and so on.
While the algorithm promises to be no silver bullet against ISIS on social media, the study does recommend how the problem should be approached.
“The math shows there’s a very precise rate at which aggregates [online groups] need to get shut down. If you don’t want the pro-ISIS or pro-extreme aggregates to grow into one huge global aggregate, then you’ve got to shut it down faster than this precise formula that we’ve come up with,” says lead author Neil Johnson in an interview with Pacific Standard.
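Johnson’s actual model is mathematical, but the coalescence-and-shutdown dynamics the study describes can be caricatured in a few lines of simulation: small groups merge into bigger ones, moderators periodically scatter the largest, and the outcome depends on how the shutdown rate compares to the merge rate. The parameters and mechanics below are invented for illustration; this is not the Science paper’s model.

```python
# Toy coalescence/fragmentation simulation: if moderators act too slowly,
# every member ends up in one giant group; act fast enough and the largest
# group stays bounded. All numbers here are illustrative only.
import random

def simulate(shutdown_every, steps=1000, seed=0):
    random.seed(seed)
    groups = [1] * 100                        # start with 100 lone members
    for t in range(1, steps + 1):
        a, b = random.sample(range(len(groups)), 2)
        merged = groups[a] + groups[b]        # two groups coalesce
        groups = [g for i, g in enumerate(groups) if i not in (a, b)]
        groups.append(merged)
        if t % shutdown_every == 0:           # moderators act periodically
            biggest = max(range(len(groups)), key=lambda i: groups[i])
            n = groups.pop(biggest)
            groups.extend([1] * n)            # members scatter as singletons
        if len(groups) < 2:
            break
    return max(groups)

# Frequent shutdowns keep the largest group small; rare ones let it absorb everyone.
print(simulate(shutdown_every=5), simulate(shutdown_every=200))
```

With rare shutdowns, the simulation always collapses into one group of all 100 members, mirroring the “one huge global aggregate” Johnson warns about.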
The post Physicists Create Algorithm That May Be Able To Predict Terror Attacks appeared first on Futurism.
More than 40 percent of Canadians are at risk of losing their jobs due to automation, according to The Brookfield Institute for Innovation + Entrepreneurship at Ryerson University in Toronto. And notably, breakthroughs in artificial intelligence and advanced robotics mean automation is no longer limited to manual and routine tasks.
According to the report, the following jobs are at risk of being replaced by automation:
- Retail salesperson.
- Administrative assistant.
- Food counter attendant.
- Transport truck driver.
According to the report, in the next 10 to 20 years, there is a 70% or higher probability that these jobs will be dramatically affected by automation. The analysis also notes that workers in these “high risk” jobs earn less and have lower educational attainments than the rest of the labor force, which could make the transition particularly hard on these workers.
The top five low risk jobs, facing less than 30% probability of being replaced, “are linked to high level skills” and earnings, such as jobs in STEM. These jobs are:
- Retail and wholesale trade managers.
- Registered nurses.
- Elementary and kindergarten teachers.
- Early childhood educators and assistants.
- Secondary school teachers.
This news isn’t exactly unexpected. In truth, it’s mostly accepted that many routine and manual jobs have been, and will continue to be, taken over by automated systems. But there are an increasing number of reports of AI algorithms being used to conquer cognitive tasks in fields like medicine, law, journalism, and more.
How easy the transition will be remains to be seen, but the world of tomorrow will likely consist of an entirely different landscape.
The post Study Says Automation May Replace 40% of Canadians in Just 10 to 20 Years appeared first on Futurism.
According to a statement released via The Drum, the magazine edited by Watson contains different features that showcase Watson’s capabilities. It has different analytical functions, as well as skills necessary to assist modern-day marketers. Watson was also programmed to answer a series of questions about David Ogilvy, the “advertising legend,” and gave predictions for the winners of this year’s Cannes Lions awards.
While it is not yet the end for the human editors’ careers, this does showcase the potential that artificial intelligence has in an ever-increasing number of fields. IBM Watson program chief David Kenny hopes that one day Watson will be able to ask people questions and develop abductive reasoning skills. That would be truly revolutionary.
The post Replacing Humans With AI? IBM’s Watson Edits An Entire Magazine On Its Own appeared first on Futurism.
Despite all the benefits of robots in society, such as performing dangerous industrial tasks and going to places humans can’t or won’t go, there are still very credible fears about them. The military is increasingly reliant on drones and automated robots, and many feel their jobs are threatened by increasing automation. Now there is something else to worry about: robots that can harm humans on their own.
A roboticist and artist based in Berkeley, California, has just created a robot that breaks Asimov’s First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Let’s qualify that: robots harm humans all the time. Industrial accidents that involve robots and humans are common and drone strikes certainly kill many. But this one is different: it hurts humans deliberately and even its creator, Alexander Reben, cannot tell whether it will harm someone.
“The decision to hurt a person,” Reben said to The Washington Post, “happens in a way that I can’t predict.” The software does not use machine learning or artificial intelligence to decide, but neither is it a 50:50 chance. “I don’t know the probability,” Reben adds.

Ethical Dilemmas
Yes, the robot harms humans, but it was designed to inflict minimal harm. Reben’s creation is a robot arm. A sensor detects when someone places a finger beneath the arm, and if the robot decides to strike, it does so with a small needle attached to the arm. It cost about $200 (£141) to make and took a few days to put together.
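Reben has not disclosed how the arm decides, only that it uses neither machine learning nor a fair coin and that he himself cannot predict the outcome. One hypothetical way to get that property is to drive the choice with a chaotic map seeded from sensor noise: the decision is fully deterministic, yet tiny, unmeasurable differences in the seed flip the result. Everything below is speculative illustration, not Reben’s actual code.

```python
# Speculative sketch of an "unpredictable but not random" decision rule,
# using the logistic map; this is NOT Reben's published mechanism.
import time

def decide_to_strike(sensor_reading):
    """Iterate a chaotic logistic map from a noisy seed; threshold the result."""
    x = (sensor_reading % 1.0) or 0.5
    for _ in range(50):              # chaos amplifies tiny seed differences
        x = 3.99 * x * (1 - x)
    return x > 0.7                   # deliberately not a 50:50 split

noisy_seed = time.time()             # stands in for real analog sensor noise
print(decide_to_strike(noisy_seed))
```

A scheme like this matches the description in spirit: no learning, no uniform coin flip, and no practical way for the builder to predict any given decision.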
The main goal of the robot is to make people confront the possibility of robots that can harm humans, the question of responsibility for the harm caused, and how we view robots in general.
“I wanted to make a robot that does this that actually exists…That was important, to take it out of the thought experiment realm into reality, because once something exists in the world, you have to confront it. It becomes more urgent. You can’t just pontificate about it,” Reben said to Fast Company.
The post Scientists Created a Robot that Randomly, and Intentionally, Hurts People. Here’s Why appeared first on Futurism.
“Drone” is unquestionably one of the biggest tech buzzwords at the moment; several media outlets currently describe the term as sitting at its “peak of inflated expectations.”
However, one industry that currently sees no dearth of drone excitement is agriculture: a variety of publications have come forward describing how drones will transform the industry into a brave new world.

So what about the data supporting drones in agriculture?
The image below shows a market forecast for the industry with drone technology, one many of us may be familiar with (Source: AUVSI):
According to AUVSI’s “The Economic Impact of Unmanned Aircraft Systems Integration in the United States” report released in early 2013, agricultural Unmanned Aerial Vehicles (UAVs) would dominate the market and “dwarf all others.” This report was the initial finding that inspired many other market studies on the same topic. What’s interesting to note, however, is that the report rests on assumptions that are questionable at best and entirely wrong at worst.
Here is a great read on this topic, in a nutshell:
- The AUVSI report is not an objective piece of research, because the organization’s stated goal is to “promote unmanned systems”
- Japan is not a good proxy for the US, due to a very different agricultural landscape in terms of field location, average field size, agricultural products, and the UAV application itself (pesticide spraying in Japan versus remote sensing in the US)
However, let’s set research papers aside and take a look at actual numbers, specifically the adoption of drones.
In 2015, Crop Life magazine conducted a survey among agricultural dealers and producers on how they use precision agriculture technologies, including, for the first time in the magazine’s history, questions on drones.
The results are pretty eye-opening (here is the full survey): while 16% of US Ag dealers were offering drone-related services in 2015 (Source: CropLife), only 13% of them were generating a profit from it (Source: Personal Analysis).

In other words, just 2% of US Ag dealers are making money on drones. Such demand definitely doesn’t square with the AUVSI forecast mentioned above. So, are there any prerequisites for drone services in agriculture to become a sustainable business?

Drones in Ag: Threats and Opportunities
Below, I analyze potential threats and opportunities for drone-related businesses in agriculture.

Threats
1. Satellite / Aircraft platforms
Current research and venture-capital interest in space-related startups have made small-satellite Earth Observation (EO) a very large threat to drones. Source: Personal Analysis
However, satellite and drone imagery adoption rates (perhaps the most important metric for assessing a technology’s prospects) point to an interesting phenomenon: UAV adoption among both Ag dealers and producers is at the level satellite imagery was at 11 years ago. Source: Personal Analysis
Looking at the historical record, the satellite imagery adoption rate among US Ag dealers and producers correlates well with new satellite launches. Source: Personal Analysis
Thus, given that a huge number of Earth Observation satellites (both commercial and government/academic) will be launched in the coming years, satellite imagery prices will almost certainly decline, significantly lowering the barriers for farmers to use remote sensing data.
Moreover, there is competition from manned aircraft, which some currently consider the optimal platform for gathering imagery in terms of spatial resolution and data-acquisition cost (a nice read on this here). Several startups already utilize imagery from manned aircraft, such as YC-backed TerrAvion, which claims it collected more data in 2015 than the entire electric drone industry combined.
2. Unclear AgTech mid-term adoption
In addition to the questionable prospects of drones competing with other imaging platforms in agriculture, it is unclear how readily farmers will adopt tech startups’ products in the mid-term.
According to Kenneth S. Zuckerberg, senior research analyst at Rabobank Food & Agriculture, the commodity price downturn will have a serious impact on AgTech startup adoption. Source: AgFunderNews
The key takeaway from this study is that farmers are unlikely to invest their resources in innovations (that is, to provide AgTech startups with revenue) because of significantly reduced incomes.
“Adoption of agtech to drive productivity gains will continue to be delayed and this delay increases the risk of down rounds for startups over the next few years”. — Kenneth S. Zuckerberg, Rabobank Food & Agriculture
Jonah Kolb, vice president at farm management group Moore & Warner, and Arne Duss, the founder and CEO of HighPath Consulting, outlined even more reasons for limited AgTech adoption in another research paper. A few are the following:
- Falling farm incomes
The USDA forecast that 2015 farm incomes were down 36% from 2014, making them the lowest farm incomes since 2006.
- Low incentive
Many US farms are owned by their operators, meaning there is little need to deliver market-rate returns to investors, making adoption of yield-enhancing tech slow.
- Risk appetite
With 62% of US farmers nearing retirement age, there is less appetite for systems upgrades.
- Growing season
A single growing season in much of the US reduces adoption opportunities and the number of potential technology iterations each year.
Summing up, there are some unfavorable macro trends for drones in agriculture beyond competition with satellites/aircraft.
3. Beyond-Visual-Line-Of-Sight (BVLOS) Restrictions
The absence of an established mechanism for safely managing UAV operations in low-altitude airspace (at or below 500 feet), and the resulting beyond-visual-line-of-sight (BVLOS) flight restrictions, significantly limit the efficiency of drone implementation in agriculture.
While some farms consist of only a few acres and could be fully surveyed within visual line of sight (VLOS), many more farms do not fit this description. For these larger farms, the importance of being able to conduct BVLOS operations is magnified.
If farmers with large acreage are restricted to VLOS requirements, they would need to fly multiple, potentially redundant missions to cover the necessary ground. Instead of capturing the imagery and collecting the relevant data all at once, these farmers would be forced to expend precious additional resources on stitching together maps and synthesizing data. This would be highly inefficient, both in terms of manpower and time, and could nullify the potential time and cost savings UAS offer the Ag industry.
At the same time, there are technical barriers (such as a lack of access to spectrum and an uncertain UAS traffic management (UTM) system architecture) and regulatory barriers that may leave such a system years away (more on this here).
Moreover, BVLOS operations are restricted in most countries, not only the US. Source: Precision Hawk
Thus, BVLOS restrictions serve as another threat to the potential expansion of UAVs in Ag.

Opportunities
Despite the threats described above, there are some positive signals that make the outlook for drones more promising.
1. Drones Are Getting Smarter & Stronger
Generally, drone implementation is limited by several technological difficulties, such as:
- Lack of autonomy
UAVs’ full potential can be unlocked only when truly autonomous drones become available.
- Low flight endurance
The efficiency of drone operations is strongly tied to flight endurance, which for most professional UAVs, such as the SenseFly eBee and AgEagle, is about 30-40 minutes: obviously not enough for continuous surveys.
However, a lot of great startups are overcoming these challenges with their products.
Recent computer vision applications have pushed drone capabilities beyond fairly straightforward “follow-me” features toward impressive autonomy. Startups in this area include Skydio (which raised $28M from Accel and a16z) and Percepto (which raised $1M from Mark Cuban and other high-profile angel investors). Moreover, advanced computer vision is already embedded in consumer models, such as the brand-new DJI Phantom 4.
As for improving flight endurance, there are two options: ground charging stations and advanced (non-LiPo) batteries.
Regarding batteries, several projects are developing fuel cells for UAVs, such as the hydrogen fuel cell by Intelligent Energy. Moreover, the world record for the longest drone flight (more than 3 hours), recently set in Russia, involved hydrogen-air fuel cells.

Hydrogen-powered drone by Intelligent Energy. Source: geek.com
Thus, if drones soon become capable of flying for hours without human assistance, the efficiency of drone operations will increase significantly.
2. Advances in UAV Sensors
One area in which drones are definitely ahead of satellites at the moment is sensor variety and data resolution. With lidar, hyper-/multispectral imagers, and thermal sensors, drones can collect data that satellites cannot match. For example, a typical space-based hyperspectral system provides 30–50 m GSD, roughly two orders of magnitude coarser than what is possible with a UAV.
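The two-orders-of-magnitude claim is easy to check. A minimal sketch, assuming a UAV-mounted hyperspectral sensor achieves roughly 0.3 m GSD at low altitude (an illustrative figure, not stated above):

```python
import math

satellite_gsd_m = 30.0  # low end of the 30-50 m space-based range
uav_gsd_m = 0.3         # assumed UAV hyperspectral GSD (illustrative)

ratio = satellite_gsd_m / uav_gsd_m    # ~100x finer resolution on the UAV
orders = math.log10(ratio)             # ~2 orders of magnitude
print(round(ratio), round(orders, 2))  # → 100 2.0
```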
But while these sensors have already proved their value in construction, mining, energy, and O&G, it remains unclear which agricultural applications (beyond NDVI calculation using multispectral sensors, which satellites can provide as well) would actually be useful to farmers: crop counting? Weed detection?
3. Ag Can Adopt New Tech Pretty Fast (if it brings real value)
Despite the satellite/drone imagery penetration data and the unfavorable mid-term AgTech adoption forecast mentioned above, the Ag industry has historically snapped up some technologies quite fast, such as GPS-related ones.
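The extrapolation behind the optimistic scenario can be sketched in a few lines. It assumes the roughly 16% dealer penetration from the 2015 CropLife survey as a baseline and borrows the 30% CAGR of GPS guidance with autosteer as the growth rate:

```python
baseline_2015 = 0.16  # US Ag dealer drone penetration, 2015 CropLife survey
cagr = 0.30           # GPS autosteer historical CAGR, used as a reference
years = 5             # 2015 through 2020

penetration_2020 = baseline_2015 * (1 + cagr) ** years
print(f"{penetration_2020:.0%}")  # → 59%
```

Compounding at 30% per year nearly quadruples the baseline over five years, which is where the roughly 60% figure comes from.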
Thus, if we take these technologies as a reference for UAVs and extrapolate the adoption growth rate, we get a pretty optimistic scenario: roughly 60% penetration of drones in the US agriculture industry by 2020 (using the 30% CAGR of GPS guidance with auto control/autosteer as a reference).

Conclusion
In my opinion, unfavorable conditions prevail over the favorable ones at the moment and global expansion of agriculture drones is questionable (at best).
However, there is probably a niche at the intersection of:
- Hyperspectral Imagery
UAV-based hyperspectral imagery is currently much more affordable than imagery from satellite-based sensors. So if startups can solve some of the technology problems related to hyperspectral data (huge data volumes, the need for calibration across different geographic areas) and create use cases that deliver value to Ag customers, that is one thing that would give drones an edge over other platforms.
- Relatively small farms
At the moment, purchasing satellite imagery requires a minimum order (e.g., 500 km² of 5 m GSD data from the RapidEye constellation), which makes it unreasonable for small farmers. Consequently, small farmers represent a market opportunity in agriculture for drone startups; geographically, think Japan and Western Europe. However, there are limits on the field area over which drone implementation is economically reasonable (here is a research paper comparing the cost effectiveness of drones, aircraft, and satellites).
Thus, drone Ag startups should probably target a niche market or use case, establish a beachhead instead of aiming for global expansion from the start, and prove that they can really bring value to farmers (and, consequently, returns to their investors).
The post Drones in Agriculture: Are They Really Taking Over? appeared first on Futurism.
One of the world’s fastest supercomputers, Tianhe-2, is capable of carrying out up to 55 quadrillion operations per second. This machine is definitely faster than the human brain, but it must be noted that the brain needs only about 20 watts of power, whereas Tianhe-2 requires 17.8 megawatts.
That is the equivalent of one dim light bulb for the brain versus about 900,000 of the same bulb for the supercomputer. Credit: STOCKPHOTO.COM/ERAXION
Now, Tae-Woo Lee, a material scientist at Pohang University of Science and Technology in South Korea, has created artificial synapses that are more efficient than biological ones. These synapses require only 1.23 femtojoules of energy per synaptic event, compared with the roughly 10 femtojoules required by biological synapses.
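Both energy comparisons in this piece reduce to simple ratios. A quick check of the light-bulb analogy and the synapse figures:

```python
# Power budget: Tianhe-2 vs. the human brain
supercomputer_w = 17.8e6  # 17.8 megawatts
brain_w = 20.0            # ~20 watts
print(f"{supercomputer_w / brain_w:,.0f} bulbs")  # → 890,000 bulbs

# Energy per synaptic event: biological vs. artificial
biological_fj = 10.0
artificial_fj = 1.23
print(f"{biological_fj / artificial_fj:.1f}x")  # → 8.1x
```

The article rounds 890,000 up to 900,000; by the same arithmetic, the artificial synapse is roughly eight times more energy-efficient per event than its biological counterpart.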
A kind of transistor, these artificial synapses mimic biological ones by switching on and off. The team fabricated 144 synaptic transistors on a 4-inch wafer. Running through the middle are wires 200 to 300 nanometers across, far thinner than a human hair.
According to Lee, the team is now working on developing organic nanowires that are much smaller. The team even thinks they can still reduce the energy consumption of the device by experimenting with the selection and composition of the materials used.
Scientists tested the artificial intelligence (AI) during a competition at the annual International Symposium on Biomedical Imaging, where it was tasked with looking for breast cancer in images of lymph nodes. It turned out the AI could detect breast cancer accurately 92 percent of the time, and it won two separate categories of the contest.
Andrew Beck from BIDMC says they used the deep learning method, which is commonly used to train AI to recognize speech, images, and objects. They fed the machine hundreds of slides marked to indicate which parts had cancerous cells and which had normal ones. The AI had difficulty identifying some samples, but the scientists fed it progressively harder samples until it learned its lesson.

AI + Humans = Win
Despite this success, AI is still no match for human pathologists, who are accurate 96 percent of the time. However, this contest clearly shows that there is much to be gained with further study and innovation.
Beck says what’s remarkable is that when they combined the pathologists’ analyses with the AI’s, the results showed 99.5 percent accuracy. “Our results in the ISBI competition show that what the computer is doing is genuinely intelligent and that the combination of human and computer interpretations will result in more precise and more clinically valuable diagnoses to guide treatment decisions.”
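A back-of-the-envelope model makes that number plausible. If the pathologist (4% error) and the AI (8% error) made mistakes independently, and a case slipped through only when both erred, the combined error would be the product of the two error rates. The actual combination method is more sophisticated, but the idealized estimate lands in the same range as the reported 99.5%:

```python
human_error = 1 - 0.96  # pathologists: 96% accurate
ai_error = 1 - 0.92     # AI: 92% accurate

# Idealized: a misdiagnosis requires both the human and the AI to be wrong,
# and their errors are assumed independent (a strong simplification).
combined_accuracy = 1 - human_error * ai_error
print(f"{combined_accuracy:.2%}")  # → 99.68%
```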
No one who has ever said that two heads are better than one, specified that both heads had to be human.
The post This Artificial Intelligence was 92% Accurate in Breast Cancer Detection Contest appeared first on Futurism.
Now your videos of surfing, skiing, snowboarding, motocross…and all those other zany activities will be easier than ever to edit, thanks to machine learning and a new attachment known as Pik’d.
Pik’d is meant to ensure that fun and adrenaline remain the protagonists of your footage, so that little time or concern needs to be devoted to post-production.
The communication between the technologies works automatically, so the user doesn’t have to manipulate the device. All you have to do is attach the tech to your GoPro and record. Once the images have been captured, connect the camera to the computer and open the Pik’d software, which will be supplied free of charge.
From there, Pik’d picks the highlights from the recording on your GoPro (hence the name). Based on the athlete’s skill level and the sport practiced, the best moments are automatically flagged, making a manual review of the videos unnecessary.
And the team notes that a one-hour video takes no more than 10 seconds to process. Moreover, Pik’d’s battery lasts at least 4 hours, and the device can store several weeks of action.
In the future, the device will add more sports to its ‘brain’ and be adapted to other action cameras, such as Sony or Contour models. Pik’d is currently part of the Chrysalis startup portfolio and will officially debut soon through a crowdfunding campaign on Indiegogo.
“We hope to offer a cloud for users to have their collection of highlights at hand and that Pik’d become a platform for high-quality content,” Moritz Hawelka, a founding partner of Pik’d, noted in an interview.
The Pik’d founders are an Austrian team, and they have partnered with teams in Viña del Mar, Chile, to bring their tech to life and internationalize the product. The location in Chile is no accident: there, you can tackle surfing, skiing, snowboarding, or downhill biking in short order. Indeed, in a single day, you can enjoy both the snow and the unbeatable waves of Pichilemu.
The post The GoPro is Getting A Brain, Thanks to Machine Learning appeared first on Futurism.