Futurism - Robot Intelligence
Anybody following developments at Apple knows it’s a very secretive company. Rather than make announcements, the tech giant seems to prefer that the public glean what it can from the kind of talent it’s hiring and from its patent applications.
But that may not be the case for its foray into artificial intelligence (AI). Twitter posts reveal that Apple’s director of AI research, Russ Salakhutdinov, has announced that the company will be publishing its AI research and engaging more with academia.
— hardmaru (@hardmaru) December 6, 2016
Apart from the tweet, we really don’t know much. Apple has yet to make a formal announcement on the matter. We don’t know whether it will publish everything or cherry-pick what it wants everyone to know.
Why this departure? According to Engadget, it may be because the company wants more AI researchers to work with it. Apple has been acquiring AI talent, but that kind of talent wants recognition for the breakthroughs it accomplishes, and that’s something you can’t have if your employer is as secretive as Apple.
It remains to be seen how this will affect the company, but it can do nothing but good for the field. AI research benefits from more brains working on research, and many new AI efforts have been aimed at “democratizing” AI and making it available for everyone.
The post Accelerating Intelligence: Hints Suggest That Apple Will Openly Publish Its AI Research appeared first on Futurism.
This tiny robot represents the future of cataract surgery.
The post Axsis, a Minimally Invasive Robot, Performs Cataract Surgery appeared first on Futurism.
Auro has no driver and no emissions.
It’s a concept that sounds either strange or fascinating, depending on who you ask: a car that can feel human emotions. That’s the claim Honda has made about its concept car, which will be shown at next year’s Consumer Electronics Show (CES).
In a press release on Honda’s website, the automotive company says the theme of its CES exhibit will be the “Cooperative Mobility Ecosystem.” Honda says it is exploring more interactive and immersive experiences for passenger vehicles.
The centerpiece of the exhibit, and the culmination of this theme, will be the company’s new concept car, the “NeuV.” It’s a concept automated EV commuter vehicle equipped with artificial intelligence (AI) called the “emotion engine.” Honda explains that the emotion engine is a group of AI technologies that allow the vehicle to generate artificial emotions.
Assisting Drivers
At first glance, an emotional AI sounds unnecessary, even asinine. However, the concept Honda is exploring is a new way for AI to assist drivers. Until now, AI has assisted drivers physically, by taking over the actual driving of a car.
This is extremely helpful in reducing a driver’s fatigue when operating a vehicle for long periods of time and over long distances, as Tesla’s Autopilot feature demonstrates. It has even been used in the trucking industry, where drivers are often faced with the exhausting job of hauling products across the country.
Honda hasn’t yet released details on what the “emotion engine” will actually do, or its place in the vehicle, but it’s not hard to imagine what it could do. For example, it could pull up weather data from the internet and announce the results to the driver in a cheery voice. How the driver would react depends on the person, but having an AI car companion cheer you on while driving is an amusing thought.
The post An ‘Emotion Engine:’ Honda’s New Concept Car Can Feel appeared first on Futurism.
OpenAI, an artificial intelligence research center based in San Francisco, has released an open-source software platform for virtually testing and training AI.
The platform, called Universe, is a digital playground made up of games, web browsers, and even protein-folding software that an AI can interact with. The AI does this by sending simulated mouse and keyboard strokes via what’s called Virtual Network Computing, or VNC.
We're releasing Universe, a platform for measuring and training AI agents: https://t.co/bx7OjMDaJK
— OpenAI (@OpenAI) December 5, 2016
Universe facilitates reinforcement learning, in which the AI learns tasks by trial and error, through risk and reward. Over time, says OpenAI researcher and former Googler Ilya Sutskever, an AI can even practice “transfer learning,” in which an agent takes what it has learned in one application and applies it to another.
Credit: OpenAI
Games the AI can currently access include Portal, Shovel Knight, and SpaceChem. Video games may be a good benchmark and training aid for the AI, but the researchers intend to add more apps to the list and to teach AI problem-solving skills in unfamiliar environments.
“An AI should be able to solve any problem you throw at it,” Sutskever told Wired. Michael Bowling, a University of Alberta professor, lauds the platform’s wide scope and its role in improving AI: “It crystallizes an important idea: Games are a helpful benchmark, but the goal is AI,” he says.
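The trial-and-error loop of reinforcement learning can be sketched in miniature. The toy below is not Universe’s actual VNC-based API; the corridor environment, reward, and constants are all invented for illustration of how an agent learns from risk and reward:

```python
import random

# A toy illustration of reinforcement learning by trial and error
# (NOT Universe's actual API; environment and rewards are invented).
# The agent stands in a corridor of 5 cells and must learn to walk
# right, where a reward waits in the last cell.
N_STATES = 5          # cells 0..4; the reward sits in cell 4
ACTIONS = [-1, +1]    # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy: in every non-terminal cell, step right (+1)
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

The agent is never told that “right” is correct; the preference emerges purely from accumulated rewards, which is the same principle Universe applies at a vastly larger scale.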
The post A Whole New Universe: OpenAI Just Opened a School for AI appeared first on Futurism.
Apple enthusiasts and fans have been speculating for some time that the company Steve Jobs left behind is developing a smart car — aka Project Titan. Well, we’ve finally got some sort of confirmation of the long-standing rumor. Apple is indeed getting into the autonomous car business, but it’s currently unclear whether it will actually build a car or just an autonomous driving system.
In a five-page letter to US transport regulators, written by Product Integrity Director Steve Kenner, Apple expressed that it “is excited about the potential of automated systems in many areas, including transportation.” Apple commends the National Highway Traffic Safety Administration (NHTSA) for its Federal Automated Vehicles Policy, which prompted the writing of the letter.
Apple also acknowledged that it “is investing heavily in the study of machine learning and automation,” which is perhaps a clear indication that the company is already studying the possibility of developing such a vehicle, or at the very least an autonomous driving system. Apple has also already registered several car-related internet domains, such as apple.car and apple.auto.
Credits: carwow
Riding along
One of the things Kenner proposes to the NHTSA in his letter is the sharing of data by companies in the autonomous vehicle industry. “Apple agrees that companies should share de-identified scenario and dynamics data from crashes and near-misses,” Kenner says. “Data should be sufficient to reconstruct the event, including time-series of vehicle kinematics and characteristics of the roadway and objects.”
Apple certainly isn’t the first in this field. Tesla, of course, comes to mind. There is also Uber with its self-driving taxis and trucks. Volvo has its own plans, and even Nvidia is invested in the technology. Ford is also reportedly working on its own models.
While Apple may not be ahead of the game in this case, the company seems to be riding along at the right moment, especially with policies for autonomous cars being crafted and refined. Soon, we might not just be asking Siri for directions to our favorite restaurant; she could drive us there.
The post Rumors of an Autonomous Apple Vehicle Surface Again appeared first on Futurism.
What happens when you sleep? Sleeping is more than merely resting our bodies. In sleep, processes vital to the maintenance of our bodies occur, much like a computer processing the memories and information gathered during the day. Dreams are vital to stabilizing memory: in dreaming, the brain integrates new pieces of memory with old ones, helping us solidify them into long-term memories.
If sleep and dreams can help keep the complex human body running, Google believes the same can apply to its AI. Google DeepMind, Alphabet’s AI research arm, is pioneering new technology that allows its programs to dream in order to enhance their rate of learning. In a paper posted to arXiv, the research team likened human dreaming to the “unsupervised auxiliary tasks” used to train its programs.
The DeepMind AI’s dreams consist of scenes from video games like Starcraft II and Labyrinth. Prompts in the games help the AI learn to recognize in-game challenges, helping it make better decisions when it is tested on the game controls. Impressively, dream training boosted DeepMind’s performance: the AI reached 880 percent of expert human performance on Atari games, and on Labyrinth its learning rate was ten times faster than previous iterations of the AI.
Credit: DeepMind
The Future’s AI
Perfecting video games is merely training ground for DeepMind before it moves on to functions like facial detection or cancer research. AI already plays a huge role in today’s society, optimizing systems in a wide range of industries.
Improving how these bots learn could unlock a better future for us. It does raise some concerns about the uncontrolled training an AI experiences while dreaming. While Google and other companies developing neural networks certainly don’t intend for their AI to turn against humans, it’s a possibility that could be fed into an AI’s dreams. The “reveries” in the universe of Westworld allowed that fictional AI to exhibit some unintended (ahem) consequences. But chances are we won’t be seeing that here.
The methods, though racking up impressive results, are still experimental, and Google is still doing numerous tests. For now, we’re all looking forward to what could be the best AI yet.
The post Bringing Westworld’s AI to Life: Google is Teaching its Robots to Dream appeared first on Futurism.
RoboBees were developed by the Wyss Institute and are about the size of a penny.
The post These Robotic Bees Are Straight Out Of An Episode Of Black Mirror appeared first on Futurism.
When we try to figure out how some aspect of thinking works, we usually start by looking at how the human brain does it. But while humans could be said to be the apex species on this planet, we are far from being the only ones who can think.
Take, for example, chimpanzees, a staple of arguments for intelligent thinking in animals. When you compare them with similar primates like bonobos, you see that they are more inclined toward tool use. Not only do bonobos use tools far less, young chimpanzees also manipulate objects more than young bonobos do.
You also start seeing active intelligence in some species when comparing the actions of say, digger wasps and corvids (crows, magpies, rooks, etc). Digger wasps hide food for the future, but this is more of a programmed behavior. On the other hand, corvids display behavior consistent with actively planning for the future.
“Defining intelligence or culture in a way that is restricted to humans makes no sense in the grander scheme of evolution. Once we widen these definitions to include other animals, we find culture in other primates, tool use, and incredible intelligence in corvids,” says Dr. Kathelijne Koops, formerly of the University of Cambridge’s Division of Biological Anthropology.
The Future of AI
In fact, we can extend this to non-biological intelligence: the intelligence humanity itself is building. If we consider skills or abilities to be the metric of intelligence, AI and robotics systems clearly qualify.
We now have programs that can process data and even make predictions on a far bigger scale than humans can. In the narrow fields they are programmed for, these machines are the best. This is in contrast to humans, who are generalists: decent or good at a lot of things rather than supreme at any one of them.
These machines also have their own unique neural architectures, distinct from the human kind. Mimicking how the brain thinks is hard even for these narrow competencies, so completely new ways of “learning” had to be developed.
Ultimately, all of this will have to be considered, both in developing AI and in building a society around it. We need to consider other kinds of neural architectures and ways of thinking in order to better develop machine learning.
But more importantly, we’ll need to start talking about how we are going to treat these machines. Are they simple tools engineered for a task? Or are they intelligent beings capable of thinking just like human beings?
The post Cambridge Scientists: We Shouldn’t Define Intelligence According to Humanity appeared first on Futurism.
We’ve come a step closer to those human-like, walking “droids” we all love from the Star Wars universe. A new locomotion algorithm has been developed that allows the Atlas robot to gingerly walk through rough and uneven terrain.
Most robots need flat surfaces to traverse. This control algorithm, developed at the Florida Institute for Human and Machine Cognition (IHMC) for the Atlas robot from Boston Dynamics, mimics a human-like line of thinking: deliberating carefully before making a step. Atlas balances itself before stepping onto an uneven foothold.
“Our humanoid projects are focused on enabling our bipedal humanoids [to] handle rough terrain without requiring onboard sensors to build a model of the terrain,” said the developers from IHMC. “Our goal is to tackle increasingly more difficult walking challenges.”
Human Robots?
Boston Dynamics’ Atlas has been in the works for a while now under the DARPA Robotics Challenge, a competition funded by the US government. The company, a subsidiary of Google, has been steadily refining the robot to bring it as close to the human form as possible. In particular, it has focused on equipping Atlas with balance and agility, qualities that are traditionally quite difficult for robots.
Humanoid robots, once a tale from science fiction, are slowly coming to life. Today, they’re mastering walking through rubble and keeping balance on a thin plank of wood. Tomorrow, they could be indispensable partners in our daily lives. The calculated steps that Atlas now makes could be life-saving when put to the right uses. Like in the movies, the future could have artificially intelligent healthcare providers, policemen, and more.
Today, major industries are looking into getting help from humanoid robots. NASA is starting up their Space Robotics Challenge to possibly find an automated bot partner for future missions. Aircraft tech provider Airbus has used humanoids for their manufacturing line.
It looks like we’ll head into the future with humanoid robots, and they’ll be able to keep pace with each step, no matter the terrain…
The post Atlas the Life-Sized Robot Just Became a Bit More Human appeared first on Futurism.
Senator Ted Cruz opened up last Wednesday’s hearing by the US Senate Subcommittee on Space, Science, and Competitiveness with a description of the changing landscape of technology: “Whether we recognize it or not, artificial intelligence is already seeping into our daily lives.”
Andrew Moore speaks at a Senate hearing on AI. Image: TechRepublic
Senator Cruz explained that experts predict investment in AI will increase by more than 300 percent in the next few years, which means AI will have a more prominent role in society. With that in mind, the subcommittee’s hearing focused on the impact AI is having on various sectors of US society, and on how best to ensure US leadership in AI development.
The hearing came at a time when developments in computing are giving rise to faster and smarter AI. Dr. Andrew Moore of Carnegie Mellon University and Dr. Eric Horvitz, managing director at Microsoft Research, both stressed how increased computing speed, large digital datasets, and the development of deep learning techniques have created an “inflection point” for the rapid development of AI.
Opportunities and Challenges Found in AI
The committee’s experts pointed out that AI can drive rapid advances in fields like medicine and space exploration. However, there’s still the problem of worker displacement, as automation threatens to eliminate jobs across numerous sectors in the future. Senator Bill Nelson of Florida said at the hearing, “there’s another part about A.I., and that is the replacement of jobs and we’ve got to prepare for that.”
“[Elon Musk] predicted — predicted! — robots could eventually take over many jobs away from folks so that they would have to depend on the government in order to have a living.”
It’s an issue that experts in the field have thought about. “There is an urgent need for training and re-training of the U.S. workforce so as to be ready for expected shifts in workforce needs and in the shifts in distributions of jobs that are fulfilling and rewarding to workers,” said Horvitz in his testimony.
The preliminary hearing didn’t touch much on concrete solutions to the looming jobs crisis, but the conversation could lead to more public discussions on AI. “We are truly living in the dawn of artificial intelligence and it is incumbent that Congress and this subcommittee begin to learn about the vast implications of this emerging technology to ensure that the United States remains a global leader throughout the 21st Century,” Senator Cruz declared.
The post The Dawn of AI: Congress Is Discussing What We’ll Do in a World Run by Robots appeared first on Futurism.
Artificial intelligence and increasing automation are going to decimate middle-class jobs, worsening inequality and risking significant political upheaval, Stephen Hawking has warned.
British scientist Prof. Stephen Hawking gives his ‘The Origin of the Universe’ lecture to a packed hall, December 14, 2006, at the Hebrew University of Jerusalem, Israel. Hawking suffers from ALS (amyotrophic lateral sclerosis, or Lou Gehrig’s disease), which has rendered him quadriplegic; he is able to speak only via a computerized voice synthesizer operated by batting his eyelids. Image: David Silverman/Getty Images
In a column in The Guardian, the world-famous physicist wrote that “the automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.”
He adds his voice to a growing chorus of experts concerned about the effects that technology will have on workforce in the coming years and decades. The fear is that while artificial intelligence will bring radical increases in efficiency in industry, for ordinary people this will translate into unemployment and uncertainty, as their human jobs are replaced by machines.
Technology has already gutted many traditional manufacturing and working class jobs — but now it may be poised to wreak similar havoc with the middle classes.
A report published in February 2016 by Citibank in partnership with the University of Oxford predicted that 47% of US jobs are at risk of automation. In the UK, the figure is 35%. In China, it’s a whopping 77%, while across the OECD the average is 57%.
And three of the world’s 10 largest employers are now replacing their workers with robots.
This automation will, in turn, “accelerate the already widening economic inequality around the world,” Hawking wrote. “The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.”
He frames this economic anxiety as a reason for the rise in right-wing, populist politics in the West: “We are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent.”
Combined with other issues — overpopulation, climate change, disease — we are, Hawking warns ominously, at “the most dangerous moment in the development of humanity.” Humanity must come together if we are to overcome these challenges, he says.
Stephen Hawking has previously expressed concerns about artificial intelligence for a different reason — that it might overtake and replace humans. “The development of artificial intelligence could spell the end of the human race,” he said in late 2014. “It would take off on its own, and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
The post Stephen Hawking: Automation and AI Are Going to Decimate Middle Class Jobs appeared first on Futurism.
Today, if you ask the Google search engine on your desktop a question like “How big is the Milky Way,” you’ll no longer just get a list of links where you could find the answer — you’ll get the answer: “100,000 light years.”
While this question/answer tech may seem simple enough, it’s actually a complex development rooted in Google’s powerful deep neural networks. These networks are a form of artificial intelligence that aims to mimic how human brains work, relating together bits of information to comprehend data and predict patterns.
Google’s new search feature relies on a deep neural network that uses sentence compression algorithms to extract relevant information from large bodies of text. Essentially, the system learned how to answer questions by repeatedly watching humans do it — more specifically, 100 PhD linguists from across the world — a process called supervised learning. After training, the system could take a large amount of data and identify the short snippet within it that answered the question at hand.
Credit: Thinkstock
Self-Taught AI?
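The “extract the snippet that answers the question” idea can be caricatured in a few lines. The sketch below is nothing like Google’s trained neural system; the function, stopword list, and example text are all invented for illustration, scoring sentences by simple word overlap with the question:

```python
# A deliberately naive sketch of extractive question answering:
# score each sentence of a text by how many question words it shares,
# then return the best-scoring one. (Google's real system is a trained
# deep neural network; this toy only shows the "extract a snippet" idea.)
STOPWORDS = {"how", "what", "is", "the", "a", "of"}

def best_snippet(question, text):
    q_words = set(question.lower().split()) - STOPWORDS
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # pick the sentence sharing the most content words with the question
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

doc = ("The Milky Way is about 100,000 light years across. "
       "It is a barred spiral galaxy. "
       "The Sun orbits far from the galactic center.")
print(best_snippet("How big is the Milky Way", doc))
# → The Milky Way is about 100,000 light years across
```

The neural version replaces the hand-written overlap score with one learned from the linguists’ labeled examples, which is exactly what makes the supervision expensive.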
Training AI like this is both difficult and expensive. Google has to provide massive amounts of data for their systems as well as the human experts that the neural network can learn from.
Google and other technology companies like Facebook and Elon Musk’s OpenAI are currently working on better, more automated neural networks, the kind capable of unsupervised learning. Those networks wouldn’t need people to label data before they could learn from it; they could figure it out on their own.
If these companies are successful, a multitude of opportunities would be opened up for humankind. Advanced AI systems could quickly and accurately translate between languages, make our internet more secure, develop better medical treatments, and so much more. The data machines like that could process would change our world permanently.
Tech companies are currently still years away from discovering how to create fully autonomous AI. Nevertheless, that digital voice now answering our search engine queries puts us one step closer.
In an exciting demonstration of the power of artificial intelligence (AI) and the diversity of species, a team composed of two programmers and an ornithologist (an expert on birds) created a map of visualized bird sounds.
— Google (@google) November 28, 2016
Coders Manny Tan and Kyle McDonald worked with ornithologist Jessie Barry to create this visually euphonious interactive map of bird sounds. Tan and McDonald used machine learning to organize thousands of bird sounds from a collection by Cornell University. They didn’t supply their algorithm with tags or even names of the bird sounds. Instead, they wanted to see how it would learn to organize all the data by listening to the bird sounds.
The results were amazing. The algorithm was able to group similar sounds together. It generated visualizations of each sound — an image that served as the sound’s fingerprint — using a machine learning technique called t-distributed stochastic neighbor embedding (t-SNE), which allowed it to group together sounds with similar fingerprints.
AI Collaborations
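The grouping step is available off the shelf. Assuming scikit-learn and NumPy are installed, a minimal sketch of t-SNE embedding synthetic “fingerprints” (the vectors below are made up, standing in for the project’s real sound fingerprints) looks like this:

```python
import numpy as np
from sklearn.manifold import TSNE

# Synthetic stand-ins for sound "fingerprints": two groups of
# 16-dimensional vectors, as if from two different bird species.
rng = np.random.default_rng(0)
species_a = rng.normal(loc=0.0, scale=0.1, size=(10, 16))
species_b = rng.normal(loc=5.0, scale=0.1, size=(10, 16))
fingerprints = np.vstack([species_a, species_b])

# t-SNE embeds the 16-D fingerprints in 2-D so that similar
# fingerprints land near each other -- no labels are supplied,
# just as the project gave its algorithm no tags or names.
embedding = TSNE(n_components=2, perplexity=5, init="random",
                 random_state=0).fit_transform(fingerprints)
print(embedding.shape)  # (20, 2)
```

Plotting the two embedding columns would show the two species as separate clusters, which is the 2-D map the project then made interactive.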
What’s fascinating about this project is the work that AI can do for various disciplines, apart from just biology and ornithology. Deep learning algorithms are beginning to transform the fields of medicine and medical research, most recently with a retina-scanning program that can help prevent blindness. AI has also ventured into the realm of psychology, being able to identify patients with depression and suicidal tendencies.
AI isn’t just helping us understand our world better; it’s also changing how we interact with it. The prevalence of automated vehicles and unmanned transportation technology is proof of this, with AI learning to become a better car and truck driver, pilot, and even sailor (in a manner of speaking). AI might even venture into space ahead of us.
Obviously, we’re still far from perfecting AI. Even as deep neural networks continue to learn, we’re also in the process of developing better systems.
The post Artificial Intelligence is Helping Scientists “See” the Diversity of Sound appeared first on Futurism.
These robots look spindly, but they’re built to have incredible balance.
The post These Bipedal Robots Have Better Balance Than Most Humans appeared first on Futurism.
Google DeepMind, the artificial intelligence (AI) research subsidiary of Alphabet, has had considerable applications in the field of medicine and medical research through DeepMind Health. One of its more recent achievements is an eye-scanning AI algorithm that can detect one of the most common forms of blindness.
This algorithm uses the same machine learning technique that Google uses to categorize millions of web images. It searches retinal images and detects signs of diabetic retinopathy — a condition that results from damaged eye blood vessels, and leads to gradual loss of sight — like a highly trained ophthalmologist.
According to computer scientists at Google, and medical researchers from the U.S. and India, the algorithm was originally developed to analyze retinal images and wasn’t explicitly designed to identify features that might indicate diabetic retinopathy. It learned this on its own, after having been exposed to thousands of healthy and diseased eyes.
The algorithm was trained on a data set of 128,000 retinal images, each classified by at least three ophthalmologists. It was then tested on 12,000 retinal images, where it successfully identified the disease and graded its severity, matching or even exceeding the performance of the experts. The results were published in the Journal of the American Medical Association — the first study involving deep learning ever published in the journal, according to editor-in-chief Howard Bauchner.
The AI eye doctor is in
“One of the most intriguing things about this machine-learning approach is that it has potential to improve the objectivity and ultimately the accuracy and quality of medical care,” says Michael Chiang of the Oregon Health & Science University’s Casey Eye Institute.
Diagnosis could be made still more efficient and reliable through some form of automated detection, and Google is currently conducting clinical trials with real patients in collaboration with the Aravind Medical Research Foundation in India.
An OCT eye scan. Credits: Google’s DeepMind Health
There is also a need to be able to explain how an algorithm such as this one arrives at its conclusions, as Google’s Lily Peng acknowledges. “We understand that explaining will be very important.”
A retinal scanning algorithm capable of performing as well as — or even better than — medical experts can certainly be useful. For starters, it could facilitate diagnosis and treatment in areas where doctors are scarce. AI is truly reshaping the future of medical research.
The post Google’s AI Can Read Your Retinas to Prevent Blindness appeared first on Futurism.
FEDOR goes where no one else wants to go.
The post Meet FEDOR, Russia’s Robot Built For Extreme Conditions appeared first on Futurism.
The post White House AI Report: Everything You Need to Know [INFOGRAPHIC] appeared first on Futurism.
Summon a fleet of drones to guide you.