Futurism - Robot Intelligence
Given the sheer number of words, phrases, and grammar rules in any language, you can imagine how difficult it is for a person to deliver a translation that accurately conveys the thought and nuance behind even the simplest sentence, let alone how difficult it would be for a machine. This, however, was a challenge that Google wanted to tackle head on, and today, the results of its years of work on a machine learning translation system were finally unveiled.
Languages are naturally phrase-based, so letting machines learn as many of those phrases and the subtleties behind the language as possible so that they could be applied to the translation was essential. Getting machines to do all that requires a lot of data, and adding complex language rules into the mix requires neural networks. While Google may not be the only company that has been looking into this method for more precise and accurate translations, they managed to be the first to get it done.
The Google Neural Machine Translation system (GNMT) uses state-of-the-art training techniques that significantly improve machine translation quality. It relies on long short-term memory recurrent neural networks (LSTM-RNNs) trained with graphics processing units (GPUs) and tensor processing units (TPUs) to crunch data more efficiently. It all adds up to a new system that can lower translation errors by 55 to 85 percent.

Image Credit: Google Blog

Narrowing The Gap
Right now, GNMT is the most effective language system that uses neural networks to look at a sentence as a whole while still factoring in the smaller elements that add to its meaning, quite similar to how humans try to process language and understand the nuances behind a particular statement beyond the words.
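The LSTM cells at the heart of such a system can be sketched in miniature. The scalar version below is purely illustrative (GNMT stacks many layers of vector-valued cells), and the weights in `W` are made-up numbers, not anything Google uses:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    # Each gate mixes the current input x with the previous hidden state.
    i = sigmoid(W["i"][0] * x + W["i"][1] * h_prev + W["i"][2])    # input gate
    f = sigmoid(W["f"][0] * x + W["f"][1] * h_prev + W["f"][2])    # forget gate
    o = sigmoid(W["o"][0] * x + W["o"][1] * h_prev + W["o"][2])    # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + W["g"][2])  # candidate value
    c = f * c_prev + i * g   # cell state: the long-term memory
    h = o * math.tanh(c)     # hidden state: the per-step output
    return h, c

W = {k: (0.5, 0.5, 0.0) for k in "ifog"}  # illustrative weights only
h, c = 0.0, 0.0
for x in (1.0, -1.0, 0.5):  # feed a short "sentence" of scalar tokens
    h, c = lstm_step(x, h, c, W)
```

The cell state `c` is what lets the network carry context across a whole sentence rather than translating word by word.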
Of course, machine translation is still far from perfect. Despite its advances, GNMT can still mistranslate, particularly when it encounters proper names or rare words, which prompt the system to, again, translate individual words instead of looking at them within the context of the whole. Clearly, there is still a gap between human and machine translations, but with GNMT, it is getting smaller.
But considering that Google has pioneered the system for Chinese-to-English translation (languages with a combined 1.5 billion speakers worldwide), GNMT is a major accomplishment that could revolutionize global communication.
The post Google’s Neural Network for Language Translation Narrows Gap Between Machines & Humans appeared first on Futurism.
Imagine you’re sitting in a self-driving car that’s about to make a left turn into oncoming traffic. One small AI system in the car will be responsible for making the vehicle turn, one system might speed it up or hit the brakes, other systems will have sensors that detect obstacles, and yet another system may be in communication with other vehicles on the road. Each system has its own goals — starting or stopping, turning or traveling straight, recognizing potential problems, etc. — but they also have to all work together toward one common goal: turning into traffic without causing an accident.
Harvard professor and Future of Life researcher, David Parkes, is trying to solve just this type of problem. Parkes told FLI, “The particular question I’m asking is: If we have a system of AIs, how can we construct rewards for individual AIs, such that the combined system is well behaved?”
Essentially, an AI within a system of AIs—like that in the car example above—needs to learn how to meet its own objective, as well as how to compromise so that its actions will help satisfy the group objective. On top of that, the system of AIs needs to consider the preferences of society. For example, it needs to determine if the safety of the passenger in the car or a pedestrian in the crosswalk is a higher priority than turning left.
Because environments like a busy street are so complicated, an engineer can’t just program an AI to act in some way to always achieve its objectives. AIs need to learn proper behavior based on a rewards system. “Each AI has a reward for its action and the action of the other AI,” Parkes explained. With the world constantly changing, the rewards have to evolve, and the AIs need to keep up not only with how their own goals change, but also with the evolving objectives of the system as a whole.

Making an Evolving AI
The idea of a rewards-based learning system is something most people can likely relate to. Who doesn’t remember the excitement of a gold star or a smiley face on a test? And any dog owner has experienced how much more likely their pet is to perform a trick when it realizes it will get a treat. A reward for an AI is similar.
A technique often used in designing artificial intelligence is reinforcement learning. With reinforcement learning, when the AI takes some action, it receives either positive or negative feedback. And it then tries to optimize its actions to receive more positive rewards. However, the reward can’t just be programmed into the AI. The AI has to interact with its environment to learn which actions will be considered good, bad, or neutral. Again, the idea is similar to a dog learning that tricks can earn it treats or praise, but misbehaving could result in punishment.
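The feedback loop described above can be sketched with a toy Q-learning agent, one standard form of reinforcement learning. The five-state corridor, reward values, and hyperparameters below are all invented for illustration; real systems operate over far richer state spaces:

```python
import random

# A toy world: states 0..4 in a line; the agent starts at 0 and the
# goal (reward +1) is at state 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                a = q[s].index(max(q[s]))
            nxt, r, done = step(s, a)
            # Temporal-difference update: nudge the estimate toward
            # the reward plus the discounted value of the next state.
            q[s][a] += alpha * (r + gamma * max(q[nxt]) - q[s][a])
            s = nxt
    return q

q = train()
# After training, "right" should score higher than "left" in every
# state on the path to the goal: the agent learned from feedback alone.
```

No behavior was programmed in directly; the agent discovered which actions are "good" only by interacting with its environment, just as the dog learns which behaviors earn treats.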
More than this, Parkes wants to understand how to distribute rewards to subcomponents (the individual AIs) in order to achieve good system-wide behavior. How often should there be positive (or negative) reinforcement, and in reaction to which types of actions?

Rather than programming a reward specifically into the AI, Parkes shapes the way rewards flow from the environment to the AI in order to promote desirable behaviors as the AI interacts with the world around it.
For example, if you were to play a video game without any points or lives or levels or other indicators of success or failure, you might run around the world killing or fighting aliens and monsters, and you might eventually beat the game, but you wouldn’t know which specific actions led you to win. Instead, games are designed to provide regular feedback and reinforcement so that you know when you make progress and what steps you need to take next. To train an AI, Parkes has to determine which smaller actions will merit feedback so that the AI can move toward a larger, overarching goal.
But this is all for just one AI. How do these techniques apply to two or more AIs?

Gaming the System
Much of Parkes’ work involves game theory. Game theory helps researchers understand what types of rewards will elicit collaboration among otherwise self-interested players, or in this case, rational AIs. Once an AI figures out how to maximize its own reward, what will entice it to act in accordance with another AI?
To answer this question, Parkes turns to an economic theory called mechanism design.
Mechanism design is a Nobel Prize-winning theory that allows researchers to determine how a system with multiple parts can achieve an overarching goal. It is a kind of “inverse game theory.” How can rules of interaction – ways to distribute rewards, for instance – be designed so individual AIs will act in favor of system-wide and societal preferences? Among other things, mechanism design theory has been applied to problems in auctions, e-commerce, regulations, environmental policy, and now, artificial intelligence.
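A classic mechanism design result, the second-price (Vickrey) auction, shows what "designing rules so that self-interest produces good group behavior" looks like in miniature. The bidders and valuations below are hypothetical:

```python
# Second-price auction: the highest bidder wins but pays only the
# second-highest bid. Under these rules, each self-interested bidder's
# best strategy is simply to bid its true value -- the kind of
# incentive alignment Parkes wants among cooperating AIs.

def second_price_auction(bids):
    """Return (winner, price): highest bidder wins, pays the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

def utility(true_value, my_bid, other_bids, me="me"):
    """Payoff for bidding my_bid when the item is worth true_value to me."""
    bids = dict(other_bids, **{me: my_bid})
    winner, price = second_price_auction(bids)
    return true_value - price if winner == me else 0

others = {"a": 30, "b": 50}
honest = utility(60, 60, others)   # truthful bid: win at price 50, payoff 10
lowball = utility(60, 40, others)  # shading the bid: lose the item, payoff 0
```

Because lying about your value can never improve your payoff here, the rules themselves do the coordination work, with no central manager telling anyone what to bid.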
The difference between Parkes’ work with AIs and mechanism design theory is that the latter requires some sort of mechanism or manager overseeing the entire system. In the case of an automated car or a drone, the AIs within have to work together to achieve group goals, without a mechanism making final decisions. As the environment changes, the external rewards will change. And as the AIs within the system realize they want to make some sort of change to maximize their rewards, they’ll have to communicate with each other, shifting the goals for the entire autonomous system.
Parkes summarized his work for FLI, saying, “The work that I’m doing as part of the FLI grant program is all about aligning incentives so that when autonomous AIs decide how to act, they act in a way that’s not only good for the AI system, but also good for society more broadly.”
Parkes is also involved with the One Hundred Year Study on Artificial Intelligence, and he explained his “research with FLI has informed a broader perspective on thinking about the role that AI can play in an urban context in the near future.” As he considers the future, he asks, “What can we see, for example, from the early trajectory of research and development on autonomous vehicles and robots in the home, about where the hard problems will be in regard to the engineering of value-aligned systems?”
The post Keeping AI Well Behaved: How Do We Engineer An Artificial System That Has Values? appeared first on Futurism.
This surgical robot is so precise it can sew the skin back onto a grape.
For the past couple of decades, the trend in technological development has been toward maximizing the capacities of computers and machines to do tasks that people would rather not do, or at least ones that machines could do cheaper. In an interview with Raconteur, Tracey Fellows, chief strategy and innovation officer for The Future Laboratory, predicts that 35 percent of jobs currently done by humans could one day be taken over by robots, including jobs that are either tedious or dangerous, saving innumerable hours and lives.
The landmark achievement of the 21st century is, arguably, artificial intelligence (AI). With the writing of some strings of code, machines can now perceive their environment, process relevant information, and execute actions that provide the highest probability of success. Further innovations in artificial intelligence are often driven by the elimination of human error in day-to-day tasks (e.g., self-driving cars).
The increasing reliance on AI, however, comes with considerable risks. Artificial intelligence, whatever form it may take, is run by a specific set of commands. This renders it vulnerable to hacking and many other forms of attack and manipulation, and in that same interview, Fellows also predicts that “by 2040, more crime will be committed by machines than by humans.”

Rise of the machines
While most machines powered by AI are programmed to do specific jobs, these technological wonders could also be commanded to learn. Exceptionally powerful programs could be wired to learn how to bypass security measures and wreak havoc with classified information, GPS telemetry, sensor input, etc.
What’s more problematic is that this coincides with the integration of the Internet of Things (IoT) — the presence of the internet in day-to-day living. With more devices being retooled to facilitate easier living, we are surrounding ourselves with more ways to be victimized by cyber crime.
Thankfully, this grim future isn’t unavoidable. Eric O’Neill, a former FBI counterterrorism operative and current national security strategist at Carbon Black, is confident that, with so much at stake, companies invested in the Internet of Things will act well before an attack even happens, preempting possible hacks or security breaches.
He says, “As the good guys become more active in remediating threats before hackers launch their attacks, espionage and digital fraud will become far less economically beneficial for the bad guys, giving us a far better chance of keeping them out.”
The post Experts Predict: “By 2040 More Crime Will Be Committed by Machines Than by Humans” appeared first on Futurism.
Google’s self-driving sports utility vehicle has been taking a beating: it’s been involved in three crashes caused by humans in the last month. The most recent was caused when a driver of a van ran a red light and struck the passenger side of the Lexus RX 450 SUV, leaving the doors banged up. The vehicle was operating in autonomous mode when the accident occurred, with a human driver inside, as is required by the company. Thankfully, no one was hurt in the accident.

Photo Credit: Ron van Zuylen (twitter.com/grommet)
Google’s driverless cars have been involved in a number of accidents throughout nearly seven years of testing, but those numbers pale in comparison to the stats on human-caused road accidents and deaths. Nearly every previous incident with the self-driving cars has been caused by a human driver; the program’s first at-fault incident occurred early this year when the Lexus hit the side of a bus. Google continues to point out that 94 percent of urban car accidents like this are the result of human error, and that the whole point of its self-driving tech is to make the roads safer. After all, the technology is driven by the ultimate goal of making mass transit safe, accident-free, and accessible to all.
The post Google’s Self-Driving Car Sustains Major Damage in Worst Crash To Date appeared first on Futurism.
Okay, here’s what happened.
A pair of programmers at Carnegie Mellon developed an artificial intelligence (AI) that can play a version of the video game Doom. Using what they call “deep reinforcement learning,” Guillaume Lample and Devendra Singh Chaplot made an AI that plays the game the way humans would — essentially, to hunt and kill anything that moves.
The reinforcement in the learning came from the AI earning points for picking up items, moving around, and scoring kills, while it was penalized for taking hits and dying. These are basically the same incentives a human player responds to, which is what makes the AI different from the game’s programmed bots.
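A reward scheme like the one described might be sketched as a simple scoring function. The event names and weights below are guesses for illustration, not the values Lample and Chaplot actually used:

```python
# Hypothetical per-event reward weights mirroring the described training
# signal: positive for progress and kills, negative for damage and death.
REWARDS = {
    "kill": 1.0,
    "item_pickup": 0.1,
    "distance_moved": 0.01,  # small per-unit bonus that encourages exploration
    "hit_taken": -0.3,
    "death": -1.0,
}

def frame_reward(events):
    """Sum the reward for the events observed in one game frame."""
    return sum(REWARDS[name] * count for name, count in events.items())

# e.g. a frame where the agent scores a kill but takes two hits
# nets roughly 1.0 - 0.6 = 0.4 reward.
reward = frame_reward({"kill": 1, "hit_taken": 2})
```

The learning algorithm then adjusts the agent's behavior to maximize the running total of these per-frame rewards, which is why the agent ends up hunting anything that moves.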
It’s just a video game, it’s not real. Right?
Here’s the thing, though. The AI is as real as it gets. While it may have only been operating inside an environment of pixels, it does raise questions about AI development in the real world.

Keeping tabs on AI development
Some of the current applications of AI have been controversial. Most, however, have been technological breakthroughs with very useful consequences in fields such as medicine, space technology, and transportation, to name a few.
Those who propose establishing clear policies for AI development believe it makes sense to do so even now, at this relatively early stage of the technology. Miles Brundage, AI policy research fellow at the University of Oxford and fellow at Arizona State University, believes the “key question related to AI policy […] is not whether AI should be governed at all, but how it is currently being governed, and how that governance might become more informed, integrated, effective, and anticipatory.”
Sound AI policy, ideally, would not obstruct the development of the technology. Rather, it would protect the technology from those who might try to abuse or misuse it.
Sounds noble enough.
The post An AI Was Taught to Hunt and Kill Humans In Video Games: Here’s Why This Matters appeared first on Futurism.
Prodrone’s PD6B-AW-ARM collects hazardous materials using robotic arms.
A team from the Houston Methodist Research Institute says they have developed artificial intelligence software capable of analyzing mammograms for breast cancer with 99 percent accuracy. The technique involves scanning patient charts and cross-checking them with results from mammogram X-rays and clinical reports.
“The imaging characteristics of breast cancer subtypes have been described previously, but without standardization of parameters for data mining,” according to the study published in Cancer.
But their algorithm allows for a more comprehensive and accurate analysis that helps avoid false positives — a very common occurrence.

Mammogram. Lester Lefkowitz/Getty Images.
“We figured out you can mine a clinical report for additional information,” said lead researcher Stephen Wong. “Most of the clinical reports are not in a structured format, they are in free form text. So if we can run an AI program to extract the medical information and build a risk assessment model we can score the information and reduce unnecessary biopsies.”
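Wong's idea of mining free-text reports can be sketched with a toy extractor. The keywords, weights, and sample report below are invented for illustration; the Houston Methodist software is far more sophisticated:

```python
import re

# Toy sketch of mining a free-text radiology report: pull out the
# BI-RADS category and a few risk-relevant phrases, then score them.
# These terms and weights are illustrative, not a clinical model.
RISK_TERMS = {
    "spiculated mass": 3,
    "microcalcifications": 2,
    "architectural distortion": 2,
}

def assess(report):
    text = report.lower()
    # Find patterns like "BI-RADS category: 5" or "BI-RADS 4".
    m = re.search(r"bi-rads(?:\s+category)?\s*:?\s*(\d)", text)
    birads = int(m.group(1)) if m else None
    score = sum(w for term, w in RISK_TERMS.items() if term in text)
    return {"birads": birads, "term_score": score}

result = assess("Screening mammogram. Spiculated mass in the upper outer "
                "quadrant with microcalcifications. BI-RADS category: 5.")
```

Turning unstructured narrative into structured fields like these is the prerequisite for building the kind of risk assessment model Wong describes.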
Their AI algorithm has been certified by the Breast Imaging Reporting and Data System as category 5 (BI-RADS 5), a mammogram that “is almost certainly predictive of breast cancer with a positive predictive value of about 95%,” according to Canadian radiologist Steven Halls.
Wong hopes their algorithm will set a standard for accurate assessments of breast cancer risk.

FALSE POSITIVES
Women who receive false positives often suffer from the stress and fear a biopsy causes for years after the test, according to a 2013 study from the University of Copenhagen.

Stereotactic breast biopsy. Vero Radiology Associates.
“False-positive findings on screening mammography causes long-term psychosocial harm,” John Brodersen and Volkert Dirk Siersma of the University of Copenhagen said in a report. “Three years after a false-positive finding, women experience psycho-social consequences that range between those experienced by women with a normal mammogram and those with a diagnosis of breast cancer.”
The post Artificial Intelligence Reads Mammograms With 99% Accuracy appeared first on Futurism.
If David Ortiz had his way, this snack tray drone would follow you wherever you go.
Consumer robots on the market today are normally confined to personal assistants that remind you of things or help with basic household work.

Enter Obi, a consumer robot with a more personal mission: Obi is designed to give people with disabilities back the dignity of feeding themselves.
A health condition that diminishes any aspect of a person’s autonomy is an understandable source of frustration. “Every day, millions of people must be fed by caregivers, and they find the experience to be conspicuous and frustrating,” says Jon Dekar, creator of Obi.
That inspired Dekar and his father to found DESiN, which would eventually build Obi. The robot enables people with multiple sclerosis, ALS, Parkinson’s, and other diseases that impair fine motor control to eat on their own, with only minimal help from others.
The robot works by having a robotic arm feed the person. A caregiver only has to “teach” the robot the delivery location once, and then Obi can take over. Obi’s rechargeable battery lasts about four hours, and the robot can be carried around like a laptop.
The robot itself is made from BPA-free plastics, and the plate and spoon are dishwasher- and microwave-safe. Each one retails for $4,500.
Using robotics to enrich the lives of those who may have lost some autonomy is a very noble cause. From exoskeletons that restore the ability to walk to devices that counteract tremors in Parkinson’s patients, further development to restore lost functionality will also do wonders for maximizing dignity.
The post Meet Obi, The Robot That Helps People Feed Themselves appeared first on Futurism.
What once seemed like science fiction is now a rapidly growing and evolving technology. Artificial intelligence is transforming the way the world works, and it is now a powerful problem-solving tool across numerous functions of society. Previously tedious steps in industrial production are automated by robots that can anticipate changes in the system. In business, large volumes of data are easily processed, and responses are faster and more reliable.
The Sony CSL Research Laboratory, however, is proving that AI is not just for solving technical and business problems. AI gets creative in its music-composing program, Flow Machines.
In the style of The Beatles, Daddy’s Car is an exciting juxtaposition of yesterday’s music and the technology of the future.
Daddy’s Car was released alongside Mr Shadow, a song inspired by American songwriters including Irving Berlin and Duke Ellington.
The software works by using a user-specified musical style to generate audio chunks optimized from an enormous database of songs. From these, lead sheets are generated, and a human composer finishes the editing and production. Composer Benoît Carré arranged Daddy’s Car and wrote the lyrics.
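Flow Machines' style models are far richer than this, but the underlying idea of learning a style from a song database and generating new material in it can be sketched as a first-order Markov chain over notes. The corpus and note names below are made up:

```python
import random

# A tiny stand-in "corpus" of notes; a real system would learn from
# thousands of songs in the target style.
CORPUS = ["C", "E", "G", "E", "C", "E", "G", "A", "G", "E", "C"]

def learn_transitions(notes):
    """Record which notes tend to follow which in the corpus."""
    table = {}
    for a, b in zip(notes, notes[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the learned transitions to produce a new melody in the style."""
    random.seed(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(table[melody[-1]]))
    return melody

table = learn_transitions(CORPUS)
melody = generate(table, "C", 8)  # an 8-note melody in the corpus "style"
```

The generated melody only ever uses transitions heard in the corpus, which is why the output resembles the input style while still being new material.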
So, no, the AI didn’t completely create the song, but it did play a big part.
What could the future be like with the artistic AI? We’ll get to know more when SONY releases their AI pop album in 2017.
Driverless vehicles are looking more and more like a big part of the future. The much-hyped autonomous vehicles Uber is testing in Pittsburgh and the smaller, more discreet tests going on elsewhere indicate that fully autonomous cars could be here sooner than we think.
One group has suggested a rethink on road policies for self-driving cars, proposing a plan to create a “driverless lane” connecting the cities of Vancouver and Seattle. This plan to dedicate at least one lane of the I-5 from Seattle to Highway 99 in Richmond, B.C., comes not from city planners, but from tech industry experts, with proponents of the plan including Tom Alberg, a board member of Amazon, and Craig Mundie, a former Microsoft executive.
The plan calls for autonomous vehicles to share high-occupancy lanes with regular vehicles initially. Over time, these lanes would become dedicated to driverless vehicles, with regular vehicles banned except when the highway is experiencing only light traffic.
Why do this? According to the released report, “The principal benefit is that it allows drivers to recapture all the time otherwise spent behind the wheel…Other very significant benefits from autonomous vehicles include substantial reductions in vehicle accidents and deaths…increased use of shared vehicles, reduced congestion, and lower transportation costs for consumers.”

Madrona Venture Group

Staying A Lap Ahead
With fully autonomous vehicles not yet a major part of day-to-day life, why think of this now? As is the case with any new technology, the development of automation could outpace the necessary societal changes for adaptation. The report says that this plan would give the two cities an edge in innovation and technology.
Also, making changes to road policy is far less expensive than developing new public transport systems, which have the same goal of decongesting roads. The report estimates that a true high-speed rail connecting Seattle and Vancouver would cost around $20-$30 billion, far more than what the plan calls for.
Eventually, travelers may have no choice but to give up their position behind the wheel (metaphorically) as transportation experts such as Lyft founder John Zimmer predict driverless cars will one day account for most, if not all, of the cars on the road. And with plans like the one described above already in the works, that day may be sooner than you think.
The post A Driverless-Car-Only Highway Could Connect Seattle and Vancouver appeared first on Futurism.
Keen Security Lab, a China-based hacking research group, announced that it has discovered several security vulnerabilities in the Tesla Model S in both parking and driving modes.
The team of white-hat hackers (hackers who expose vulnerabilities so they can be fixed rather than exploited) spent months conducting thorough research on the vehicle. In keeping with the global industry practice of responsible disclosure, Keen Security Lab revealed that it was not only able to confirm multiple vulnerabilities, but also managed to exploit them, successfully taking remote control of the Model S.
According to the group, they were able to unlock the Model S without a key, as well as remotely activate the brakes and bring the vehicle to a complete stop from 19 kilometers (12 miles) away. The research was conducted on several Model S units, and the team believes it’s possible that other Tesla models have the same security weaknesses.
You can watch the full video of the demonstration below:

A Quick Fix
The team of hackers alerted Tesla regarding these vulnerabilities (again, because they’re good-guy hackers). As a result, the company has already fixed the software bug that allowed the hackers to take over the vehicle. Software updates are one of the conveniences of owning a Tesla, it seems. No need to visit a repair shop.
“Our realistic estimate is that the risk to our customers was very low,” a Tesla spokesperson said in a statement on Tuesday. “But this did not stop us from responding quickly.”
These findings, however, underscore the importance of having and following the stringent guidelines proposed by the Department of Transportation for autonomous cars. There’s much to be considered as this technology becomes more widely available.
The post Chinese Hackers Took Remote Control of A Tesla From 12 Miles Away appeared first on Futurism.
The self-driving revolution is upon us, with companies inching ever closer to their goal of Level 4 autonomy. Uber just launched an autonomous taxi service on the streets of Pittsburgh, NuTonomy is doing the same in Singapore, and Ford plans to sell driverless cars by 2025.
Another player wants to make its own self-driving car, and not for the reason you’d think. Udacity plans to build an autonomous vehicle and completely open-source the whole design. This isn’t about charity or generosity; it’s about education. The company is working with Didi Chuxing, Mercedes-Benz, Nvidia, and Otto to architect a self-driving “nanodegree” program. Udacity offers higher education programs that it creates alongside its partners.
The company wants people to actually build their own self-driving cars, ramping up homegrown self-driving projects. More than just vehicle plans, it will also be releasing driving data and the necessary code.
The startup is the brainchild of Sebastian Thrun, who got the self-driving program at Google rolling.

Image Credit: Udacity
Udacity wants to fill a growing need in the tech industry with this degree. Since developing self-driving cars is a relatively new field, there is a noticeable lack of people with the skill set to work on them. It is worth noting that Udacity’s program is not formally accredited; however, there’s a full money-back guarantee if you don’t find a job.
Thrun actually values people with autonomous vehicle know-how at $10 million each, based on acquisitions by Uber and GM of Otto and Cruise, respectively. When companies buy self-driving startups, they don’t just buy technology; they’re buying talent as well.
The post Udacity Is Building a Self-Driving Car, and They’re Opensourcing It appeared first on Futurism.
Turns out Watson’s a pretty good movie trailer editor.
On average, each American spends hundreds of hours driving every year. Countrywide, that equates to billions of hours spent on the road, and this has a rather notable impact on health—indeed, on lives. Each year, thousands of people in the U.S. die in car crashes, with a staggering 94 percent of those crashes caused by human error.
The innovation of automated vehicles comes with the promise of faster and more efficient transportation. With the reduction of human error and improved traffic systems, a higher quality of life awaits at the end of the road to widespread integration of autonomous vehicles. This could potentially prevent the vast majority of car accidents, and now the White House is offering its support.

Image Credit: EPA

Shifting Gears
Current federal motor vehicle safety standards don’t address automated vehicle technology, but the White House is looking to change that. The administration’s new set of policies will place the Department of Transportation at the helm of development, situating the government in a position to inspect and approve self-driving technologies.
With this in mind, the guidelines released underscored important issues that need immediate attention, such as safety expectations. However, they weren’t so specific and restrictive that pioneers like Lyft, Uber, and Tesla would be left with no room to innovate. As US Transportation Secretary Anthony Foxx noted, “Our intention is to cover the waterfront as best we can. There’s going to be a need for conversation over the long term.”
The pivot towards autonomous and semi-autonomous cars definitely signals a transition in the transportation industry. This nascent development promises not only hours and dollars saved through more efficient travel routes, but also the potential to prevent thousands of vehicle-related deaths.
The post The Age of the Self-Driving Car: U.S. Unveils Sweeping Policy for Autonomous Tech appeared first on Futurism.
Predictions about the future of cars are as numerous as the companies that make them. Some predict that people will own futuristic self-driving cars in the next few years, while others are willing to stick it out with the human-controlled variety.
Outside of car manufacturing, some are even saying car ownership won’t be a part of our autonomous driving future. At least that’s what Lyft president and co-founder John Zimmer thinks will happen when self-driving cars eventually take over.
Zimmer released a 14-page document outlining his vision for the future of transport. In it, he predicts that a majority of Lyft’s fleet will be self-driving and completely driverless by 2021. He expects that personal ownership of cars will be a thing of the past, at least in America, by 2025. Instead, services like Lyft will operate on a per-mile subscription basis (think Netflix). There could even be cars tailored to entertainment, work, or even partying.

Retooling the Urban Landscape
Uber seems to be slightly ahead of the curve with its autonomous ventures. The rival ride-sharing service recently put its semi-autonomous vehicles (a driver is in the car to respond to any glitches or emergency situations) on the streets of Pittsburgh. Zimmer views this as a “marketing stunt.” Uber has partnered with Volvo to produce a fully autonomous car, also by 2021.
These revolutionary developments could literally change the urban landscape. Mother Jones reports that the total area of parking space in the US is greater than the entire state of Connecticut. Similar to the reclamation of urban space in New York City with the High Line, parking space could be reclaimed for parks, housing, hospitals, schools, or other useful development. What’s more, pollution from gas-guzzling cars would dramatically decrease: not only would more people be using fewer cars, but those vehicles will likely be electric.
The post Lyft Co-Founder: In 5 Years, Our Cars Will be Driving Themselves appeared first on Futurism.
This software predicts flooding before a single drop of rain falls.
Gasparzinho is a pediatrics bot that helps children have fun at the doctor’s.
The London borough of Enfield has enlisted Amelia, an artificial intelligence, to take on customer service tasks for its residents starting late this year.
Developed by IPsoft, Amelia is a cognitive agent capable of automating certain tasks as well as learning from its interactions. As to what makes Amelia a competent asset to the council, IPsoft says she is “[c]apable of analysing natural language, she understands context, applies logic, learns, resolves problems and even senses emotions.”
Amelia currently handles cognitive customer experience for professional services company Accenture and financial consultancy firm Deloitte. This will, however, be her first public sector role.
“From Alexa to Viv, the world is now full of voice-enabled cloud-connected assistants. But Amelia is more than merely a series of speech-savvy algorithms — she’s Siri with a doctorate in psychology,” IPsoft says on their website. She also speaks in natural human language.
IPsoft CEO Frank Lansink says that Amelia is the perfect solution to the changing demands of public services. She will be assisting residents in completing applications, finding information, and will help simplify the council’s internal processes. Plans to have Amelia streamline authentication of permits and license applications are also being considered.
Lansink says Amelia will drive costs down while providing residents better, effortless interaction at their own convenience.
“With the rise of powerful cognitive platforms such as Amelia, government organizations have an opportunity to completely reimagine how frontline public services are delivered.”
The post The AI of Tomorrow: Amelia’s Like Siri, But With A Doctorate in Psychology appeared first on Futurism.