Futurism - Robot Intelligence
The US Navy is deploying a new weapon in drone warfare: Submarine-launched drones. Dubbed “Blackwings,” these small reconnaissance drones can be tube-launched from submarines and unmanned underwater vehicles.
Developed by California-based AeroVironment Inc., the drones are about 20 inches long and weigh about four pounds. They fold up into a three-inch wide canister that can be tube-launched from the submarine. Once the canister clears the surface, the Blackwing pops out and its wings unfold.
Each unit has a motor that can last up to 60 minutes and is fitted with miniaturized electro-optical and infrared sensors, as well as anti-spoofing GPS and a secure digital data link.
The Blackwing was developed to counter Chinese advancements in anti-ship ballistic missile and similar “anti-access, area denial (A2AD)” technologies. The Navy has given no deployment date, but given China’s aggressive posturing, it may come just in the nick of time.
The robotic capsule, once swallowed, is guided through the body by external magnetic fields.
The post This Medical Robot, Once Swallowed, Will Help Remove Objects From Your Stomach appeared first on Futurism.
Last week’s Google I/O event certainly showcased where future web and mobile technologies are headed. At the event, Japan-based Softbank Robotics Corp. announced it is finally taking its popular “emotional robot,” Pepper, onto U.S. soil…complete with extra goodies for developers.
The company is opening its first outpost in San Francisco to address local development in the States. With this move comes more exciting news: the robot, which currently runs on Linux, will be able to run on Android, together with a software development kit (SDK).
The kit is perfect for those seeking to freely and creatively explore Pepper’s potential, and its arrival could open up a variety of ways to put the robot to good use.
The developers who have the SDK, however, won’t get their own Pepper (unless they buy one), but will instead have a virtual version of the robot to work with.
The post Pepper the Robot Is Coming To America (and Android Developers) appeared first on Futurism.
At the ICML Deep Learning Workshop 2015, six scientists from different institutions briefly discussed their views on the possibilities and perils of the technological singularity: the moment when humans create artificial intelligence so far advanced that it surpasses us (and perhaps decides to eradicate us).
Over the years, the singularity has been one of the most popular bets on what will cause the apocalypse (assuming an apocalypse happens).
Jürgen Schmidhuber (Swiss AI Lab IDSIA), Neil Lawrence (University of Sheffield), Kevin Murphy (Google), Yoshua Bengio (University of Montreal), Yann LeCun (Facebook, New York University), and Demis Hassabis (Google Deepmind) discuss their views on the possibility of singularity.
Watch the video below.
Will We Be Gods or Slaves?
With the speed of advancements in robotics and AI, science fiction is quickly becoming science fact. Recent discussions include worries about how AI is already starting to take over some of our jobs, and how, in the future, there may be no roles left for humans.
Prominent figures such as Stephen Hawking, Bill Gates, and Elon Musk have publicly voiced their concerns about advancements in artificial intelligence. Hawking warns that we would become obsolete: “It [AI] would take off on its own and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
On the other hand, some people argue that, when AI exceeds our capabilities (assuming it ever does), these robots will be our allies rather than our enemies. Some have even started to form a religion around these ideas, believing that god is technology, an ideology some refer to as the “rapture of the nerds.”
For now, we can only guess how things will go. But one thing is for sure: like any technology, AI is as susceptible to misuse as it is beneficial to mankind.
The post Experts Weigh In: Is Artificial Intelligence Really a Threat to Humanity? appeared first on Futurism.
A bioinformatics company called Insilico Medicine has just announced Aging.AI, an online platform that guesses a person’s age using data from blood tests.
The ensemble of deep neural networks behind the platform was able to correctly predict an individual’s age around 80% of the time. The study also identified the five most important markers for predicting human chronological age: albumin, glucose, alkaline phosphatase, urea, and erythrocytes.
Determining these biomarkers is important, since one of the major impediments in aging research is the absence of a set of biomarkers that may be measured to track the effectiveness of therapies.
Since neural networks often require large data sets to improve performance, the study used 60,000 samples from common blood biochemistry and cell count tests from routine health exams performed by a single laboratory and linked to chronological age and sex.
The ensemble was then used to power Aging.AI, an online platform where any user may input data from their own blood test, and the DNN will guess age and sex based on the data.
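The regression idea behind such a platform is easy to sketch. The toy example below uses a simple linear model on synthetic data, standing in for Insilico’s actual deep neural network ensemble; the marker effect sizes and noise levels are entirely invented for illustration.

```python
import numpy as np

# Hypothetical sketch: predicting age from the five biomarkers the study
# highlights (albumin, glucose, alkaline phosphatase, urea, erythrocytes).
# The real Aging.AI uses an ensemble of deep neural networks trained on
# ~60,000 samples; here a ridge regression on synthetic data stands in.
rng = np.random.default_rng(0)
MARKERS = ["albumin", "glucose", "alk_phosphatase", "urea", "erythrocytes"]

n = 1000
X = rng.normal(size=(n, len(MARKERS)))          # standardized marker values
true_w = np.array([-4.0, 3.0, 2.5, 1.5, -2.0])  # invented effect sizes
age = 45 + X @ true_w + rng.normal(scale=3.0, size=n)

# Fit ridge regression via the normal equations: w = (X'X + lam*I)^-1 X'y
lam = 1.0
Xb = np.hstack([X, np.ones((n, 1))])            # add intercept column
w = np.linalg.solve(Xb.T @ Xb + lam * np.eye(6), Xb.T @ age)

pred = Xb @ w
mae = np.abs(pred - age).mean()
print(f"mean absolute error: {mae:.1f} years")
```

With clean synthetic data the model recovers the invented effect sizes; real blood-test data is far noisier, which is why the actual platform needs a much larger ensemble.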
The post AI Uncovers the Biomarkers That Are Related to Aging appeared first on Futurism.
Google has been revealing more and more about its revolutionary software, including the innovations behind its Search and Street View functions. It has been less forthcoming about its hardware, however.
But that just changed.
Google just unveiled one of its hardware brainchildren: the Tensor Processing Unit (TPU). The TPU is a custom ASIC (application-specific integrated circuit) created specifically for TensorFlow, Google’s open-source machine learning library.
According to Google’s Cloud Platform Blog,
We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning. This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law).
TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation. Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly. A board with a TPU fits into a hard disk drive slot in our data center racks.
One of the TPUs. Credit: Google
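The “reduced computational precision” Google describes can be illustrated with a quick quantization sketch. This is not Google’s actual scheme, just the general idea: store values as small integers plus a scale factor, do the arithmetic in integers, and rescale at the end.

```python
import numpy as np

def quantize(x, bits=8):
    """Map float values to signed integers, as reduced-precision ML
    hardware does. The scheme here is illustrative, not the TPU's
    actual design."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    q = np.round(x / scale).astype(np.int32)
    return q, scale

rng = np.random.default_rng(1)
w = rng.normal(size=256).astype(np.float32)  # "weights"
a = rng.normal(size=256).astype(np.float32)  # "activations"

qw, sw = quantize(w)
qa, sa = quantize(a)

exact = float(w @ a)                 # full-precision dot product
approx = int(qw @ qa) * sw * sa      # integer multiply-accumulate, rescaled
print(exact, approx)
```

The integer version needs far fewer transistors per multiply-accumulate, which is exactly the trade Google describes: a small loss of precision for many more operations per second per watt.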
The TPU is already in use at Google, powering RankBrain, the software used to optimize the relevance of Google Search results. Street View also gets a boost from TPUs, which improve the accuracy and quality of maps and navigation.
Also, AlphaGo was powered by TPUs in its matches against Go world champion Lee Sedol, enabling it to “think” much faster and look farther ahead between moves.
The end goal of the project is to gain an edge in the machine learning industry and to make the technology available to Google customers. Using TPUs in the company’s infrastructure stack puts the power of Google at developers’ disposal through software like TensorFlow and Cloud Machine Learning, with advanced acceleration capabilities.
The post Google is Developing its Own Chip for Artificial Intelligence appeared first on Futurism.
As more and more civilian drones take to the skies, the threat to commercial airspace grows. Over the past two years, the Federal Aviation Administration (FAA) has received countless reports of drone-related incidents.
As a result, the FAA is expanding its research into ways to detect these unmanned aircraft, in hopes that it will be able to provide more security for airports. In fact, the FAA borrowed an FBI drone detection system and field-tested it at the John F. Kennedy International Airport (JFK) last week.
Credit: Sky News
The FAA’s press release details 40 trial runs involving five different rotor and fixed-wing drones. Academics and staff from various agencies took part in evaluating the technology. The FAA’s research has been underway since earlier this year, when it ran tests at Atlantic City International Airport.
While drone detection seems to be the current goal, knocking them out of the sky is still another serious possibility.
The post FAA Testing FBI System For Dealing with “Rogue” Drones appeared first on Futurism.
Less than two years after Amazon launched its voice-activated home device, rumors spread that Google was preparing a contender. At this year’s I/O event, Google finally confirmed what people have been anticipating for the past few years.
Mario Queiroz, Home’s VP of product management, built the device on Chromecast, a home device he himself successfully launched for Google. Like Chromecast, Home can push media to other Cast-compatible devices, adjust temperature or lighting through Nest devices, and integrate with services like Spotify.
As of now, Home’s compatibility with third-party services is limited, but Google says more may be accommodated as the platform develops.
VP of product management Mario Queiroz unveils Google Home at the I/O 2016 event. Source: CBS
Unmatched Voice Recognition
The device boasts sophisticated voice recognition technology, honed through several years of development. “Google Home is unmatched in far-field voice recognition since it’s powered by more than ten years of innovation in natural language processing. I can continue the two-way dialogue with the assistant that Sundar [Pichai, chief executive of Google] mentioned earlier, whether I’m standing nearby cooking dinner or sitting across the room playing a game with my daughter,” says Queiroz.
Much like OK Google on a phone, Google Home answers questions and performs basic tasks such as playing music, checking the weather, and sending texts when users start a command by saying “OK Google.”
The device is a virtual butler that is “always-listening.” It uses Google Assistant, a high-level artificial intelligence, and connects with users’ Google accounts to sync appointments and lists.
“Computing is evolving beyond phones, and people are using it in context across many scenarios, be it in their television, be it in their car, be it something they wear on their wrist or even something much more immersive,” Pichai said.
While the system is a very entertaining addition to any home, some worry that a device with a microphone that is “always listening” is a privacy concern. Perhaps future developments should incorporate security as one of the next priority features of the device.
The post Google’s Voice-Activated Home Assistant is a Virtual Butler That’s Always Listening appeared first on Futurism.
Uber has announced that it is testing its first self-driving car, a hybrid Ford Fusion, around Pittsburgh throughout the coming weeks. The car, developed by Uber’s Advanced Technologies Center (ATC), is outfitted with a variety of sensors. These include radars, laser scanners, and high resolution cameras to map details of the environment.
But the car isn’t going solo just yet. A human will be in the driver’s seat to monitor operations.
Source: Uber News Room
Uber chose Steel City for its “world-class engineering talent and research facilities,” stating that Pittsburgh’s “wide variety of road types, traffic patterns and weather conditions” makes it an ideal testing ground.
Pittsburgh Mayor William Peduto welcomes their choice: “We’re excited that Uber has chosen the Steel City as they explore new technologies that can improve people’s lives — through increased road safety, less congestion, and more efficient and smarter cities.”
Pittsburgh at night, panoramic view.
Uber is making the shift to self-driving cars because, as the company notes, some 94% of road accidents involve human error. It is joining Google, Ford, Volvo, and Lyft in an alliance that aims to urge lawmakers to revise existing laws stating that artificial intelligence is not (and cannot be) a legal driver.
The post Uber’s First Self-Driving Car Just Hit the Streets appeared first on Futurism.
Allo, Google’s new messaging app, includes features such as emojis and stickers, gesture controls, the option to send full-bleed photos and doodle on them (much like Snapchat), an incognito mode for private messages, and Smart Reply.
The Smart Reply feature works closely with Google Assistant and makes the most of the app’s machine learning capability. If you type that you’re craving pizza, Smart Reply will automatically pull up delivery options from nearby restaurants. Users can also type “@google” in the chat window to talk to a bot the same way they’d talk to a search engine.
Allo, for Android and iOS, will be available later this summer.
The post Google’s New Allo App is Their AI Answer to Facebook Messenger appeared first on Futurism.
Physicists may soon join the ranks of the unemployed due to artificial intelligence (AI).
Australian physicists have created an AI that can run and even improve a complex physics experiment with little oversight. They’ve automated an experiment creating an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate.
Bose-Einstein condensates are some of the coldest objects in the universe, far colder than outer space: typically less than a billionth of a degree above absolute zero. They are extremely sensitive to external disturbances, which makes them ideal for very precise measurements, such as detecting tiny changes in the Earth’s magnetic field or gravity.
Producing a Bose-Einstein condensate won three physicists the Nobel Prize in 2001, yet the machine learned to run the experiment from scratch in under an hour. The same procedure would take a simple computer program longer than the age of the universe to run through all the combinations, said co-lead researcher Paul Wigley of the ANU Research School of Physics and Engineering.
The team cooled the gas to around 1 microkelvin, and then handed control of the three laser beams over to the artificial intelligence to cool the trapped gas down to a nanokelvin (a billionth of a kelvin).
Credit: Australian National University
The AI had to figure out how to apply the lasers and control other parameters to best cool the atoms, and over dozens of repetitions it kept finding more efficient ways to do so. The researchers never expected the methods the AI would cook up, such as ramping one laser’s power up and down and compensating with another.
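The closed-loop pattern here, propose settings, run the experiment, keep what cools better, can be sketched with a toy optimizer. The cost function below is invented (the real system measured actual atom temperatures, and the ANU team’s online machine learner was far more sophisticated than this simple hill climb):

```python
import random

# Toy sketch of the closed-loop optimization idea: the optimizer proposes
# laser settings, "runs the experiment" (a made-up cost function standing
# in for measured atom temperature), and keeps whatever works better.
def temperature(params):
    # hypothetical response surface: coldest near (0.3, 0.7, 0.5)
    target = (0.3, 0.7, 0.5)
    return sum((p - t) ** 2 for p, t in zip(params, target))

random.seed(42)
best = [random.random() for _ in range(3)]   # three laser settings in [0, 1]
best_temp = temperature(best)

for trial in range(200):                     # repeated experimental runs
    candidate = [min(1.0, max(0.0, p + random.gauss(0, 0.1))) for p in best]
    t = temperature(candidate)
    if t < best_temp:                        # keep settings that cool further
        best, best_temp = candidate, t

print(f"best simulated temperature: {best_temp:.4f}")
```

The point of the sketch is the loop structure, not the optimizer: each iteration is a physical experiment, so a learner that needs only dozens of runs beats brute-force search by an astronomical margin.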
The AI is tailor-made for the specific purpose of generating Bose-Einstein condensates, but similar methods could be rigged for other applications. And, in the meantime, it means the possibility of using this strange, ultra-cold form of matter for a broader range of pursuits, and doing so more cheaply than ever before possible.
“You could make a working device to measure gravity that you could take in the back of a car, and the artificial intelligence would recalibrate and fix itself no matter what,” said co-lead researcher Michael Hush.
The post Nobel-Winning Physics Experiment Recreated by AI…In One Hour appeared first on Futurism.
The Pre-Touch technology can detect your finger hovering over the screen and display menus and options before you select them.
The post Microsoft’s New Tech Knows What You Want To Do Before You Touch It appeared first on Futurism.
Anthony Levandowski, the robot-loving engineer who pioneered Google’s venture into self-driving vehicles, quit the company to work on start-up Otto, a company focused on eliminating transportation muda (the wastes that come with transporting materials within manufacturing facilities).
The company builds sleek, minimal-design autonomous vehicles that provide efficient solutions to cut down time and resources required in a production line.
This time, however, the company is taking their creations beyond factories and out onto the road: They want to turn ordinary commercial trucks—those really big, heavy-duty freight haulers on the highway—into self-driving giants.
To be clear, they are not making a new line of trucks: they are building a kit of sensor hardware plus an OS that can be retrofitted onto existing trucks to make them self-driving. This makes for a much easier transition and keeps in line with one of their core principles: frugality. “Profit is the engine that lets us achieve our goals,” their website says. “If we do more with less, we can do more.”
“If you need to replace all of your trucks to get the technology on it, the rate of penetration you’ll be able to have is pretty low. Trucks last ten years, a million miles,” Levandowski says.
As a test run, Otto has retrofitted its system into a Volvo VNL 780, and it would like to eventually work with Class 8 trucks.
AI Drivers Pending Legality
As of now, the kit does not aim to eliminate truck drivers completely; it still requires a human in the vehicle. Otto says the kit will provide good support, since humans can only drive for a limited amount of time and are more prone to error.
In addition, current laws do not consider AIs legal drivers…yet.
While Otto is on to something, it may take a while before their innovations become legally accepted and actually make it to our roads and highways.
How machines comprehend human language is called Natural Language Understanding (NLU), and revolutionary changes in this technology have given us the many virtual assistants we have today. However, NLU still has many obstacles to overcome due to the ambiguous nature of the countless languages spoken all over the world.
Parsey and SyntaxNet
Now, Google claims it is cutting through these difficulties, announcing the open-sourcing of SyntaxNet, a neural network framework developed with TensorFlow, together with Parsey McParseface, a pre-trained English parser.
Parsing, in linguistics, is the breaking down of sentences into their component parts to define what each part means. Experts assert that this is a first key component in NLU systems.
In this case, SyntaxNet is the framework for such an approach: it takes a sentence as input and tags each word with its function in that sentence. It is designed to be trained on whatever data you have, producing a model that understands that data linguistically.
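To make “tagging each word with its function” concrete, here is a hand-written toy dependency parse in the style of such a parser’s output. The sentence, the labels, and the helper function are all illustrative; none of this is produced by the actual software.

```python
# A minimal illustration of a dependency parse: each word points at its
# head word and carries a label naming its grammatical function.
sentence = "Alice saw Bob with a telescope"

# (word, 1-based index of head word, dependency label); head 0 = root
parse = [
    ("Alice",     2, "nsubj"),  # subject of "saw"
    ("saw",       0, "root"),
    ("Bob",       2, "dobj"),   # direct object of "saw"
    ("with",      2, "prep"),   # attached to the verb: Alice used the telescope
    ("a",         6, "det"),
    ("telescope", 4, "pobj"),
]

# The classic ambiguity: "with" could instead attach to "Bob" (head 3),
# meaning Bob carried the telescope. Choosing between such structures
# is exactly the problem the parser solves.
def children(parse, head_index):
    """Return the words whose head is the given 1-based index."""
    return [w for i, (w, h, _) in enumerate(parse, 1) if h == head_index]

print(children(parse, 2))  # ['Alice', 'Bob', 'with']
```

Each alternative attachment yields a different tree over the same words, which is why one sentence can have thousands of candidate structures.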
Parsey McParseface, on the other hand, is a ready-made model capable of accurately analyzing the linguistic structure of English input. The output of both resembles a dependency-based parse tree.
Cutting Through Ambiguity
A single complex sentence can have thousands of possible structures that vary in how we should understand it; this ambiguity is one of the major hurdles of NLU. Google points out that humans deal with these potential misinterpretations almost seamlessly, because we bring logic and experience to bear, effectively disregarding senseless syntactic structures.
Two dependency-based parses, one correct (left) and the other senseless (right). Credit: Google Research Blog
In its own study, Google fed randomly drawn English sentences to Parsey McParseface, which parsed them with over 94% accuracy.
Google says its data suggests the software is approaching human performance. With its characteristic ambition, the company intends to extend the framework across all languages.
The post Google Has a New AI That Understands English. And Its Name is ‘Parsey McParseface’ appeared first on Futurism.
Fast food chains have been scrambling to cash in on the technological revolution going on around them. From delivery drones to dabbling in VR, fast food companies are trying to brand themselves as tech-savvy establishments.
Which is why KFC has unveiled its first intelligent robot concept store called “Original+.” Located in Shanghai’s National Convention and Exhibition Centre, the store features “Du Mi,” an AI-controlled robot that helps customers order food and make payments.
The Baidu-launched robot marks the first commercial use of artificial intelligence in the fast food industry, according to Chinese news outlet Sohu. Du Mi was released by Baidu during its World Conference in 2015.
According to China Daily, the store also features upgraded automatic ordering machines and Music Charging Tables, where customers can place cell phones with wireless charging capabilities in a designated area. While the devices charge, their owners can relax with a list of songs customized by BaiduMP3 for the store.
The post Now You Can Get Your Kentucky Fried Chicken With a Side of Artificial Intelligence appeared first on Futurism.
Among the many startups that joined TechCrunch’s Disrupt NY 2016 this week, O-Robotix made itself well known, despite not winning the contest, thanks to its remarkable product, the SeaDrone.
Many drone developments in recent years have focused on airborne technologies, but O-Robotix is seizing an unnoticed share of the drone market: underwater applications.
Co-founder Eduardo Moreno said at the event that the SeaDrone is like an underwater quadcopter: small and versatile, incorporating the self-stabilizing technology many aerial drones have today, which is in fact one of its key features.
This technology keeps the drone level underwater, freeing the user from focusing on piloting so they can concentrate on what is actually being observed.
The company asserts that they made everything as “smart” as possible with the intention of making user-experience highly intuitive. The drone is controlled through a powerful tablet app capable of not just recording footage, but also creating an integrated logbook of data.
The post Meet the SeaDrone: The Affordable Underwater Robot appeared first on Futurism.
Researchers from MIT, the University of Sheffield, and the Tokyo Institute of Technology have unveiled a capsule that can unfold once swallowed. The revolutionary capsule is steered by external magnetic fields and can be used to help remove objects, patch up internal wounds, or deliver medicine.
The capsule is made of dried pig intestine (typically used as sausage casing) and a small magnet. When folded up, it can be easily ingested by a patient; once it reaches the stomach, it unfolds in the acidic juices and is guided to complete the task at hand.
Work-In-Progress
The design for the capsule is still a work in progress, but it does offer a lot of potential for future development and use.
“For applications inside the body, we need a small, controllable, untethered robot system,” said Daniela Rus, the director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-creator of the robot. “It’s really difficult to control and place a robot inside the body if the robot is attached to a tether.”
In the demonstration, however, which required the origami robot to remove a swallowed battery, the robot did so easily in a model of a pig’s stomach. The researchers filled the model with water and lemon juice to mimic the stomach’s acidity.
The next step is to add sensors to the robot so that it can control itself without the need for an external magnetic field.
The post Ingestible Origami Robot Unfolds When Ingested to Deliver Medicine or Patch Up Wounds appeared first on Futurism.
According to a source at Recode, Google is working on its very own version of the Amazon Echo. The hardware device would integrate Google’s search and voice assistant technology into a package resembling its OnHub wireless router.
Google OnHub
We don’t know the official name of the product yet. Internally, though, the project goes by “Chirp.”
While sources at Recode say the device is unlikely to launch next week at Google’s I/O developer conference, others are hopeful that we’ll get a sneak peek soon. If anything, Google is sure to highlight the newest developments in voice assistant tech used in its Android phones — beckoned by the words “Okay, Google” — along with virtual reality developments.
Despite conflicting reports, it seems that the “Chirp” should land at some point this year.
The post Google’s “Chirp” Could be the Next Big Voice-Controlled Personal Assistant appeared first on Futurism.
The use of machine learning has allowed us to solve many of our problems. It can allow us to effectively manage bandwidth, possibly predict solar flares, automate the rooting out of weeds, and so much more. The ability to learn and experience the world much as humans do allows our machines to be better at the tasks we give them.
Sometimes, they’re even better than humans are. US chemists have created a machine-learning algorithm that studies successful and failed experiments in order to beat humans at predicting ways to make crystals.
The study, which was reported in Nature, involved the creation of templated vanadium selenites: compounds of vanadium, selenium and oxygen, in which small organic molecules, such as amines, guide the arrangement of the elements.
To train the algorithm, the researchers fed it data on nearly 4,000 attempts to make the crystals under different conditions, such as temperature, concentration, reactant quantity, and acidity. This involved transcribing data on failed “dark reactions” from the chemists’ own lab notebooks into a format the machine recognizes and understands. From this data, the machine was tasked with picking out the principles that separated successful experiments from failures.
Testing
For their test, the researchers chose previously untried combinations of reactants, then tried to predict the reaction conditions best suited to making selenite materials. The algorithm suggested combinations that resulted in a crystalline product in 89% of around 500 cases. By comparison, the researchers, drawing on ten accumulated years of experience with the materials, succeeded only 78% of the time.
The chemists then converted the results into a decision tree: a series of makeshift rules that scientists can use for guidance in the lab, built from simple questions like “Is sodium present?” or “Is the pH greater or less than 3?”
Sample of a decision tree created by the team. Credit: Nature
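A decision tree of this kind is just nested yes/no questions. The sketch below mimics the shape of such a tree; the sodium question and the pH threshold come from the article, but the branch logic, the temperature rule, and all numbers are invented for illustration.

```python
# Hedged sketch of a human-readable decision tree distilled from a
# trained model. Only the question wording echoes the article; every
# branch outcome and threshold here is hypothetical.
def predict_crystal(reaction):
    """Return True if the (toy) rules predict a crystalline product."""
    if reaction["sodium_present"]:
        return reaction["ph"] > 3          # invented branch
    return reaction["temperature_c"] > 90  # invented branch

trials = [
    {"sodium_present": True,  "ph": 5.0, "temperature_c": 25},
    {"sodium_present": True,  "ph": 1.5, "temperature_c": 120},
    {"sodium_present": False, "ph": 7.0, "temperature_c": 150},
]
print([predict_crystal(t) for t in trials])  # [True, False, True]
```

The appeal of this representation is that a chemist can apply it at the bench without consulting the model at all.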
To facilitate further study and to get more data, the team has set up a website, called the Dark Reactions Project, to encourage others to share—in a machine-readable format—their own failed attempts to make new crystals.
As Alex Norquist, a materials-synthesis researcher who participated in the study, observes, “Failed reactions contain a vast amount of unreported and unextracted information. There are far more failures than successes, but only the successes generally get published.”
The post Machine Learning: AI That Runs on Human Failure Succeeds in Making Crystals appeared first on Futurism.
Robots have done, and can do, some of the most amazing tasks, ones we never thought we could trust to a machine. Robots now have a hand in healthcare, make deliveries on their own, and even milk cows.
But one thing they have never been really good at is using hands. We have amazing robots that look like humans and talk like humans, yet have the pincers of a crab.
A five-fingered robot developed at the University of Washington is capable of dexterous hand movements and can learn from its own experience without human intervention.
In a paper to be presented at the IEEE International Conference on Robotics and Automation, the team revealed their creation of an accurate simulation model that allows a computer to analyze movements in real-time.
They married this software to a five-fingered robotic hand with the combination of speed, force, and compliance required for dexterous manipulation. Their latest demonstration showcased this merger of hardware and software, allowing the bot to perform real-world tasks like spinning an elongated tube.
The making of the robot, of course, begins with hardware. The five-fingered robot, built at a cost of roughly $300,000, incorporates a Shadow Dexterous Hand skeleton and a custom pneumatic system, which allows it to move faster than a human hand.
This special set-up means that it is too expensive for routine commercial or industrial use, but the increased dexterity allows researchers to push core technologies and test innovative control strategies.
The other important aspect of the robot is the software package. This was developed by first creating algorithms that enabled the robot to perform complex hand tasks in simulation. Those algorithms were then transferred to the robot, which attempts the tasks in real life.
As the robot performs the tasks, and fails at them, the system collects data from various sensors and motion-capture cameras and employs machine learning algorithms to continually refine its models and make them more realistic.
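The refine-from-failure loop can be sketched as an online model update: act with the current model, compare the sensor reading against the model’s prediction, and nudge the model toward reality. Everything below (the scalar “gain” model, the learning rate, the noise level) is a hypothetical stand-in for the UW team’s far richer simulation models.

```python
import random

# Toy sketch of the sim-to-real refinement loop: the robot acts using
# its current model, observes the real outcome through its sensors,
# and updates the model to shrink the discrepancy (an LMS-style rule).
random.seed(0)
TRUE_GAIN = 2.5          # "real world": unknown actuator response
model_gain = 1.0         # robot's initial (wrong) internal model
lr = 0.05                # learning rate

for attempt in range(300):
    command = random.uniform(-1, 1)
    predicted = model_gain * command                        # model's expectation
    observed = TRUE_GAIN * command + random.gauss(0, 0.05)  # noisy sensor reading
    error = observed - predicted
    model_gain += lr * error * command                      # online update

print(f"learned gain: {model_gain:.2f}")  # should approach TRUE_GAIN
```

Scaled up from one parameter to full hand dynamics, this is the pattern the article describes: every failed grasp is training data for a more realistic model.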
So goodbye pincer hands. Looking forward to the first handshake with a robot.
The post This Five-Fingered Robot Hand Is Nimbler Than Your Own appeared first on Futurism.