Futurism - Robot Intelligence
Too many people have died in the fight against ISIS. Families, friends, loved ones are left alone and suffering. And the death toll just keeps rising.
One invention is looking to put at least some soldiers out of harm’s way on the battlefield.
The armed robotic system on display at a security and defense exhibition in Baghdad in early 2016. Image Source: @nomorestans/Twitter
Two unnamed Iraqi brothers have built a machine-gun-wielding, remote-controlled ground vehicle intended for combat against ISIS forces. Alrobot (Arabic for “robot”) is a car-sized, tank-like robot equipped with an automatic machine gun and a rocket launcher. Guided by four cameras, the vehicle can be controlled by someone on a laptop from a kilometer away. After watching a video of the machine posted by Baghdad Post, Eliot Zweig of the Middle East Media Research Institute tells Defense One that the robot is going to be used to help retake the city of Mosul from ISIS.
Since information about this robot, and its creators, is scarce (possibly for their own safety), we don’t know whether this is a one-off or whether there are more to come in the future.
The post Say ‘Hello’ to the Remote Controlled Gun-Wielding, Rocket Launching Vehicle appeared first on Futurism.
The post Meet SwagBot: The Robot Cowboy That Can Herd Cattle appeared first on Futurism.
The post This Robot Flash Mob Broke A Guinness World Record appeared first on Futurism.
Its reaction wheels rotate and brake instantly to help the cube jump.
The post This Gravity-Defying Cube Can Jump, Balance, And Walk appeared first on Futurism.
Three months after announcing that it was testing its self-driving cars in Pittsburgh, Uber says its autonomous vehicles will be out on the streets as early as this month.
In a plan to build an autonomous vehicle empire, Uber intends to launch more than a million self-driving cars to replace human drivers, starting with around 100 Volvo XC90s modified for autonomy that will begin taking passengers. An engineer will be in the driver’s seat to start, along with a co-pilot to take down observations. A “liquid-cooled” computer in the trunk will also record trip and map data. Passengers who take the experimental self-driving cars will get their ride for free.
Uber’s self-driving car. Image source: Uber
Uber plans to install self-driving kits into existing vehicles rather than building autonomous cars from the ground up. Earlier this month, Uber acquired Otto, a company that works on retrofitting heavy-duty freight haulers on the highway into self-driving trucks.
Otto’s LIDAR (light detection and ranging) sensor technology, which detects infrared emissions to monitor speed, is geared to be adapted for use in Uber’s autonomous vehicles.
For those of us who live in crowded cities, rush hour traffic is a daily struggle we aren’t likely to get used to. The past few years have seen an ever-lengthening travel time in different cities all over the world.
Nobody is immune, not even the most innovative minds of the world. Aircraft manufacturing company Airbus notes the irony that techies in Silicon Valley come up with all sorts of innovation every day, yet none of them has solved one of their own biggest problems: traffic congestion. “Silicon Valley may pride itself on speed, but during rush hour, everything around the IT Mecca grinds to a halt,” they wrote on their website. “The situation is even worse in cities such as Mumbai, Manila, or Tokyo,” they added. In the Philippines, an estimate says PHP 2.5 billion ($57 million) of potential income is lost to traffic every day, a figure projected to rise to PHP 6 billion daily by 2030. In the US, this loss is estimated at $160 billion a year.
They add that the problem is only going to get worse, because more and more people will be living in cities. An estimate by the UN projects a huge increase in urban populations within the next few decades: “Today, 54 per cent of the world’s population lives in urban areas, a proportion that is expected to increase to 66 per cent by 2050.”
Urban population growth. Image Credit: Luminocity3d.
In response to this, Airbus, among a few others, is looking to the sky for a solution: flying cars! The company is planning to have a prototype for an autonomous flying car up in the air by next year. “To address this rising concern, Airbus Group is harnessing its experience to make the dream of all commuters and travelers come true one day: to fly over traffic jams at the push of a button.”
Safety First
While this would be an exciting way to commute, there are, of course, some particulars that need to be set straight before they’re allowed to fly.
First of all, we don’t want these things spinning out of control and wreaking havoc all over our cities. Yes, we want to solve heavy traffic…but with methods that don’t involve killing off the population. The biggest challenge for Airbus is engineering a smart obstacle avoidance system, such as those in driverless cars.
Second, as with most new technologies, there’s a waiting period until legislation catches up and sets the ground (in this case, air) rules.
Airbus has already managed to get approval to launch its drone parcel delivery service in Singapore next year. The company is also building a self-flying vehicle platform called Vahana, which could carry passengers and cargo.
Their long-term plan is to let passengers hail a self-driving flying vehicle through a phone app, much like Uber. Shared rides through this autonomous ferry service, which they call CityAirbus, would cut down costs.
The post Look Up! 2017 is Going to be the Year of the Autonomous Flying Taxi appeared first on Futurism.
The Bionic Bird soars up to 100m (330ft) at speeds of 20kph (12mph).
In David Karlak’s short film “Rise,” robots have developed advanced emotional sophistication, and their human creators don’t like it. Here is one dark conclusion—a vision of what is possible.
It’s equipped with beta software that analyzes writing form, slant, and spacing.
The post This Pen-Holding Robot Can Write Letters In Your Handwriting appeared first on Futurism.
Elon Musk’s artificial intelligence (AI) company OpenAI just received a package that took $2 billion to develop: NVIDIA CEO Jen-Hsun Huang just delivered the first DGX-1 supercomputer to the non-profit organization, which is dedicated to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
Jen-Hsun Huang with Elon Musk, and the DGX-1. NVIDIA.
The “AI supercomputer in a box” is packed with 170 teraflops of computing power—that’s equivalent to 250 conventional servers. NVIDIA says it’s a very fitting match: “The world’s leading non-profit artificial intelligence research team needs the world’s fastest AI system.”
“I thought it was incredibly appropriate that the world’s first supercomputer dedicated to artificial intelligence would go to the laboratory that was dedicated to open artificial intelligence,” Huang added.
Reddit-training
The supercomputer will tackle the most difficult challenges facing the artificial intelligence industry…by reading through Reddit forums. And apparently, Reddit’s size was not a hindrance. In fact, the site’s size was the main reason why the online community was specifically chosen as DGX-1’s training ground.
“Deep learning is a very special class of models because as you scale up, they always work better,” says OpenAI researcher Andrej Karpathy.
The nearly two billion Reddit comments will be processed by DGX-1 in months instead of years, as the $129,000 desktop-sized box contains eight NVIDIA Tesla P100 GPUs (graphic processing units), 7 terabytes of SSD storage, and two Xeon processors, apart from the aforementioned 170 teraflops of performance.
DGX-1 will take on Reddit to learn faster and to chat more accurately. “You can take a large amount of data that would help people talk to each other on the internet, and you can train, basically, a chatbot, but you can do it in a way that the computer learns how language works and how people interact,” Karpathy said.
The supercomputer is also equipped to make things easier for the developers at OpenAI. “We won’t need to write any new code, we’ll take our existing code and we’ll just increase the size of the model,” says OpenAI scientist Ilya Sutskever. “And we’ll get much better results than we have right now.”
The post Elon Musk’s OpenAI is Using Reddit to Teach An Artificial Intelligence How to Speak appeared first on Futurism.
This robot is covered in artificial multifilament muscles.
The post We Are One Step Closer To Making A Terminator-Like Robot appeared first on Futurism.
Most of us are fascinated by creativity. New ideas in science and art are often hugely exciting – and, paradoxically, sometimes seemingly “obvious” once they’ve arrived. But how can that be? Many people, perhaps most of us, think there’s no hope of an answer. Creativity is deeply mysterious, indeed almost magical. Any suggestion that there might be a scientific theory of creativity strikes such people as absurd.
And as for computer models of creativity, those are felt to be utterly impossible. But they aren’t.
Understanding Creativity
Scientific psychology has identified three different ways in which new, surprising, and valuable ideas – that is, creative ideas – can arise in people’s minds. These involve combinational, exploratory, and transformational creativity. The information processes involved can be understood in terms of concepts drawn from Artificial Intelligence (AI). They can even be modelled by computers using AI techniques.
The first type of creativity involves unfamiliar combinations of familiar ideas. This is widely recognised. Indeed, it’s usually the only type that’s mentioned, even by people professionally committed to the study of creativity. Examples include puns, poetic imagery, and scientific analogies (the heart as a pump, the atom as a solar system).
The second, exploratory creativity, arises within a culturally accepted style of thinking. This may involve cooking, chemistry, or choreography, and, of course, it may concern either art or science. The notion that creativity is confined to the arts or to the “creative industries” is mistaken.
In exploratory creativity, the rules defining the style are followed, explored, and sometimes also tested in order to generate new structures that fit within that style. An example might be another impressionist painting, or another molecule within a particular area of chemistry. So rules aren’t the antithesis of creativity, as is widely believed. On the contrary, stylistic constraints make exploratory creativity possible.
The third and final form is transformational creativity. This grows out of exploratory creativity, when one or more of the previously accepted rules is altered in some way. It often happens when testing of the previous style shows that it cannot generate certain results which the person concerned wanted to achieve. The alteration makes certain structures possible which were impossible before.
For instance, the “single viewpoint” convention of classical portraiture implies that a face shown in profile must have only one eye. But cubism dropped that convention. Features visible from any viewpoint could be represented simultaneously – hence works such as Picasso’s The Weeping Woman (1937), which depicts its subject with two eyes on the same side of her face.
As that example reminds us, transformational creativity often produces results that aren’t immediately valued, except perhaps by a handful of people. That’s understandable, because one or more of the previously accepted rules has been broken.
Making A Creative Computer
All three types of creativity have been modelled by computers (and all have contributed to computer art). That is not to say that the computers are “really” creative. But it does demonstrate that they at least appear to be creative. Their performance would be regarded as creative if it were done by a person.
You might think that, with respect to combinational creativity, this isn’t surprising. After all, nothing could be simpler than to provide a computer with words, images, musical notes etc and get it to combine examples at random. Certainly, many of the results would be novel, and surprising.
Well, yes. But would they be valuable? Most random word combinations, for instance, would be senseless. A practiced poet might be able to use them in a way that showed their relevance – to each other and/or to some other ideas that we find interesting. But the computer itself could not. Unless the programmer had provided clear criteria for judging relevance, the random word combinations couldn’t be pruned to keep only the valuable ones. There are no such criteria, at present – and I’m not holding my breath!
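A tiny sketch of what pruning random combinations by a relevance criterion could look like — the word list and the shared-tag “relevance” test below are invented for illustration, not taken from any actual creativity model:

```python
import itertools
import random

# Tiny illustrative lexicon; the words and the "relevance" rule are
# invented for this sketch, not drawn from any real system.
NOUNS = ["heart", "pump", "atom", "river", "clock", "memory"]

# A crude stand-in for a relevance criterion: two words count as
# "relevant" to each other here only if they share a semantic tag.
TAGS = {
    "heart": {"body", "machine-like"},
    "pump": {"machine-like"},
    "atom": {"physics"},
    "river": {"nature", "flow"},
    "clock": {"machine-like", "time"},
    "memory": {"mind", "time"},
}

def random_combinations(n):
    """Generate n unfiltered random word pairs: novel, but mostly senseless."""
    return [tuple(random.sample(NOUNS, 2)) for _ in range(n)]

def pruned_combinations():
    """Keep only pairs that pass the (hand-coded) relevance test."""
    return [
        (a, b)
        for a, b in itertools.combinations(NOUNS, 2)
        if TAGS[a] & TAGS[b]  # shared tag = "relevant" in this toy model
    ]

print(pruned_combinations())
# ('heart', 'pump') survives because both carry the "machine-like" tag
```

The hard part, of course, is exactly what this sketch hand-waves away: a real relevance criterion would have to be supplied by the programmer, and no general one exists.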
Those few AI models of creativity that do rely on novel combinations generally combine random choice with specific criteria chosen for the task at hand. For example, a joke-generating programme called JAPE churns out riddles like these:
Q: What do you call a depressed train?
A: A low-comotive
Q: What do you get if you combine a sheep with a kangaroo?
A: A woolly jumper.
JAPE is really doing exploratory creativity. It has structured templates for eight types of joke, and explores the possibilities with fairly acceptable results.
Exploratory creativity in general is easier to model in computers than combinational creativity is. But that’s not to say it’s easy: the style of thinking concerned has to be expressed with the supreme clarity required by a computer programme. In JAPE, the style is the joke template. In other cases, it’s a way of writing music (from a Bach fugue to a Scott Joplin rag), or of drawing Palladian villas or human figures. All these, and more, have been achieved already.
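The template-driven exploration a program like JAPE performs can be sketched roughly as follows; the schema and word list here are made up for illustration and are not JAPE’s actual templates:

```python
# Hypothetical riddle schema in the spirit of JAPE's punning templates.
# The template and the (adjective, noun, pun) entries are invented
# for this sketch.
PUN_PAIRS = [
    ("depressed", "train", "a low-comotive"),
    ("cold", "dog", "a chili dog"),
]

def explore(pairs):
    """Walk the space the template defines and emit every joke it allows."""
    for adjective, noun, answer in pairs:
        yield (f"What do you call a {adjective} {noun}?", answer.capitalize())

for question, answer in explore(PUN_PAIRS):
    print(question, "->", answer)
```

The point of the sketch is that the “style” lives entirely in the template: the program never steps outside it, which is exactly what makes this exploratory rather than transformational creativity.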
Many people assume that transformational creativity is the most difficult of all for computer modelling – perhaps even impossible. After all, a computer can do only what its programme tells it to do. So how can there be any transformation?
There can be, if the programme can alter its own rules. Such programmes exist, and are used not only in some computer art but also in designing engines. They are evolutionary programmes, employing “genetic algorithms” inspired by biological mutation to make random changes in their rules.
Some evolutionary programmes can also prune the results, selecting those which are closest to what the task requires, and using them to breed the next generation. That’s true of engine-design systems, for instance. Often, however, the selection is done by a human, because the programme can’t define suitable selection criteria to do the job automatically. In short, transformation isn’t the problem. The key problem again is relevance, or value.
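A minimal sketch of the mutate-and-select loop such evolutionary programmes run — the numeric “rule”, the fitness function, and all the parameters below are stand-ins invented for this illustration, not any real engine-design system:

```python
import random

TARGET = 1.0  # stand-in for "what the task requires"

def fitness(rule):
    """Closeness to the target; higher is better. Invented for this sketch."""
    return -abs(rule - TARGET)

def mutate(rule, rate=0.1):
    """Randomly perturb a rule — the 'genetic' step the article describes."""
    return rule + random.gauss(0, rate)

def evolve(generations=200, population_size=20):
    # Start from arbitrary rules, then repeatedly mutate and select.
    population = [random.uniform(-5, 5) for _ in range(population_size)]
    for _ in range(generations):
        # Selection: keep the half of the population closest to the target...
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        # ...and breed the next generation by mutating the survivors.
        population = survivors + [mutate(s) for s in survivors]
    return max(population, key=fitness)

random.seed(0)
best = evolve()
print(round(best, 2))  # converges near TARGET
```

Here the selection step is automatic only because the fitness function is trivial to write down; as the article notes, in many creative domains a human has to do the selecting instead.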
So creativity is, after all, scientifically intelligible, as I’ve argued in my books The Creative Mind: Myths and Mechanisms (2004) and Creativity and Art: Three Roads to Surprise (2010). But it’s not scientifically predictable. Human minds are far too rich, far too subtle, and far too idiosyncratic for that.
By Margaret Boden. Professor of cognitive science at the University of Sussex, author of Mind As Machine, awarded an OBE in 2001
The San Francisco-based startup Zipline is set to begin testing its drone delivery system, intended to deliver medical supplies to remote areas of the United States.
While it’s still pending FAA approval, flights for Zipline’s drones are poised to demonstrate just how useful unmanned aircraft technology will be for the dissemination of critical care supplies. The initiative is set to reach areas such as Smith Island, Maryland and the Pyramid Lake Tribal Health Clinic in Nevada, where it will deliver items such as blood, medicine, and other medical products.
Major Investments
Backed by $19 million in venture capital from investors such as Google Ventures, Sequoia Capital, Microsoft co-founder Paul Allen, and Yahoo co-founder Jerry Yang, Zipline has designed its own drone, along with a launch and landing system, for the project.
While testing will be conducted in the US, the company has focused on Africa as its primary market since its inception; in fact, the company has pioneered the use of unmanned aerial vehicles for logistics services focused on health. Nevertheless, Zipline believes that this kind of technology has a lot of applications in the US and the rest of the world. And once Zipline begins testing this White House-approved initiative, it has the potential to shift the national framework for the commercial drone sector.
The post Drone-Delivered Medical Supplies Set to Bring Vaccines to Remote Areas appeared first on Futurism.
These wall-climbing robots weave structures from carbon fiber.
The post These Wall-Climbing Robots Can Build Entire Structures appeared first on Futurism.
The SAW robot produces a pure wave motion using a single motor.
The post Meet The 3D Printed Robot That Can Crawl, Climb, And Swim appeared first on Futurism.
Imagine a robot that you’d want to use in combat. Like the U.S. Army, you probably first pictured a bulky robot that looks like a mini-tank. Sporting heavy machine guns and other high-caliber weaponry, these robots often become fire support assets to soldiers.
But smaller might actually be a better option for future robot allies. In trial runs of robot candidates for field deployment, soldiers preferred tiny, pocket-sized recon bots over tracked ground robots.
The trials were part of the Pacific Manned-Unmanned Initiative (PACMAN I). Soldiers of the 25th Infantry Division controlled air and land drones to determine which ones were ready to be part of their formations.
Pocket Predator
One of the drones the group tested was the PD-100 Black Hornet, a mini drone sporting a camera for short reconnaissance missions. Silent enough not to be heard when in the air, the palm-sized robot has steerable cameras, weighs just 18.25 grams (less than an ounce), and can be set up in three to five minutes.
U.S. Army
It has a range of up to 2.4 km (1.5 mi) and a flight time of 25 minutes. It comes with a backpack charger, allowing soldiers to charge one while another is flying.
“That was a system that we could actually take right now…on the battlefield,” Staff Sergeant James Roe tells Breaking Defense. “Some of these other systems, as with any electronics and robotics, there are some things that have to be worked out.”
One of the systems that needed tweaking was MUTT, the Multipurpose Unmanned Tactical Transport. An unmanned ground vehicle, it is tracked and sports a .50 caliber machine gun. While it was good at carrying heavy equipment and providing fire support, soldiers did take issue with its speed and lack of maneuverability in the environs of the Pacific.
The post These Tiny, Pocket-Sized Drones May be The Combat Robots of The Future appeared first on Futurism.
French designers Pierre Emm and Johan da Silveira worked with Autodesk engineer David Thomasson to equip an industrial robot with a tattoo machine — and to test it on Emm.
This project obviously entails careful examination of safety measures.
Unlike a human tattoo artist, a robot cannot adjust to subtle movements when inking human flesh with a needle. So it’s necessary to give the robot precise positioning information. This is done by taking a 3D scan of the human area to be tattooed and creating a design through software.
“To be tattooed by a robot represents for us the accomplishment in the development of our process which we have worked towards,” Emm tells The Creators Project. “Behind the machine is a lot of human involvement. The tattoo also represents the connection between the tool and the person piloting the machine. We work very hard on the question of hygiene and security to bring the process to the highest level possible.”
The post Watch As the First Human is Tattooed By An Industrial Robot appeared first on Futurism.
Who wouldn’t want a robot to do all the cleaning?
Maya Cakmak, an assistant professor of computer science and engineering at the University of Washington, points out that getting a robot to clean would require much more than simply getting it to hold a tool to some surface.
She says, “Cleaning is different from other tasks we’ve thought about in robotics, which typically involved manipulating objects, or moving them place to place…There’s the angle, how much you’re pushing and pressure you’re applying, how fast you move it, how much you move it, and even the orientation [of the tool] relative to the dirt.”
And we need not stop there. There’s the kind of mess (whether it’s liquid or solid), the type of surface (whether it’s rough or smooth, glass or cement), and many others. There are just so many variables that programming software to handle all of them is a remarkably daunting robotics challenge. But as Cakmak might have said to herself: Challenge Accepted.
Machine Learning
To enable her robots to perform such tasks, she uses a technique called “Programming by Demonstration.” Using this, the machines learn by imitating a researcher performing several cleaning techniques via the robot’s vision system. Using a variety of cleaning attachments and colored aquarium crystals as the dirt in the tests, she and her team want to get the robot to generalize the cleaning motion from the human demonstration, and also correctly identify the “state of dirt” before and after the cleaning action.
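As an illustration only — nothing below reflects Cakmak’s actual system, and the parameters and averaging scheme are invented — generalising a motion from a few demonstrations might, in its simplest form, reduce to pooling the recorded parameters:

```python
import statistics

# Hypothetical recordings of (pressure in N, tool angle in degrees,
# speed in cm/s) captured while a human guides the robot through
# a wiping motion. All numbers are made up for this sketch.
demonstrations = [
    (4.0, 30.0, 5.0),
    (4.4, 28.0, 5.5),
    (3.8, 32.0, 4.8),
]

def generalise(demos):
    """Collapse several demonstrations into one parameterised motion."""
    pressures, angles, speeds = zip(*demos)
    return {
        "pressure": statistics.mean(pressures),
        "angle": statistics.mean(angles),
        "speed": statistics.mean(speeds),
    }

motion = generalise(demonstrations)
print(motion)
```

Real programming-by-demonstration systems do far more than average — they must also decide which variations between demonstrations matter — but the sketch shows the basic shape of turning examples into a reusable motion.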
We’ve a long way to go before we start seeing any Rosies zipping around the house. Not only are we still in the phase of manually teaching the bot how to perform the chores, but we also need to account for the bot’s working environment.
Cakmak suggests that household robots can’t become truly autonomous until we redesign our houses to make them more machine-friendly. For instance, long hallways might require markings that a robot can read for geolocation purposes.
Still, with the huge potential to enhance quality of life and increase independence for people living with disabilities, researchers are taking on the enormous task of surmounting this challenge.
So for now, don’t throw away your brooms just yet. We’re going to have to wait a bit more.
The post Evidence Suggests We’ve Got A Ways to Go Before We Can Make The Robot of Our Dreams appeared first on Futurism.
Tesla has been hitting some rough patches, with widely publicized crashes that were allegedly caused by its Autopilot feature. It seems those patches are about to get rougher, as another crash has been reported, this time in China.
Tesla has announced that one of its cars equipped with the Autopilot feature has crashed into a parked car on the side of the road.
The Model S, which is owned by programmer Luo Zhen, was driving in Beijing with Autopilot turned on when it struck a car parked half off the road. The collision sheared off a side mirror and scraped both cars, but caused no injuries.
Luo was able to tape the incident with his dashcam and post it on Weibo.
But in a response to Reuters, Tesla officials blame the driver. “The driver of the Tesla, whose hands were not detected on the steering wheel, did not steer to avoid the parked car and instead scraped against its side.”
Tesla manuals and instructions dictate that drivers must keep hands on the steering wheel at all times. So the argument is: The driver wasn’t the one driving…so it’s the driver’s fault.
The post Tesla Autonomous Car Crashes Again, This Time In China appeared first on Futurism.