Futurism - Robot Intelligence
As artificial intelligence improves, machines will soon be equipped with intellectual and practical capabilities that surpass the smartest humans. But not only will machines be more capable than people, they will also be able to make themselves better. That is, these machines will understand their own design and how to improve it – or they could create entirely new machines that are even more capable.
The human creators of AIs must be able to trust these machines to remain safe and beneficial even as they self-improve and adapt to the real world.

Recursive Self-Improvement
This idea of an autonomous agent making increasingly better modifications to its own code is called recursive self-improvement. Through recursive self-improvement, a machine can adapt to new circumstances and learn how to deal with new situations.
To a certain extent, the human brain does this as well. As a person develops and repeats new habits, the connections in their brain can change. The connections grow stronger and more effective over time, making the new, desired action easier to perform (e.g., changing one’s diet or learning a new language). In machines, though, this ability to self-improve is much more drastic.
An AI agent can process information much faster than a human, and if it does not properly understand how its actions impact people, then its self-modifications could quickly fall out of line with human values.
For Bas Steunebrink, a researcher at the Swiss AI lab IDSIA, solving this problem is a crucial step toward achieving safe and beneficial AI.

Building AI in a Complex World
Because the world is so complex, many researchers begin AI projects by developing AI in carefully controlled environments. Then they create mathematical proofs that can assure them that the AI will achieve success in this specified space.
But Steunebrink worries that this approach puts too much responsibility on the designers and too much faith in the proof, especially when dealing with machines that can learn through recursive self-improvement. He explains, “We cannot accurately describe the environment in all its complexity; we cannot foresee what environments the agent will find itself in in the future; and an agent will not have enough resources (energy, time, inputs) to do the optimal thing.”
If the machine encounters an unforeseen circumstance, then that proof the designer relied on in the controlled environment may not apply. Says Steunebrink, “We have no assurance about the safe behavior of the [AI].”

Experience-based Artificial Intelligence
Instead, Steunebrink uses an approach called EXPAI (experience-based artificial intelligence). EXPAI are “self-improving systems that make tentative, additive, reversible, very fine-grained modifications, without prior self-reasoning; instead, self-modifications are tested over time against experiential evidences and slowly phased in when vindicated, or dismissed when falsified.”
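That description can be made concrete with a toy sketch. The Python loop below is purely illustrative (it is not Steunebrink's implementation, and the single "weight" parameter and reward function are invented for the example): the agent makes small, reversible modifications to itself and keeps each one only when experience vindicates it.

```python
import random

def evaluate(params):
    """Score the agent against simulated experience. Here the environment
    secretly rewards a parameter value near 0.7; the agent never sees this."""
    target = 0.7
    return -abs(params["weight"] - target)

def expai_loop(steps=2000):
    """Toy EXPAI-style loop: tentative, fine-grained, reversible
    self-modifications, phased in only when experience vindicates them."""
    params = {"weight": 0.0}
    score = evaluate(params)
    for _ in range(steps):
        candidate = dict(params)                            # reversible: copy first
        candidate["weight"] += random.uniform(-0.05, 0.05)  # small tweak
        new_score = evaluate(candidate)
        if new_score > score:                               # vindicated: phase in
            params, score = candidate, new_score
        # otherwise the modification is dismissed (the copy is discarded)
    return params

final = expai_loop()
print(round(final["weight"], 2))  # typically settles near the rewarded 0.7
```

The point of the sketch is the acceptance test: no modification is trusted on the strength of prior self-reasoning alone; each must earn its place against evidence.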
Instead of trusting only a mathematical proof, researchers can ensure that the AI develops safe and benevolent behaviors by teaching and testing the machine in complex, unforeseen environments that challenge its function and goals.
With EXPAI, AI machines will learn from interactive experience, and therefore monitoring their growth period is crucial. As Steunebrink posits, the focus shifts from asking, “What is the behavior of an agent that is very intelligent and capable of self-modification, and how do we control it?” to asking, “How do we grow an agent from baby beginnings such that it gains both robust understanding and proper values?”
Consider how children grow and learn to navigate the world independently. If provided with a stable and healthy childhood, children learn to adopt values and understand their relation to the external world through trial and error, and by examples. Childhood is a time of growth and learning, of making mistakes, of building on success – all to help prepare the child to grow into a competent adult who can navigate unforeseen circumstances.
Steunebrink believes that researchers can ensure safe AI through a similar, gradual process of experience-based learning. In an architectural blueprint developed by Steunebrink and his colleagues, the AI is constructed “starting from only a small amount of designer-specific code – a seed.” Like a child, the beginnings of the machine will be less competent and less intelligent, but it will self-improve over time, as it learns from teachers and real-world experience.
As Steunebrink’s approach focuses on the growth period of an autonomous agent, the teachers, not the programmers, are most responsible for creating a robust and benevolent AI. Meanwhile, the developmental stage gives researchers time to observe and correct an AI’s behavior in a controlled setting where the stakes are still low.

The Future of EXPAI
Steunebrink and his colleagues are currently creating what he describes as a “pedagogy to determine what kind of things to teach to agents and in what order, how to test what the agents understand from being taught, and, depending on the results of such tests, decide whether we can proceed to the next steps of teaching or whether we should reteach the agent or go back to the drawing board.”
A major issue Steunebrink faces is that his method of experience-based learning diverges from the most popular methods for improving AI. Instead of doing the intellectual work of crafting a proof-backed optimal learning algorithm on a computer, EXPAI requires extensive in-person work with the machine to teach it like a child.
Creating safe artificial intelligence might prove to be more a process of teaching and growth than a function of creating the perfect mathematical proof. While such a shift in responsibility may be more time-consuming, it could also help establish a far more comprehensive understanding of an AI before it is released into the real world.
Steunebrink explains, “A lot of work remains to move beyond the agent implementation level, towards developing the teaching and testing methodologies that enable us to grow an agent’s understanding of ethical values, and to ensure that the agent is compelled to protect and adhere to them.”
The process is daunting, he admits, “but it is not as daunting as the consequences of getting AI safety wrong.”
If you would like to learn more about Bas Steunebrink’s research, you can read about his project here, or visit http://people.idsia.ch/~steunebrink/. He is also the co-founder of NNAISENSE, which you can learn about at https://nnaisense.com/.
The post Avoiding Ex Machina: How We Can Ensure Our AI Are Safe appeared first on Futurism.
Over the past few decades, smart machines and robots have taken on numerous manual labor jobs, and developments are showing no signs of stopping. Where does this leave the future of the work force? Surely only blue collar jobs are at risk, right?
In a new study, father-and-son information technology researchers Richard and Daniel Susskind sought to debunk the standing belief that some human experts—like doctors, lawyers, and accountants—cannot be replaced by robots equipped with artificial intelligence (AI). That belief rests on the claim that there are just some things too tricky for robots, like subjective judgment, creativity, and empathy.
The researchers asserted, however, that AI does have a role in these positions, considering there’s already a big dependence on tech-based services. They noted that monthly hits on the WebMD network (a collection of health sites) outnumber visits to all doctors in the US. Trade sites even have algorithms that can settle legal disputes: eBay has used its “online dispute resolution” system to resolve 60 million disagreements without lawyer consultations, a number three times the annual lawsuits filed in the US. Just this year, the AI lawyer “Ross” was employed by a firm for its bankruptcy practice. All these examples may be indicative of the shift in professional services.

Better Than All of Us?
The authors deem it a big fallacy to claim that AI cannot replace human roles because machines cannot be as creative or empathetic as sentient humans. According to the researchers:
“The error here is not recognizing that human professionals are already being outgunned by a combination of brute processing power, big data, and remarkable algorithms. These systems do not replicate human reasoning and thinking. When systems beat the best humans at difficult games, when they predict the likely decisions of courts more accurately than lawyers, or when the probable outcomes of epidemics can be better gauged on the strength of past medical data than on medical science, we are witnessing the work of high-performing, unthinking machines.”
It’s true that most of us would turn first to the internet to look for a diagnosis and treatment when we’re a bit ill—the doctor could come later. Could our collective bot-trusting character, coupled with ever-speeding technological advancement, lead to a future work force completely run by AI? The Susskinds believe we’re on our way.
The post Researchers: AI Could Take Over Much More Than Blue Collar Jobs appeared first on Futurism.
Speaking at the launch of the Leverhulme Centre for the Future of Intelligence (CFI) in Cambridge, science icon Stephen Hawking warned listeners about the future of artificial intelligence (AI) and humanity.
“Success in creating AI could be the biggest event in the history of our civilization,” Hawking acknowledged, noting the unprecedented and rapid development of AI technology in recent years, from self-driving cars to a computer playing (and defeating humans) in a game of Go. “But it could also be the last,” he warned.
This isn’t the irrational ramblings of a technophobe. Quite the contrary, in fact. Hawking himself acknowledges the value of AI and what it could contribute to humanity’s future, saying he believes artificial intelligence and this century’s technological revolution will parallel the previous century’s industrial one. “The potential benefits of creating intelligence are huge. We cannot predict what we might achieve, when our own minds are amplified by AI,” said Hawking.
But Hawking is also hyperaware of the potential dangers associated with AI. “Alongside the benefits, AI will also bring dangers, like powerful autonomous weapons, or new ways for the few to oppress the many,” Hawking added during his speech. He also hinted at the singularity being a possibility, when AI develops a will of its own that could conflict with the will of humanity.

Setting the right course
According to AI pioneer Maggie Boden, who sits on the center’s advisory board, that’s where CFI comes into play. Speaking at the launch event, she said, “CFI aims to pre-empt these dangers, by guiding AI development in human-friendly ways.” The $12.2-million interdisciplinary think-tank will work hand-in-hand with policy-makers and the tech industry to investigate topics associated with the growth of AI in today’s world — from regulating autonomous weapons to AI’s implications in democracy.
Cambridge, Oxford, Berkeley, and Imperial College London are behind the initiative, so some of the best minds on the planet will be working together to shape the future of AI. “The research done by this center is crucial to the future of our civilization and of our species,” a hopeful Hawking concluded during his speech.
AI is like any life-changing technology in that it’s not the development itself that can be good or bad. The people who develop the tech are responsible for determining how it’s used, and despite his ominous warnings, Hawking’s work with CFI suggests he believes that AI technology isn’t to be feared given the right research and preparation.
The post Hawking: Creating AI Could Be the Biggest Event in the History of Our Civilization appeared first on Futurism.
Dubai will soon find robot cops patrolling its streets. A prototype of the new android police officer was seen patrolling the halls of the Gulf Information Technology Exhibition (GITEX), Dubai’s annual computer and electronic trade show.
The prototype can scan faces and is equipped with a touchscreen that can be used to report a crime and process fines for traffic violations. Best of all, it can salute and shake hands — a feat other existing police robots can’t really match since they lack the hands to do so.
The robot cop is still in bootcamp, so to speak, with Dubai Police looking to add AI technology that will allow it to “spot people from 10 meters or 20 meters away, approach them and greet them,” explained Major Adnan Ali, Technical Innovations Department Head, last June. They are working with IBM’s supercomputer, Watson, and Google to add a virtual assistant system to allow the robot to follow voice commands.

Law Enforcement Assistants
Dubai’s version of RoboCop is definitely not the first robot to enter law enforcement service. While it is the first that can shake hands and salute like a proper police officer, several robots and AI systems have already been used and shown their mettle in real-life situations (except in Russia, where the police arrested a robot instead).
Examples include a robot used by Cleveland police to scan for bombs at the Republican National Convention last July, urban combat robots in China, and an intruder-chasing security drone in Japan.
As our robots become better and better, and as AI becomes increasingly intelligent, it seems inevitable that they will be put to good use. And what better use than to serve and protect? Rather than fear what may come — that they turn against us — let’s first appreciate the help robots can provide.
The post Real-Life RoboCop: Dubai’s Police Force is Getting Android Reinforcements Next Year appeared first on Futurism.
A study published last Monday, heralded as a historic achievement by Microsoft, details a new speech recognition technology that’s able to transcribe conversational speech at least as well as professional human transcriptionists (who are better at it than most humans).
The technology scored a word error rate (WER) of 5.9%, which was lower than the 6.3% WER reported just last month. “[I]t’s the lowest ever recorded against the industry standard Switchboard speech recognition task,” Microsoft reports. The rate is the same as (or even lower than) the human professional transcriptionists who transcribed the same conversation.
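For readers curious how such figures are computed, word error rate is the word-level edit distance between a reference transcript and the system's output, divided by the number of words in the reference. A minimal sketch (the example sentences are invented):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))
# 1 error over 6 reference words ≈ 0.167
```

A 5.9% WER thus means roughly one word in seventeen was substituted, dropped, or inserted relative to the reference transcript.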
“We’ve reached human parity,” says Xuedong Huang, Microsoft’s chief speech scientist. The new technology uses neural language models that allow for more efficient generalization by grouping similar words together.
The achievement comes decades after speech pattern recognition was first studied in the 1970s. With Google’s DeepMind making waves in speech and image recognition (and speaking like humans do), the technology is Microsoft’s timely contribution to fast-paced artificial intelligence (AI) research and development.
The achievement was unlocked using the Computational Network Toolkit, Microsoft’s homegrown system for deep learning.

Next step: Understanding
The applications for the new technology are bound to improve user experience for Microsoft’s personal voice assistant for Windows and Xbox One. “This will make Cortana more powerful, making a truly intelligent assistant possible,” says an excited Harry Shum, the executive vice president heading the Microsoft Artificial Intelligence and Research group. Of course, it will also lead to better speech-to-text transcription software.
Microsoft clarifies, however, that parity does not mean perfection. The computer did not recognize every word clearly, which is something not even humans could do perfectly (nor can Siri or other existing voice assistants).
Impressive as it is, there remains room for improvement. The next goal: making computers understand human conversation. “The next frontier is to move from recognition to understanding,” says Geoffrey Zweig, Speech & Dialog research group manager.
The post Microsoft’s Speech Recognition Tech Is Officially as Accurate as Humans appeared first on Futurism.
Just this week, Amazon received a patent for the smallest drone ever. The miniaturized unmanned aerial vehicle (UAV) is designed to assist users in a number of ways — from recovering lost persons and items to providing assistance to police officers and firefighters. It’s Amazon’s first venture into a drone product for consumers – not just for delivery.
The drone’s features make it an all-around personal assistant, equipped with voice control and Alexa (Amazon’s AI-equipped voice assistant). Accordingly, it can respond to voice commands such as “follow me” and “hover,” allowing for varied uses. Amazon also plans to make the drone quite small – small enough to fit in your pocket or to dock on a police officer’s radio.
Using RFID-search capabilities and facial recognition software, the drone can help locate lost persons or even your elusive car keys. In terms of locating people, the patent states:
The UAV can receive a “find Timmy” command, which can include the “search” routine, and possibly an “identify Timmy” subroutine to locate a person identified as “Timmy.” In some examples, Timmy can have an RFID tag sewn into his clothes or a bar code printed on his shirt to facilitate identification.

An unmanned future
Today’s advances in unmanned vehicle technology are unprecedented. We seem headed for a future of unmanned everything (or at least many things), from aerial drones to driverless cars, self-driving factory workers, and even submarines and boats.
Amazon has taken a major step in the right direction as the company moves beyond using drones for delivery. While many of the intended uses of this mini drone are still considered illegal in the United States, the FAA is reportedly updating its rules for commercial drone use within the next five years.
The future is bright for unmanned vehicle technology. Rather than fear such technological advancements, we should look to the added value they offer to the way we work and live.
The post The World’s Smallest Drone is Voice Controlled, And it Can Fit in Your Pocket appeared first on Futurism.
The artificial intelligence that beat human players in Go can now learn from its own memory. Google’s DeepMind AI, according to its programmers, is now capable of intelligently building on what’s already inside its memory.
DeepMind is now equipped with a system called a Differentiable Neural Computer (DNC). It’s a hybrid system that pairs the vast data banks of conventional computers with a neural network. “These models… can learn from examples like neural networks, but they can also store complex data like computers,” DeepMind researchers Alexander Graves and Greg Wayne wrote in a blog post.
The DNC combines AI’s neural network approach with external memory (similar to an external hard drive). Neural networks simulate brain capabilities using massive interconnected nodes that work dynamically. The DNC continually optimizes its responses, becoming more and more accurate over time, without any extra help.

Like the Human Brain
What’s fascinating about the DNC is that it works out information on its own, effectively juggling huge amounts of data in its memory all at once. In short, it functions like a human brain — using data from memory to figure out new information.
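The core trick that ties the neural network to its external memory is content-based addressing: the network emits a query vector, and memory rows are read in proportion to how closely they match it. The sketch below illustrates that read operation in a few lines of NumPy; it is a simplified illustration of the idea, not DeepMind's code, and the tiny memory matrix and query are invented for the example.

```python
import numpy as np

def content_read(memory: np.ndarray, key: np.ndarray, beta: float = 5.0) -> np.ndarray:
    """DNC-style content addressing: compare a query key against every memory
    row by cosine similarity, softmax into read weights, return the weighted
    sum of rows. beta sharpens the focus on the best-matching rows."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms           # cosine similarity per row
    weights = np.exp(beta * similarity)
    weights /= weights.sum()                    # softmax over memory rows
    return weights @ memory                     # blended recollection

# A tiny 3-slot memory; querying near the second row retrieves it.
memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
recalled = content_read(memory, np.array([0.1, 0.9, 0.0]))
print(recalled.argmax())  # index 1: the matching slot dominates the read
```

Because the read is a smooth weighting rather than a hard lookup, the whole operation is differentiable, which is what lets the network learn what to store and retrieve by gradient descent.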
That’s how we work, right?
One way our brain makes decisions is by using experience — memory! DeepMind can do this now, thanks to the DNC. Explaining it in Nature, the researchers said:
Like a conventional computer, [a DNC] can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols.
These connections are easily made by the human brain, of course, but this is a first for AI. Without learning every possible answer beforehand, DeepMind can figure things out independently with just its memory.
It’s a step towards AI that can reason by itself.
The post Dawn of Synthetic Reason: DeepMind Can Learn From Its Own Memory appeared first on Futurism.
The bot can jump on one leg without a tether to keep it from falling over.
The post Disney’s New Robot Will Hop Its Way Into Your Heart appeared first on Futurism.
Uber has evolved from a simple ride-hailing app into a business that wants to revolutionize the way we get around. The San Francisco-based company is already shaking up the market with autonomous vehicles and its food delivery service UberEats.
Now Uber wants to shake up another industry: trucking. The company purchased the self-driving truck startup Otto this summer for $680 million to enter the long-haul freight business, with plans to put trucks on the road by 2017. And while fully autonomous trucking may be decades away, Uber and Otto want to start by applying Uber’s mapping, tracking, and hailing experience to big rigs.
Otto co-founder Lior Ron argues Uber’s tracking technologies could make a big difference in the inefficient trucking industry.
“In Uber, you press a button and an Uber shows up after three minutes,” Ron said, according to Reuters. “In freight … the golden standard is that it takes (the broker) five hours of phone calls to find your truck. That’s how efficient the industry is today.”
Uber is currently persuading shippers, truck fleets, and independent drivers to utilize its services so it can cut into the market of traditional freight brokers. But given the backlash Uber has already drawn from traditional taxi companies, it will be interesting to see whether it can disrupt the trucking industry.
The post Uber Vows to Put Self-Driving Otto Trucks on the Road by 2017 appeared first on Futurism.
A novel design for robots allows them to “sweat,” greatly improving thermal and mechanical integrity.
The University of Tokyo’s JSK Lab’s Kengoro is a 1.7-meter (5.6-foot) tall, 56-kilogram (123-pound) musculoskeletal humanoid crammed to the brim with circuit boards and 108 motors. These components generate a lot of heat, which would constrain the bot’s performance, and there wasn’t much room for any cooling mechanisms. The researchers solved the problem by taking a cue from our own bodies: sweat.
The JSK lab developed Kengoro’s frame to be porous, capable of carrying a system of flowing water. The frame is made of laser-sintered aluminum laced with channels, like a sponge. A cup of deionized water keeps Kengoro running for about half a day, so the robot needs to be kept “hydrated” throughout the day.
“Usually the frame of a robot is only used to support forces,” lead author Toyotaka Kozuki told Spectrum. “Our concept was adding more functions to the frame, using it to transfer water, release heat, and at the same time support forces.”
JSK reports that the cooling technology is three times better than air cooling, though it falls short of the efficiency of a traditional radiator with active cooling. It has allowed Kengoro to run at full power for longer; the bot can even do push-ups for 11 minutes straight without overheating.
The post Cool Automatons: Humanoid Robots Have Been Given the Ability to Sweat appeared first on Futurism.
A team of researchers from Belgium thinks it is close to pushing back the anticipated end of Moore’s Law, and they didn’t need a supercomputer to do it. Using an artificial intelligence (AI) algorithm called reservoir computing, combined with another algorithm called backpropagation, the team developed a neuro-inspired analog computer that can train itself and improve at whatever task it’s performing.
Reservoir computing is a neural algorithm that mimics the brain’s information processing abilities. Backpropagation, on the other hand, allows for the system to perform thousands of iterative calculations that reduce error, which lets the system improve its solution to a problem.
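A minimal way to see those two ingredients together in software is an echo state network, a common form of reservoir computing: a fixed random recurrent layer provides the "reservoir," and only a linear readout is trained on its states. The sketch below is purely illustrative and is not the photonic hardware described in the study; the sine-prediction task and all sizes and constants are invented for the example, and the readout is fit by least squares (iterative gradient-based training of the same readout would converge to a similar solution).

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: input weights and recurrent weights are never trained.
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()  # spectral radius below 1 for stability

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave from the current one.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

# Train only the linear readout on the collected reservoir states.
W_out = np.linalg.lstsq(X, y, rcond=None)[0]
pred = X @ W_out
print(float(np.mean((pred - y) ** 2)))  # mean squared error; small for this easy task
```

The appeal for hardware implementations is exactly this division of labor: the reservoir can be any fixed dynamical system, physical or simulated, while training touches only the lightweight readout.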
“Our work shows that the backpropagation algorithm can, under certain conditions, be implemented using the same hardware used for the analog computing, which could enhance the performance of these hardware systems,” Piotr Antonik explains.
Antonik, together with Michiel Hermans, Marc Haelterman, and Serge Massar at the Université Libre de Bruxelles in Brussels, Belgium, published their study on this self-learning hardware in the journal Physical Review Letters.

Authentic Self-Learning
Not only is the team’s self-learning hardware better at solving difficult computing tasks than other experimental reservoir computers, it’s also capable of handling tasks previously considered beyond what traditional reservoir computing could do.
By physically implementing reservoir computing and backpropagation algorithm on a photonic setup (delay-coupled electro-optical system), the hardware was able to perform three tasks: (1) TIMIT, a speech recognition task; (2) NARMA10, often used to test reservoir computers; and (3) VARDEL5, a complex nonlinear task, supposedly beyond traditional reservoir computing.
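NARMA10, the second of those benchmarks, is easy to reproduce in software. A commonly cited formulation is sketched below (the exact constants vary slightly between papers, so treat this as one standard version rather than the one used in the study): the next output depends nonlinearly on the last ten outputs and a random driving input.

```python
import numpy as np

def narma10(steps: int, seed: int = 0):
    """Generate the NARMA10 benchmark series, a standard nonlinear memory
    task for reservoir computers (one commonly cited formulation)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 0.5, steps)   # random driving input
    y = np.zeros(steps)
    for t in range(9, steps - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()   # depends on 10 past outputs
                    + 1.5 * u[t] * u[t - 9]                # delayed input interaction
                    + 0.1)
    return u, y

u, y = narma10(1000)
print(y.shape)  # (1000,)
```

A system is scored on how well it predicts y from u alone, which requires it to hold roughly ten steps of history, making the task a direct probe of a reservoir's memory.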
The researchers look towards expanding what this new reservoir computing can handle, especially since it’s a technology that can improve itself. “We are, for instance, writing up a manuscript in which we show that it can be used to generate periodic patterns and emulate chaotic systems,” Antonik says.
Moving forward, the team aims to increase the speed of their experiments. “We are currently testing photonic systems in which the internal variables are all processed simultaneously—we call this a parallel architecture. This can provide several orders of magnitude of speed-up. Further in the future, we may revisit physical error backpropagation, but in these faster, parallel, systems.”
The post Self-Learning AI: This New Neuro-Inspired Computer Trains Itself appeared first on Futurism.
The U.S. may be trailing behind China in artificial intelligence (AI) research — or at least in journal articles that mention “deep learning” or “deep neural network” — according to the White House’s National Artificial Intelligence Research and Development Strategic Plan.
While the U.S. remains an early leader in deep learning research, China appears to be devoting more attention to the technology and making more influential contributions to the field. According to the White House’s research, China now leads the U.S. in both the number of published deep learning studies and the number of studies cited by other researchers.
The administration’s plan proposes future research and development projects in the field of AI and was released in conjunction with Preparing for the Future of Artificial Intelligence, a report that overviews the current and future state of AI technology and its place in society. Both papers were shared in anticipation of the White House Frontiers Conference.

AI should be a top priority
Both countries, obviously, are devoting a lot of attention to AI, and deep learning in particular. Speech and image recognition technology has improved dramatically in the last couple of years thanks to better deep learning algorithms, but while the benefits of the technology are clearly being enjoyed at present, it would not serve the U.S. well to fall behind in the actual development of AI.
“When AI stands to transform virtually everything including labor, the environment, and the future of warfare and cyberconflict, the United States could be put at a disadvantage if other countries, such as China, get to dictate terms instead,” asserts Brian Fung of The Washington Post.
According to the Preparing for the Future… report, “current levels of R&D spending are half to one-quarter of the level of R&D investment that would produce the optimal level of economic growth,” so the U.S. needs to step up, both for the sake of the tech and for the safety of its citizens, given the potential dangers of AI’s usage.
“Becoming a leader in artificial-intelligence research and development puts the United States in a better position to establish global norms on how AI should be used safely,” says Fung.
The Obama administration has consciously been putting itself at the helm of science and development efforts. Today, to prove the administration’s continuous commitment to building the country’s capacity in science and technology, President Obama hosts the White House Frontiers Conference in Pittsburgh, “to imagine the Nation and the world in 50 years and beyond, and to explore America’s potential to advance towards the frontiers that will make the world healthier, more prosperous, more equitable, and more secure,” according to a White House report.
As primers to today’s discussions, the White House released two documents yesterday that focus on artificial intelligence. One is called Preparing for the Future of Artificial Intelligence, a 58-page report that “surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raise for society and public policy. The report also makes recommendations for specific further actions.”
This document covers key policy questions regarding AI and how the government should adapt regulations that affect AI industries — encouraging innovation while keeping the public safe and protected.

AI Research and Development
The second document, the National Artificial Intelligence Research and Development Strategic Plan, is a companion paper to the first. This paper details seven key strategies to keep AI research and development on point.
The strategies range from making long-term investments in AI research and development to understanding the ethical, legal, and societal implications of AI. There’s also a strategy for providing environments for AI training and testing.
“Already, AI technologies have opened up new markets and new opportunities for progress in critical areas such as health, education, energy, and the environment. In recent years, machines have surpassed humans in the performance of certain specific tasks, such as some aspects of image recognition,” the White House report acknowledges.
“In the coming years, AI will continue contributing to economic growth and will be a valuable tool for improving the world in fields as diverse as health care, transportation, the environment, criminal justice, and economic inclusion.”
With the release of these papers and today’s conference, the Obama administration is working to ensure that, whatever the results of the coming elections, the government adopts AI policies that are genuinely scientific and geared toward real progress.
The post White House: Artificial Intelligence Isn’t Just Important, It Will Help Ensure Our Survival appeared first on Futurism.
In a recent interview with Wired editor-in-chief Scott Dadich and MIT Media Lab director Joi Ito, President Barack Obama discussed the importance of exploring the concept of universal basic income (UBI) as it relates to the rise of technology and artificial intelligence (AI).
AI could fundamentally change employment in the future, and Obama notes that it is already ingrained in our daily lives in more ways than we might imagine, citing the medical and transportation industries as being at the forefront. And while AI is inevitably poised to create a more productive system and possibly a more efficient economy, he’s also quick to mention that there could be downsides in terms of eliminating jobs, increasing inequality, and suppressing wages.
When asked about the future of Universal Basic Income in relation to autonomy, Obama skirted the issue, saying, “What is indisputable … is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated.”
The economic implications are evident, and Obama asserts that a conversation on how to manage this should happen soon. While he recognizes UBI as a system that could potentially address concerns about technology taking over jobs, its future role still depends on how much widespread support it gets, and the President declined to come down on either side, noting instead that this conversation will intensify over the coming years.
Universal Basic Income
A UBI system of wealth distribution entails the government providing all of its citizens with a fixed amount of money, regardless of income. There are no stipulations as to what the money can be used for, and it comes with no strings attached.
How effective the system would be has long been debated between advocates and critics. Proponents consider UBI a simple and straightforward way to lift people out of poverty. Critics, however, argue that giving away free money is unfair and disregards the idea of working hard for a living.
“Whether a universal income is the right model — is it gonna be accepted by a broad base of people? — that’s a debate that we’ll be having over the next 10 or 20 years,” Obama notes.
While the system would be a major change, with a basic income in place, people would be able to meet their basic needs for survival while having the opportunity to pursue their passions. We’d let the machines focus on working, and we’d focus on living.
The post Obama: In the Age of Autonomy, Universal Basic Income Will Enter Our Debates appeared first on Futurism.
Last August, President Barack Obama sat down with entrepreneur and MIT Media Lab director Joi Ito and WIRED’s editor-in-chief Scott Dadich in the Roosevelt Room of the White House to discuss artificial intelligence, a topic fitting for such a historic room.
With the White House releasing a report on the future of AI and today’s White House Frontiers Conference, it seems fitting to go back to the finer points of that historic interview, which Dadich saw as an opportunity “to sort through the hope, the hype, and the fear around AI.”
[Photo: (L to R) Ito, Dadich, and Obama. Credit: Christopher Anderson/Magnum Photos for WIRED]
On The Age of AI
The discussion covered several important points; however, Dadich’s first question regarding AI got straight to the major point, and Obama was quick to note that, in many ways, synthetic intelligence is already altering our society.
Dadich asked, “When was the moment you knew that the age of AI was upon us?” Obama gave a rather good answer, noting that a total transformation comes slowly and piecemeal:
My general observation is that it has been seeping into our lives in all sorts of ways, and we just don’t notice; and part of the reason is because the way we think about AI is colored by popular culture […] We’ve been seeing specialized AI in every aspect of our lives, from medicine and transportation to how electricity is distributed, and it promises to create a vastly more productive and efficient economy. If properly harnessed, it can generate enormous prosperity and opportunity. But it also has some downsides that we’re gonna have to figure out in terms of not eliminating jobs. It could increase inequality. It could suppress wages.
Ito took a more social approach to it, adding that “this is the year that artificial intelligence becomes more than just a computer science problem. Everybody needs to understand that how AI behaves is important. In the Media Lab we use the term extended intelligence [machine learning as extensions of human intelligence]. Because the question is, how do we build societal values into AI?”
Obama replied by noting that these advances are on the cusp of reality and that it is really policy, which has yet to be worked out, that is holding us back:
The technology is essentially here. We have machines that can make a bunch of quick decisions that could drastically reduce traffic fatalities, drastically improve the efficiency of our transportation grid, and help solve things like carbon emissions that are causing the warming of the planet. […] There are gonna be a bunch of choices that you have to make, the classic problem being: If the car is driving, you can swerve to avoid hitting a pedestrian, but then you might hit a wall and kill yourself. It’s a moral decision, and who’s setting up those rules?
Asked what the role of government should be in AI development, Obama replied: “[T]he government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad base set of values.”
The Future of AI and Us
On the matter of whether AI would outpace us, both agreed that it’s important “to find the people who want to use AI for good—communities and leaders—and figure out how to help them use it.” Obama added:
I think my directive to my national security team is, don’t worry as much yet about machines taking over the world. Worry about the capacity of either nonstate actors or hostile actors to penetrate systems, and in that sense it is not conceptually different than a lot of the cybersecurity work we’re doing. It just means that we’re gonna have to be better, because those who might deploy these systems are going to be a lot better now.
How will we manage AI replacing jobs in the future? Obama said: “[M]ost people aren’t spending a lot of time right now worrying about singularity—they are worrying about ‘Well, is my job going to be replaced by a machine?’ I tend to be on the optimistic side—historically we’ve absorbed new technologies, and people find that new jobs are created, they migrate, and our standards of living generally go up […] High-skill folks do very well in these systems. They can leverage their talents, they can interface with machines to extend their reach, their sales, their products and services.”
Read the full interview here.
The post Obama: Synthetic Intelligence Will Totally Transform Our Future appeared first on Futurism.
Chatbots powered by AI are all the rage nowadays. Google, Facebook, and Microsoft are all developing chatbots that can not only decipher speech but also understand the meaning behind the words.
Chinese search giant Baidu wants to take all that one step further. The company just unveiled Melody, an AI-powered chatbot tailored for the medical field.
The bot chats with patients, prompting them to describe their symptoms, and then transmits that data to doctors along with a partial analysis. When a patient is sick, all they have to do is post a query to Melody. The chatbot keeps asking questions to gather more data, which is then compared against all the medical knowledge Melody has stored. Finally, the symptoms and a possible diagnosis are sent to the doctor, who recommends the next steps.
To do all this, Melody has been outfitted with neural networks, and has been trained on medical textbooks, records, and messages between actual patients and doctors.
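The flow described above — gather symptoms through follow-up questions, match them against stored medical knowledge, and hand a summary to a doctor — can be sketched in a few lines. This is a hypothetical illustration only: the symptom list, knowledge base, and `triage` function are invented for this example and have nothing to do with Baidu's actual implementation, which relies on trained neural networks rather than lookup tables.

```python
# Hypothetical sketch of a Melody-style triage flow (not Baidu's actual API).
# The bot asks follow-up questions, accumulates symptoms, matches them
# against a tiny knowledge base, and drafts a report for a doctor.

FOLLOW_UPS = {
    "fever": "How long have you had the fever?",
    "cough": "Is the cough dry or productive?",
}

# Keyed by symptom sets so lookup is order-independent.
KNOWLEDGE_BASE = {
    frozenset({"fever", "cough"}): "possible respiratory infection",
    frozenset({"fever"}): "possible viral illness",
}

def triage(initial_query, answers):
    """Collect symptoms from a patient query and draft a report for a doctor."""
    # Detect which known symptoms the patient mentioned.
    symptoms = [s for s in FOLLOW_UPS if s in initial_query.lower()]
    # Ask the follow-up question for each symptom and record the answer.
    transcript = [(FOLLOW_UPS[s], answers.get(s, "no answer")) for s in symptoms]
    # Partial analysis: match the symptom set against stored knowledge.
    analysis = KNOWLEDGE_BASE.get(frozenset(symptoms),
                                  "no match; needs doctor review")
    return {"symptoms": symptoms, "transcript": transcript, "analysis": analysis}

report = triage("I have a fever and a bad cough",
                {"fever": "two days", "cough": "dry"})
print(report["analysis"])  # → possible respiratory infection
```

A real system would replace the keyword matching and lookup table with the trained models the article describes, but the overall ask-gather-analyze-forward loop is the same.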
Melody has been integrated with Baidu’s Doctor app for Android and iOS, but if you’re reaching for a phone to download the app, hold your horses. Melody is currently trained only in Chinese, and the English version still needs more work.
The post An AI-Powered Chatbot Is Helping Doctors Diagnose Patients appeared first on Futurism.