The machines are coming – but what does that mean?

17 Nov 2017 Ryan Miller    Last updated: 17 Nov 2017

Photo by Alex Knight on Unsplash

What do we want from technology? Specifically, what do we as individuals and a society want – and not want – from artificial intelligence? These questions need to be answered now.

We want artificial intelligences that are human-like and obedient, and that prevent us from making mistakes.

Most people don’t want AIs to be religious, but almost a quarter of us do.

If superintelligence is made real – and only a small fraction of people don’t want this to happen – then more than half of us think man and machine should be in control together. Only about a quarter of people think humans should be in charge.

These are the findings of some recent surveys into our attitudes about AI. However, are we sure these results show what people really think? Have we, as a society, even properly begun to have these conversations, let alone come to solid conclusions? It helps, at least, that most people will be familiar with the broadest terms of the debate we need to have – although this understanding might be disproportionately focused on the dramatic.

Artificial intelligence has been around in stories for a century or more. And, for all of that time, a recurring theme has been the potential for sentient creations, with intelligence equal to or greater than our own, to destroy humankind.

In 1920 Czech writer Karel Čapek wrote the play R.U.R., in which self-replicating synthetic roboti turn on their human masters and stage a rebellion. Doomsday makes for gripping stories. But AI is here, now, and it exists in the hands of very few people. It is time for civic society to really think about these issues – not just concerns about existential threats (although most certainly those too).

What do we want from, and for, Artificial Intelligence? What purpose, beyond pure intellectual exploration, does this science serve? What limits, if any, should be placed on research and development?

Currently AI is rudimentary but the pace of change is rapid. We, humanity, are creating new kinds of intelligence. The time for these debates is now. Across the world these conversations are beginning – with different organisations carrying out research to gauge public opinion.

The results are eye-opening and compelling.

Our survey says

Space10 is a research hub supported by Ikea that seeks “to explore and design innovative and responsible business models for the future that enables a more meaningful and sustainable life”.

Recently it carried out a survey – Do You Speak Human? – with nearly 12,000 respondents in 139 countries. The results are wild and unfocused (and, it has to be said, unrepresentative, given that anyone who finds the questions can answer them) but, nonetheless, they are interesting.

The survey is still available online. Its questions are broad, designed to gauge how people feel about AI rather than going into fine detail. Apparently, we want AI to be like humans.

In an article about the survey written by Space10 and posted on Medium, the organisation said: “Most people prefer AI-infused machines to be human-like, not robotic. In fact, 73 percent of respondents said they want AI to be human-like; 85 percent want AI to be able to detect and react to emotions; and 69 percent want AI to reflect our values and worldview.”

What are “our values and worldview”? We can’t seem to agree on them amongst ourselves – how could we impose them on our machines? However, that quotation from the article could be read a couple of ways and is, thus, slightly misleading.

The question, as posed in the survey, is “Should your AI reflect your values and worldview?” There is a big difference between wanting an AI to reflect our shared values and wanting it to reflect one’s own personal ones.

As for being human – respondents were asked what gender AI should have, and 29% said male, 26% said female and 44% said gender neutral. This question, in particular, might be susceptible to strong gender bias amongst respondents, and only 30% of those who have taken the poll are female.

The survey also asked if people thought AI should be religious – and 26% of respondents said yes.

Should your AI fulfil your needs before you ask? 69% of people said yes, 31% said no. Should your AI collect your data to improve the experience? 49% of people said yes, 10% no, 41% said only when the data is anonymous. Should your AI prevent you from making mistakes? 74% yes, 26% no.

How do you want your AI to behave? 28% said mothering and protective, 27% autonomous and challenging, 45% obedient and assisting. Is “obedient and assisting” the best answer? Are any of them bad answers? Or good?

Future of Life

The Future of Life Institute is an NGO whose banner motto is “Technology is giving life the potential to flourish like never before... Or to self destruct. Let's make a difference!”

Founded by some world-renowned scientists, including MIT physics professor Max Tegmark, and with a list of around 20 advisors including geniuses like Stephen Hawking and Alan Guth, and tech pioneer Elon Musk, the organisation’s aim is to explore the potential, good and bad, of our life on planet Earth. Part of its work involves looking at “existential risks” from biotech, climate change, nuclear weapons – and, of course, AI, for which it has a neat primer on some of the myths and realities of where artificial intelligence could go very wrong for humans.

The institute has also carried out its own survey – similar to Space10’s, in that anyone can take part online. Again, it is still open, with the results at the time of writing based on the first 14,866 responses.

Nearly three quarters of respondents said they want superintelligence to appear, and nearly a quarter said they were unsure; only a small fraction said they did not want it. When asked “if superintelligence arrives, who should be in control?”, only about a quarter of respondents said it should be humans, with more than half saying machines and humans should do this together.

Asked whether, should they one day get an AI helper, it should be conscious and able to have a subjective experience, about twice as many people said yes as said no – but about a third of respondents said they were unsure or that it would depend on circumstances. That is about as far from a consensus as it is possible to get.

Future of Life also listed 12 different possible evolutions for society – ranging from libertarian and egalitarian utopias, to life under a benevolent AI dictator, to people being eradicated by AI, to humankind eradicating itself – and asked respondents to rate each one on a scale of one to five. Unsurprisingly, the utopian futures scored much more highly than the annihilation of humanity, although annihilation did score marginally better than the Orwellian option named 1984, or the Zookeeper scenario whereby “an omnipotent AI keeps some humans around, who feel treated like zoo animals and lament their fate.”

Another AI survey has been carried out by “foundational technology” company Arm, with nearly 4,000 respondents. Like the other surveys, it is worth a closer look, but headline results include: 61% of people think AI will make the world a better place (with 22% saying it will be worse); 36% think AI has a noticeable impact on their life now; 71% think it will do by 2022; and 92% think it will by 2027.

The top three jobs that people felt AI could do better than humans were heavy construction, package delivery and piloting public transport; the top three most appealing applications of AI were said to be medical apps that will diagnose illnesses earlier, improved traffic lights to cut congestion, and personal companions or assistants (including fully autonomous vehicles).

If anything, this suggests that people can see a very short distance into the future potential of AI, and do not really grasp quite how pervasive it is going to become.

The future

Roughly speaking, right now, AIs are bundles of algorithms that usually learn by consuming huge amounts of data. Take, for example, AlphaGo Zero, built by Google’s DeepMind – it taught itself a world-beating mastery of Go, the ancient board game, in three days.

However, even though it was able to use reinforcement learning (trial and error, effectively) to achieve this on an incredible timescale, and its creators are now turning similar techniques to problems in medicine, such as working out how proteins fold, it is still incredibly limited, and cannot deal with problems that cannot be perfectly simulated on a computer.
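
To give a flavour of what “trial and error” means in practice, below is a minimal Python sketch of tabular Q-learning, one of the simplest reinforcement learning methods. It illustrates the general idea only – it is not AlphaGo Zero’s actual method, which combines self-play with deep neural networks and Monte Carlo tree search – and the toy “corridor” environment and all parameter values here are invented for this example.

import random

N_STATES = 5          # positions 0..4; reaching position 4 earns the reward
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q[state][action] estimates the long-term value of each move in each state.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = N_STATES // 2                 # start each trial in the middle
    while state != N_STATES - 1:
        # Mostly exploit the best-known move, but occasionally explore.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the value estimate towards the observed outcome (the "learning").
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

# After enough trials the learned policy is to step right in every position.
print([("left", "right")[q.index(max(q))] for q in Q[:-1]])

Even this toy agent learns, purely from the rewards of repeated trials, that stepping right is always the better move – and, like AlphaGo Zero, it can only learn because its entire world can be simulated perfectly.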

AIs are likely to become more open-ended and adaptable – more human, in a sense, and perhaps more lifelike. Perhaps, even, more alive. Many people want us to strive for AI that is beyond our comprehension and utterly unlike the human mind, not just in scale but also in nature.

The impact on society is already significant and is going to be immense.

Space10 concluded, based on its survey, “that the range of […] responses suggest that we need to have a much broader debate about the development of AI.”

This is an enormous understatement. People, and the organisations who represent them, need to think long and hard about all these ethical issues, and more – probably including many that are impossible to anticipate right now.

What does this mean for individuals? What does this mean for the structure of society? What about all the links in between? It would be fantastic if this article could answer all these questions, but it cannot. All it can do is highlight that these are not questions for the future, they are issues for today.

This might all seem like a fever dream, fanciful nonsense after watching 2001 and The Terminator too many times. It is not. AI is happening, and is happening now. All of us – individuals, organisations, everyone – need to form an educated opinion and make it heard.

Right now, we as a people do not seem to know what we want or, at least, our ideas are half-baked.

The arrival of AI has left us with plenty of thinking to do.
