If technology is not making us happy, why are we using it?
And if it is making us happy, why do we run away from it as soon as we can?
On Monday I went to the debate “This house believes that Artificial Intelligence/Robotics will make us happy”, organised by CSAR (Cambridge Society for the Application of Research). It was the first time I had been to one of these events, so I wasn’t sure what to expect, but I certainly didn’t expect the panel to include only one woman, with an audience of a similar (though slightly more balanced) gender ratio and an average age of maybe 60. I was stunned that there were only a handful of people my age, given that we are the generation that can have perhaps the strongest impact on the direction AI takes.
Equality and diversity issues aside, there were three ‘pro’ representatives (Prof. Alan Blackwell – Cambridge University, Dr Fumiya Iida – Cambridge University, Nigel Miller – Telegraph), three ‘against’ representatives (Dr David Good – Cambridge University, Dr Advait Sarkar – Microsoft Research, Dr Beth Singler – Cambridge University), and two moderators (Dr David Cleevely CBE, Dr Nigel Bennee). By the end of the two hours, a show of hands in the audience ruled that AI will not make us happy – at least, not yet and not in its current state. Below are some of my thoughts on the topic and the debate.
One of the major concerns about the future of AI and robotics is the risk that they will take over most tasks, leaving us without jobs. At that point we need to figure out how we can earn money – is a universal basic income the solution? One argument made was that our economic model no longer works. We are starting to see the same item valued differently for different people, or depending on contextual factors – take Uber, which has recently started charging based on what you are willing to pay. But if we no longer have a job to go to every day, what will we do? Do we use the money saved to retrain people and restructure our governments?
According to the ‘pro’ supporters, while AI is going to do most things, it won’t do everything: more meaningless jobs like cleaning sewers can be done by a robot, leaving us time to learn, do more mentally challenging jobs, and enjoy creative activities. But there are already applications that have taken over creative tasks like writing. Wordsmith is software that can generate endless stories from a single dataset, and it is used by Bloomberg to generate financial stories. How long will it take before more of these applications come out and we can’t even tell the difference? Google is already exploring how art can be created by AI with Project Magenta.
We’ve seen part of this happen before, during the Industrial Revolution for example. People were suddenly de-skilled: what was once a challenging task for a human, a machine could do more easily. Yet we survived, and we found new jobs and new ways of earning. However, as Alan Carr pointed out in his keynote at CHI 2017, we need challenging tasks in order to develop new skills. If AI automates every task that requires human effort, we become dependent on technology and fall into automation complacency. We therefore need to figure out where the cutting edge lies: how much control can a computer take over? Or, to see it from the human perspective, how can we maximise human engagement and enjoyment? Can we design technology that includes some degree of friction, so that we can still learn new skills?
Most of the time we seem to forget that technology tells us what we can do, not what we should do. Our regulatory systems are slow and political, and don’t evolve as quickly as technological progress, which is led by just a handful of people (mostly in Silicon Valley) driven by a commercial – not political – proposition. The motive of current technology is not to make us happy, but rather to make some people a profit. In response, we are starting to see movements that rebel against a computer-centric world and back away from attention-demanding screens – the Time Well Spent movement, for example, or Digital Detox proponents. Although I personally question some of the arguments they make, these movements play an important role in raising issues that would otherwise be confined to works of science fiction.
While we may well need to deconstruct the Pollyanna enthusiasm that drives technological determinism for happiness, that is not to say that technology is all bad for us and we should abandon it. Instead, we should consider more carefully the values and morality that are being taught to AI and designed into our devices, and who they belong to: do they represent everyone’s perspective? Approaches like Value-Sensitive Design and Positive Computing are trying to put human values and psychological wellbeing at the forefront of technology development, and as the field moves forward, these approaches and theories will become ever more important in shaping our interactions.