“47% of all jobs will be replaced by technology in the next 10 years.”
I was sitting listening to Karie Willyerd (Twitter: @angler) talk about the future of HR technology during our annual SAP SuccessConnect conference and, though I was aware that the topic of the talk had moved on, my mind had seized onto that one terrifying statistic and it wasn’t letting go. It hadn’t been three weeks since I’d sat with my 14-year-old son looking through grade 9 subject choices with him. What subjects should he choose to set himself up for the right university courses and admittance to those areas that interest him? We’d just attended a Monash University science day and talked with lecturers about careers in science – my son’s passion – and what he should be focusing on if he wanted to pursue those particular disciplines. Based on current learning conventions, over the course of his schooling he’d hone those skills ever more finely, and then he’d enter university, where he’d undergo an expensive big-bang approach to learning that would set the foundation for his future career – the set of related jobs that would comprise his working adult life. Only … if what I was seeing on the screen was true, then this was somewhat akin to gambling. Half of all jobs would be gone by the time he left university and, given the relentless pace of technology, would be replaced by what? The model seems broken to me, the big-bang approach to learning flawed and a relic of a pre-digital era. The pressure on our kids to get it right in an ever more competitive and populous world: overwhelming.
It is easy for those of us who work in technology, and love technology for technology’s sake, to gloss over what the digital transformation will mean for ourselves and for our species. It’s easy to get carried away with the marvels of machine learning and robotics, hyper-connected devices, ever smarter smartphones, and a cornucopia of gadgets to delight and to distract. We find ourselves attracted to the easy and emotionally safe components of these conversations – how self-driving cars will be a boon for traffic and road safety – and gloss over the effects these changes will have on people. Whole industries will be transformed forever, as disruptive technologies like Uber are currently demonstrating in the taxi industry. It’s easy not to think about the livelihoods of taxi drivers or long-haul truckers and the uncertain future they face. For some it’s also easy to be lulled into a false sense of security, that somehow our specialisation and skills make us safe from these changes. Nothing could be further from the truth: analysts predict that highly skilled jobs – in finance, or engineering, or medicine – could be equally affected by disruptive technology.
New fields of consulting seem to be springing up around us like weeds. “Transform your Business for the Digital Tomorrow” they promise, or “Ensure your Continued Professional Relevance”; the quick-fix feel-good for the future. But I am concerned by the superficial nature of the questions we’re asking as part of the process and the assumptions we’re making without a significant re-evaluation of our circumstances, our motivations, or our mindset. So much of how we’re measured in our daily and professional lives – from instant feedback, to time-tracking, to always-on mobile, to sifting constantly through a daily barrage of digital information for relevance – is arguably dehumanising us in subtle ways. It is the classic boil-the-frog-slowly problem, and we’ve bought into this new normal: always on, always distracted, always connected, and always processing and sifting vast amounts of content. The problem is that neuropsychology shows we’re not very good at these things as human beings – our biology just hasn’t had the time to catch up with the information age. Computers, on the other hand, are exceptionally good at this stuff. As machines get smarter and better at information processing, we seem to be trying to keep up with them while at the same time getting further away from those things machines can’t do well: the capacity for love and empathy, the ability to think deeply on topics of importance, the ability to connect in an invested way, to appreciate beauty, to be creative – I’d even argue the importance of being bored once in a while.
Science fiction in the 1950s and 1960s promised us a golden future where technology would replace jobs but would give back to us in time and leisure to pursue the more self-actualised aspects of our humanity. Current science fiction, the bellwether of technology trends and progression, is painting a bleaker, more dystopian future. So what’s happening?
Professor Lynda Gratton, Professor of Management Practice at the London Business School, argues:
“We’re designing work that takes away the only opportunity humans have to be different from machines … the very technology that makes creativity important is limiting it because of the way we’re choosing to make jobs work.”
One issue driving our acceptance of current technology trends is the belief in, and the sense of security we have around, the direction of technology innovation. We have an unconscious bias towards believing that technology builds on a series of rational or sound scientific milestones and that somehow positive intent is a given. The reality seems to be slightly different: technology follows a more organic pattern and feeds on, and is driven by, many aspects of human nature that are arguably less desirable. In his 1976 book The Selfish Gene, Richard Dawkins coined the word “meme” to represent an idea or behaviour that spreads between people or within cultures. The informational payload of a meme has many aspects of biological propagation and mutation and is subject to selective pressures within the environments in which it is active. Technology, it seems, propagates in a very similar way, and much of it is driven by ever increasing consumerism and commoditisation. Some of the most negatively disruptive aspects of technology are driven solely by shareholder primacy and the need to see change on the bottom line at the cost of all other considerations – the increasing prevalence of targeted advertising, both as a general trend and as a driver of many machine learning initiatives in online retail, is a good example.
When Elon Musk and Stephen Hawking raised concerns about artificial intelligence (A.I.), most of the world had a little fun with the Skynet / Terminator analogies and then moved on. In a recent Reddit AMA, Stephen Hawking clarified his position by saying:
“The real risk with artificial intelligence isn’t malice but competence.”
His analogies are simple:
“You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green project and there’s an anthill in the region to be flooded, too bad for the ants.”
With predictions pointing to a critical threshold and then an exponential ramp-up in both the capacity and intelligence of A.I., it is not out of the realm of possibility to foresee a time when the collective intelligence of a human being might appear ant-like to a machine. Hawking calls for us to be careful about how we approach machine intelligence and cautions that, rather than exploring undirected A.I., we should be ensuring A.I. research is targeted towards us and the outcomes we need as human beings. Hawking foresees two possible futures: one in which most people can live a better, more luxurious life because the resources freed up by machines are shared; and another in which most people are “miserably poor” and the rich who own the machines end up consolidating the wealth and the benefit. Given our current situation, the worry is that the second future is more likely.
We are undoubtedly at an inflection point – one that demands of us the most human of attributes, applied correctly, to solve the problems of the future in a way that benefits ourselves and the generations to come. The digital revolution is here, and we’ve only scratched the surface of the disruption that will reshape our world. We should embrace those aspects that make us truly distinct from machines and direct our creative endeavours towards some of the hurdles that lie before us now. This demands of us a responsibility for the tools and the automation we deploy. It demands that we think deeply about the human story in the fields of disruption and provide solid mechanisms for re-training and learning for those who need new skills. It demands that we proceed with caution and with a strong ethical framework in the fields of machine intelligence.
What good is any technology unless it frees us to explore the better aspects of our nature? The best technology stories always have people at their heart and, in my humble opinion, they always enable a better future for us and allow us to do what we do best: be human.