In a future where many jobs will be lost to automation, questions arise about both the future of the workforce and the need for traditional departments such as HR.
The reality, though, is that whilst automation will encroach further and further into the workplace, removing swathes of jobs and assisting with others, AI, for all its hype, has its limits.
Understanding where that high-water mark lies is not quite so simple, but there are guiding principles. Machines can process huge amounts of data almost instantly and identify patterns, trends and correlations. This provides enormous benefit to decision makers, who can leverage this data to make informed decisions.
However, decision making at a high level is often the result of considering not just one data point, however well evidenced, but many, drawn from different disciplines. Computers can often do one thing very well and very fast, but they can rarely assess the bigger picture.
Neural networks are capable of learning, but again this learning is usually domain-focused. We will see the rise of self-driving cars in the coming years, and billions of pounds are being spent each year to make this a reality. Whilst there is a high degree of confidence that this huge investment will succeed, the code behind a self-driving car won't be able to diagnose cancer in a patient.
Humans are not born knowing how to drive or how to diagnose a patient, but we have the unique ability to learn completely new and unrelated skills. Given the rapidly changing world and its technology, this is a good thing, and it is ironic that whilst we are rapidly adapting to new technology, technology itself is not very good at adapting.
So, what does this all mean for the workplace, the new world of work and HR? Computers and AI will be focused on "narrow field" activities and tasks: those that require speed, accuracy and the analysis of big data. Humans, on the other hand, adapt rapidly and bring holistic, "outside the box" thinking, multi-disciplinary knowledge and creativity.
Whilst HR contains many administrative tasks that can be automated, there is much that cannot be. HR requires a whole range of diverse knowledge and insight: from understanding the law to the values and culture of the organisation, from the needs and objectives of both company and staff to respecting union rules and the wider culture of the society in which it operates.
HR acts not just to reinforce policies and values, but also as a change-maker within the organisation. In fact, to do HR well, you need to understand that you are working with human beings: a statement so obvious it is often missed when discussing how a computer (with no sense of self, empathy or deep understanding) could replace people in a role that requires deep interaction with others.
Computers can learn, but learning without context is at best a disaster and at worst catastrophic. For example, Microsoft took down Tay, an AI chatbot on Twitter, only 16 hours after launch because, through learning, it had started to tweet offensive and racist comments. It had no moral compass or understanding of the wider culture to recognise that there is good learning and bad learning.
Imagine you are driving your car at 60 mph when a child runs across the road in front of you. There is no time to brake before hitting the child, so you must either swerve the car onto the pavement and hit a wall (with the potential for life-changing injuries) or hit the child. This is not hypothetical but a real moral and legal dilemma for the manufacturers of self-driving cars. Is their legal responsibility to the owner of the car or to other road users? There is no legal requirement for a driver to risk or sacrifice their life to save another. Now suppose the car is programmed to risk your life rather than kill the pedestrian, but the person running across the road is a terrorist with a gun whom you are trying to stop with your car.
You might think this is going off-topic, but having a moral perspective, values and a big-picture view are all important for making the right decisions every day.
Even the best AI lacks these things, and for those who believe such issues will be sorted out in the future, the answer is that we might not need to wait after all. Many in the AI field believe these kinds of issues can only be solved if AI moves to a biological architecture (rather than a digital one), and that it requires consciousness, self-awareness and intentionality. If they are correct, then these attributes already exist in what we currently call humans.