Here’s a quick movie quiz (not a very difficult one). Name the movie in which a key character says:

“Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.”

Just in case it hasn’t come your way, it was Jeff Goldblum as Dr. Ian Malcolm in Jurassic Park.

It’s a well-rehearsed philosophical point and, like the animatronic stars of the classic movie, it has acquired some fearsome teeth with the arrival of artificial intelligence (AI). Because now we’re talking about technology that has real cultural, societal and economic impact.

There is wide discussion about the future of work, what it means for privacy, for justice and, among some deep thinkers, even for humanity itself. Which is why the regulators are circling like pterodactyls* to try to impose some kind of ethical framework around the use of AI, with the European Union among the apex predators of the species.

One of its most compelling utterances on the topic so far comes from its High-Level Expert Group on Artificial Intelligence, in its report Ethics Guidelines for Trustworthy AI.

The Guidelines set out seven key requirements that the Group feels AI systems should meet in order to be trustworthy: 
•    Human agency and oversight 
•    Technical robustness and safety, including resilience to attack and security
•    Privacy and data governance, including access to data 
•    Transparency, including traceability, “explainability” and communication 
•    Diversity, non-discrimination and fairness
•    Societal and environmental wellbeing, including sustainability, society and democracy 
•    Accountability, including auditability, and minimisation of negative impact

At a glance, it’s a pretty common-sense list, though each of the points could probably justify a series of blogs in its own right. What concerns us in this blog is the fact that the list, and the report that contains it, exists at all.

It’s significant because it’s an example of a political organisation taking a stance on a particular technology. Equally important is the fact that it explores Dr. Malcolm’s point in considerable depth.

There is a mass of content out there about the question of “human agency and oversight”, because in essence it’s a rather formal and anodyne way of expressing the visceral fear that the machines are going to take over. This is still where the AI debate lies in a lot of the mainstream media. 

In effect, the EU’s High-Level Experts are saying that we can’t let those pesky algorithms run our lives unless we know what they’re up to, in case we end up as anonymous extras in that other philosophical epic The Matrix.

The other ethical questions are equally critical – perhaps even more so, because they present more genuine and immediate issues to debate. 

Are we happy for judges to be presented with sentencing recommendations calculated by bots? Do you want your CV to risk rejection by a digital HR system before anybody gets a chance to consider its carefully crafted nuances? Are you sure your digital vote is tamper-proof?

These are the questions that the EU’s report is seeking to address, because the European Commission states clearly that it wants organisations and individuals to reap the benefits of AI. The framework it wants to establish is designed to make AI safe for all of us.

But if you’re deploying AI, or robotic process automation, or any associated technologies – the Internet of Things has a virtual hand in this as well, of course – the first question is not about the technology itself. It’s about the data that feeds it.

The old adage of computing still applies: junk in, junk out. And where your bot is controlling a forklift truck, or a chemical pumping system, junk out is not an option.
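To make the "junk in, junk out" point concrete, here is a minimal sketch of the kind of sanity check that sits between raw sensor data and an automated actuator. All names and thresholds are hypothetical, invented for illustration, not drawn from any real control system:

```python
# Illustrative "junk in, junk out" guard: refuse to forward implausible
# sensor readings to an automated pump controller. Hypothetical names
# and limits, for illustration only.

def validate_reading(flow_lpm, pressure_bar, max_flow=500.0, max_pressure=10.0):
    """Return the reading if plausible; raise ValueError so a human can intervene."""
    if flow_lpm is None or pressure_bar is None:
        raise ValueError("missing sensor value")
    if not (0.0 <= flow_lpm <= max_flow):
        raise ValueError(f"flow out of range: {flow_lpm}")
    if not (0.0 <= pressure_bar <= max_pressure):
        raise ValueError(f"pressure out of range: {pressure_bar}")
    return flow_lpm, pressure_bar
```

The point of raising an exception, rather than silently clamping the value, is that a bad reading should stop the automation and escalate to a person — exactly the "human oversight" the EU's guidelines call for.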

It’s no surprise that we’re seeing heavy investment in data centre modernization, analytics and new, more organic approaches to cyber security as our customers gear up for the AI revolution. They know that making sense of the new torrents of data flowing in from mobile devices, IoT sensors and countless other connected systems is the key to making AI safe and effective.

For the most part, of course, the risks that worried Jeff Goldblum’s character are a far cry from the occasional misplaced decimal point or jammed document reader, which are probably the biggest inconveniences arising from AI and automation for now.

Nevertheless, the continuing discussion of the ethical aspects of the technology is essential, and we should be glad that bodies such as the EU are taking an interest. It’s a clear sign that our industry is becoming recognised by the societies it serves as more than just a new kind of utility. It is transforming the way we live, shaping our values and principles and breaking down the divisions between us.

In time, we could find that it has made us more genuinely human than we have ever been before. 

*We should not, of course, assume that pterodactyls circled. Dr. Malcolm would be furious.