Welcome to THE BIG TECH QUESTION, a monthly feature where we ask members of the H+K Technology team to debate the latest issues in tech and beyond. 

First up, Hannah Taylor puts the argument FOR the use of AI in healthcare.

Artificial Intelligence (AI) is having a rough time of it. As a narrative device, AI has existed in science fiction for decades, and more often than not it has been portrayed as an insidious enemy of humanity. That might be why we're still seeing a steady stream of alarmist newspaper headlines, such as the recent Facebook furore suggesting AI will make us 'obsolete' in one way or another. AI PR nightmares like Microsoft's Tay probably aren't helping the cause either. In spite of this, we are beginning to see some of the positive possibilities AI represents.

In a number of industries, AI has been positioned as an instrumental technology that will support the human race in tackling some of the major issues facing our planet today, from food production for a growing population to fighting climate change.

Healthy AI practices

Another major area for AI is healthcare. AI has the potential to revolutionise the way patients are treated, from advancing personalised healthcare to detecting disease earlier. While some will argue that medicine shouldn't lose its human touch, a growing and ageing population means we need to put these emotional arguments aside. According to recent figures published in The Lancet, there will be a 25% increase in people needing care within the next eight years. AI will enable us to keep pace with modern diseases and keep our populations healthy, freeing up time and resources for important services like social care.

The problem is that although the government is making strides to invest in technologies like AI, it is still struggling to set regulations at the pace technology is evolving. This means that the industries that stand to benefit from these technologies are ill-informed about the steps they need to take to ensure the data behind AI initiatives is used legally and ethically. The real spectre in the room here isn't a Terminator-esque doctor sitting in your GP clinic; it's the large companies at the helm of the most advanced technologies looking to profit from them.

This issue came into the public spotlight recently, when the UK's Information Commissioner's Office ruled that the Royal Free NHS Foundation Trust had breached UK privacy law in a trial with Google's AI subsidiary, DeepMind. The trial used patient data as a way to detect kidney injuries, and details on about 1.6 million patients were provided to DeepMind in the early stages of development. At the time, Elizabeth Denham, the UK's Information Commissioner, said that 'the price of innovation does not need to be the erosion of fundamental privacy rights.'

AI for good

Unmonitored and unregulated access to this data is problematic for a number of reasons. The most obvious is the sense of personal invasion, but a more troubling one is organisations using the data against us. We're lucky in the UK to have access to free healthcare via the NHS, but in the US there's the question of what unregulated access to patient data could mean for insurance premiums.

This cautionary example shouldn't be an excuse for us to throw our hands in the air and give up. The marriage of health and technology won't be easy, but the potential gains, given the right circumstances, can no longer be ignored. We need to tackle these issues head on, and work with healthcare providers to get the best possible results with patients' wellbeing and privacy at heart. This vision can be achieved, but only when we have a regulatory ecosystem where AI technologies are able to thrive and work for good, not pure profit.

Tune in next week as Robyn Gravestock puts the argument AGAINST the use of AI in healthcare.