Welcome to THE BIG TECH QUESTION, a monthly feature where we ask members of the H+K Technology team to debate the latest issues in tech and beyond.
This month we ask: IS AI THE FUTURE OF HEALTHCARE?
Following Hannah Taylor’s argument FOR last week, Robyn Gravestock puts forward the argument AGAINST.
Many films have attempted to portray the downfalls of AI, spinning the plot into an epic battle between man and machine. With the media continuing to hype the field’s benefits, successes and breakthroughs, these outlandish stories, although extreme, act as a reminder of just how much we don’t know about AI and its limitations.
Will Smith as Detective Del Spooner, in I, Robot, explains one limitation of AI with perfect reasoning. When the detective’s car is involved in an accident and plunges into water, a robot chooses to save him over the life of a young girl. Detective Spooner states, “I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody’s baby. 11% is more than enough. A human being would’ve known that.”
Curious signs
AI has reached the point where it is capable of telling doctors how to treat their patients. Its ability to tap into the collective knowledge held in data from millions of past patient visits, patents and websites means it can provide recommendations in a matter of seconds. This is largely due to doctors becoming increasingly curious about the benefits of AI, and to reforms in regulations around new technologies in healthcare.
Although these recommendations have proved advantageous in providing fast solutions, AI should never be allowed to make the final decision. Machines lack the ability to understand context; they can process a ‘yes’ or ‘no’ answer to a question but can’t spot the signs a human doctor can. When faced with the question of whether or not a patient smokes, a machine would take the answer as truth, but a doctor might notice a lingering smell or nicotine stains on the patient’s hands that indicate they’re lying.
Similarly, humans have the ability to notice a second layer of illness, where treating the first would have fatal consequences because of the second. Take a radiologist, for example: they’re trained not only to look for the embolism that triggered a stroke but to notice the tiny bleed that means clot-busting medication could be devastating. The machine will look for the obvious problem and recommend the quickest solution. Putting the responsibility for a decision in the hands of a machine might take the pressure off the doctor, but what if doctors start to rely on machines to spot issues they simply can’t identify?
Treating AI with caution
While machines are proficient at teaching themselves to find solutions for illnesses, they have no drive to understand what caused the illness in the first place. After all, it is our passion to understand why and to gain an explanation that is “what powers medical advances.” A machine’s intelligence only goes so far as to solve a case; it “cannot build a case.”
The advantages of AI are undeniable; it will help doctors and specialists continue their brilliant work in improving healthcare and treatments globally. However, machines will only be able to treat diagnoses as separate entities, at least for the foreseeable future. Humans have the ability and the capacity to understand the context of a situation, so this should be a collaborative effort, where machines and humans gain knowledge together. It’s not a question of man versus machine, as the films dictate, but of emotion versus reality, logic versus reasoning, born versus developed.
Authored by: Robyn Gravestock