
Explainable AI: How Transparency Leads to Trust

2nd December 2020

As the reach of Artificial Intelligence (AI) grows in transforming industries such as medicine, transport and defence, we find ourselves entrusting our health, safety and security to intelligent machines. A worry for many, however, is that these machines are “black boxes”: closed systems that receive an input, produce an output, and offer no clue as to why.

Engineers may be able to deliver ever more accurate models to forecast pandemic spread, classify symptoms of mental illness and so on, but if they cannot explain these models to the relevant decision-makers, such as doctors, public health officials and politicians, then how can the models be trusted? Were something to go wrong, being unable to explain why could be the death knell for an otherwise transformative technology.

Challenges

There is a wealth of AI/ML techniques available, but Deep Learning is often the technology behind some of the greatest advancements in the field. Unfortunately, it is also one of the most notoriously opaque. Its representation of a problem is encoded in the weights of many processing nodes across many layers. This can become so complex that it is often impossible for a human to look at it and say “aha, that’s how the model spotted this pattern in the data”.

For instance, an experimental neural net at Mount Sinai called Deep Patient can forecast whether a patient will receive a particular diagnosis within the next year, months before a doctor would make the call. The system was trained by feeding it 12 years’ worth of electronic health records from 700,000 patients. It discerns hidden indicators of illness remarkably well, but offers no explanation of how it does so.

Opportunities

Researchers in the TSSG are approaching explainability from several angles, considering model reproducibility as well as model transparency. When it comes to reproducibility, a Nature survey found that more than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments. We believe that making sure our research is reproducible is a cornerstone of making sure it is understandable. Transparency is key because bias is embedded in our algorithmic world; it pervasively affects perceptions of gender, race, ethnicity, socioeconomic class and sexual orientation. The impact can be profound in deciding, for example, who gets a job, how criminal justice proceeds or whether a loan will be granted.

A recent advancement in this area, pioneered by Google, is the idea of ‘model cards’. Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions that are relevant to the intended application domains. They also disclose the context in which models are intended to be used, details of performance evaluation procedures and other relevant information.
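
As a rough sketch of the idea (the field names and values below are illustrative simplifications, not Google’s exact model card schema), a model card can be kept as a small structured record that is published alongside the trained model:

from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelCard:
    """Simplified model card: a short, structured summary shipped with a trained model."""
    model_name: str
    intended_use: str                              # the context the model was built for
    out_of_scope_uses: list = field(default_factory=list)
    training_data: str = ""                        # provenance of the training set
    evaluation_procedure: str = ""                 # how performance was measured
    metrics: dict = field(default_factory=dict)    # benchmarked results per condition or subgroup
    caveats: list = field(default_factory=list)


# Hypothetical example values, for illustration only.
card = ModelCard(
    model_name="symptom-classifier-v0.1",
    intended_use="Flag possible symptoms for review by a clinician",
    out_of_scope_uses=["Automated diagnosis without human oversight"],
    training_data="De-identified electronic health records",
    evaluation_procedure="Stratified 5-fold cross-validation",
    metrics={"overall_auc": 0.87, "auc_patients_over_65": 0.82},  # placeholder numbers
    caveats=["Performance not validated outside the training population"],
)

# Publishing the card with the model artefact means its intended context is never lost.
print(json.dumps(asdict(card), indent=2))

Reporting metrics broken down by condition or subgroup is what makes such a card useful for surfacing the kinds of bias described above.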

FAITH

Within TSSG, one project dealing with these problems is FAITH, an EU-funded research project that uses Federated Learning to remotely identify markers of depression in people who have undergone cancer treatment. Explainable AI is of particular importance in healthcare, where an AI service might be informing a clinical decision that could affect a person’s life. The FAITH AI, however, does not make an automated decision: it provides the professional with an alert, leaving any diagnosis in their hands.
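
As a minimal sketch of the Federated Learning idea behind this (plain federated averaging on synthetic numbers, not FAITH’s actual training code), each site trains on its own data locally, and only the model weights, never the patient records, are shared and averaged:

import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: gradient descent on a least-squares loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


rng = np.random.default_rng(0)
n_features = 3
true_w = np.array([1.0, -2.0, 0.5])

# Three hospitals, each with private data that never leaves the site.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, n_features))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(n_features)
for _ in range(20):
    # Each client refines the current global model on its own data...
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # ...and the server only ever sees (and averages) the resulting weight vectors.
    global_w = np.mean(local_ws, axis=0)

print("Learned weights:", np.round(global_w, 2))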

That being said, we believe that factoring in transparency and explainability from the start will strengthen FAITH for long-term adoption. There are various types of transparency in the context of human interpretability of algorithmic systems. Of those, we are striving for global interpretability (a general understanding of how an overall system works) and local interpretability (an explanation of a particular prediction or decision). We are looking at ways of automating the steps required to answer the following key questions (a sketch of how this can be done follows the list):

  • What features in the data did the model think are most important?
  • For any single prediction from a model, how did each feature in the data affect that particular prediction?
  • What interactions between features have the biggest effects on a model’s predictions?
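
A minimal sketch of how these three questions can be approached in code, assuming a generic scikit-learn model and the SHAP library (the synthetic dataset and model here are placeholders, not FAITH’s data or pipeline):

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder data and model; any tabular dataset and tree-based classifier would do.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Q1 (global interpretability): which features matter most overall?
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("Mean importance per feature:", perm.importances_mean.round(3))

# Q2 (local interpretability): how did each feature affect one particular prediction?
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:1])  # output shape depends on the shap version
print("Per-feature contributions for one prediction:", shap_values)

# Q3: which pairwise feature interactions have the biggest effect on predictions?
interactions = explainer.shap_interaction_values(X_test[:1])

The automation work is about wiring steps like these into the platform so that every prediction can be accompanied by its explanation, rather than running such analyses by hand.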

In essence, FAITH aims to deliver a set of tools and processes that make AI implementations more understandable, giving our healthcare stakeholders more insight into the decisions made by the platform and therefore more confidence in the decision-making process. Ultimately, these tools should become usable across other domains.

Einstein once remarked that “whoever is careless with the truth in small matters cannot be trusted with important matters”. A driving force of all the research we undertake is a belief in its ability to benefit humanity, so trust and transparency are key.

For more information contact Philip O’Brien, pobrien@tssg.org