By Niamh McKenna, Managing Director, Accenture Health, UK
These days, barely a day goes by without another announcement about artificial intelligence in health, touting it as the answer to healthcare’s biggest challenges.
Don’t get me wrong, I firmly believe AI will be an increasing feature of our lives. Just as today we can barely imagine life ‘pre-smartphone’, in the future we will be just as reliant on AI to help predict, prevent and cure illness and disease. However, there is one thing we must get right now if we are to realise its full potential: the data.
Behind every AI is a data-hungry algorithm, and to train it we need to find the right data in the first place. Data, by its very nature, is often full of impurities and problems, and hospital and patient data is no exception. If the data you’ve sourced on specific types of cancer comes from a hospital serving a population that is, for example, predominantly white and middle class, then applying the resulting algorithm in a location with a different demographic may mean it doesn’t work as well.
Sometimes that’s easy to spot: a human check after the AI has made its call may catch a missed diagnosis. The worst case is that the bias is buried deep in the data and never spotted at all.
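The effect is easy to demonstrate. The following is a minimal Python sketch with entirely simulated data: a toy “model” (a single fixed threshold on one biomarker) is tuned on one hospital’s population and then applied to a population whose biomarker distribution is shifted. All names (`make_cohort`, `hospital_a`, and so on) are illustrative, not a real clinical pipeline.

```python
import random

random.seed(0)

def make_cohort(n, mean, label_cutoff):
    # Simulate patients as (biomarker value, true diagnosis). In this toy
    # setup the disease cutoff differs between the two populations.
    return [(v, v > label_cutoff)
            for v in (random.gauss(mean, 1.0) for _ in range(n))]

def accuracy(cohort, decision_threshold):
    # Fraction of patients the fixed-threshold "model" classifies correctly.
    correct = sum((value > decision_threshold) == diseased
                  for value, diseased in cohort)
    return correct / len(cohort)

# Hospital A: the population the model was tuned on.
hospital_a = make_cohort(2000, mean=0.0, label_cutoff=0.5)
# Hospital B: a different demographic with a shifted biomarker distribution.
hospital_b = make_cohort(2000, mean=1.0, label_cutoff=1.5)

decision_threshold = 0.5  # chosen because it works perfectly on hospital A
print(f"Hospital A accuracy: {accuracy(hospital_a, decision_threshold):.2f}")
print(f"Hospital B accuracy: {accuracy(hospital_b, decision_threshold):.2f}")
```

Run as written, the model scores perfectly on the population it was tuned on and markedly worse on the shifted one; unless accuracy is reported separately per population, that gap stays invisible.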
If we are to trust AI to help us make clinical decisions (and I believe a human/machine combination will be a better option than a human alone), then we will need to get data governance right.
We will need a national data framework to set standards and hold organisations, from the NHS, to tech companies, to account. This could involve a quality certification process, just as for other kinds of medical devices, as well as a clinical testing process that we can all rely on.
That testing requires sample data the AI has never seen before, to probe for biases and inaccuracies. This means ringfencing real data as a “control set”, or creating synthetic test data realistic enough to mirror the actual data. In effect, it means maintaining a clinical control set for AI, just as we maintain control groups in clinical trials for new drugs.
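The mechanics of ringfencing are straightforward to sketch. Below is a minimal Python illustration, again with simulated records and hypothetical names (`training_set`, `control_set`, `group_x`): a slice of the data is set aside before any training, kept provably disjoint from the training data, and checked for coverage of each demographic group so bias can be measured per group rather than hidden in an overall average.

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical patient records: (patient_id, demographic group, outcome).
records = [(i, random.choice(["group_x", "group_y"]), random.random() > 0.5)
           for i in range(1000)]

# Ringfence a control set the AI will never see during training.
random.shuffle(records)
cut = int(0.8 * len(records))
training_set, control_set = records[:cut], records[cut:]

# The control set must stay disjoint from the training data.
training_ids = {patient_id for patient_id, _, _ in training_set}
assert not any(pid in training_ids for pid, _, _ in control_set)

# Check the control set actually covers each demographic group, so
# biases can be measured per group, not just in aggregate.
coverage = Counter(group for _, group, _ in control_set)
print(coverage)
```

A real scheme would of course split by patient (not record), stratify by demographics, and sit behind governance controls; the point is simply that the control data is carved off first and kept untouched.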
The challenge is that data resides in different parts of our health systems, and that fragmentation creates further problems before you can even identify and assemble the right data.
The only way to get this right is to manage the process at a national level. It is simply too hard to do locally, and only a national body can ensure this information is properly curated and looked after. A data custodian, if you like!
This involves granting an organisation real and meaningful powers, similar to the Information Commissioner’s Office in the UK.
It will be a vital step, because health data is highly specific to individual conditions and populations, and getting this wrong will affect many lives.
It’s inconvenient at best, and financially detrimental at worst, if my bank is breached and my data is stolen; but if someone dies, that’s a different ball game.
Instituting an approval process – without wishing to add process for process’ sake and fetter innovation – will be key to getting this right, creating trust in AI, and releasing the potential it can bring.
Artificial intelligence can transform the way in which health is delivered and I am excited for the future – we just need to make sure we get this right from the start.