Doctorlink – A responsible approach to AI in Healthcare

Introduction

The advent of Artificial Intelligence offers us the opportunity to resolve some of the hardest problems the human race has been wrestling with for a long time. The power of AI has exceeded expectations in many industries and continues to demonstrate that we are only at the beginning of a long and exciting road towards a future where many of today’s challenges will be easily addressed by machines. This does not mean, however, that AI will remove people from the equation; instead, it will enable us to focus on what we do best: ingenuity, empathy and creative collaboration.

Healthcare is one such industry, and perhaps because it deals with people’s lives, it is even more important to ensure that the human traits required to deliver healthcare services remain part of the system. There are areas where machines are becoming better than us at making assertions, such as the use of computer vision to detect certain types of cancer or other serious conditions on medical images. There are other areas, however, where the human touch is very much needed, and machines can assist us by removing unnecessary, repeatable, automatable work, leaving clinical staff more time to perform these tasks unencumbered.

At Doctorlink we believe that the use of AI in certain areas of healthcare, such as triage and diagnosis, should be there to support clinicians in making ever-improving decisions. We are therefore implementing what we have termed Machine Assisted Learning as a process to reinforce the work our clinicians perform when designing algorithms for our advanced expert system.

In this paper we discuss how this takes place and what the advantages of such an approach are for our clinicians, our patients and the medical staff making use of our platform and tools.

Expertly built algorithms

At Doctorlink we have a team of clinicians with expertise in designing and building complex health-related algorithms, such as Symptom Assessment, Health Risk Assessment, Treatment and Point of Care Testing algorithms. All of these are devised by practising physicians and supporting clinical staff, thoroughly tested with a suite of fully automated regression tests, and then evaluated by an external panel of independent clinicians who run through a variety of scenarios to validate that the conclusions reached by the algorithms are indeed valid for the proposed scenario (age, sex, family history, environmental and demographic circumstances, etc.).

This process is not a one-off effort, but a continuous cycle of review and improvement, applying the latest medical guidelines and recommendations for each specific market. As a result of ongoing improvement cycles and external peer validation, our algorithms are fully indemnified against misdiagnosis or malfunction, protecting our users against malpractice claims. To date, since the first algorithms were created in 2001, there have been no claims against them.

Our approach to AI in Healthcare

The process described above is both complex and at times cumbersome, requiring thousands of hours of careful work by highly qualified clinicians. The introduction of AI into our work processes is therefore taking place gradually.

At Doctorlink we are firmly of the opinion that AI in healthcare serves a supporting and enabling role, as opposed to an overriding, substitution role.

This is because our area of work, unlike other healthcare fields that rely, for example, on computer vision, is very much based on assessing an individual’s conditions, state of wellbeing and risk factors, and providing accurate, reliable and compassionate guidance towards the safest course of action, within the recommended timeframe and with the best professional resource to deal with the identified conditions.

In order to achieve this, we will always need the human in the loop, not only to provide unique, experience-based skills, but also to ensure that the empathy of a person is always behind every algorithm. In addition, for a triage and/or diagnosis system, auditability is at the top of the list of requirements, so that a supplier can demonstrate why a particular decision was taken by the algorithm. This is currently not possible with pure AI engines, since they are effectively black boxes that cannot explain their internal decision-making process, unlike expert systems such as the one created by Doctorlink.

On the other hand, machines excel at dealing with huge amounts of data and at identifying trends that even the most skilled professionals might easily miss, and can therefore make recommendations around those observations in an unbiased and objective way. It is precisely because of this unbiased, data-driven approach that at Doctorlink we use AI to identify anomalies or unexpected patterns in our outcome data, validate them against sufficiently large sample volumes to guarantee statistical significance, and then leverage these observations to propose amendments and improvements to our algorithms.

Notice how we make use of the word “propose” since that is exactly what our AI technology will do. It will always ultimately be the decision of our clinicians whether to accept the advice given by the system and implement the proposed enhancements into their algorithms. We call this Machine Assisted Learning (MAL) and the diagram below summarises how it operates in the context of Symptom Assessment, one of our most complex algorithms.

When the clinician designing the new release of the Symptom Assessment algorithm is satisfied that the updated algorithm works, our Automated Regression Testing (ART) tool runs a comprehensive battery of automated end-to-end tests. These validate the assessment for a large variety of potential patients, using vignettes that represent their differing variants (age, sex, background…), and check for the expected outcomes to assess whether any of the algorithm improvements have caused a regression in our logic.
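The vignette-driven regression run described above can be sketched in a few lines of Python. This is a minimal illustration only: the names `Vignette`, `run_assessment` and `regression_suite` are hypothetical, and the triage rule is a trivial placeholder, not Doctorlink’s actual ART tool or clinical logic.

```python
from dataclasses import dataclass

@dataclass
class Vignette:
    """One synthetic patient scenario plus the outcome clinicians expect."""
    age: int
    sex: str
    symptoms: list
    expected_disposition: str

def run_assessment(vignette):
    # Placeholder for the triage algorithm under test; a deliberately
    # simple rule so the sketch runs end to end.
    if "chest pain" in vignette.symptoms and vignette.age >= 40:
        return "urgent_care"
    return "self_care"

def regression_suite(vignettes):
    """Run every vignette and collect any mismatches (regressions)."""
    failures = []
    for v in vignettes:
        actual = run_assessment(v)
        if actual != v.expected_disposition:
            failures.append((v, actual))
    return failures

vignettes = [
    Vignette(55, "F", ["chest pain"], "urgent_care"),
    Vignette(25, "M", ["sore throat"], "self_care"),
]
assert regression_suite(vignettes) == []  # no regressions in this release
```

A real suite would cover thousands of vignettes per algorithm release, with any non-empty failure list blocking release until clinicians review the mismatches.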

After the algorithm is fully validated by ART, it is passed on to our external panel of expert reviewers. These panels are assembled with specialists from the areas of care the algorithm targets; in the case of Symptom Assessment this includes General, Urgent Care and other specialist practitioners. Once the panel is satisfied with the conclusions reached by the algorithm for all their test scenarios (potentially after several iterations of feeding back amendments or fixes), the algorithm is approved and released.

Once released, the end-users (patients) have access to the algorithm through one of our customer-facing apps, such as the Doctorlink NHS app. The users traverse the algorithm and arrive at a set of conclusions. Ultimately a disposition is presented to the patient, including who could best treat the condition, in what timeframe and at what service location (GP Practice, Pharmacy, Home…).

Whilst this is taking place, a large amount of data is collected on how the algorithm performed, as well as on what action the user took (e.g. booked an appointment, stayed home, went to the pharmacy). In parallel, we hold information about the geographical and socioeconomic status of the area where the patient lives, and we are always looking to improve our knowledge of the context in which a user is using our systems.
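As an illustration of the kind of anonymised record such a data-collection step might produce, here is a hypothetical sketch; every field name and value below is invented for illustration and is not Doctorlink’s actual schema.

```python
# Hypothetical anonymised outcome record (all fields illustrative).
outcome_record = {
    "session_id": "a1b2c3",                   # pseudonymous, never identifiable
    "suggested_disposition": "pharmacy",      # what the algorithm recommended
    "action_taken": "booked_appointment",     # what the user actually did
    "question_hesitation_ms": {"q7": 14200},  # long pauses can flag unclear wording
    "area_deprivation_decile": 4,             # socioeconomic context of the area
}
```

Note that the record pairs the algorithm’s suggestion with the user’s eventual action: it is the gap between those two fields that the analysis described next feeds on.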

All this data is then analysed by the AI platform, applying anomaly detection, clustering and segmentation techniques to find correlations between the algorithm-suggested outcomes and the eventual outcomes resulting from the patient’s actions. This can identify areas of improvement for the algorithms, such as eliminating unnecessary questions (those that do not correlate with any resulting outcome) or simplifying questions with complex language (indicated by hesitation on a question). Not all data is used in this improvement process, but the context of who the patient is (as a non-identifiable individual), where they live and how they feel are all relevant data points in our AI methodology.
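The anomaly-detection step can be illustrated with a simple z-score baseline: flag any value that sits far from the mean of its series. A production pipeline would apply far richer clustering and segmentation techniques; the hesitation times below are invented, and `zscore_anomalies` is a hypothetical name.

```python
# Minimal anomaly-detection sketch, assuming outcome data reduced to a
# single numeric series (here, per-session hesitation times on one question).
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations from the mean -- a simple anomaly-detection baseline."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Hesitation times (seconds) on one question across several sessions;
# the 14.5 s outlier suggests the question's wording may be unclear.
times = [2.1, 1.9, 2.4, 2.0, 2.2, 14.5, 2.3, 2.1]
flagged = zscore_anomalies(times, threshold=2.0)  # flags index 5
```

A flagged question would then go back to the clinicians as a proposal, in keeping with the “propose, not decide” principle described earlier.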

Conclusion

Ultimately, the use of AI in our platform is designed to support our clinicians as well as our users. It supports our clinicians by providing the tools to help them improve their algorithm designs through unbiased and data-based decisions.

Our end-users benefit from the improved experience of a refined algorithm, enhanced through the observation of huge volumes of outcome data gathered whilst other end-users traversed our algorithms.

AI is here to stay and we at Doctorlink believe it is the best chance we have to safely augment our expertise whilst keeping the human firmly in the loop.
