In this blog post, I want to provide insight into why I believe Artificial Intelligence (AI) will have a significant impact on healthcare. You don’t have to look very far to see the challenges that Emergency Departments (ED) face, especially during winter viral seasons. AI can not only aid clinicians in making diagnoses but also help with resource allocation and improve care delivery through bed management and directing patient flow within the ED.

At Unity Health Toronto, an AI-powered system identifies patients who are ready to be admitted or discharged from the ED. AI can also support resource allocation by identifying patients at risk of an Alternate Level of Care designation, which occurs when a patient occupies a hospital bed even though the level of care they need would be better provided elsewhere, such as home care.


Another example is CHARTWATCH, an early-warning AI program developed at St. Michael’s Hospital that monitors roughly 100 variables in a patient’s chart, including lab results and vital signs, and classifies the patient as being at low, moderate, or high risk of needing ICU care within the next 24 hours. The hospital has partnered with Signal 1 to integrate this diagnostic tool into its workflows.
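To make the idea of risk tiering concrete, here is a minimal, hypothetical sketch of the general pattern: a model scores a patient’s chart variables and the score is bucketed into low, moderate, or high risk. The thresholds, weights, and variable names below are invented for illustration; they are not CHARTWATCH’s actual model.

```python
def risk_tier(score, moderate=0.3, high=0.7):
    """Bucket a 0-1 model score into a clinical risk tier."""
    if score >= high:
        return "high"
    if score >= moderate:
        return "moderate"
    return "low"

def toy_score(vitals):
    """A toy 'model': a weighted sum over a few chart variables,
    clipped to the 0-1 range. Real systems like CHARTWATCH use
    ~100 variables and weights learned from historical data."""
    weights = {"heart_rate": 0.004, "resp_rate": 0.02, "lactate": 0.15}
    raw = sum(weights[k] * vitals.get(k, 0.0) for k in weights)
    return min(raw, 1.0)

# A patient with tachycardia, rapid breathing, and elevated lactate
# lands in the "high" tier under these invented thresholds.
patient = {"heart_rate": 118, "resp_rate": 28, "lactate": 3.1}
tier = risk_tier(toy_score(patient))
```

The value of tiering a continuous score is operational: a three-level output maps cleanly onto escalation pathways (routine monitoring, closer observation, ICU consult) in a way a raw probability does not.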

There has been a lot of hype that AI will replace all our jobs and that it is something to fear. Yet there are several well-documented inefficiencies within global healthcare systems: human resource and staffing shortages, poor interoperability, and the burden of medical documentation, which pulls clinicians away from spending valuable time with patients. These inefficiencies are costing healthcare systems both money and lives, and using AI to help solve these real challenges will be imperative in the future.

I recently finished reading Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again by Eric Topol.

The book offers interesting insights into how AI can shape the future of healthcare. Topol includes personal stories, as well as stories from others, about times when the healthcare system failed to take his medical history into account and about patients who were misdiagnosed. He uses these stories to show how, had AI been present, it might have surfaced important information to help clinicians diagnose accurately.

He argues that medical specialties centred on the clinician identifying patterns are ripe for AI adoption, and he stresses using AI to enhance diagnostics rather than the notion that clinicians will be replaced. The book quotes Michael Recht and Nick Bryan: “We believe that machine learning and AI will enhance both the value and the professional satisfaction of radiologists by allowing us to spend more time performing functions that add value and influence patient care and less time doing rote tasks that we neither enjoy nor perform as well as machines.”

I believe this is a key function of AI’s eventual future: not to replace or take over a clinician’s role, but to let clinicians return to the face-to-face care that has been declining, shifting the focus from entering data into EHR (Electronic Health Record) systems back to human connection. The book notes that it takes more than twenty hours to train clinicians to use an EHR, which showcases the complexity of these systems; this makes sense, as EHRs were designed for billing purposes and not as an extension of care.

The book also highlights how much information we miss when it comes to personalized care, such as incorporating a patient’s genomic data into a diagnosis. One example of the power of genomic data involves an infant with constant seizures whose CT scan appeared normal. When a blood sample was sent to Rady Children’s Institute for Genomic Medicine, natural language processing helped identify that a gene called ALDH7A1 might be causing a metabolic defect leading to the seizures, and the infant was successfully treated with dietary supplements! This links back to several blog posts I have written on the challenges of interoperability: connecting data from the labs to the hospital to the doctor’s office gives us a holistic view of a patient’s healthcare data and helps clinicians make better decisions, and the hope is that AI can improve this in the future.

We can’t talk about the benefits of AI without addressing the downsides that need to be managed alongside innovation. A lack of understanding of how these models work, driven in part by the proprietary nature of the companies developing these algorithms, has given rise to a new field called Explainable AI, which seeks ways to help others understand what is under the hood of these algorithms, especially when they are used to diagnose patients from historically marginalized groups.

When building algorithms for use in healthcare, the data fed into the model must be analyzed, understood, and its biases uncovered before deploying systems that have real-world implications for marginalized groups. For example, a paper published in Science examined a widely used healthcare algorithm that disproportionately flagged White patients over Black patients for follow-up care, even when the Black patients were just as sick. The algorithm used healthcare cost as a proxy for healthcare need, and typically less money is spent on Black patients: Black populations across America generally have lower incomes than White populations, are less likely to have insurance, and may be more likely to miss appointments (due to transportation challenges, for example), all of which results in lower overall healthcare costs regardless of how sick a patient is. Because the algorithm focused primarily on cost, Black patients made up only about 18% of those it flagged for follow-up care; when the researchers instead ranked patients by how sick they actually were, that share jumped from 18% to roughly 46%.

This issue was only uncovered when doctors and researchers took the time to delve into the data and decode the algorithm; it is estimated the algorithm was used to suggest follow-up care for 70 million patients. Racial bias in algorithm design is widely documented and will continue to negatively affect our patient populations if we are not careful to understand the parameters and data we feed into algorithms, and to ensure we are not encoding our own biases or a lack of understanding of the populations being served.
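The core failure here — choosing past cost as the training label instead of actual illness — can be illustrated with a small, fully synthetic sketch. The numbers below are invented: two groups are constructed to be equally sick, but historical spending on group B is set lower, so ranking by cost under-selects them while ranking by illness does not.

```python
# Synthetic data: for each illness level 1..10, one patient from each
# group. Group B patients are just as sick, but less was spent on them.
patients = []
for i in range(10):
    conditions = i + 1                                     # illness severity
    patients.append(("A", conditions, conditions * 100))   # full spend
    patients.append(("B", conditions, conditions * 45))    # reduced spend

def flagged_share(key, group, k=10):
    """Share of `group` among the top-k patients ranked by `key`."""
    top = sorted(patients, key=key, reverse=True)[:k]
    return sum(1 for p in top if p[0] == group) / k

# Ranking by past cost (index 2) under-selects group B for follow-up...
by_cost = flagged_share(lambda p: p[2], "B")   # 0.3

# ...while ranking by actual illness (index 1) selects both groups evenly.
by_need = flagged_share(lambda p: p[1], "B")   # 0.5
```

The model itself need not be “racist” for this to happen: the bias enters entirely through the choice of label, which is why auditing what an algorithm is actually optimizing matters as much as auditing its inputs.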

Recently, here in Canada, the race correction factor has been removed from the algorithm used to estimate kidney function. In Canada, Black people are at elevated risk of developing kidney disease, experience more rapid progression of kidney disease, and are referred later for kidney care; the widespread use of this correction may have contributed, since it overestimates kidney function when Black race is used as a factor. Removing it is a first step toward more equitable care delivery.
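To show how a single race coefficient shifts the estimate, here is a sketch of the 2009 CKD-EPI creatinine equation, the version that included the now-removed ×1.159 Black race multiplier. The coefficients follow the published 2009 equation, but this is purely illustrative code, not something to use clinically.

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black):
    """2009 CKD-EPI creatinine equation (mL/min/1.73 m^2).

    Includes the since-removed Black race coefficient (x1.159),
    which inflates the estimate and can delay referral for care.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Identical lab value and demographics; only the race flag differs.
base = egfr_ckd_epi_2009(1.4, 60, female=False, black=False)
with_race = egfr_ckd_epi_2009(1.4, 60, female=False, black=True)
# with_race is ~16% higher, so the same patient looks healthier on
# paper and may cross referral thresholds later.
```

Because referral decisions hinge on fixed eGFR cutoffs (for example, specialist referral or transplant listing thresholds), a uniform 15.9% inflation directly translates into later referrals for the patients it is applied to.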

I also read an amazing book called Unmasking AI by Dr. Joy Buolamwini.

Dr. Buolamwini opens the book with her experience of a concept she calls the “coded gaze”. While writing code for facial analysis software, she installed software to track the movement of her face, but the effort failed; she was only able to continue with her project after donning a white mask. This was because the dataset used to build the software had not been trained on a broad range of skin tones and could not recognize her darker skin. Throughout the book, Dr. Buolamwini shows how biased datasets lead to biased algorithms that can create life-or-death situations for marginalized groups, from self-driving cars failing to recognize people with darker skin tones to hiring algorithms at tech companies rejecting résumés with female names. I think the role of the AI ethicist will be an important one in the future, and it is critical to involve community stakeholders, e.g., Black in AI, when designing algorithms and assessing their implications for marginalized groups.

Another area where AI will have an impact is drug design and discovery. Drug development can take upwards of 10 years and billions of dollars before a drug reaches the general public. The COVID-19 vaccine showed how quickly a treatment can come to market and be made available to the public, and that accelerated timeline might become the norm with the use of AI during the drug design phase. Exscientia is one of the companies leading the way, integrating AI models with lab experiments to predict how drugs might behave in a patient’s body without the patient having to undergo extensive testing in the early stages of development. AI is also used to mine the literature to understand how a drug might behave in a patient given their genetic history.

There are several exciting ways AI can and will impact the healthcare field. Of course, considerations remain around reducing bias within these models and carrying out ethical clinical research studies, but AI’s power to reduce redundancies and speed up processes is evident, and the hope is that it will improve patient outcomes overall and save lives!

