AI From a BME Perspective - Bridging Innovation and Responsibility in Healthcare

Check out the most recent blog on MedWrench, written by Daniel Milewski Perdomo.

Mon Jul 15 2024 | By Daniel Milewski Perdomo

The medical world is changing rapidly, with daily advances in materials science, genomic engineering, and, above all, artificial intelligence (AI). The rapid adoption and improvement of large language models such as OpenAI’s ChatGPT highlight that AI is becoming an ever-greater presence in the lives of innumerable people. 

Biomedical engineering technologists (BMEs) are in a unique position: they use interdisciplinary approaches to reduce the risk of an adverse event harming a patient or medical device user. By integrating knowledge from mechanical, electrical, medical, and social disciplines, they help improve patient outcomes. 

Adding artificial intelligence to the BME skill set may prove vital in the near future as AI and its applications become further ingrained in modern society. This new and ever-evolving technology has some exciting potential uses for biomedical engineering technologists. 

Some of the most exciting are better and faster medical device self-diagnostic capabilities, more capable computerized maintenance management systems (CMMS), more efficient capital planning, and automated or semi-automated part ordering driven by the needs of the healthcare setting. These are just a few examples of the far-reaching positive impact AI could have on BMEs. 
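To make the part-ordering idea concrete, here is a minimal sketch of the rule-based skeleton such a system might automate. Everything in it is a hypothetical illustration: the work-order fields, the stock dictionary, and the reorder threshold are assumptions, not any real CMMS interface, and a production system would layer AI-driven usage forecasting on top of logic like this.

```python
# Hypothetical sketch: flag replacement parts to reorder based on
# recent CMMS work-order history. All field names and thresholds
# are illustrative assumptions, not a real CMMS API.
from collections import Counter

def parts_to_reorder(work_orders, stock, min_stock=2):
    """Return parts whose recent usage leaves on-hand stock below a threshold."""
    used = Counter()
    for order in work_orders:
        for part in order.get("parts_used", []):
            used[part] += 1
    # Reorder when stock minus recent usage falls below min_stock.
    return sorted(p for p in used if stock.get(p, 0) - used[p] < min_stock)

orders = [
    {"device": "infusion pump", "parts_used": ["battery", "tubing clamp"]},
    {"device": "infusion pump", "parts_used": ["battery"]},
]
stock = {"battery": 3, "tubing clamp": 5}
print(parts_to_reorder(orders, stock))  # ['battery']
```

The value an AI model would add is replacing the fixed usage count with a prediction of future demand per device fleet; the surrounding ordering workflow stays the same.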

However, potential drawbacks and complications may arise if AI is implemented too rapidly or without rigorous regulations and standards. Using artificial intelligence, especially in a healthcare setting, will always raise moral, ethical, and legal questions about whether AI can truly deliver better patient outcomes and whether it should be implemented at all. 

In a clinical setting, when a patient is involved in an adverse event, a team is created to identify the parties involved and, ultimately, the person or group at fault. This process is important both for deciding legal culpability and for developing new standards and approaches so the adverse event is not repeated. Assigning responsibility to a specific person, group, or medical device makes it possible to develop the appropriate risk-mitigation measures. With AI, however, it becomes much harder to find the person or persons responsible for an incident. This concern has recently been highlighted by self-driving cars and airplane autopilot, and it is beginning to emerge in medicine as AI is used to diagnose certain conditions. The central source of unease is this: if a medical device with AI capabilities causes patient harm, who is at fault? The clinician? The hospital? The company that designed the device? The person who wrote the algorithm? Answering these questions will become crucial in the coming years and decades. 

Another challenge that needs to be addressed is the accuracy of the AI. If an AI is only marginally more accurate than a doctor, patients may still choose the human. But as time passes and the gap between machine and man widens, will people still choose a human who is prone to error, or a cold, emotionless, but more accurate artificial intelligence? 

The complications discussed so far have not highlighted the specific problems a BME might face. One example is a machine that self-diagnoses incorrectly: if a BME relies too heavily on the machine’s self-diagnosis, ineffective or harmful medical devices may slip through the cracks and cause patient harm. 

A problem that is becoming far more common is cybersecurity. If medical devices become increasingly intelligent and must be interconnected through the clinical environment’s intranet to access their full AI capabilities, the result may be more vulnerabilities that nefarious agents can exploit. These agents could demand a ransom or leak sensitive patient information, causing millions if not billions of dollars in damages. 

Another problem a BME might face is a “walled garden.” These may already exist in some clinical settings, where only certain brands are used because they deliver a better user experience and a built-up ecosystem in which devices from the same brand work better with one another. If AI is added to this, monopolies may form: the company with the best overall AI gains complete dominance, stagnating competition and slowing improvement of medical devices as a whole.

To conclude, artificial intelligence holds immense opportunity and virtually limitless upside if integrated responsibly and ethically. It is important to understand all the potential complications of integrating AI into the medical field, especially from the perspective of a BME.
