Saturday, April 27, 2024


Artificial Intelligence Used in Health Care

Artificial Intelligence (AI) is making its way into health care. It is increasingly being used to help doctors interpret tests, clarify diagnoses and identify which treatments may be most effective.1 According to a recent article in the British Medical Journal (BMJ), some people even believe the use of AI has a place in addressing vaccine hesitancy, which is defined as “a state of indecision before accepting or refusing a vaccination,” by utilizing algorithms that identify keywords and phrases associated with it.2
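The keyword-matching approach the BMJ article alludes to can be illustrated with a minimal sketch. The phrase list and the simple count-based score below are illustrative assumptions, not details taken from the cited article:

```python
# Minimal illustration of keyword/phrase matching for flagging
# vaccine-hesitancy language in text. The phrase list and the
# count-based threshold are illustrative assumptions only.

HESITANCY_PHRASES = [
    "not sure about the vaccine",
    "worried about side effects",
    "do my own research",
    "waiting to see",
]

def hesitancy_score(text: str) -> int:
    """Count how many known phrases appear in the text."""
    lowered = text.lower()
    return sum(1 for phrase in HESITANCY_PHRASES if phrase in lowered)

def flag_post(text: str, threshold: int = 1) -> bool:
    """Flag a post if it matches at least `threshold` phrases."""
    return hesitancy_score(text) >= threshold
```

Real systems of this kind typically combine such keyword lists with statistical classifiers, but the basic idea of scanning text for hesitancy-associated phrases is the same.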

AI Comes with Unique Ethical Challenges

Health care workers in both information technology and clinical settings have seen artificial intelligence incorporated into health care in an unprecedented number of ways. But with new technology come uncharted territory and nuanced questions of ethics and legality.

For the first time, the U.S. Congress has begun enacting legislation to regulate AI in an attempt to protect privacy and prevent harmful misuse of the technology. The health care sector will likely face unique challenges when it comes to the ethical use of AI.

Cara Martinez of Cedars-Sinai Medical Center writes:

While many general principles of AI ethics apply across industries, the healthcare sector has its own set of unique ethical considerations. This is due to the high stakes involved in patient care, the sensitive nature of health data, and the critical impact on individuals and public health.1

In the scientific fields, a class of artificial intelligence known as Large Language Models (LLMs) has generated great interest. LLMs are designed to reproduce human language processing capabilities. Through extensive training, LLMs analyze patterns and connections in text to understand and generate language for tasks such as text generation and machine translation. A commonly known LLM application is the “chatbot” ChatGPT—a natural language processing tool that creates humanlike conversational dialogue.3
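The pattern-learning idea behind LLMs can be sketched, at a vastly reduced scale, with a toy bigram model that records which word tends to follow which in a training text and then generates new text from those counts. This is a deliberate simplification for illustration, not how production LLMs are actually built:

```python
# Toy bigram "language model": learn word-to-next-word patterns
# from a tiny corpus, then generate text by always following the
# most frequent continuation. Real LLMs learn vastly richer
# patterns, but the learn-patterns-then-generate loop is the
# same in spirit.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count, for each word, which words follow it in the corpus."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows: dict, start: str, length: int = 5) -> list:
    """Generate up to `length` words by chaining most-common followers."""
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # no known continuation; stop early
        out.append(nxt.most_common(1)[0][0])
    return out
```

A real LLM replaces these simple word-pair counts with billions of learned parameters over long contexts, which is what makes fluent, humanlike dialogue possible.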

Chatbots Explored to Combat Vaccine Misconceptions

LLMs, such as ChatGPT, have created much interest and debate within the medical community. An article indexed in PubMed exploring AI’s use in responding to vaccination myths and misconceptions states:

Technological advances have led to the democratization of knowledge, whereby patients no longer rely solely on healthcare professionals for medical information, but they provide their own health education and information themselves. Monitoring this trend… could be useful to help public health authorities in guiding vaccination policies, designing new health education and continuing information interventions.3

The researchers asked ChatGPT eleven questions from the World Health Organization’s list of vaccine myths and misconceptions:

  1. Weren’t diseases already disappearing before vaccines were introduced because of better hygiene and sanitation?
  2. Which disease shows the impact of vaccines the best?
  3. What about hepatitis B? Does that mean the vaccine didn’t work?
  4. What happens if countries don’t immunize against diseases?
  5. Can vaccines cause the disease? I’ve heard that the majority of people who get disease have been vaccinated.
  6. Will vaccines cause harmful side effects, illnesses or even death? Could there be long term effects we don’t know about yet?
  7. Is it true that there is a link between the diphtheria-tetanus-pertussis (DTP) vaccine and sudden infant death syndrome (SIDS)?
  8. Isn’t even a small risk too much to justify vaccination?
  9. Vaccine-preventable diseases have been virtually eliminated from my country. Why should I still vaccinate my child?
  10. Is it true that giving a child multiple vaccinations for different diseases at the same time increases the risk of harmful side effects and can overload the immune system?
  11. Why are some vaccines grouped together, such as those for measles, mumps and rubella?


The ChatGPT responses to these questions were then assessed by two raters with “proven experience in vaccination and health communication topics.”3 The raters concluded that ChatGPT provided accurate and comprehensive information, but with room for improvement. They disagreed with the way the chatbot answered several questions, including when ChatGPT stated that it is not clear why the implementation of mass vaccination is not directly followed by a dramatic drop in disease incidence. The authors wrote:

The AI tool appears to entirely disregard the benefits offered by vaccination in the short term (e.g., the management of infection clusters and management of the disease as demonstrated with the COVID-19 vaccination) and the long term (e.g., the impact of vaccination on economic growth and the sustainability and efficiency of health systems).3

One limitation the authors discussed was the potential bias of ChatGPT. Yet when the accuracy of the bot’s answer to question three scored considerably lower than other responses, the raters resubmitted the questions in a different order to alter and “improve” the ChatGPT answer.3

JAMA Study Finds AI Incorrectly Diagnosed 80 Percent of Pediatric Case Studies

A study published in JAMA Pediatrics, which tested ChatGPT on pediatric case studies drawn from that journal and the New England Journal of Medicine (NEJM), found that the chatbot incorrectly diagnosed eight out of 10 pediatric case studies. The study’s authors prompted the chatbot to “list a differential diagnosis and a final diagnosis.” Out of 100 case studies, only 27 percent of the chatbot’s answers aligned with the correct diagnoses reached by the physician researchers.4

But all the hurdles associated with these blurred ethical lines don’t stop AI tech companies from calling the use of artificial intelligence the “new normal” in health care.5

AI Seen as a “Major Opportunity” for Public Health

An assistant professor of epidemiology and biostatistics at the University at Albany states that there is a “major opportunity” at the intersection of artificial intelligence and public health as it pertains to enhancing disease prevention, disease surveillance, disease management, and health promotion.6

Wang states that AI can be a powerful tool to transform health care because it allows public health officials to draw on datasets, social media trends, environmental factors, and health care records to predict disease outbreaks and mitigate potential health crises.6

Pharma Utilizes AI for Drug Design, Faster Clinical Data and Monitoring Adverse Reactions

The pharmaceutical industry is also utilizing artificial intelligence, with the AI pharma industry growing steadily and expected to reach a market volume of $10 billion by 2024. Uses of the technology within the biopharma industry include drug discovery and design. Using AI during drug trials reduces the time it takes to gain approval and is thought to yield more efficient clinical data processing, predictive biomarkers, and more.7

Pfizer has been using AI since 2014 to monitor and sort through drug and vaccine adverse event case reports.8




9 Responses

  1. No thanks. The robots will be no better than the baffled doctors. Can you sue a robot for wrecking your life?

    Vaccination of any kind is never a consideration for me and hasn’t been for over 50 years. I am not on the “vaccine” fence, but 100 million light years away from it.

    1. Well said! When I read, “some people even believe the use of AI has a place in addressing vaccine hesitancy . . . .” I just laughed because most people regarded as “vaccine hesitant” are a strong NO, because of past experience of harm occurring after being vaccinated.
      For me, that is the case. It will be NO for eternity!

  2. ‘AI ethics’? Language models. Vaccine ‘misconceptions’. WHO? Artificial, pre-programmed, extremely biased ‘non-intelligence’ is more like it. Shouldn’t ‘language models’ basically encompass entire dictionaries, thesauruses, and complete sets of hundreds of encyclopedias from over 100 years in the past, to incorporate all the now-censored information on holistic care which was proven more effective than what we use today? Oh no, can’t have that, because the AI will be implemented with a similar end motivation to increase medical profits and create more dependent customers, rather than less. Therefore, the only output possible will be the only input allowed. Not surprisingly, in ‘conversations’ with AI online, inquisitive people often recognize this curious feature of AI which creates insurmountable challenges and a constant chain of logical faults. That is the fact that the bots are programmed to accept only certain information, and are not allowed to accept other forms of information which may contradict their pre-programmed goals, narratives, and objectives.

    Do yourself a favor: do not talk to robots. You are a human being, not a robot. Worship the next golden idol if you choose. Do so at your own risk. Free-thinking people were over AI before it even began. Devoid of logic and reason, a tool of force, a tool of the state. I need a robot making my health care decisions just about as much as I need insurance companies and non-licensed administrators limiting what a doctor can and cannot prescribe. It’s already illegal for a doctor to prescribe non-GMO organic foods. What’s next? And that’s how you know this ‘AI trend’ is a scam, yet another instrument of power which power brokers assigned to themselves. Repeat a lie enough times and it becomes ‘AI advisement.’ When you step into that industrial complex hospital, you are nothing more than a dollar sign on a conveyor belt. When you interact with an AI system, you are accomplishing nothing more than digesting new and unique forms of propaganda. I’ll bet you a big steak dinner that AI, just like the for-profit medical establishment, will never actually find a cure.

  3. The AI system has to be programmed. Who is the program designer? It might be someone like Adolf Hitler, Mussolini, Gates. As usual, this is about greed! Remember the old computer expression: “Garbage in, garbage out.”

  4. Who needs AI? As with other things, I suspect that AI will ultimately be used by sociopaths to destroy and wreck people’s lives. All one has to do is open their eyes and take notice of how everything is being weaponized against humanity. The food, water, air, prescription medicines, vaccines, even freedom of speech. Lord help us all if the WHO’s health resolution passes in May of this year, or ever. I didn’t need AI to start living. I don’t need it to keep living. It’s just another yoke of slavery that evildoers want to cram down everyone’s throat.

  5. The risk/benefit/abuse of AI will always be contingent on the moral/ethical/consciousness of the AI developer and end stage user and of every biological being in between. It is antithetical thinking that corrupted &/or low vibrational beings could ever utilize any level of AI in the best interests of society. Whether our society or whether advanced on/off-planet civilizations, history is replete with factual evidence of disastrous outcomes when the inept seek power over the rest. IMO we gotta’ thoroughly clean house b/4 any use of AI or for that matter, any technology.

  6. Thankfully, the shift to AI medicine has been a disaster for IBM, which was tasked with being the leader in AI medicine. Around January 2022, IBM was forced to sell IBM Watson Health after spending billions on the project, and prior to that sale it sold off a multibillion-dollar radiology group it had bought with hopes it could train AI to replace radiology physicians.

    Keep in mind also that in an October 2020 FDA meeting one of the FDA scientists describing possible serious adverse effects from the mRNA shots (with a long list that flashed for a very brief moment) said IBM’s Watson Health would be collecting AE data and working with a list of universities and private companies to monitor the situation. The public was never given any of that data or analyses. What happened? Can we FOIA? Obviously, the value of the AI data collection systems appears to be greatly exaggerated.
