Hardly a day passes without a report of some new, startling application of Artificial Intelligence. Two recent articles in the journal Nature described its application to weather forecasting, which, currently, is difficult and time-consuming because meteorologists individually analyze weather variables such as temperature, precipitation, pressure, wind, humidity, and cloudiness. However, new AI applications can significantly speed up the process.
The first article describes how a new AI model, Pangu-Weather, can predict worldwide weekly weather patterns much more rapidly than traditional forecasting methods but with comparable accuracy. The second demonstrates how a deep-learning algorithm could predict extreme rainfall more accurately and quickly than other methods.
A July 5th article in Technology Review magazine offered some examples of how AI is advancing various scientific disciplines:
Scientists at McMaster and MIT, for example, used an AI model to identify an antibiotic to combat a pathogen that the World Health Organization labeled one of the world’s most dangerous antibiotic-resistant bacteria for hospital patients. A Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution. Within health care, the US Food and Drug Administration has already cleared 523 devices that use AI—75% of them for use in radiology.
Less momentous but fascinating was a recent article by Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, which described the ability of the newest, "multimodal" version of the chatbot GPT-4 to "see," "hear," and "understand" what is presented to it. In an experiment in which the chatbot is asked to design a new, trendy women's shoe, it offers several possible alternatives and then, when prompted, serially and skillfully refines the design.
The chatbot can't (yet) do everything perfectly, however. I love this passage from Mollick's article: "I also gave it the challenge of coming up with creative ideas for foods in my fridge based on an original photo (it identified the items correctly, though the creative recipe suggestions were mildly horrifying)."
AI is also being applied to military intelligence and strategy. In the fall of 2021, when experts were still undecided about Russian President Vladimir Putin's intentions toward Ukraine, AI had already accurately predicted the invasion. As described in an article in Foreign Policy, analysts used AI to aggregate small but significant pieces of data that together made an accurate prediction possible. The details included the observation that weapons systems that had been moved to the border regions in 2021, for what Russia claimed were military drills, remained there, as if pre-positioned for future forward advances. Even Russian officers’ spending patterns were factored in: Their purchasing goods "at local businesses made it obvious they weren’t planning on returning to barracks, let alone home, anytime soon."
Scripps Research cardiologist Eric Topol recently reviewed several promising medical applications of AI, from making never-before-possible diagnoses from chest X-rays to replacing the human scribes who summarize patients' office visits. The influential Mayo Clinic, the largest integrated nonprofit medical practice in the world, has created more than 160 AI algorithms in cardiology, neurology, radiology, and other specialties, 40 of which are already employed in patient care.
I've become an AI fan, especially after my own interaction with it last month. Let me explain...
Like most physicians, I’m a great believer in preventive medicine. Along with being strongly pro-vaccine and a believer in regular mammograms, blood pressure monitoring, and specific blood tests, I also endorse colonoscopies to detect early cancers in the colon or rectum. I recently came face to face (so to speak) with AI during my colonoscopy.
And yes, I know that colonoscopies are a hard sell because the procedure, or, to be more accurate, the required preparation for it -- the “cleanout” -- isn’t pleasant. However, it’s important; it could save your life.
According to the NIH, colorectal cancer is the third most common cancer in the United States. It is insidious because it can progress for a long time before it becomes symptomatic, and by then it is harder to treat. Colorectal cancer usually starts from polyps or other precancerous growths in the rectum or the colon. As part of screening, clinicians perform colonoscopies to detect changes or abnormalities in the lining of the colon and rectum. A colonoscopy involves threading an endoscope -- a thin, flexible tube with a camera at the end -- through the rectum and along the entire length of the colon, allowing the doctor to see signs of cancer or precancerous lesions. (The patient is anesthetized, so it’s not painful or even uncomfortable.)
Although colorectal cancer now mainly occurs in people over age 50, incidence rates are rising among young adults, for whom incidence and death rates are projected to double by 2030. By then, it is estimated that more than 1 in 10 colon cancers will be diagnosed in people younger than 50. Colonoscopy screenings, which should begin at age 45, may reduce colorectal cancer mortality by 60-70%.
My recent routine screening colonoscopy was notable in two ways. First, the gastroenterologist – the specialist who performs the procedure – prescribed a new cleanout regimen called SUTAB, which consists largely of tablets, in place of the old regimen, which required drinking vast amounts of a disgusting liquid. It was still no picnic but was somewhat more palatable, literally and figuratively. The less said about that, the better…
The second notable aspect of the experience was something I learned from chatting with the gastroenterologist. While we were discussing the new frontier of Artificial Intelligence’s contributions to medicine, he mentioned that he and his colleagues had begun to use a new AI tool called “GI Genius” to assist with colonoscopies – specifically to help detect abnormalities, such as polyps or adenomas (precancerous lesions), in the colon.
Here's how it works, according to the FDA:
The GI Genius is composed of hardware and software designed to highlight portions of the colon where the device detects a potential lesion. The software uses artificial-intelligence algorithms to identify regions of interest. During a colonoscopy, when the GI Genius system identifies a potential lesion, it generates markers -- green squares accompanied by a short, low-volume sound -- and superimposes them on the video from the endoscope camera. These markers signal to the clinician that further assessment may be needed, such as a closer visual inspection; tissue sampling, testing, or removal; or ablation (burning) of the lesion.
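The overlay logic described above can be sketched in a few lines: a vision model proposes candidate regions with confidence scores, and only the confident ones become on-screen markers. This is a toy illustration, not the actual GI Genius software; the function name, data layout, and the 0.5 threshold are all assumptions made for the example.

```python
# Toy sketch of an AI detection overlay: keep candidate regions whose
# model confidence clears a threshold, and describe a green-square
# marker (plus alert tone) for each one. Purely illustrative; not the
# real device's code or thresholds.

def markers_for_frame(detections, threshold=0.5):
    """detections: list of dicts, each with a bounding 'box' (x, y, w, h)
    and a 'confidence' score between 0 and 1."""
    markers = []
    for det in detections:
        if det["confidence"] >= threshold:
            markers.append({
                "box": det["box"],          # where to draw the green square
                "color": "green",
                "play_alert_sound": True,   # the short, low-volume tone
            })
    return markers

# Example frame: two candidate regions; only the confident one is marked.
frame_detections = [
    {"box": (120, 80, 40, 40), "confidence": 0.91},
    {"box": (300, 200, 25, 25), "confidence": 0.22},
]
print(markers_for_frame(frame_detections))
```

The clinician still makes the call; the marker only says "look more closely here," which is why a low threshold (more markers, more false positives) trades off against a high one (fewer markers, more missed lesions).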
The FDA’s Center for Devices and Radiological Health approved it in April 2021 based on a multicenter, prospective, randomized, controlled study in Italy with 700 subjects 40-80 years old undergoing a colonoscopy for colorectal cancer screening. In the study, colonoscopy plus GI Genius identified lab-confirmed adenomas or carcinomas in 55.1% of patients, compared to 42.0% of patients with standard colonoscopy, an absolute improvement of about 13 percentage points.
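A quick check of the study arithmetic, using the detection rates reported above (the variable names are mine):

```python
# Detection rates from the Italian trial described above.
with_ai = 55.1     # % of patients with a confirmed adenoma or carcinoma found
without_ai = 42.0  # % found with standard colonoscopy alone

absolute_gain = with_ai - without_ai        # in percentage points
relative_gain = absolute_gain / without_ai  # as a fraction of the baseline

print(f"Absolute improvement: {absolute_gain:.1f} percentage points")
print(f"Relative improvement: {relative_gain:.0%}")
```

Put another way, the AI-assisted arm found roughly 30% more patients with lesions than the baseline rate alone would have, which is why the result drew attention despite the modest-sounding absolute figure.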
In subsequent clinical studies, the module showed a sensitivity of 99.7% with fewer than 1% false positives. My doc said that he occasionally found polyps that GI Genius missed, and vice versa, but that the module was getting smarter and more accurate as more colonoscopy examples were fed into its database.
Welcome to the Brave New World of AI.