Unintended Consequences – Part 2


Part 2 of 2

Unintended Drawbacks
Automobile Navigation Systems

An interesting example of a drawback, an unexpected outcome that occurs alongside a desired result, comes from automobile navigation systems. The Global Positioning System can pinpoint the geographical location of a GPS-equipped device to within a few feet. And automotive GPS navigation systems include artificial intelligence in the form of “route finders”: when you give the system a desired destination, it calculates the route you should take to get there.
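To make the idea concrete, here is a minimal sketch of the kind of shortest-path search a route finder performs. Everything in it is invented for illustration (the toy road network, the place names, the minute-valued edge weights); real route finders search enormous road graphs with many more cost factors.

```python
import heapq

def quickest_route(graph, start, goal):
    """Dijkstra's shortest-path search over a weighted road graph.

    graph maps each node to a list of (neighbor, travel_minutes) edges.
    Returns (total_minutes, path) or (inf, []) if the goal is unreachable.
    """
    queue = [(0.0, start, [start])]   # (cost so far, node, path taken)
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical toy network; edge weights are driving minutes, not real data.
roads = {
    "Toronto":       [("Kingston", 160), ("Watertown US", 170)],
    "Kingston":      [("Montreal", 150)],
    "Watertown US":  [("Montreal", 180)],   # the parallel U.S.-side route
    "Montreal":      [("Quebec City", 155)],
}
print(quickest_route(roads, "Toronto", "Quebec City"))
# -> (465.0, ['Toronto', 'Kingston', 'Montreal', 'Quebec City'])
```

Whether the result is the “quickest” or the “shortest” route comes down to which edge weight the search minimizes, travel time or distance, which is one reason the two options can disagree.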

The problem with this A.I. is that it is not very intelligent. In some instances, you can ask the navigation system to select the quickest route or the shortest route, but this has its drawbacks. I once drove from Toronto to Quebec City, a distance of some 800 km, and entered the address of my hotel in Quebec City as the destination. The optimal route is a single highway, the direct route, which travels along the U.S.-Canada border that bisects Lake Ontario and the St. Lawrence Seaway. Yet every time I approached an exit to the U.S., the GPS system urged me to switch to the U.S. side and take a parallel highway there. This was hardly optimal, as I would have had to cross the border twice to reach my destination.

A more aggravating problem is that most navigation systems have little or no information about the state of traffic on the roads, or the presence of construction, accidents, detours and so on. Although so-called traffic monitoring zones (TMZs), which broadcast traffic information to your GPS, are springing up, the data they provide are often unavailable or unreliable.

There is, however, one innovative technology that makes navigation-assisted travel easy and efficient. It’s called “Waze”. Waze is an approach that uses augmented human intelligence, rather than artificial intelligence. It lets drivers benefit from real-time information gathered from other drivers.

Like all navigation systems, Waze uses maps and GPS. But Waze does something different: it tracks the position of drivers in real time. By calculating how drivers’ positions change over time, Waze can detect and display traffic congestion. What’s more, drivers using Waze can enter road information into the system as they drive: things like pothole locations, road closures, civic protests, accidents, breakdowns, cars pulled over on the shoulder, gas station prices and, yes, police traps. These appear as icons on your navigation map. It’s real-time feedback from thousands, even millions, of your fellow drivers, and it works marvelously.
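Waze’s algorithms are proprietary, but the core inference is simple to sketch: if the successive GPS fixes of drivers on a road segment imply speeds well below normal, the segment is congested. A minimal illustration, with invented ping data and an arbitrary 40 km/h threshold:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two GPS fixes."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

def segment_speed_kmh(pings):
    """Average speed implied by one driver's successive (t_seconds, lat, lon) pings."""
    dist = sum(haversine_km(p[1], p[2], q[1], q[2])
               for p, q in zip(pings, pings[1:]))
    elapsed_h = (pings[-1][0] - pings[0][0]) / 3600.0
    return dist / elapsed_h if elapsed_h > 0 else 0.0

# Hypothetical pings from one driver; a real system aggregates many drivers.
pings = [(0, 43.700, -79.400), (60, 43.706, -79.390), (120, 43.712, -79.380)]
speed = segment_speed_kmh(pings)
print("congested" if speed < 40 else "flowing", round(speed, 1), "km/h")
```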

The police, however, are not impressed. Sheriffs and police unions claim that Waze is being used to endanger police officers[1]. They say that with the “Police” button, Waze can be used to alert felons to the presence of law enforcement, allowing them to evade capture or, alternatively, to target officers in the field. As far as unintended consequences go, this is both a drawback and an unexpected benefit, depending on which side of the law you stand on.

[1] “How Google is endangering police officers”, By David A. Clarke Jr. and Jonathan Thompson, CNN, April 26, 2015

Automated Cancer Reporting

Another example of an unexpected drawback concerns AIM’s own industry-leading automated cancer reporting system, called E-Path, and its rapid case ascertainment capability. E-Path uses natural language processing to “read” electronic pathology reports as they are released from laboratory information systems to clinicians. The system is highly sensitive and specific, and does a better job of identifying and reporting cancers than human review. It is also very fast. Essentially, E-Path can operate in real time: as soon as a report is released, E-Path can analyze it and immediately forward a copy of any reportable cancer to the hospital and/or state cancer registry.
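E-Path’s natural language processing is, of course, far more sophisticated than anything that fits in a few lines, but a toy sketch conveys the shape of the task: scan a report’s text for cancer terms while screening out obvious negations. The term list, negation window and decision rule below are simplifications invented purely for illustration:

```python
import re

# Toy vocabulary; a production system uses full NLP and a far richer
# reportability rule set, not keyword matching.
REPORTABLE = [r"\bcarcinoma\b", r"\badenocarcinoma\b", r"\bmelanoma\b",
              r"\blymphoma\b", r"\bmalignant\b"]
NEGATIONS = [r"\bno evidence of\b", r"\bnegative for\b", r"\bbenign\b"]

def is_reportable(report_text):
    """Crude check: a cancer term is present and not obviously negated."""
    text = report_text.lower()
    hit = any(re.search(p, text) for p in REPORTABLE)
    negated = any(re.search(n + r".{0,40}(carcinoma|melanoma|malignant)", text)
                  for n in NEGATIONS)
    return hit and not negated

print(is_reportable("Invasive ductal carcinoma, grade 2."))          # True
print(is_reportable("Negative for malignant cells; benign polyp."))  # False
```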

One of the benefits of rapid cancer reporting is that it permits identification of patients who could be candidates for specific types of research or clinical trials soon after diagnosis, before standard treatment might disqualify them from participation. This kind of rapid case ascertainment can be very useful in recruitment and subject accrual. But how rapid is too rapid?

On one occasion, our E-Path system alerted an investigator about a patient suitable for study participation, a case of pancreatic cancer, a rapidly progressing disease. The investigator proceeded to contact the patient to recruit them into the study, only to discover, shockingly, that the patient was not yet aware of the diagnosis. The automated cancer reporting system had alerted the investigator before the primary care provider could confer with the patient. Since then, we have configured artificial delay times in the E-Path reporting system to avoid such situations.
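The fix is conceptually simple: hold automated research alerts for a configurable interval so the care team can speak with the patient first. A minimal sketch of such a holding rule (the function names and the seven-day value are illustrative assumptions, not E-Path’s actual configuration):

```python
from datetime import datetime, timedelta

# Illustrative holding period; in practice the value would be set per site.
RESEARCH_ALERT_DELAY = timedelta(days=7)

def ready_to_notify(report_released_at, now=None):
    """Release a research-recruitment alert only after the holding period,
    giving the primary care provider time to inform the patient."""
    now = now or datetime.utcnow()
    return now - report_released_at >= RESEARCH_ALERT_DELAY

released = datetime(2016, 3, 1, 9, 30)
print(ready_to_notify(released, now=datetime(2016, 3, 5)))  # False: still held
print(ready_to_notify(released, now=datetime(2016, 3, 9)))  # True: delay elapsed
```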

Prescription Drugs

When it comes to unintended consequences, the pharmaceutical industry poses a conundrum. Almost all drugs have adverse side effects that were presumably unknown at the time the drug was developed and that only come to light through clinical trials and post-approval monitoring. What’s more, these adverse effects are often very serious and even life threatening. In the United States, they are tracked through the Food and Drug Administration’s Adverse Event Reporting System (FAERS). It is important to note that since reporting of adverse drug reactions is voluntary in the United States, the data are conservative estimates. Also, FAERS reports do not require proof of direct causality between a particular pharmaceutical and the reported effect.

Notwithstanding the limitations of FAERS data, the U.S. Food and Drug Administration estimates that approximately 2 million people suffer adverse drug reactions every year, and that about 100,000 of these result in death[1]. This makes adverse drug reactions the fourth leading cause of death in the U.S., ahead of Alzheimer’s disease, diabetes, pneumonia and automobile accidents. Interestingly, the death toll from consumption of illegal drugs, such as heroin, is only about one tenth of that attributed to legally prescribed drugs.

[1] “Preventable Adverse Drug Reactions: A Focus on Drug Interactions”, U.S. Food & Drug Administration, www.fda.gov, page last updated March 14, 2016

Cause of Death             Deaths per Year (U.S.)
Cardiovascular Disease     803,227
Cancer                     591,699
All Accidents              136,053
Motor Vehicle Accidents     35,398
Adverse Drug Reactions     123,927[1]
Alzheimer’s Disease         93,541
Diabetes                    76,488
Influenza & Pneumonia       55,227
Suicide                     42,773
Firearms                    33,599
Homicides                   15,809
Heroin Overdose             10,574[2]

[1] FDA Adverse Events Reporting System (FAERS) Electronic Submissions, www.fda.gov, page last updated November 24, 2015

[2] CDC/NCHS, National Vital Statistics System, Mortality File, 2015, last accessed Dec. 11, 2015

Despite what some have called an epidemic and a public health emergency, legally approved drugs with known adverse effects continue to be routinely prescribed. Why?

Donald Light, of the Edmond J. Safra Center for Ethics at Harvard University, points to systemic issues with the way pharmaceuticals are developed, marketed, approved and tested as the key causes[1]. He argues that meeting the needs of the pharmaceutical industry has taken priority over meeting the needs of patients, and that the situation will only get worse unless fundamental changes are made. These include: separating funding for clinical trials from their conduct, analysis and publication of results; introducing measures to discourage research and development of drugs that offer few clinical benefits; full public funding of the FDA; and creating a new national drug safety board[2].

The unintended consequences of prescription drugs stem, in significant part, from a societal belief that the professed benefits of a given drug outweigh its risks; a tenet that is ingrained in the industry, and promulgated by physicians and patient-directed advertising.

[1] “New Prescription Drugs: A Major Health Risk with Few Offsetting Advantages”, Donald W. Light, https://ethics.harvard.edu June 27, 2014

[2] “Institutional Corruption of Pharmaceuticals and the Myth of Safe and Effective Drugs”, Donald W. Light, Joel Lexchin and Jonathan J. Darrow, Journal of Law, Medicine and Ethics, June 1, 2013, Vol. 14, No. 3: 590-610.

Unintended Benefits
Prescription Drugs

Unfortunately, unintended benefits do not occur frequently. However, there are some stand-out examples, and Viagra is one. Sildenafil, the generic name for Viagra, was synthesized by chemists working at Pfizer’s laboratories in Kent, England. It was originally intended for treating hypertension (high blood pressure) and angina pectoris (a symptom of heart disease). Clinical trials were conducted at Morriston Hospital in Swansea, Wales. The Phase I trial results showed the drug had little effect on angina but could induce marked penile erections. As a result, Pfizer decided to market the drug for erectile dysfunction instead. The rest, as they say, is history. Viagra was patented in 1996 and went on to become hugely successful, with sales growing to over $1.9 billion in 2008. Ironically, a rare but serious adverse side effect of Viagra is severely low blood pressure.

Can Unintended Consequences be Avoided?

“Unintended consequences” is a term popularized by the American sociologist Robert K. Merton, who first wrote on the subject in a 1936 paper titled “The Unanticipated Consequences of Purposive Social Action”[1]. In it, he describes a wide range of human activity where things do not turn out as expected, and identifies several reasons why this occurs. First, he points to lack of knowledge as an obvious factor leading to the miscalculation of anticipated results. Second, he speaks to ignorance and error in appraising situations. Third, he identifies what he calls the “imperious immediacy of interest”, wherein a person’s focus on a particularly desirable result prevents consideration of other possible outcomes of the same action. Finally, he posits that basic values are often a factor, wherein certain courses of action are driven by fundamental values or tenets that preclude consideration of the consequences of those actions. Merton’s line of reasoning also led him to coin the phrases “self-defeating prophecy” and “self-fulfilling prophecy”, which describe specific types of unintended consequences.

Unintended consequences can be found in every field of human endeavor, and have been for as long as anyone has put their hand to purposeful action. As the poet Robert Burns wrote in 1785: “The best-laid schemes o’ mice an’ men / Gang aft agley, / An’ lea’e us nought but grief an’ pain”[2].

The root of the problem is that the universe is innately unpredictable; uncertainty is a fundamental feature of nature. In quantum theory, for example, there is no way to determine simultaneously both the exact position and the momentum of a subatomic particle. But even if we adopt the classical Newtonian view of the universe as a clockwork mechanism, where cause and effect are deterministic, we are still left with the issue of complexity. More often than not, the initial conditions and forces that drive a process are not completely known. This is the foundation of Chaos Theory, a branch of mathematics that deals with complex systems whose behavior is highly sensitive to initial conditions. Chaos can be found in almost every natural system, from weather patterns to climate change, to social behavior and, indeed, to road traffic. Unintended consequences, then, are simply the unpredictable results of chaotic systems, where the initial conditions intended to produce a desired outcome are not fully known, are mistakenly appraised, or are simply ignored.
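Sensitive dependence on initial conditions is easy to demonstrate with the logistic map, x_{n+1} = r·x_n·(1 − x_n), a textbook chaotic system at r = 4. In the sketch below, two trajectories that start one part in a billion apart agree early on and then diverge completely within a few dozen iterations:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)   # differs by one part in a billion
for n in (0, 10, 25, 50):
    print(f"n={n:2d}  a={a[n]:.6f}  b={b[n]:.6f}  |diff|={abs(a[n] - b[n]):.6f}")
```

By step 50 the two runs bear no resemblance to one another, even though both began with what any measuring instrument would call “the same” starting point.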

One could argue that the surest way to avoid unintended consequences is to abstain from purposeful action altogether: do nothing. Doing nothing, however, is an act in itself and may have negative repercussions; in fact, doing nothing can have the most unpredictable results of all. More realistically, avoiding unintended consequences requires detailed knowledge of initial conditions and astute prediction of the effects of human nature, societal tenets, and natural forces. But since our knowledge may always fall short of the mark, we must be resigned to suffer the consequences of our actions, good or bad, and continuously learn from them.

[1] “The Unanticipated Consequences of Purposive Social Action”, Robert K. Merton, American Sociological Review, Volume 1, Issue 6 (Dec., 1936), 894-904.

[2] “To a Mouse, on Turning Up in Her Nest with the Plough”, Robert Burns, November 1785 (Translation: The best laid plans of mice and men go often askew and leave us nothing but grief and pain.)

Back to Part 1