Among the many recurring topics this year has been the impact of machine learning on our lives, especially the implications for our future work life. Prophecies range from ubiquitous utopian machine servants to a dystopian ravaging that hollows out the work and economic standing of the middle and lower classes. “What can machine learning do? Workforce implications” by Erik Brynjolfsson and Tom Mitchell in Science provides some perspective on machine learning and its future economic impact.
Machine learning
Machine learning is a general-purpose form of artificial intelligence, more like electricity than a specific application. And for the foreseeable future, machine learning is limited to well-defined tasks. The authors identify the tasks suitable for machine learning: tasks with easily described goals that can be readily taught, and with performance measures that allow algorithms to be developed and applied. The resulting applications, while appearing miraculous, are not particularly resilient. Small changes in the scope of their goals or metrics severely degrade their functionality.
Humans learn from reading and by doing. Book learning, associating one idea with another, is problematic for machines. But learning by doing, from example, is another matter. On-the-job training, the old-school term for being shown how to do a particular job and receiving feedback about whether it is being done correctly, is “supervised learning.” And computers accelerate supervised learning — machines’ faster computational speeds, less fallible memory, and unwavering focus teach a particular truth quickly.
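For readers curious what “supervised learning” looks like in practice, a minimal sketch follows. The scikit-learn library, the example “features,” and the labels are all illustrative assumptions, not anything drawn from the Science article.

```python
# A minimal, purely illustrative sketch of supervised learning:
# the machine is shown labeled examples (the "on-the-job training")
# and learns to predict the label for new cases. The numbers below
# are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each example: [hours of instruction observed, errors made that day]
examples = [[1, 5], [2, 4], [3, 3], [5, 1], [6, 0], [7, 0]]
# Feedback from the supervisor: 1 = task performed correctly, 0 = not
labels = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(examples, labels)              # learning by example

print(model.predict([[4, 2]]))           # the machine's call on a new case
print(model.predict_proba([[4, 2]]))     # and its confidence, as probabilities
```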
Limitations
Machine learning is perhaps our best example of teaching to the test, but it is severely limited. Machines cannot generalize their knowledge as humans do. Consider the applications of machine learning to medicine. Most applications are confined to images and simple diagnostics where problem and solution are well defined, the information is readily digitized for machines, and thousands of examples are available without patient involvement. Machine learning has been demonstrated to accurately identify retinal changes in diabetes, the mammographic appearance of breast cancer, and electrocardiogram patterns consistent with abnormal heart rhythms. These problems exhibit a yes-or-no, right-or-wrong clarity. Within these parameters, machine learning will equal or exceed the abilities of its human trainers.
But the algorithm trained to detect breast cancer cannot identify changes in the eye from diabetes — it cannot generalize to different situations. Machine learning is a faithful mimic of “the truth” of its training data set. Its performance depends upon the variables we provide, the unintended biases reflected in those choices, and the changing nature of the problem under consideration. Consider three examples drawn from medicine:
- Clinical evidence indicated that excess acid was the underlying cause of stomach ulcers. It took several decades to discover a new variable, H. pylori, a bacterium ultimately found to be the more fundamental cause of stomach ulcers.
- Clinical studies of heart disease involved primarily men for several decades. We mistakenly assumed that men and women respond identically. Only in the last few years have we recognized this unintended bias and seen that women often report different symptoms than men when experiencing heart disease.
- Lung cancer in the early 20th century was so rare that physicians gathered to see examples. Within 50 years it was the most common cause of cancer deaths.
Machines may apply knowledge more consistently, but they would not have found the association with H. pylori, recognized that women were underrepresented, or foreseen that lung cancer would become our most significant cause of cancer deaths. Humans made those findings.
Perhaps the greatest weakness of these systems is their limited explanatory capability. They adjust thousands of numerical weightings to arrive at probabilistic answers, given as percentages of certainty. Machine learning’s chain of thought, all those intermediary steps, is unknown or unclear to the humans who must act upon its recommendations. We do not know the discriminators or their weightings, only the output, expressed statistically. Certain decisions require an explanation. A high-tech Magic 8-Ball is insufficient in treating cancer. Do you believe “the algorithm told me to do it” will have much credibility with our patients or their malpractice attorneys?
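To make that concrete, here is a small, purely illustrative Python sketch (using scikit-learn, with random numbers standing in for clinical data; none of it comes from the article): the model’s “reasoning” amounts to a few thousand adjusted weights, and all it returns is a percentage of certainty, with no rationale attached.

```python
# Illustrative only: a small neural network's "reasoning" is thousands of
# numerical weights, and its answer is a probability, not an explanation.
# The data is random noise, used solely to show the shape of the output.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # 200 "cases", 30 measurements each
y = rng.integers(0, 2, size=200)      # arbitrary yes/no labels

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)

n_weights = sum(w.size for w in model.coefs_)
print(f"Weights adjusted during training: {n_weights}")   # several thousand numbers

certainty = model.predict_proba(X[:1])[0, 1]
print(f"Answer for one case: {certainty:.0%} certain, with no reasons given")
```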
Workforce implications
Some physician work is explicit, easily described, and amenable to machine learning. Another portion of our work consists of activity where “we know more than we can tell” — shareable in the sense of “see one, do one,” but not amenable to description: our intuition. Machine learning cannot replicate that experience because we cannot express it in a training data set.
A physician’s work is a variable combination of explicit tasks that will readily succumb to the advances in machine learning and implicit tasks that only humans can perform. Machine learning will disrupt clinical care based upon the balance of these tasks, making its impact far from simple. We cannot win a battle with machines over highly structured, repetitive work, jobs in the middle-skill range, our explicit work. Our value lies in our harder-to-acquire implicit skills: comforting and urging, moving people to better health choices, or helping to balance a patient’s perceived and actual risks and benefits in the face of uncertainty. Another important source of our value will be our relationships with consultants, hospitals, and post-discharge workers and facilities — the ecosystem in which we provide care. Physicians provide greater added value through these relationships than a machine can offer.
The trend toward the corporate employment of physicians cannot be easily reversed, if it can be reversed at all. We can be the machine’s master only when we speak truth to power. We must take back medicine from health care systems and their silicon serfs, stand up for our patients, and take a far more significant role in choosing the physicians and advanced practitioners we surround ourselves with. Together, we are more than a machine.
Charles Dinerstein is a surgeon.