I found this interesting. I confess that
I started reading this article because I was interested in the kitty. But it goes on from there.
A computer science grad student at Stanford set out to design an algorithm to predict when terminally ill patients were going to die, with the goal of making more efficient use of palliative care resources. Providing palliative care too soon is wasteful; providing it too late is of no value. The aim was to identify patients likely to die within 3-12 months, so that both their remaining time and the available palliative care could be put to the best use.
He developed a neural network system and fed it data from 160,000 patients to "learn" from. He then applied the algorithm to an additional 40,000 patients and found that it was 90% accurate in predicting which of them would die within the 3-12 month window.
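For readers curious what that train-then-evaluate setup looks like in practice, here is a minimal sketch in Python using scikit-learn. It is not the Stanford system; the features, labels, and model size are invented stand-ins, and it only mirrors the shape of the experiment described above: fit a neural network on one set of records, then score its predictions on a held-out set.

```python
# Minimal sketch (NOT the Stanford system): train a neural-network classifier
# on one group of patient records, then check its predictions on a held-out group.
# All features, labels, and sizes below are hypothetical stand-ins.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

rng = np.random.default_rng(0)

# Hypothetical stand-in for 200,000 patient records: some numeric features
# (age, lab values, diagnosis codes, ...) and a label that is 1 if the
# patient died within the 3-12 month window.
X = rng.normal(size=(200_000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200_000) > 1.0).astype(int)

# Roughly the 160,000 / 40,000 split described in the article.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=40_000, random_state=0
)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=50, random_state=0)
model.fit(X_train, y_train)

# "90% accurate in picking the window" reads like precision: of the patients
# the model flags, how many actually died within 3-12 months?
flagged = model.predict(X_test)
print("precision on held-out patients:", precision_score(y_test, flagged))
```

The only point of the sketch is the pattern: the model never sees the 40,000 held-out patients during training, so its accuracy on them is a fair test of what it learned.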
The problem is that although the algorithm makes quite accurate predictions, figuring out what exactly it has learned from the data is difficult:
So what, exactly, did the algorithm “learn” about the process of dying? And what, in turn, can it teach oncologists? Here is the strange rub of such a deep learning system: It learns, but it cannot tell us why it has learned; it assigns probabilities, but it cannot easily express the reasoning behind the assignment. Like a child who learns to ride a bicycle by trial and error and, asked to articulate the rules that enable bicycle riding, simply shrugs her shoulders and sails away, the algorithm looks vacantly at us when we ask, “Why?” It is, like death, another black box.
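That "black box" quality is easy to see even in a toy model. Continuing in the same hypothetical vein as the sketch above, the snippet below trains a tiny network, asks it for a probability, and then asks it "why": all you get back are raw weight matrices with no self-evident clinical meaning.

```python
# Toy illustration of the "black box" point: the learned parameters are just
# arrays of numbers; they assign probabilities but do not explain themselves.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))    # hypothetical patient features
y = (X[:, 0] > 0).astype(int)      # hypothetical outcome label

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0).fit(X, y)

# The model happily assigns a probability for a new patient...
print("predicted probability:", model.predict_proba(X[:1])[0, 1])

# ...but asking "why?" only gets you the raw weights back.
for layer, weights in enumerate(model.coefs_):
    print(f"layer {layer} weights, shape {weights.shape}:")
    print(weights)
```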
-k