I recently hosted a meeting of several international key opinion leaders representing a number of innovative technologies with applications for drug development. One spoke about a clever computer algorithm – an artificial neural network – being used for clinical purposes (for example, identifying patients likely to develop Alzheimer's disease).
If you are not familiar with neural networks, then you may want to turn elsewhere for a primer (as I am hardly an expert). What I will say is that these are computer programs designed to act like the human brain and adapt – or learn – as they are shown more data. In the example above, show the algorithm more patient cases and it becomes increasingly likely to predict correctly who will develop Alzheimer's.
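For the curious, here is a toy sketch of that kind of learning – a single artificial "neuron" that gets better at classifying cases as it sees more of them. The patients, their two features, and the risk rule are entirely invented for illustration; nothing here comes from the speaker's actual system.

```python
# Toy illustration: a single logistic "neuron" trained by gradient
# descent. The data are invented: a patient is "at risk" when the sum
# of two made-up features exceeds 1. The point is only that accuracy
# on unseen cases improves as the neuron is shown more cases.
import math
import random

random.seed(0)

def make_case():
    """Return ((feature1, feature2), label) under the invented rule."""
    x1, x2 = random.random(), random.random()
    return (x1, x2), 1 if x1 + x2 > 1 else 0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(n_cases, epochs=200, lr=0.5):
    """Fit weights (w1, w2, b) to n_cases invented patients."""
    data = [make_case() for _ in range(n_cases)]
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w1 * x1 + w2 * x2 + b)
            err = p - y                 # gradient of the logistic loss
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

def accuracy(params, cases):
    """Fraction of cases the trained neuron classifies correctly."""
    w1, w2, b = params
    hits = sum(
        (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1)
        for (x1, x2), y in cases
    )
    return hits / len(cases)

test_cases = [make_case() for _ in range(500)]  # unseen patients
for n in (5, 50, 500):
    print(f"trained on {n:3d} cases: "
          f"accuracy on unseen cases = {accuracy(train(n), test_cases):.2f}")
```

The details are unimportant; what matters is the pattern: more cases, better predictions – which is exactly what makes the next slide so surprising.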
The speaker then showed a slide entitled "Over-Training". In over-training, if one trains the computer too long – or too closely – on its cases, error rates on new cases actually begin to increase. To be honest, I tuned out much of the conversation after this slide. I became fixated on the concept that over-training does not merely cause the error to plateau; too much training actually causes more mistakes. The system becomes overly focused on individual cases and can no longer see the big picture.
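Here is a minimal sketch of that idea – my own invention, not the speaker's example – using polynomial curve fitting as a stand-in for a neural network. The polynomial's degree plays the role of how intensively the model has been trained: error on the cases it studied keeps falling, while error on cases it has never seen can rise.

```python
# Over-training ("overfitting") in miniature: fit noisy samples of a
# simple curve with polynomials of increasing flexibility. The most
# flexible fit memorizes the training cases, including their noise,
# and loses the big picture on unseen cases.
import numpy as np

rng = np.random.default_rng(0)

def true_curve(x):
    return np.sin(2 * np.pi * x)

# 20 noisy "cases" to learn from, and 20 unseen cases to judge by.
x_train = np.linspace(0.0, 1.0, 20)
x_test = np.linspace(0.025, 0.975, 20)
y_train = true_curve(x_train) + rng.normal(0.0, 0.2, x_train.size)
y_test = true_curve(x_test) + rng.normal(0.0, 0.2, x_test.size)

def errors(degree):
    """Fit a polynomial of the given degree; return (train, test) MSE."""
    fit = np.poly1d(np.polyfit(x_train, y_train, degree))
    train_mse = float(np.mean((fit(x_train) - y_train) ** 2))
    test_mse = float(np.mean((fit(x_test) - y_test) ** 2))
    return train_mse, test_mse

for degree in (1, 3, 9):
    train_mse, test_mse = errors(degree)
    print(f"degree {degree}: train error {train_mse:.3f}, "
          f"unseen error {test_mse:.3f}")
```

The over-trained fit scores best on the cases it has seen and worse on the ones it has not – more study, more mistakes.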
Artificial neural networks were designed to mimic the structure of the human brain and its own networks of neurons. But what can we learn from the lessons of over-training the artificial system?
Can we be over-trained? If we see too many cases or focus too sharply on one task, do we lose the ability to see the big picture? If so, how can we innovate and advance drug development? Are those on the “inside” too narrowly focused – and over-trained?
Expertise can be of value – but perhaps all knowledge begins with the beginner's mind.