Abductive Reasoning in Machine Learning
On September 17, PhD student Simon Enni from Aarhus visits the HPS group and will give a talk. Our group meetings are rather informal and start with bring-your-own-lunch from 11.30 before we move on to the presentation.
If you are interested in joining, please send an email to email@example.com.
Machine learning is often portrayed as an automatic process that uses inductive reasoning to extract novel insights from empirical material, allowing data to “speak for themselves”. However, such an approach has a series of fundamental limitations. In this presentation, I argue that using machine learning as a purely inductive method risks (1) isolating models from the phenomena they represent and (2) undermining the ability of models to cope with changing circumstances, potentially leading to (3) a lack of transparency and of general understanding of the mechanisms underlying the modelled phenomena. To address these weaknesses, I present a model that allows inductive and abductive ways of reasoning to be mutually informative in an iterative and ampliative process of data-driven scientific discovery. This approach is grounded in a renewed focus on abductive reasoning as introduced by C. S. Peirce in his work on the logic of science. Here, induced models suggest new relationships to be explored and tested by abduced hypotheses, which in turn lend themselves to improving and explaining the utility of the models produced by machine learning.