Speaker: Prof. Tim Grant (Aston Institute for Forensic Linguistics)
Abstract:
In 1986 Norman and Draper published the influential edited volume User Centred System Design, setting the agenda for software development for the following 20 years. Norman writes that “user-centred design emphasizes that the purpose of the system is to serve the user, not to use a specific technology, not to be an elegant piece of programming” (Norman, 1986, p. 61).
Like forensic science more generally, forensic authorship analysis has rushed to adopt the latest specific technologies, such as Artificial Intelligence (AI). These machine learning and generative systems build models from enormous amounts of data and draw on enormous collections of variables, often beyond the comprehension of human users. The demonstrable power and accuracy of these systems are often matched only by their opacity, leaving the human analyst bewildered by specific outcomes and, crucially, with no way of understanding how those outcomes were reached. In the excitement over new and powerful technological potential, Norman’s principle of serving the user seems to have been forgotten.
On the one hand, some research in forensic domains shows that AI systems combined with human judgement can produce poorer results than the AI systems making decisions alone. On the other hand, recent work in authorship analysis has demonstrated that theory-free analysis can lead to significant over-estimation of the validity of similar systems. Taken together, these results create an uncomfortable dilemma for forensic analyses.
In this paper I argue that this dilemma is grounded in the lack of explainable AI systems, and I suggest that powerful, explanation-rich systems are indeed possible. I also unpack the idea of explanation itself, focussing on the questions of explanation for whom and for what purpose, building on current practice in software design, where understanding user profiles and building systems around user stories is core.