Towards explainable AI
Many organisations are exploring the potential of machine learning to support decision-making, but public bodies are held to a higher standard than their private sector counterparts. Citizens expect to be treated fairly, and taxpayers expect accountability and transparency. Those using new approaches in service delivery or policymaking will increasingly be expected to explain their assumptions and reasoning, yet not all machine learning techniques produce explainable outputs.
So what do public leaders need to know about explainable AI? What’s expected, and what questions should you be asking your technical teams and suppliers?