What is in the Box?
Published:
AI technology exists and is widely available. Anyone can clone the repositories, download the weights, and read the papers on arXiv. There is no putting it back in the box; we will have to live in a world with AI tools. So the question becomes: how can we shape these tools? This piece will detail some of the current methods and philosophies for guiding these mathematical heaps towards humane uses.
Purpose is where we will have to start. What is the goal of the system? Why does it exist? Should it exist?
There are many controversial uses for the technology we call AI. Clearview.ai, for example, specialises in facial recognition for law enforcement and “investigations”.
This post is not done yet. Its aim is to describe the current conceptions of explainability, contestability, and refusal in AI. I plan to start from the philosophical background behind explanations and the backstop of refusal. Then, once the need for trust and iterative improvement is established, I will describe and detail the different explainability techniques: model-agnostic methods, model-specific methods, counterfactual explanations, example-based explanations, interactive/human-centric methods, concept-based approaches, ante-hoc vs. post-hoc distinctions, contrastive methods, and mechanistic interpretability methods. The goal is to assess which of these methods may best be integrated with convivial conceptions of AI: easily accessible, low energy, and enabling tinkering.
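Of the techniques in that list, counterfactual explanations are perhaps the easiest to make concrete: they answer "what is the smallest change to the input that would have flipped the decision?". Below is a minimal sketch in Python against a toy linear classifier. The model, the features, and all the numbers are my own illustration for this draft, not taken from any particular library or real system.

```python
# A minimal sketch of a counterfactual explanation, assuming a plain
# linear classifier. Weights, bias, and features are illustrative only.

def predict(weights, bias, x):
    """Return 1 (e.g. 'loan approved') if the linear score is >= 0, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score >= 0 else 0

def counterfactual(weights, bias, x, margin=1e-6):
    """Smallest (Euclidean) change to x that flips a linear model's decision.

    For a linear model this has a closed form: move along the weight
    vector just far enough to cross the decision boundary.
    """
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    target = margin if score < 0 else -margin  # land just across the boundary
    factor = (target - score) / sum(w * w for w in weights)
    return [xi + factor * w for w, xi in zip(weights, x)]

# Hypothetical model and applicant: score = 2*income - 1*debt - 1
weights, bias = [2.0, -1.0], -1.0
applicant = [0.5, 1.0]
print(predict(weights, bias, applicant))            # 0: rejected
flipped = counterfactual(weights, bias, applicant)
print(predict(weights, bias, flipped))              # 1: approved
```

One reason this family is interesting for the convivial criteria above: for simple models the counterfactual is a closed-form computation, cheap enough to run on any machine, and it hands the affected person something actionable to contest or act on rather than an opaque score.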