Recent advances in artificial intelligence have enabled significant breakthroughs in automated biometric systems, in both discriminative and generative settings. In many applications, AI-powered biometric systems match or surpass human performance while enabling deployment at a scale unattainable by manual, human-supervised solutions. Given this performance and the increasingly widespread use of automated biometric systems, it is important for researchers to be able to explain how these systems function and to determine whether their biometric decisions rest on sound principles, i.e., whether they are made in a fair, unbiased, and non-discriminatory manner while respecting user privacy and data protection to the greatest extent possible. Biometric systems that meet these criteria are highly desirable, a need that has also recently been recognized in national- and EU-level regulation concerning data protection, user privacy, the right to an explanation, and related legal frameworks. Mechanistic interpretability is one of the most recently proposed frameworks for explaining state-of-the-art deep learning models; it aims to produce a gears-level understanding of a model's internal computations, in contrast to earlier methods that attribute decisions to inputs while treating the model as a black box. Within the proposed MIXBAI research project, we will both extend the capabilities of the latest AI explainability techniques and apply them to the study of modern biometric AI systems, in order to produce the kind of model- and decision-level explanations that allow automated biometric AI systems to be used safely and legally within existing and prospective regulatory frameworks. Increasing the transparency of biometric AI decision making will also strengthen user trust in these systems and help prevent public controversies regarding their use.