The use of technologies in personnel selection has come under increased scrutiny in recent years, revealing their potential to amplify existing inequalities in recruitment processes. To date, however, there has been no comprehensive assessment of their discriminatory potential, and no legal or practical standards have been explicitly established for fairness auditing. The current proposal for the Artificial Intelligence Act classifies numerous applications in personnel selection and recruitment as high-risk technologies, and while it requires quality standards to protect the fundamental rights of those involved, particularly during development, it does not provide concrete guidance on how to ensure this, especially once the technologies are commercially available. We argue that comprehensive and reliable auditing of personnel selection technologies must be contextual, that is, embedded in existing processes and based on real data, as well as participative, involving stakeholders beyond technology vendors and customers, such as advocacy organizations and researchers. We propose an architectural draft that employs a data trustee to provide independent, fiduciary management of personal and corporate data for auditing the fairness of technologies used in personnel selection. Drawing on a case study conducted with two state-owned companies in Berlin, Germany, we discuss challenges and approaches related to suitable fairness metrics, the operationalization of vague concepts such as migration*, and the applicable legal foundations that can be used to resolve the fairness-privacy dilemma arising from uncertainties in current law. We highlight issues that require further interdisciplinary research to enable a prototypical implementation of the auditing concept in the medium term.