Developing Meaningful Explanations for Machine Learning Models in the Telecom Domain
Objective
AI systems commonly involve a variety of stakeholders, each playing a distinct role in relation to the system. Explanations of system outputs should therefore be tailored to the needs of these diverse stakeholders.
Results
Results include identifying current best practices for generating meaningful explanations and developing novel stakeholder-tailored explanations for telecom use-cases.
Duration
01 September 2023 - 30 August 2027
Approach
The research will begin with a literature study, followed by the identification of potential use-cases and stakeholder needs. Prototypes will then be developed, and their ability to provide meaningful explanations will be evaluated.
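To illustrate what a stakeholder-tailored explanation prototype might look like, the sketch below renders the same hypothetical churn-risk prediction differently for three audiences. All names here (the audiences, the features, the `explain` function) are illustrative assumptions for this sketch, not outputs of the project.

```python
# Hypothetical sketch: tailoring one model output to different stakeholders.
# Audiences, feature names, and numbers are illustrative assumptions only.

def explain(score: float, contributions: dict[str, float], audience: str) -> str:
    """Render a churn-risk score and its feature contributions for an audience."""
    # Feature with the largest absolute influence on this prediction.
    top = max(contributions, key=lambda k: abs(contributions[k]))
    if audience == "data_scientist":
        # Full ranked breakdown, most influential feature first.
        ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
        return f"score={score:.2f}; contributions: " + ", ".join(
            f"{k}={v:+.2f}" for k, v in ranked)
    if audience == "customer_agent":
        # Plain-language summary with only the main driver.
        if score >= 0.5:
            return f"High churn risk ({score:.0%}); main driver: {top}."
        return f"Low churn risk ({score:.0%})."
    if audience == "auditor":
        # Traceability-oriented summary.
        return (f"Decision traceable to {len(contributions)} recorded features; "
                f"largest single influence: {top}.")
    raise ValueError(f"unknown audience: {audience}")

contribs = {"contract_length": -0.30, "support_calls": +0.45, "data_usage": +0.10}
print(explain(0.72, contribs, "customer_agent"))
# → High churn risk (72%); main driver: support_calls.
```

In practice the prototypes would draw on established explanation methods (e.g. feature-attribution techniques) rather than hand-coded templates; the point of the sketch is only that the same underlying attribution can be presented at different levels of detail per stakeholder.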
HU researchers involved in the research
"The expected research output will allow us to fully implement the cornerstone of our AI Governance Policy, namely the Transparency, thus improving the way we do Responsible AI at KPN."