Advanced Model Validation for Machine Learning: Techniques and Tools 2024

This workshop presents the latest developments in model validation for machine learning, with special emphasis on the evaluation of conceptual soundness and outcome analysis. Key topics for conceptual soundness include model causality, explainability, and interpretability, as well as the over-parameterization and under-specification problems that commonly occur in machine learning models.

Designing inherently interpretable machine learning models will be discussed in depth, given the importance of this methodology for high-risk applications and its role in model benchmarking. Outcome analysis will cover advanced topics beyond standard performance analysis, such as:

- Identification of model weaknesses through error slicing
- Prediction reliability (conformal prediction)
- Model robustness
- Model resilience under changing environments
- Model fairness
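To illustrate one of the topics above, the sketch below shows the idea behind split conformal prediction for regression. It is a minimal NumPy-only illustration, not PiML's implementation: the toy data, the least-squares fit, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2x + noise (hypothetical, for illustration only)
x = rng.uniform(0, 1, 200)
y = 2 * x + rng.normal(0, 0.1, 200)

# Split the data into a training half and a calibration half
x_tr, y_tr = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

# Fit any point predictor on the training half (here: a least-squares line)
slope, intercept = np.polyfit(x_tr, y_tr, 1)
predict = lambda v: slope * v + intercept

# Absolute residuals on the calibration half serve as conformity scores
scores = np.abs(y_cal - predict(x_cal))

# Quantile of the scores for 90% coverage, with finite-sample correction
alpha = 0.1
n = len(scores)
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

# Prediction interval for a new input: point prediction +/- q
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

Under mild exchangeability assumptions, intervals built this way cover the true response with probability at least 1 - alpha, regardless of how well the underlying model is specified, which is what makes the method attractive for reliability assessment.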

This is a hands-on workshop in which participants will learn practical concepts and work through a case-study exercise using the PiML (Python Interpretable Machine Learning) package for model development and validation: https://github.com/SelfExplainML/PiML-Toolbox. PiML is an easy-to-use, low-code package, so participants with minimal familiarity with Python will be able to follow without difficulty.

See Also

International Conference on Machine Learning 2024