A CheckList for Code Capabilities

Alejandro Velasco edited this page Oct 13, 2022 · 4 revisions

Let's start with some relevant definitions of machine learning interpretability and evaluation.

  • Interpretability: refers to methods and models that make the behavior and predictions of machine learning systems understandable to humans (Molnar, 2022). This area of research, also known as Interpretable Machine Learning, has so far been applied mostly to NLP models. https://volpato.io/articles/1907-nlp-xai.html

  • Model Evaluation: refers to metrics or strategies that determine whether a machine learning model performs well or poorly with respect to a ground truth or expected behavior; the goal is to determine whether a system's behavior can be predicted when preconditions and postconditions are met (e.g., ground-truth evaluation, white-box testing, black-box testing, cross entropy, mutual information).

  • Probing: turns supervised tasks into tools for interpreting learned representations (Hewitt, 2020).

  • Capabilities: a type of behavior expected from a controlled input to a machine learning model, established via behavioral tests as in software engineering (Ribeiro, 2020).

Project Design in Miro

This is the board: link
