Section outline

  • 1. Overview

    This course will mainly address two related problems, treated in a progressive manner:

    • In its first part, we will consider the problem of how to model uncertainty and how to make decisions from uncertainty models. We will start from probabilities and will then proceed to more complex models.
    • In the second part, we will consider the problem of quantifying uncertainties in learning problems, and more particularly in the prediction part of learning problems.

    We shall also provide illustrations of how to use the mathematical elements of the lectures in applications, such as large language models, out-of-distribution (OOD) detection, and so on.

    2. Assessment

    As requested by UTC, we will perform two types of evaluations.

    • The first evaluation will take the form of a mini exam, where the students are expected to successfully answer at least half of the multiple-choice questions, which relate either to notions given during the lectures or to (simple) exercises similar to the ones used to illustrate the content of the lectures.
    • The second evaluation will take the form of a two-round (Jigsaw) group assignment, with the constraint that the roles of the group members should be different for each round.
    2.1. First evaluation: Details

    The first evaluation will take the form of a mini exam, where the students are expected to successfully answer at least half of the multiple-choice questions, which relate either to notions given during the lectures or to (simple) exercises similar to the ones used to illustrate the content of the lectures. There will be 6 questions, and the students will have up to 20 minutes for the mini exam.

    The mini exam will be given at the beginning of Lecture 6. Students who wish to have a second chance can take the mini exam again at the beginning of Lecture 7. If a student takes the mini exam twice, the better result will be counted.

    The idea is to remind the students of basic notions that are given during the lectures, which may also appear in the presentations of the (other) groups.  

    2.2. Second evaluation: Details

    The second evaluation will take the form of a two-round (Jigsaw) group assignment, with the constraint that the roles of the group members should be different for each round. For each round, each group of 3 students will have 30 minutes for the presentations/illustrations, followed by at most 10 minutes for the question and answer session, during the last two lessons of AOS4. The idea is to give each student ~10 min per round to contribute to the group assignment.  

    For this evaluation, groups will have to choose one of the following assignments:

    • Short lecture or "being in a teacher's shoes". In this case, the group should create a lecture focusing on a topic we did not cover in class, which can concern either uncertainty modeling or uncertainty in learning problems. What we expect as a result of such a choice is the following:
      • A short explanation (in .pdf) of the topic and its merits, followed by your (short) reflections on pedagogical aspects of your illustration.
      • A set of slides to be used during the lectures, and additional possible pedagogical material (notebooks, etc.). Such slides should clearly be intended as a lecture on the topic.
      • A lecture where the group will act as teachers to deliver a short course on a specific topic, which can be accompanied by a live demonstration, illustration, or anything else that will make the course easy to follow for the other students.
    • Tutorial or "wake up the blogger in you". In this case, each group will have to make a tutorial or a blog post (in the style one can find on Kaggle or Towards Data Science) about a learning method. What we expect as a result of such a choice is the following:
      • The implementation of a method.
      • A way to easily test and understand the method: this can be a notebook, a readme file to execute, etc.
      • A short explanation (in .pdf, as a blog post) of the method and its merits, followed by your (short) reflections on pedagogical aspects of your tutorial.
      • An offline tutorial (in the style of Towards Data Science/Kaggle), possibly with an accompanying notebook.
    • Paper illustration or "explain to your high-school nephew". In this case, each group will take a paper and will have the task to illustrate/explain a part of the paper through a media of their choice: it can be a presentation, a video, a poster, a live demonstration/exercise, an interactive website, etc.  The illustration/explanation should be pedagogical, in the sense that it should be accessible to a non-expert (who does not know advanced math or computing).  What we expect as a result of such a choice is the following:
      • A short explanation (in .pdf) of the method and its merits, followed by your (short) reflections on pedagogical aspects of your illustration.
      • Supporting materials (if any), such as a video, a poster, a live demonstration/exercise, an interactive website, etc.
      • A pedagogical illustration of a paper topic (not especially illustrating the whole paper, but at least making a part of the paper understandable to a wide audience).

    Each group is recommended to choose one paper from the suggested list as the key reference for the assignment. The groups should choose different key references, i.e., no reference should be chosen by more than one group.

    The groups are recommended to look at related works in the literature to enrich the content of the presentations/illustrations. Depending on the size and complexity of the chosen paper, not all of it has to be explained/illustrated. It is better to focus on a specific part and be really pedagogical/illustrative than to try to show too much and be confusing.

    The two rounds: Details

    • First round (Lecture 6): The groups should
      • submit the required materials in advance
      • give their presentations/illustrations
      • take into account the feedback from other students and teachers to prepare for the second round
    • Second round (Lecture 7): The groups should
      • submit the revised materials in advance
      • give their revised presentations/illustrations

    NOTE: In the revisions of the short explanations, which should be submitted as part of the second round, each group should add a (short) reflection on the following points:

    • The benefits (if you think there are any) of the feedback received from the first round
    • The benefits (if you think there are any) of the discussions with other group members, such as the member who held your role during the first round.

    Teachers

    • Vu-Linh Nguyen, Heudiasyc laboratory (head lecturer)
    • Sébastien Destercke, Heudiasyc laboratory
  • Dates: 14/11 (13h30 - 17h30), Vu-Linh Nguyen

    This first lecture, dedicated to uncertainty in machine learning, will provide a first illustration of how the mathematical elements found in the literature can be used in machine learning. This will notably be done through simple illustrations and examples.

    Objectives of the lecture:

    After the lecture, the students should be able to

    • Understand the basics of the Imprecise Dirichlet Model (IDM)
    • Apply it to a simple local learning scheme
    • Implement decision rules for this specific learning scheme
    • Identify the main sources of uncertainty
    • Have a basic understanding of the challenges underlying the evaluation of cautious classifiers
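    As a small taste of the first two objectives, here is a minimal illustrative sketch (not the course's official material) of how the IDM turns multinomial counts into probability intervals; the hyperparameter name `s` follows the usual IDM convention, and the counts are made up:

```python
def idm_bounds(counts, s=2.0):
    """Imprecise Dirichlet Model (IDM) bounds from multinomial counts.

    For a category observed n_k times out of N, the IDM yields the
    probability interval [n_k / (N + s), (n_k + s) / (N + s)];
    s > 0 controls the imprecision (larger s = wider intervals).
    """
    total = sum(counts)
    return [(n / (total + s), (n + s) / (total + s)) for n in counts]

# Example: three classes observed 6, 3 and 1 times out of N = 10.
for k, (lo, hi) in enumerate(idm_bounds([6, 3, 1], s=2.0)):
    print(f"class {k}: [{lo:.3f}, {hi:.3f}]")
```

    Note how the intervals shrink as N grows, which is the IDM's way of expressing that uncertainty decreases with more data.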
  • Dates: 17/11 (13h30 - 17h30), Vu-Linh Nguyen

    This lecture will provide a first illustration of how the mathematical elements of the previous lectures can be used to build some simple imprecise classifiers. Simple illustrations and examples will be provided.

    Objectives of the lecture:

    After this lecture students should be able to

    • Use the IDM and related models in the naive credal classifier (NCC)
    • Use IDM and related models in decision trees
  • Dates: 24/11 (13h30 - 17h30; Recorded lecture), Vu-Linh Nguyen

    Objectives of the lecture:

    After this lecture students should be able to

    • describe commonly used notions of classifier calibration
    • describe a few calibration errors and calibration methods
    • describe commonly used notions of coverage
    • describe a few coverage metrics and conformal procedures
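    To give a flavour of the coverage and conformal objectives, here is a minimal sketch (illustrative code with made-up numbers, not the lecture's official material) of split conformal classification: a quantile of nonconformity scores on a calibration set yields a threshold, and the prediction set keeps every label passing it:

```python
import math

def conformal_threshold(cal_scores, alpha=0.1):
    """Split conformal: quantile of nonconformity scores on a calibration set.

    cal_scores[i] = 1 - p_model(true label | x_i). The threshold is the
    ceil((n+1)(1-alpha))-th smallest score, which guarantees at least
    1 - alpha marginal coverage for exchangeable data.
    """
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))  # rank of the empirical quantile
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, qhat):
    """Keep every label whose nonconformity 1 - p is within the threshold."""
    return [label for label, p in enumerate(probs) if 1 - p <= qhat]

# Toy calibration scores and one test example's class probabilities.
qhat = conformal_threshold(
    [0.1, 0.3, 0.5, 0.6, 0.65, 0.7, 0.75, 0.4, 0.2, 0.55], alpha=0.2)
print(prediction_set([0.45, 0.35, 0.2], qhat))  # → [0, 1]
```

    The set-valued output [0, 1] reflects that the model cannot confidently separate the two most probable labels at the requested coverage level.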
  • Dates: 1/12 (13h30 - 17h30), Sébastien Destercke

    These lectures will introduce generic uncertainty models, motivate the need for them, and justify them from a theoretical perspective using a betting scheme.

    Objectives of the lecture:

    After the lectures, the students should be able to

    • motivate, from a betting perspective, why probabilities are good candidates for modeling uncertainties and making decisions
    • provide reasons why one may wish to go beyond probabilities, i.e., why one could consider them not completely satisfactory
    • propose an extension of probabilities that addresses those potential criticisms
    • know and manipulate some specific models that have "easy" mathematical properties
    • know and apply decision rules in generic uncertainty contexts
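    As a small illustration of the last objective, here is a minimal sketch (with made-up numbers, not the lecture's official material) of one generic decision rule, interval dominance, applied to a credal set described by its extreme points:

```python
def expectation_bounds(credal_set, utilities):
    """Lower/upper expected utility of an act over a finite credal set.

    credal_set: list of probability vectors (extreme points).
    utilities:  utility of the act in each state.
    """
    values = [sum(p * u for p, u in zip(probs, utilities)) for probs in credal_set]
    return min(values), max(values)

def interval_dominance(credal_set, acts):
    """Keep every act whose upper expectation is not strictly below
    some other act's lower expectation (interval dominance rule)."""
    bounds = [expectation_bounds(credal_set, u) for u in acts]
    return [i for i, (_, hi) in enumerate(bounds)
            if all(hi >= lo for j, (lo, _) in enumerate(bounds) if j != i)]

# Two extreme points over two states; three acts with state-wise utilities.
credal = [[0.3, 0.7], [0.6, 0.4]]
acts = [[10, 0], [4, 5], [0, 2]]
print(interval_dominance(credal, acts))  # → [0, 1]
```

    The rule is deliberately cautious: acts 0 and 1 both survive because their expectation intervals [3, 6] and [4.4, 4.7] overlap, while act 2 (interval [0.8, 1.4]) is dominated and removed.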
  • Dates: 8/12 (13h30 - 17h30), Vu-Linh Nguyen

    This lecture will provide illustrations of how to use the mathematical elements of the previous lectures to build and assess some probabilistic and credal classifiers, and their applications. 

    Objectives of the lecture:

    After this lecture, students should be able to describe (a few)

    • probabilistic and credal classifiers,
    • how to use them to make singleton and set-valued predictions,
    • and their (potential) applications.
  • Dates: 15/12 (13h30 - 17h30), Students and Vu-Linh Nguyen

  • Dates: 5/1 (13h30 - 17h30), Students and Vu-Linh Nguyen

     

  • Here is a list of possible papers.

    Suggestion of papers to select from:

    • [quost2018classification] Quost, B., & Destercke, S. (2018). Classification by pairwise coupling of imprecise probabilities. Pattern Recognition, 77, 412-425.
      Topic: pairwise decomposition in classification
      Nature: methodological paper

     

    • [maua2018robustifying] Mauá, D. D., Conaty, D., Cozman, F. G., Poppenhaeger, K., & de Campos, C. P. (2018). Robustifying sum-product networks. International Journal of Approximate Reasoning, 101, 163-180.
      Topic: extending a specific probabilistic circuit (can be seen as a specific neural network) to deal with probability sets
      Nature: mostly methodological (some theory)

     

    • [yang2016cost] Yang, G., Destercke, S., & Masson, M.-H. (2016). The costs of indeterminacy: how to determine them? IEEE Transactions on Cybernetics, 47(12), 4316-4327.
      Topic: desirable properties of utilities
      Nature: methodological

     

    • [bernard2005introduction] Bernard, J. M. (2005). An introduction to the imprecise Dirichlet model for multinomial data. International Journal of Approximate Reasoning, 39(2-3), 123-150.
      Topic: extending the Dirichlet model used in Bayesian approaches to estimate multinomials to the imprecise case
      Nature: detailed and technical introduction to the model

     

    • [nguyen2025credal] Nguyen, V.-L., Zhang, H., & Destercke, S. (2025). Credal ensemble in multi-class classification. Machine Learning, 114(1), 19.
      Topic: learning model that uses random forest to derive credal sets
      Nature: methodological

     

    • [alarcon2021imprecise] Alarcon, Y. C. C., & Destercke, S. (2021). Imprecise Gaussian discriminant classification. Pattern Recognition, 112, 107739.
      Topic: learning model that generalises discriminant analysis
      Nature: methodological

     

    • [angelopoulos2021gentle] Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification.
      Topic: general introduction to conformal prediction, and up-to-date survey
      Nature: survey of many results (groups can consider only a part of it)

    • [silva2023classifier] Silva Filho, T., Song, H., Perello-Nieto, M., Santos-Rodriguez, R., Kull, M., & Flach, P. (2023). Classifier calibration: a survey on how to assess and improve predicted class probabilities. Machine Learning, 1-50.
      Topic: general introduction to calibration methods, and up-to-date survey
      Nature: survey of many results (groups can consider only a part of it)

    • [corani2008learning] Corani, G., & Zaffalon, M. (2008). Learning Reliable Classifiers From Small or Incomplete Data Sets: The Naive Credal Classifier 2. Journal of Machine Learning Research, 9(4).
      Topic: Extending Naive Bayes Classifier 
      Nature: mostly methodological (some theory)

    • [mantas2014credal] Mantas, C. J., & Abellan, J. (2014). Credal-C4.5: Decision tree based on imprecise probabilities to classify noisy data. Expert Systems with Applications, 41(10), 4625-4637.
      Topic: extending decision trees
      Nature: methodological

  • Groups and choices:

    Group "name":

    • Members: Emila FILIPI, Ruoyang WANG        
    • Paper: [quost2018classification] Quost, B., & Destercke, S. (2018). Classification by pairwise coupling of imprecise probabilities. Pattern Recognition, 77, 412-425.
    • Choice: Paper illustration or "explain to your high-school nephew".

    Group "name":

    • Members:  Aya Benkirane, Hans Emmanuel Gamido, Xue RUI
    • Paper: [bernard2005introduction] Bernard, J. M. (2005). An introduction to the imprecise Dirichlet model for multinomial data. International Journal of Approximate Reasoning, 39(2-3), 123-150.
    • Choice: Paper illustration or "explain to your high-school nephew".

    Group "name": 

    • Members: Nicolò Aicardi, Daniel Estremiana López, Shruti Debnath
    • Paper: [nguyen2025credal] Nguyen, V.-L., Zhang, H., & Destercke, S. (2025). Credal ensemble in multi-class classification. Machine Learning, 114(1), 19.
    • Choice: Paper illustration or "explain to your high-school nephew".

    Group "name":

    • Members:  Musel Tabares, Emin Kokonozi, Marina PETANI    
    • Paper: [angelopoulos2021gentle] Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification.
    • Choice: Paper illustration or "explain to your high-school nephew".

    Group "name":

    • Members:  Klea Kalliri, Atosi Roy        
    • Paper: [silva2023classifier] Silva Filho, T., Song, H., Perello-Nieto, M., Santos-Rodriguez, R., Kull, M., & Flach, P. (2023). Classifier calibration: a survey on how to assess and improve predicted class probabilities. Machine Learning, 1-50.
    • Choice: Paper illustration or "explain to your high-school nephew".

    Group "name":

    • Members: Kejsi KALLIRI, Pedro Henrique Simao Achete, Bogaçhan Arslan                
    • Paper: [mantas2014credal] Mantas, C. J., & Abellan, J. (2014). Credal-C4.5: Decision tree based on imprecise probabilities to classify noisy data. Expert Systems with Applications, 41(10), 4625-4637.
    • Choice: Paper illustration or "explain to your high-school nephew".