List of topics
Content
This course will mainly address two related problems, treated one after the other:
- In its first part, we will consider the problem of how to model uncertainty and how to make decisions from uncertainty models, in a generic manner. We will start from probabilities and will then proceed to more complex models.
- In the second part, we will consider the problem of quantifying uncertainties in learning problems, and more particularly in the prediction part of learning problems.
Evaluation
As requested by UTC, we will perform two types of evaluation. Each evaluation will take the form of a group assignment, with the constraint that the groups have to be different for the two assignments (the same students cannot work together in the same group for both).
First Assignment (two options): details
The first assignment will take the form of a reverse lecture/practical exercise, where the students will have to give either the lecture or the exercise. Each group will have 30 minutes during the last two lessons of AOS4. For this assignment, groups will have the choice between two different options:
- Exercise creation, or "being in a TA's shoes". In this case, the group should create one advanced exercise related to the course, one that either emphasizes some aspect of the course, allows one to practice some of its aspects, or investigates a topic connected to the course that we did not explore during it. Each group will then be in the shoes of a teaching assistant (TA) in charge of producing exercises for practical/training classes or courses. What we expect as a result of such a choice is the following:
- A document with the exercise statement, presenting the problem to be solved and the various associated questions and sub-questions (there can be a single main question/statement, or multiple follow-up questions).
- A document detailing the solution of the exercise (not just the end result), so that another TA (or ourselves) can reuse the exercise easily
- A short statement explaining the pedagogical purpose of the exercise: to practice some technical aspects, to illustrate a particular point, to make the students discover new concepts, etc. In short, what would the student gain after having done this exercise?
- A 30-minute practical session where the group hands out the exercise to the rest of the class and acts as teaching assistants, explaining it to the students
- Short lecture, or "being in a teacher's shoes". In this case, the group should create a lecture focusing on a topic we did not cover in class, which can concern either uncertainty modelling or uncertainty in learning problems. The lecture can be accompanied by live demonstrations, illustrations or anything that will make the course easy to follow for the other students. What we expect as a result of such a choice is the following:
- A set of slides to be used during the lecture, and possibly additional pedagogical material (notebooks, etc.). The slides should clearly be intended as a lecture on the topic.
- A 30-minute lecture where the group will act as teachers to deliver a short course on a specific topic.
Second Assignment (two options): details
The second assignment will take the form of either an off-line tutorial (in the style of Towards Data Science/Kaggle posts), possibly with an accompanying notebook, or of a pedagogical illustration of a paper topic (not necessarily illustrating the whole paper, but at least making a part of it understandable to a wide audience).
- Tutorial, or "wake up the blogger in you". In this case, each group will have to write a tutorial or a blog post (in the style one can find on Kaggle or Towards Data Science) about a learning method. What we expect as a result of such a choice is the following:
- The implementation of a method.
- A way to easily test and understand the method: this can be a notebook, a readme file to execute, etc.
- A short explanation (as a .pdf, written like a blog post) of the method and its merits
- Paper illustration, or "explain to your high-school nephew". In this case, each group will take a paper and will have to illustrate/explain a part of it through a medium of their choice: a presentation, a video, a poster, a live demonstration/exercise, an interactive website, etc. The rules are as follows:
- The illustration/explanation should be pedagogical, in the sense that it should be accessible to a non-expert (someone who does not know advanced maths or computing). It should not be too long (i.e., less than 10-15 minutes).
- Depending on the size and complexity of the paper, not all of it has to be explained/illustrated. It is better to focus on a specific part and be really pedagogical/illustrative than to try to show too much and end up being confusing.
- Each group must take a different paper. The rule is first come, first served (each time a group chooses a paper and tells us so, this paper is no longer available).
Lecturers
- Vu-Linh Nguyen, Heudiasyc laboratory (head lecturer)
- Sébastien Destercke, Heudiasyc laboratory
Dates: 12/11 (14h15 - 18h30), Sébastien Destercke
These first lectures will introduce generic uncertainty models, motivate the need for them, and justify them from a theoretical perspective using a betting scheme.
Objectives of the lecture:
After the lectures, the students should be able to
- Motivate, from a betting perspective, why probabilities are good candidates for modelling uncertainties and making decisions
- Provide reasons why one may wish to go beyond probabilities, i.e., why one could consider them not completely satisfactory
- Propose an extension of probabilities addressing those potential criticisms
- Know and manipulate some specific models that have "easy" mathematical properties
- Know and apply decision rules in generic uncertainty contexts (a small illustrative sketch follows this list)
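To give a flavour of the last objective, here is a minimal sketch (our own illustration, not part of the official course material) of one common decision rule for sets of probabilities, interval dominance. The credal set, given here by three hypothetical extreme points, and the utility values are entirely made up.

    import numpy as np

    # Hypothetical extreme points of a credal set over 3 states of the world.
    credal_set = np.array([
        [0.2, 0.5, 0.3],
        [0.4, 0.4, 0.2],
        [0.3, 0.3, 0.4],
    ])

    # Utility of each action (rows) in each state (columns); illustrative values.
    utility = np.array([
        [10.0, 2.0, 4.0],   # action 0
        [ 5.0, 5.0, 5.0],   # action 1
        [ 0.0, 8.0, 6.0],   # action 2
    ])

    # Expected utility of every action under every extreme point,
    # then lower/upper expectations over the credal set.
    expectations = utility @ credal_set.T
    lower, upper = expectations.min(axis=1), expectations.max(axis=1)

    # Interval dominance: discard action a if some other action b has a
    # lower expectation strictly greater than the upper expectation of a.
    kept = [a for a in range(len(utility))
            if not any(lower[b] > upper[a] for b in range(len(utility)) if b != a)]
    print("lower:", lower, "upper:", upper, "undominated actions:", kept)

In this toy example no action is discarded, which illustrates a general feature of such rules: with imprecise beliefs, the outcome may be a set of admissible actions rather than a single optimal one.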
Dates: 18/11 (14h15 - 18h30), Vu-Linh Nguyen
This first lecture dedicated to uncertainty in machine learning will provide a first illustration of how the mathematical elements of the previous lectures can be used in machine learning, notably through simple illustrations and examples.
Objectives of the lecture:
After the lectures, the students should be able to
- Understand the basics of the Imprecise Dirichlet Model (IDM)
- Apply it to a simple local learning scheme (a small sketch follows this list)
- Implement decision rules for this specific learning scheme
- Identify the main sources of uncertainty
- Have a basic understanding of the challenges underlying the evaluation of cautious classifiers
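As a rough preview of the first two objectives, the sketch below (our own toy code with made-up data; the function names are ours) applies the standard IDM intervals [n_c/(N+s), (n_c+s)/(N+s)] to the class counts found among the k nearest neighbours of a query point, a very simple local learning scheme.

    import numpy as np

    def idm_intervals(counts, s=2.0):
        # IDM lower/upper probabilities for each class, given observed counts
        # and the hyperparameter s (s = 1 or s = 2 are common choices).
        counts = np.asarray(counts, dtype=float)
        n = counts.sum()
        return counts / (n + s), (counts + s) / (n + s)

    def knn_idm_predict(X_train, y_train, x, k=5, n_classes=2, s=2.0):
        # Toy local scheme: gather the classes of the k nearest neighbours of x
        # and turn their counts into IDM probability intervals.
        dists = np.linalg.norm(X_train - x, axis=1)
        neighbours = y_train[np.argsort(dists)[:k]]
        counts = np.bincount(neighbours, minlength=n_classes)
        return idm_intervals(counts, s=s)

    # Tiny synthetic example (illustrative data only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 2))
    y = (X[:, 0] > 0).astype(int)
    lower, upper = knn_idm_predict(X, y, x=np.array([0.1, 0.0]), k=5)
    print("lower:", lower, "upper:", upper)

A cautious decision rule (e.g., interval dominance, as sketched for the previous lecture) can then be applied to these intervals, possibly returning a set of plausible classes rather than a single prediction.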
Dates: 25/11 (14h15 - 18h30), Vu-Linh Nguyen
This lecture will provide a first illustration of how the mathematical elements of the previous lectures can be used to build some simple imprecise classifiers. Simple illustrations and examples will be provided.
Objectives of the lecture:
After this lecture students should be able to
- Use the IDM and related models in the naïve credal classifier (NCC)
- Use IDM and related models in decision trees
Dates: 02/12 (14h15 - 18h30), Vu-Linh Nguyen
Objectives of the lecture:
After this lecture students should be able to
- Describe commonly used notions of classifier calibration
- Describe a few calibration errors and calibration methods
- Describe commonly used notions of coverage
- Describe a few coverage metrics and conformal procedures (a small sketch follows this list)
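To make the coverage/conformal objective concrete, here is a minimal sketch (our own, with made-up numbers; the function names are ours) of split conformal prediction for classification: a quantile of nonconformity scores computed on a calibration set gives a threshold, and the prediction set for a new point keeps every label whose score does not exceed it.

    import numpy as np

    def conformal_threshold(cal_scores, alpha=0.1):
        # Split-conformal threshold: the k-th smallest calibration score,
        # with k = ceil((n + 1) * (1 - alpha)), capped at n.
        n = len(cal_scores)
        k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
        return np.sort(cal_scores)[k - 1]

    def prediction_set(probs_new, threshold):
        # Keep every label whose nonconformity score (here 1 - predicted
        # probability) does not exceed the calibrated threshold.
        scores = 1.0 - np.asarray(probs_new)
        return np.where(scores <= threshold)[0]

    # Made-up calibration scores (1 - probability assigned to the true class)
    # and a made-up predicted distribution for a new point.
    cal_scores = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.45, 0.50, 0.60])
    threshold = conformal_threshold(cal_scores, alpha=0.1)
    print(prediction_set([0.70, 0.25, 0.05], threshold))   # indices of the labels kept

Under the usual exchangeability assumption, such sets contain the true label with probability at least 1 - alpha on average (marginal coverage), which is the notion of coverage mentioned above; see the [angelopoulos2021gentle] reference in the paper list below.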
Dates: 9/12 (14h15 - 18h30), Vu-Linh Nguyen
Objectives of the lecture:
After this lecture students should be able to
Dates: 16/12 (14h15 - 18h30), Students, Vu-Linh Nguyen and Sébastien Destercke
Dates: 6/1 (14h15 - 18h30), Students, Vu-Linh Nguyen and Sébastien Destercke
Here is a list of possible papers. The hardness of a paper ranges from + (rather easy to follow) to +++++ (quite hard to follow) and is based on our subjective perception of the paper.
We expect that the easier a paper is, the more of it is covered in the illustration, and the more worked out the latter should be.
For each paper, we also specify for which type of assignment we think the paper is suited (since not all papers lend themselves equally well to, e.g., implementation).
Suggestion of papers to select from:
- [quost2018classification] Quost, B., & Destercke, S. (2018). Classification by pairwise coupling of imprecise probabilities. Pattern Recognition, 77, 412-425.
Topic: pairwise decomposition in classification
Nature: methodological paper
Possible assignments: "Being in a teacher shoes", "Being in a TA shoes", "Wake up the blogger in you"
- [couso2000survey] Couso, I., Moral, S., & Walley, P. (2000). A survey of concepts of independence for imprecise probabilities. Risk, Decision and Policy, 5(2), 165-181.
Topic: independence notions for imprecise probabilities
Nature: survey paper
Possible assignments: "Being in a teacher shoes", "Being in a TA shoes", "Explain to your high-school nephew"
- [maua2018robustifying] Mauá, D. D., Conaty, D., Cozman, F. G., Poppenhaeger, K., & de Campos, C. P. (2018). Robustifying sum-product networks. International Journal of Approximate Reasoning, 101, 163-180.
Topic: extending a specific probabilistic circuit (can be seen as a specific neural network) to deal with probability sets
Nature: mostly methodological (some theory)
Possible assignments: "Being in a teacher shoes", "Explain to your high-school nephew", "wake up the blogger in you"
- [zaffalon2012evaluating] Zaffalon, M., Corani, G., & Mauá, D. (2012). Evaluating credal classifiers by utility-discounted predictive accuracy. International Journal of Approximate Reasoning, 53(8), 1282.
Nature: methodological
Possible assignments: "Being in a teacher shoes" (Selected; Lecture/Exercise; 1 st assignment), "Explain to your high-school nephew" (Selected; 2nd assignment), "wake up the blogger in you"
- [bernard2005introduction] Bernard, J. M. (2005). An introduction to the imprecise Dirichlet model for multinomial data. International Journal of Approximate Reasoning, 39(2-3), 123-150.
Topic: extending the Dirichlet model used in Bayesian approaches to estimate multinomials to the imprecise case
Nature: detailed and technical introduction to the model
Possible assignments: "Being in a TA shoes", "Explain to your high-school nephew"
- [nguyen2023learning] Nguyen, V. L., Zhang, H., & Destercke, S. (2023). Learning sets of probabilities through ensemble methods. ECSQARU 2023.
Topic: learning model that uses random forest to derive credal sets
Nature: methodological
Possible assignments: "Being in a TA shoes", "wake up the blogger in you", "Being in a teacher shoes"
- [alarcon2021imprecise] Alarcon, Y. C. C., & Destercke, S. (2021). Imprecise Gaussian discriminant classification. Pattern Recognition, 112, 107739.
Topic: learning model that generalises discriminant analysis
Nature: methodological
Possible assignments: "wake up the blogger in you", "Being in a teacher shoes"
- [angelopoulos2021gentle] Angelopoulos, A. N., & Bates, S. (2021). A gentle introduction to conformal prediction and distribution-free uncertainty quantification.