
Scientific courses - SOD314: Cooperative Optimization for Data Sciences

Domain > Optimization, Operations Research and Control, Mathematics and their Applications.

Description

The course presents continuous optimization techniques developed to deal with ever-increasing amounts of data. In particular, we look at optimization problems that depend on large-scale datasets, on spatially distributed data, and on local private data.

We will focus on three different aspects: (1) the development of algorithms that decompose the problem into smaller subproblems solvable with some degree of coordination; (2) the trade-off between cooperation and local computation; (3) how to design algorithms that ensure the privacy of sensitive data.

This course is open to students of the M2 "Data Sciences".

Learning objectives

Understand the challenges in cooperative optimization for large-scale data applications.

21 hours of in-person teaching (6 blocks or time slots)

33 hours of estimated personal work for the student.

Minimum / maximum enrollment:

10/50

Degree(s) concerned

For students of the degree Diplôme d'Ingénieur de l'Ecole Nationale Supérieure de Techniques Avancées

Prerequisite: a previous course on convex optimization, especially first-order algorithms (gradient descent), optimality conditions (KKT), and duality. For ENSTA students: OPT201.

Grading format

Numerical grade out of 20

Letter grade / European grade

For students of the degree Diplôme d'Ingénieur de l'Ecole Nationale Supérieure de Techniques Avancées

Your assessment terms:

Written exam and project.

A resit exam is allowed (the final grade is the maximum of the two grades, capped at a threshold grade; see the sketch after this list).
  • the resit is mandatory if:
    initial grade < 6
  • the resit may be requested by the student if:
    6 ≤ initial grade < 10
The course unit (UE) is validated if the final grade is >= 10.
  • ECTS credits earned: 1.5 ECTS
  • Scientific credits earned: 1.5
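
A minimal sketch of the resit arithmetic (the threshold grade is set by the school and is not specified here, so the value below is only a hypothetical placeholder):

```python
# Hypothetical helper illustrating the resit rule: the final grade is the
# maximum of the initial and resit grades, capped at a threshold grade.
def final_grade(initial, resit=None, threshold=10.0):  # threshold is a placeholder
    if resit is None:
        return initial
    return min(max(initial, resit), threshold)

print(final_grade(5.0, 13.0))  # resit mandatory (initial < 6); capped at 10.0
print(final_grade(8.0, 7.5))   # resit requested by the student; final = 8.0
```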

The coefficient of the course unit is: 1

The grade obtained counts toward the calculation of your GPA.

The course unit is evaluated by the students.

Detailed program

Tentative plan:

#1 Class. Introduction: recap of convex models and algorithms. A model for a network of communicating and computing nodes. Parallel methods in optimization: the Gauss-Seidel method, the Jacobi method, incremental methods. The consensus optimization problem.
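
To make the parallel-update distinction concrete, here is a minimal sketch (illustrative, not course material; the matrix and all names are hypothetical) contrasting a Jacobi sweep, where every coordinate is updated in parallel from the previous iterate, with a Gauss-Seidel sweep, where coordinates are updated sequentially and fresh values are reused immediately, on a small quadratic problem:

```python
# Minimal sketch: Jacobi vs. Gauss-Seidel on min_x 0.5 x^T A x - b^T x,
# whose optimality condition is the linear system A x = b.
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A0 = M @ M.T
A = A0 + np.diag(np.abs(A0).sum(axis=1))  # strictly diagonally dominant,
                                          # so both iterations converge
b = rng.standard_normal(n)

def jacobi_step(x):
    # All coordinates updated in parallel from the previous iterate:
    # x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii
    return (b - (A - np.diag(np.diag(A))) @ x) / np.diag(A)

def gauss_seidel_step(x):
    # Coordinates updated sequentially, reusing fresh values immediately.
    x = x.copy()
    for i in range(n):
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

xj = np.zeros(n)
xg = np.zeros(n)
for _ in range(100):
    xj, xg = jacobi_step(xj), gauss_seidel_step(xg)

x_star = np.linalg.solve(A, b)
print("Jacobi error      :", np.linalg.norm(xj - x_star))
print("Gauss-Seidel error:", np.linalg.norm(xg - x_star))
```

The Jacobi sweep is embarrassingly parallel (each node can own one coordinate), while Gauss-Seidel is inherently sequential but typically needs fewer sweeps: a first instance of the coordination trade-off above.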

#2 Class. Distributed optimization (I): primal methods, namely distributed gradient descent and gradient tracking. Communication vs. computation trade-off; network scaling.
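
As an illustration of the primal approach, the sketch below (assuming a hypothetical 5-agent ring network and scalar quadratic losses) runs decentralized gradient descent with gradient tracking: each agent mixes its local iterate with its neighbors' and maintains a running estimate of the network-wide average gradient.

```python
# Minimal sketch: gradient tracking on a ring of 5 agents, each holding
# f_i(x) = 0.5 * (x - c_i)^2; the network-wide minimizer is mean(c).
import numpy as np

n = 5
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # local data, one scalar per agent
grad = lambda x: x - c                   # stacked local gradients

# Doubly stochastic mixing matrix for the ring (Metropolis weights: each
# agent averages itself and its two neighbors with weight 1/3).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3

alpha = 0.1
x = np.zeros(n)   # one local copy of the decision variable per agent
y = grad(x)       # gradient tracker, initialized at the local gradients
for _ in range(200):
    x_new = W @ x - alpha * y             # mix with neighbors, then descend
    y = W @ y + grad(x_new) - grad(x)     # track the average gradient
    x = x_new

print("local iterates:", x)               # all close to mean(c) = 3.0
print("consensus gap :", x.max() - x.min())
```

Each iteration costs one round of neighbor-to-neighbor communication (the multiplications by W); a denser network mixes faster but communicates more, which is the communication vs. computation trade-off in miniature.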

#3 Class. Distributed optimization (II): dual methods: dual decomposition, ADMM; primal-dual methods and networked problems. Possible example in large-scale smart grids.
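
The following sketch (illustrative; the quadratic objectives and penalty parameter are assumptions) applies consensus ADMM to min_x sum_i f_i(x), rewritten as min sum_i f_i(x_i) subject to x_i = z: each agent solves a small local subproblem, and the dual variables enforce agreement with the global copy z.

```python
# Minimal sketch: consensus ADMM with f_i(x) = 0.5 * (x - c_i)^2.
import numpy as np

c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # local data, one term per agent
rho = 1.0                                # ADMM penalty parameter (assumed)
x = np.zeros_like(c)                     # local primal variables x_i
u = np.zeros_like(c)                     # scaled dual variables u_i
z = 0.0                                  # global consensus variable

for _ in range(50):
    # Local x-update: argmin_xi f_i(xi) + (rho/2) * (xi - z + u_i)^2,
    # which has a closed form for these quadratics.
    x = (c + rho * (z - u)) / (1 + rho)
    # Global z-update: average of the shifted local variables.
    z = np.mean(x + u)
    # Dual ascent on the consensus constraint x_i = z.
    u += x - z

print("consensus value:", z)  # converges to mean(c) = 3.0
```

Only the aggregate x + u has to be gathered at each round, which is what makes this decomposition pattern attractive for networked problems such as the smart-grid example.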

#4 Class. Federated optimization (I): the setting and the problem, its relation to distributed optimization and the main differences. Federated averaging and other momentum-based first-order algorithms.
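
Federated averaging itself fits in a few lines. The toy below (the least-squares data, client count, step size, and round budget are all hypothetical) alternates a few local gradient steps on each client with server-side averaging of the client models:

```python
# Minimal sketch of federated averaging (FedAvg) on synthetic local
# least-squares problems f_k(w) = 0.5 * ||A_k w - b_k||^2.
import numpy as np

rng = np.random.default_rng(1)
n_clients, dim = 4, 3
A = [rng.standard_normal((20, dim)) for _ in range(n_clients)]
b = [A_k @ np.ones(dim) + 0.1 * rng.standard_normal(20) for A_k in A]

def local_update(w, A_k, b_k, steps=5, lr=0.01):
    # A few local gradient steps on the client's own data; the raw data
    # (A_k, b_k) never leaves the client, only the updated model does.
    for _ in range(steps):
        w = w - lr * A_k.T @ (A_k @ w - b_k)
    return w

w = np.zeros(dim)                  # global model held by the server
for _ in range(100):               # communication rounds
    local_models = [local_update(w, A_k, b_k) for A_k, b_k in zip(A, b)]
    w = np.mean(local_models, axis=0)  # server averages the client models

print("global model:", w)          # approximately the all-ones vector
```

Raising `steps` reduces the number of communication rounds at the cost of more local drift when client data are heterogeneous, one of the main differences from the distributed setting of classes #2 and #3.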

#5 Class. Federated optimization (II): robustness; communication vs. computation trade-off; network scaling; relaxations; acceleration; personalization. Possible example with real data.

#6 Class. Privacy issues in optimization: the concept of privacy and how to enforce it. Differential privacy in distributed optimization and federated optimization. Data attacks, robustness.
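
As a flavor of the mechanisms involved, the sketch below (with hypothetical clipping and noise parameters, not a calibrated privacy guarantee) applies the usual differential-privacy template for first-order methods: clip each per-user gradient to bound its sensitivity, then add Gaussian noise before the aggregate is released.

```python
# Minimal sketch: noisy clipped gradient descent in the style of the
# Gaussian mechanism, on per-user losses f_i(w) = 0.5 * (w - c_i)^2.
import numpy as np

rng = np.random.default_rng(2)
c = np.array([1.0, 2.0, 3.0, 4.0])  # each entry stands for one user's data

def private_avg_gradient(w, clip=1.0, sigma=0.5):  # parameters are assumed
    g = w - c                                      # per-user gradients
    g = g / np.maximum(np.abs(g) / clip, 1.0)      # clip each |g_i| to <= clip
    noise = sigma * clip * rng.standard_normal()   # noise scaled to sensitivity
    return (g.sum() + noise) / len(c)              # the only quantity released

w = 0.0
for _ in range(300):
    w -= 0.1 * private_avg_gradient(w)

print("private solution:", w)  # near mean(c) = 2.5, up to the injected noise
```

Larger `sigma` strengthens the privacy guarantee but degrades accuracy; quantifying this tension, and its interaction with data attacks, is the subject of this class.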

Keywords

Convex optimization; distributed, parallel, and federated algorithms; privacy