The project gives you the opportunity to study some concepts of the course in greater depth, to put them into practice, or to pursue a related research direction. The topic has to be linked to algorithms, concepts, or methods presented in class, but beyond this requirement the choice is quite open: there is considerable flexibility, and the project should be tailored to your interests.
The project should be grounded in at least one research paper (which you can propose yourself). The project can be done individually or in teams of two to three people. Once your group and project topic are chosen, submit a plan and description on Studium so that I can validate it and potentially give you some feedback (by Tuesday April 7th).
There are many possible types of projects for this class; in particular, one can decide to focus more on structured prediction or on optimization aspects.
A standard class project would contain the following 3 components:
An article review around a given topic. The article can be chosen from the references given during the lectures, from the list below, or be some other article of your choice. This means reading and understanding a specific research article.
An implementation of the method.
An experimental evaluation with real data. This means applying the method to real data and reporting your findings and observations. If the paper is quite dense and theoretical, then an experiment on simulated/synthetic data is sufficient.
Here are examples of types of projects:
Application project: you focus on applying some methods seen in class (or related ones) to a problem that you care about or one taken from the applied literature. Here it is still important to refer to a paper describing the method so that you have a solid basis to work with.
Literature review: you read multiple papers on one topic to go into greater depth and write a synthesis. Ideally, you would also compare the different methods on real or synthetic data to see their pros and cons.
Theoretical project: you focus more on the theoretical properties of structured prediction or optimization (still grounded in some papers). This could involve convergence proofs, analysis of the theoretical properties of different methods, etc. Experiments might still be useful to provide more insight on the topic.
Research project: any of the above types can be pushed in a research direction if you go beyond the existing literature. I have listed many open questions during my class – feel free to tackle one of them. Some are actually quite low-hanging fruit.
The final class project counts for 100% of the grade. Evaluation will be based on:
A report (of about 4 to 8 pages) presenting the project and the obtained results (for applied projects), to be submitted by April 30th, 2020 on Studium. The report has to be written in such a way that any student who has followed the class can understand the gist. It has to clearly present (in French or English) the studied problem and the existing approaches. You will be evaluated more on the clarity of the report than on its length. To train you to write professional research papers, you should use LaTeX with a modified ICML 2018 template (download the template here; a minimal skeleton is sketched below). You may use appendices for additional details beyond 8 pages if you want, but be aware that, as in standard conference reviewing, I might only read the first 8 pages (so the main content has to be there); succinctness is valued more here than length!
A talk of 6-10 minutes (the exact length will be decided later, once I know the number of teams), which will be given online during the Thursday April 30th lecture time slot. The presentation (in French or English) is also geared towards the other students, and the goal is to highlight the salient points of your project in the allocated time.
The purpose of your slides is to explain clearly to the other students of the class the model, problems, and algorithms you have worked on, as well as any interesting observations you have made.
The timing will be strict, as I also want to be able to ask you a couple of questions and there are many of you. I highly recommend that you prepare ahead of time what you will say. Highlight your understanding and the main things you have done (model, main algorithmic ideas, data, results).
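If you have not used a conference template before, here is a minimal LaTeX skeleton in the spirit of the standard ICML 2018 example paper. This is only a sketch: it assumes the stock icml2018.sty commands, the modified template provided for the class may differ slightly, and all names and labels below are placeholders.

```latex
% Minimal sketch following the standard ICML 2018 example paper.
% Assumes icml2018.sty (from the template archive) sits next to this file;
% the modified template for the class may differ slightly.
\documentclass{article}
\usepackage[accepted]{icml2018}  % [accepted] shows author names

\icmltitlerunning{Short Running Title}  % header shown on each page

\begin{document}

\twocolumn[
\icmltitle{Your Project Title}
\begin{icmlauthorlist}
\icmlauthor{First Author}{uni}
\icmlauthor{Second Author}{uni}
\end{icmlauthorlist}
\icmlaffiliation{uni}{Your Department, Your University}
\icmlcorrespondingauthor{First Author}{first.author@example.com}
\icmlkeywords{structured prediction, optimization}
\vskip 0.3in
]
\printAffiliationsAndNotice{}

\begin{abstract}
One paragraph summarizing the problem, the approach, and the findings.
\end{abstract}

\section{Introduction}
% Main content (at most 8 pages): studied problem, existing approaches, results.

\end{document}
```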
The various steps are summarized below.
Before 4/7 | Choose your group and your project and give a description on Studium.
On 4/30 | Online talk about your project during the 1:30–3:30pm time slot (I'll make a schedule).
Before 4/30 | Submit your project report (4-8 pages, ICML format, on Studium).
As an example, here are the titles of projects from past years:
“A New Saga with Adam” (combining SAGA and SVRG with RMSprop and ADAM updates)
“Empirical Study of Variance Reduction Techniques for Variational Autoencoders”
“Application of the Hidden-Markov SVM method to the identification of horizontally transferred genes in bacterial genomes”
“Review of State of the Art Text Summarization Models”
“Stochastic Dual Coordinate Ascent for Conditional Random Fields”
“Structured prediction and reinforcement learning”
“Max-Margin Training for Inverse Robot Control”
“Deep Structured Models”
“A Project on Structured Prediction Energy Networks”
“Sequential Quadratic Programming for Deriving optimal SPL variant”
Here is a list of references to give you ideas. It is definitely not exhaustive – feel free to Google a bit! Many of these papers have code and datasets available online, so make sure to look for them. Also consider the other references that I have given in class.
J. Weston, O. Chapelle, A. Elisseeff, B. Schoelkopf and V. Vapnik, “Kernel Dependency Estimation”, NIPS, 2002.
SEARN: Hal Daumé III, John Langford, Daniel Marcu, “Search-based Structured Prediction”, Machine Learning, 2009.
Submodular potentials:
Associative Markov networks: D. Anguelov, B. Taskar, V. Chatalbashev, D. Koller, D. Gupta, G. Heitz, A. Ng. “Discriminative Learning of Markov Random Fields for Segmentation of 3D Scan Data”. CVPR, 2005.
A. Fix, T. Joachims, S. Park, R. Zabih. “Structured learning of sum-of-submodular higher order energy functions”. ICCV, 2013.
Ulf Brefeld, Tobias Scheffer, “Semi-Supervised Learning for Structured Output Variables”, ICML, 2006.
Latent variable models:
Chun-Nam Yu, Thorsten Joachims. “Learning Structural SVMs with Latent Variables”. ICML 2009.
Kuzman Ganchev, Joao Graca, Jennifer Gillenwater, Ben Taskar. “Posterior Regularization for Structured Latent Variable Models”. JMLR, 10, 2010.
Quannan Li, Jingdong Wang, David Wipf, Zhuowen Tu, “Fixed-Point Model For Structured Labeling”, ICML 2013.
Liang-Chieh Chen, Alexander G. Schwing, Alan L. Yuille, Raquel Urtasun, “Learning Deep Structured Models”, ICML 2015.
David Belanger, Andrew McCallum, “Structured Prediction Energy Networks”, ICML 2016.
Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, “Conditional Random Fields as Recurrent Neural Networks”, ICCV 2015.
Sutskever, Ilya, Vinyals, Oriol, and Le, Quoc V. “Sequence to sequence learning with neural networks”, NIPS 2014.
R. Leblond, J.-B. Alayrac, A. Osokin and S. Lacoste-Julien, “SEARNN: Training RNNs with Global-Local Losses”, ICLR 2018.
And a ton of other recent papers!
T. Joachims, “A Support Vector Method for Multivariate Performance Measures”, ICML 2005.
Anton Osokin and Pushmeet Kohli, “Perceptually Inspired Layout-aware Losses for Image Segmentation”, ECCV 2014.
McAllester, D. A., Hazan, T., and Keshet, J., “Direct loss minimization for structured prediction”, NIPS, 2010.
Song, Yang, Schwing, Alexander G., Zemel, Richard S., and Urtasun, Raquel. “Training deep neural networks via direct loss minimization”, ICML 2016.
Finley, T. and Joachims, T. “Training structural SVMs when exact inference is intractable”, ICML 2008.
Meshi, Ofer, Sontag, David, Globerson, Amir, and Jaakkola, Tommi S, “Learning efficiently with approximate inference via dual losses”, ICML 2010.
Tamir Hazan, Raquel Urtasun, “A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction”, NIPS 2010.
In NLP:
Liang Huang, Suphan Fayong, and Yang Guo, “Structured perceptron with inexact search”, NAACL 2012
Veselin Stoyanov and Jason Eisner, “Minimum-risk training of approximate CRF-based NLP systems”, NAACL 2012.
Shi, Qinfeng, Reid, Mark, Caetano, Tiberio, van den Hengel, Anton, and Wang, Zhenhua. “A hybrid loss for multiclass and structured prediction”, TPAMI 2015.
Structured prediction generalization bound: Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang, “Structured Prediction Theory Based on Factor Graph Complexity”, NIPS 2016.
Ciliberto, Carlo, Rosasco, Lorenzo, and Rudi, Alessandro. “A consistent regularization approach for structured prediction”, NIPS 2016.
Anton Osokin, Francis Bach, Simon Lacoste-Julien, “On Structured Prediction Theory with Calibrated Convex Surrogate Losses”, NIPS 2017.
Linked to GANs: Gabriel Huang, Hugo Berard, Ahmed Touati, Gauthier Gidel, Pascal Vincent, Simon Lacoste-Julien, “Parametric Adversarial Divergences are Good Task Losses for Generative Modeling”, arXiv:1708.02511.
S. Lacoste-Julien and M. Jaggi, “On the Global Linear Convergence of Frank-Wolfe Optimization Variants”, NIPS 2015.
D. Garber and O. Meshi, “Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes”, NIPS 2016.
A. Osokin, J.-B. Alayrac, I. Lukasewitz, P. Dokania and S. Lacoste-Julien, “Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs”, ICML 2016.
Mark Schmidt, Reza Babanezhad, Mohamed Osama Ahmed, Aaron Defazio, Ann Clifton, Anoop Sarkar, “Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields”, AISTATS 2015.
Shai Shalev-Shwartz and Tong Zhang, “Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization”, ICML 2014.
Shai Shalev-Shwartz and Tong Zhang, “Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization”, JMLR 2013.
Zeyuan Allen-Zhu, Yang Yuan, “Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives”, ICML 2016.
Hongzhou Lin, Julien Mairal, Zaid Harchaoui, “A Universal Catalyst for First-Order Optimization”, NIPS 2015.
Saddle point Frank-Wolfe: G. Gidel, T. Jebara and S. Lacoste-Julien, “Frank-Wolfe Algorithms for Saddle Point Problems”, AISTATS 2017.
Saddle point methods for GANs: G. Gidel, H. Berard, P. Vincent and S. Lacoste-Julien, “A Variational Inequality Perspective on Generative Adversarial Nets”, arXiv:1802.10551.
Semantic segmentation:
Ľubor Ladický, Chris Russell, Pushmeet Kohli, Philip H.S. Torr. “Inference Methods for CRFs with Co-occurrence Statistics”, IJCV 2012.
V. Lempitsky, A. Vedaldi, and A. Zisserman. “A Pylon Model for Semantic Segmentation”, NIPS 2011
Nathan Silberman, David Sontag, Rob Fergus, “Instance Segmentation of Indoor Scenes using a Coverage Loss”, ECCV, 2014.
Weak supervision:
M. Pawan Kumar, B. Packer and D. Koller, “Modeling Latent Variable Uncertainty for Loss-based Learning”, ICML 2012.
Detection:
P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, “Object Detection with Discriminatively Trained Part Based Models” TPAMI, Vol. 32, No. 9, September 2010.
Chaitanya Desai, Deva Ramanan, Charless C. Fowlkes, “Discriminative Models for Multi-Class Object Layout”, IJCV 2011.
P. Mohapatra, C. V. Jawahar and M. Pawan Kumar. “Efficient Optimization for Average Precision SVM”, NIPS 2014.
Pose estimation:
Yi Yang, Deva Ramanan. “Articulated Human Detection with Flexible Mixtures of Parts”, TPAMI 2013.
Jonathan Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, “Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation”, NIPS 2014.
Xianjie Chen, Alan Yuille, “Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations”, NIPS 2014.
Ben Sapp, Ben Taskar, “MODEC: Multimodal Decomposable Models for Human Pose Estimation”, CVPR 2013
Recognizing text on images: Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman. “Deep Structured Output Learning for Unconstrained Text Recognition”, ICLR 2015.
Structured perceptron for machine translation: P. Liang, Alexandre Bouchard-Cote, D. Klein and B. Taskar. “An End-to-End Discriminative Approach to Machine Translation”, ACL 2006.
Kevin Gimpel and Noah A. Smith. “Structured Ramp Loss Minimization for Machine Translation”, NAACL 2012.
S. Lacoste-Julien, B. Taskar, D. Klein, and M. Jordan. “Word Alignment via Quadratic Assignment”, NAACL 2006.
B. Taskar, D. Klein, M. Collins, D. Koller and C. Manning. “Max-Margin Parsing”, EMNLP 2004.
R. McDonald, K. Crammer, and F. Pereira, “Online Large-Margin Training of Dependency Parsers”, ACL 2005.
André F. T. Martins, Noah A. Smith, Pedro M. Q. Aguiar, and Mário A. T. Figueiredo, “Structured Sparsity in Structured Prediction”, EMNLP 2011.
Computational biology: Chun-Nam John Yu, T. Joachims, R. Elber, J. Pillardy. “Support Vector Training of Protein Alignment Models”. Journal of Computational Biology, 15(7): 867-880, September 2008.
Information retrieval: Yisong Yue, T. Finley, F. Radlinski, T. Joachims. “A Support Vector Method for Optimizing Average Precision”. SIGIR, 2007.