See also the Winter 2019 version of the outline for an idea of where the class is heading.
“Predicting Structured Data”, Edited by Gökhan BakIr, Thomas Hofmann, Bernhard Schölkopf, Alexander J. Smola, Ben Taskar and S.V.N. Vishwanathan, MIT Press 2006.
“Advanced Structured Prediction”, Edited by Sebastian Nowozin, Peter V. Gehler, Jeremy Jancsary and Christoph H. Lampert, MIT Press, 2014.
With applications:
“Linguistic Structure Prediction”, Noah Smith, Synthesis Lectures on Human Language Technologies, May 2011.
Sebastian Nowozin, Christoph H. Lampert, “Structured Learning and Prediction in Computer Vision”, Foundations and Trends in Computer Graphics and Vision (FnT CGV), 6(3-4), p. 185-365, 2011
Topics:
structured prediction basics
statistical decision theory setup
generative / discriminative continuum
examples of structured prediction models: word alignment, image segmentation, OCR
Pointers
See lecture 5 of my PGM class for a review of statistical decision theory
Sources for examples:
word alignment: “A Discriminative Matching Approach to Word Alignment”. B. Taskar, S. Lacoste-Julien, and D. Klein, EMNLP 2005
multi-object tracking: “On Pairwise Costs for Network Flow Multi-Object Tracking”. V. Chari, S. Lacoste-Julien, I. Laptev and J. Sivic, CVPR 2015.
image segmentation: “Learning Associative Markov Networks”, B. Taskar, V. Chatalbashev and D. Koller, ICML 2004
OCR: “Max-Margin Markov Networks”, B. Taskar, C. Guestrin and D. Koller, NIPS 2003 (best student paper award)
Big sidenote, other papers I have mentioned (completely optional):
Analogy between the 0-1 loss being difficult vs. the Hamming loss, and how the KL divergence is a bit like a 0-1 loss for generative modelling: “Parametric Adversarial Divergences are Good Task Losses for Generative Modeling”, G. Huang, H. Berard, A. Touati, G. Gidel, P. Vincent and S. Lacoste-Julien, arXiv:1708.02511, 2017.
Scaling up a word-alignment-inspired approach to align knowledge bases: “SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases”, S. Lacoste-Julien, K. Palla, A. Davies, G. Kasneci, T. Graepel and Z. Ghahramani, KDD 2013.
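As a quick reminder of the statistical decision theory setup listed in the topics above (a standard formulation, with notation possibly differing slightly from the class): the goal is to minimize the expected task loss, and the optimal (Bayes) predictor acts pointwise:
\[
f^\star \in \arg\min_{f} \; \mathbb{E}_{(X,Y)}\big[ L\big(Y, f(X)\big) \big],
\qquad
f^\star(x) \in \arg\min_{\hat{y} \in \mathcal{Y}} \; \mathbb{E}\big[ L(Y, \hat{y}) \,\big|\, X = x \big].
\]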
Topics:
constraints in structured prediction (word alignment; multi-object tracking)
prediction function
structured prediction losses: perceptron, log-loss (CRF), structured hinge loss
OCR example
Reading:
for next class, you can have a look at the tutorial on energy-based methods by Yann LeCun et al.:
Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato and Fu-Jie Huang: “A Tutorial on Energy-Based Learning”, in Bakir, G. and Hofmann, T. and Schölkopf, B. and Smola, A. and Taskar, B. (Eds), Predicting Structured Data, MIT Press, 2006.
Have a look at sections 1 to 3, 5, 7 and 8. Section 7.1 covers the structured perceptron, structured SVM and CRF that I mentioned in today's class.
Pointers
Perceptron loss: Collins, M. “Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms”, EMNLP 2002
Log loss (CRF): Lafferty, J., McCallum, A., Pereira, F. “Conditional random fields: Probabilistic models for segmenting and labeling sequence data”, ICML 2001.
M3-net: “Max-Margin Markov Networks”, B. Taskar, C. Guestrin and D. Koller, NIPS 2003 (best student paper award)
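For reference, the three surrogate losses listed in the topics above can be written as follows, for a linear score ⟨w, φ(x, y)⟩, a training pair (x_i, y_i) and a task loss L (these are the standard forms; notation may differ slightly from the class):
\[
\begin{aligned}
\text{perceptron loss:} \quad & \max_{y \in \mathcal{Y}} \langle w, \phi(x_i, y) \rangle \;-\; \langle w, \phi(x_i, y_i) \rangle \\
\text{log-loss (CRF):} \quad & \log \sum_{y \in \mathcal{Y}} \exp\!\big( \langle w, \phi(x_i, y) \rangle \big) \;-\; \langle w, \phi(x_i, y_i) \rangle \\
\text{structured hinge loss:} \quad & \max_{y \in \mathcal{Y}} \big[ L(y_i, y) + \langle w, \phi(x_i, y) \rangle \big] \;-\; \langle w, \phi(x_i, y_i) \rangle
\end{aligned}
\]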
Topics:
energy based models
revisiting surrogate losses
binary SVM as special case of structured SVM
surrogate losses for binary classification
Pointers
energy based models: Yann LeCun, Sumit Chopra, Raia Hadsell, Marc'Aurelio Ranzato and Fu-Jie Huang: “A Tutorial on Energy-Based Learning”, in Bakir, G. and Hofmann, T. and Schölkopf, B. and Smola, A. and Taskar, B. (Eds), Predicting Structured Data, MIT Press, 2006.
Perceptron loss: Collins, M. “Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms”, EMNLP 2002
Log loss (CRF): Lafferty, J., McCallum, A., Pereira, F. “Conditional random fields: Probabilistic models for segmenting and labeling sequence data”, ICML 2001.
Structured hinge loss (structured SVM):
B. Taskar, C. Guestrin and D. Koller, “Max-Margin Markov Networks”, NIPS 2003.
Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann and Yasemin Altun, “Large Margin Methods for Structured and Interdependent Output Variables”, JMLR, 2005.
Pointers to learn about binary SVM:
A nice tutorial paper: K. Bennett and C. Campbell, “Support vector machines: hype or hallelujah?”, SIGKDD Explor. Newsl. 2, 2 (December 2000), 1-13.
A classical (though somewhat old) book: N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, Cambridge University Press, 2000.
The classical paper which covers several surrogate losses for binary classification: Bartlett, Peter L., Jordan, Michael I., and McAuliffe, Jon D., “Convexity, classification, and risk bounds”, Journal of the American Statistical Association, 101(473):138–156, 2006.
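As a reminder, the classical binary surrogate losses can all be written as functions of the margin m = y f(x) with y ∈ {−1, +1} (up to the usual convention at m = 0 for the 0-1 loss):
\[
\ell_{0\text{-}1}(m) = \mathbf{1}[m \le 0], \quad
\ell_{\text{hinge}}(m) = \max(0, 1 - m), \quad
\ell_{\text{logistic}}(m) = \log\!\big(1 + e^{-m}\big), \quad
\ell_{\text{exp}}(m) = e^{-m}.
\]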
Topics:
theory for binary classification:
no free lunch theorem
Occam's generalization error bound
Pointers
Source for the uniform consistency of the voting rule for binary classification when X is finite:
(in French, sorry!): see Theorem 2.1 of the very nice lectures from Sylvain Arlot.
Source for No Free Lunch theorems: Theorem 7.1 and Theorem 7.2 (chapter 7) of Devroye et al., “A Probabilistic Theory of Pattern Recognition”, 1996.
Notes for Occam's bound:
(in French) lecture notes from a class I taught at ENS.
This class was based on very interesting notes from a class taught by David McAllester (the guy behind PAC-Bayes and many other things) – these are in English.
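One common form of Occam's bound (Hoeffding's inequality combined with a union bound over a finite hypothesis class H, for a loss bounded in [0, 1]): with probability at least 1 − δ over an i.i.d. sample of size n, for all h in H,
\[
R(h) \;\le\; \hat{R}_n(h) \;+\; \sqrt{\frac{\log |\mathcal{H}| + \log (1/\delta)}{2n}} ,
\]
where the left-hand side is the true risk and the first term on the right is the empirical risk on the sample.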
Topics:
PAC-Bayes for structured prediction; probit loss
Structured prediction surrogate losses recap
Motivation for structured prediction
Pointers:
PAC-Bayes bound from McAllester 2003 was taken from Lemma 4 in: D. McAllester, “Generalization Bounds and Consistency for Structured Labeling”, in Predicting Structured Data, edited by G. Bakir, T. Hofmann, B. Schölkopf, A. Smola, B. Taskar, and S. V. N. Vishwanathan. MIT Press, 2007.
See also proof in his lecture notes that I had linked to last lecture.
Probit loss and its consistency for structured prediction: David McAllester, Joseph Keshet, “Generalization Bounds and Consistency for Latent Structural Probit and Ramp Loss”, (oral), NIPS 2011.
Skipped: Concentration inequalities; Chernoff bound
See 2018's notes on the Chernoff bound.
Concentration inequality: see the Wikipedia article on concentration inequalities; Chernoff bound is here and the proof of Hoeffding's lemma is here.
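For reference, Hoeffding's inequality in the form most often used for these bounds: if Z_1, …, Z_n are i.i.d. with values in [a, b], then for any t > 0,
\[
\mathbb{P}\!\left( \frac{1}{n}\sum_{i=1}^n Z_i \;-\; \mathbb{E}[Z_1] \;\ge\; t \right)
\;\le\; \exp\!\left( \frac{-2 n t^2}{(b-a)^2} \right).
\]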
Topics:
Generalization error bound: VC dimension, Rademacher complexity
Structured prediction generalization error bounds (factor graph complexity)
Extra: non-parametric methods, kernel trick, RKHS
Pointers:
VC dimension / Rademacher complexity for binary case: see slides from presentation by John Shawe-Taylor at MLSS 2009.
VC dimension definition: slide 38
generalization error bound for binary classification with VC dimension: slide 46
Rademacher complexity: slide 85
generalization error bound with Rademacher complexity: slide 87
Structured prediction generalization bound: Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang, “Structured Prediction Theory Based on Factor Graph Complexity”, NIPS 2016.
Sidenote: relationship between constrained and regularized / penalized formulations:
see section 4.7.3 (Pareto) and 4.7.4 (scalarization) of Boyd's book for the formal relationships
(in French): see exercise 2 from this homework from my ENS class.
Kernel methods and RKHS:
See the RKHS entry on Wikipedia.
See a machine learning summer school tutorial on kernel methods by Arthur Gretton here. His first lecture gives an introduction to the RKHS.
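One standard form of the Rademacher complexity bound (constants depend on the convention used): for a class F of functions taking values in [0, 1], with probability at least 1 − δ, for all f in F,
\[
\mathbb{E}[f(Z)] \;\le\; \frac{1}{n}\sum_{i=1}^n f(Z_i) \;+\; 2\,\mathfrak{R}_n(\mathcal{F}) \;+\; \sqrt{\frac{\log(1/\delta)}{2n}} ,
\]
where the middle term on the right is (twice) the Rademacher complexity of F.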
Topics:
Review of kernel trick, representer's theorem
RKHS: intuition from spectral theorem + formalization
Mercer's theorem and infinite dimensional SVD
Pointers:
See the RKHS entry on Wikipedia.
See the nice Spring 2013 class “machine learning with kernel methods” by Jean-Philippe Vert. Below I give pointers to the 2013 version of the slides:
slide 16: finite space example
slide 55: representer's theorem
slide 150: Mercer's theorem
slide 156: subset of L2 viewpoint (see slide 163 for more info)
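To make the representer theorem concrete, here is a minimal kernel ridge regression sketch (my own toy example, not code from the class or the slides above): by the representer theorem the solution lives in the span of the kernel functions centered at the training points, so only the coefficient vector alpha needs to be solved for.
```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, lam=0.1, gamma=1.0):
    # Representer theorem: f(.) = sum_i alpha_i k(x_i, .), so we only solve for alpha.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)
    return alpha

def predict(alpha, X_train, X_test, gamma=1.0):
    return rbf_kernel(X_test, X_train, gamma) @ alpha

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
alpha = fit_kernel_ridge(X, y)
y_hat = predict(alpha, X, X)
```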
Topics:
Consistent surrogate loss for structured prediction
Calibration function
Some task losses are harder than others
Pointers:
Main pointer: Anton Osokin, Francis Bach, Simon Lacoste-Julien, “On Structured Prediction Theory with Calibrated Convex Surrogate Losses”, (oral), NIPS 2017.
other pointers:
Follow-up work with inconsistent losses: K. Struminsky, S. Lacoste-Julien and A. Osokin, “Quantifying Learning Guarantees for Convex but Inconsistent Surrogates”, NIPS 2018.
Canonical paper which presented consistency analysis for binary classification: Bartlett, Peter L., Jordan, Michael I., and McAuliffe, Jon D. “Convexity, classification, and risk bounds”, Journal of the American Statistical Association, 101(473):138–156, 2006.
Paper which introduced the terminology for calibration function (binary): Ingo Steinwart, “How to Compare Different Loss Functions and Their Risks”, Constructive Approximation, 26:225-287, 2007.
paper which showed that multiclass SVM is not consistent (for the 0-1 loss) and proposed a consistent alternative: Lee, Yoonkyung, Lin, Yi, and Wahba, Grace. “Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data”. Journal of the American Statistical Association, 99(465):67–81, 2004.
See also the McAllester 2007 paper: D. McAllester, “Generalization Bounds and Consistency for Structured Labeling”, in Predicting Structured Data, edited by G. Bakir, T. Hofmann, B. Schölkopf, A. Smola, B. Taskar, and S. V. N. Vishwanathan. MIT Press, 2007.
And interestingly, this recent paper shows that the multiclass SVM is consistent for a loss on 3 classes with an “abstain” notion: Ramaswamy, Harish G. and Agarwal, Shivani. “Convex calibration dimension for multiclass loss matrices”. JMLR, 17(14):1–45, 2016.
see also the extensive related work section of the NIPS 2017 paper by Osokin et al.
Topics:
Finish going through the NIPS 2017 paper “On Structured Prediction Theory with Calibrated Convex Surrogate Losses”
convex analysis recap; subgradient
Pointers:
Optimization books:
Classical optimization textbook (does not assume convexity!): Dimitri P. Bertsekas, Nonlinear Programming, 2016.
Good coverage of convex optimization (free): Stephen Boyd and Lieven Vandenberghe, Convex Optimization, 2004.
For algorithms: Jorge Nocedal and Stephen J. Wright, Numerical Optimization, 2006.
For convex analysis: Borwein and Lewis, Convex Analysis and Nonlinear Optimization, 2006.
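As a reminder, g is a subgradient of a convex function f at x if it defines a global linear under-estimator of f:
\[
f(y) \;\ge\; f(x) + \langle g, \, y - x \rangle \quad \text{for all } y,
\]
and the subdifferential ∂f(x) is the set of all such g.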
Topics:
fundamental descent lemma
stochastic subgradient method
convergence proof for stochastic subgradient method (convex or strongly convex)
Pointers:
Proof of convergence for the weighted average stochastic subgradient method: Lacoste-Julien, Schmidt, Bach, “A simpler approach to obtaining an O(1/t) convergence rate for the projected stochastic subgradient method”, arXiv:1212.2002
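For reference, the fundamental descent lemma mentioned in the topics above: if f is L-smooth (its gradient is L-Lipschitz), then for all x, y,
\[
f(y) \;\le\; f(x) + \langle \nabla f(x), \, y - x \rangle + \frac{L}{2} \, \| y - x \|^2 .
\]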
Topics:
landscape of convergence rates
SGD for CRF
stochastic subgradient method for structured SVM
Pointers:
Bible book for (deterministic) rates of convergence for convex optimization: Nesterov, Introductory Lectures on Convex Optimization, 2004.
SVRG algorithm with similar convergence rate as SAGA, Hofmann's SVRG variant proposed in: T. Hofmann, A. Lucchi, S. Lacoste-Julien, and Brian McWilliams, “Variance Reduced Stochastic Gradient Descent with Neighbors”, NIPS 2015. See also a clear explanation in Section 4.1 of: R. Leblond, F. Pedregosa, and S. Lacoste-Julien, “Improved Asynchronous Parallel Optimization Analysis for Stochastic Incremental Methods”, JMLR 2018.
First application of stochastic subgradient method for structured SVM: Ratliff, N., Bagnell, J. A., and Zinkevich, M. “(Online) subgradient methods for structured prediction”, AISTATS, 2007.
Weighted average version for structured SVM – see Section 6 in: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
Other pointers:
Subgradient of the max of convex functions: see Danskin's theorem.
A tighter analysis of SGD (smooth case): F. Bach, E. Moulines, “Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning”, NIPS 2011.
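A minimal sketch of the stochastic subgradient method applied to the regularized structured hinge loss, in the spirit of the Ratliff et al. reference above; loss_augmented_inference and joint_feature are assumed problem-specific oracles (hypothetical names), and the 1/(λt) step size with weighted averaging is one standard choice, not necessarily the exact scheme of the papers above.
```python
import numpy as np

def stochastic_subgradient_ssvm(data, joint_feature, loss_augmented_inference,
                                dim, lam=0.01, num_iters=10000, seed=0):
    """data: list of (x_i, y_i) pairs.
    joint_feature(x, y) -> phi(x, y) as a numpy array of length dim.
    loss_augmented_inference(w, x, y_true) -> argmax_y [ L(y_true, y) + <w, phi(x, y)> ]."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    w_avg = np.zeros(dim)
    for t in range(1, num_iters + 1):
        x_i, y_i = data[rng.integers(len(data))]
        y_star = loss_augmented_inference(w, x_i, y_i)
        # subgradient of  lam/2 ||w||^2 + max_y [L(y_i, y) + <w, phi(x_i, y)>] - <w, phi(x_i, y_i)>
        g = lam * w + joint_feature(x_i, y_star) - joint_feature(x_i, y_i)
        w = w - g / (lam * t)                                   # step size 1/(lam * t) (strongly convex case)
        w_avg += (2.0 * t / (num_iters * (num_iters + 1))) * w  # weighted average, heavier on later iterates
    return w_avg
```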
Topics:
generic approach using duality to get small QP
saddle-point formulation
variational inequality perspective of saddle points
small QP formulation for structured SVM
examples of efficient loss-augmented inference: word alignment
Pointers:
Generic formulation using duality on the convex form of the loss-augmented inference: B. Taskar, V. Chatalbashev, D. Koller and C. Guestrin, “Learning Structured Prediction Models: A Large Margin Approach”, ICML 2005.
Saddle-point formulation and more details (e.g. including the word alignment example and also mentioning the extragradient method): B. Taskar, S. Lacoste-Julien, and M. Jordan, “Structured Prediction, Dual Extragradient and Bregman Projections”, JMLR 2006.
More on the variational inequality perspective (the paper I mentioned which uses the extragradient method for GANs): G. Gidel, H. Berard, P. Vincent and S. Lacoste-Julien, “A Variational Inequality Perspective on Generative Adversarial Nets”, ICLR 2019.
Book that I mentioned that explains totally unimodular matrices: “Combinatorial Optimization: Algorithms and Complexity” by C. Papadimitriou & K. Steiglitz, 1998.
Topics:
M3-net efficient formulation; marginal polytope; triangulated graphs
Joint from clique marginals using junction tree
Marginal polytope
M3-net QP or saddle point version
Pointers:
M3-net paper: B. Taskar, C. Guestrin and D. Koller, “Max-Margin Markov Networks”, NIPS 2003.
The LP formulation of MAP inference in MRFs is explained in Section 13.5 of Koller & Friedman's book, for example.
See Def. 10.6 in Koller & Friedman's book for how to express the joint distribution from the consistent clique marginals when you have a junction tree
Book to learn more about polytopes: Ziegler, Lectures on Polytopes, 1995.
To learn more about the marginal polytopes, see Section 3.4 of: M. J. Wainwright and M. I. Jordan, “Graphical models, exponential families, and variational inference”. Foundations and Trends in Machine Learning, 2008.
Also, see Figure 4.1 on p. 80 for an example of fractional corners of the local consistency polytope (vs. the marginal polytope, which only has integer vertices).
The saddle point Frank-Wolfe algorithm paper: G. Gidel, T. Jebara and S. Lacoste-Julien, “Frank-Wolfe Algorithms for Saddle Point Problems”, AISTATS 2017.
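For reference, the marginal polytope of a discrete model with sufficient statistics / clique indicator features φ is simply the set of mean parameters realizable by some distribution over Y (in Wainwright & Jordan's notation):
\[
\mathcal{M} \;=\; \big\{ \mu \;:\; \mu = \mathbb{E}_{Y \sim p}[\phi(Y)] \text{ for some distribution } p \text{ over } \mathcal{Y} \big\}
\;=\; \operatorname{conv}\{ \phi(y) : y \in \mathcal{Y} \}.
\]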
Topics:
interior point method
Lagrangian duality for structured SVM objective
Properties of primal-dual structured SVM objective
Pointers:
See chapter 11 of Boyd's book for interior point methods.
See chapter 5 of Boyd's book for a detailed coverage of Lagrangian duality.
See Lecture 16 of my 2017 PGM class for some scribbles on duality.
See this paper for derivations of the dual of structured SVM objective with similar notation: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
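As a reminder of the objective being dualized, the n-slack structured SVM primal can be written as (standard form, with ψ_i(y) := φ(x_i, y_i) − φ(x_i, y)):
\[
\min_{w,\,\xi} \;\; \frac{\lambda}{2} \| w \|^2 + \frac{1}{n} \sum_{i=1}^n \xi_i
\quad \text{s.t.} \quad
\langle w, \psi_i(y) \rangle \;\ge\; L(y_i, y) - \xi_i \quad \forall i, \; \forall y \in \mathcal{Y}.
\]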
Topics:
properties of structured SVM solutions
M3-net (dual formulation)
constraint generation algorithm (“cutting plane” misnomer)
1-slack vs. n-slack constraint generation approach
Frank-Wolfe algorithm and properties: sparsity, gap, affine co-variance
Pointers:
Original constraint generation approach for structured SVM (n-slack): I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun, “Large Margin Methods for Structured and Interdependent Output Variables”, JMLR 2005.
Improved version (with 1-slack formulation): T. Joachims, T. Finley, Chun-Nam Yu, “Cutting-Plane Training of Structural SVMs”, Machine Learning Journal, 77(1):27-59, 2009.
Good modern overview of Frank-Wolfe algorithm with applications in machine learning: M. Jaggi, “Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization”, ICML 2013.
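A minimal sketch of the vanilla Frank-Wolfe algorithm with the standard 2/(t+2) step size and the duality gap as stopping criterion; grad and lmo (the linear minimization oracle over the feasible set) are assumed to be supplied by the user.
```python
import numpy as np

def frank_wolfe(x0, grad, lmo, num_iters=1000, tol=1e-6):
    """Minimize a smooth convex f over a compact convex set M.
    grad(x) -> gradient of f at x; lmo(g) -> argmin_{s in M} <g, s>."""
    x = np.array(x0, dtype=float)
    for t in range(num_iters):
        g = grad(x)
        s = lmo(g)                      # Frank-Wolfe vertex
        gap = g @ (x - s)               # duality gap: upper bound on f(x) - f(x*)
        if gap <= tol:
            break
        gamma = 2.0 / (t + 2.0)         # default step size (no line search)
        x = (1.0 - gamma) * x + gamma * s
    return x
```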
Topics:
away-step Frank-Wolfe algorithm
non-convex Frank-Wolfe and stationarity condition
Affine co-variance of FW
Pointers:
Modern survey of away-step Frank-Wolfe and other variants: S. Lacoste-Julien and M. Jaggi, “On the Global Linear Convergence of Frank-Wolfe Optimization Variants”, NIPS 2015.
A more efficient version of the away-step recently revisited: D. Garber and O. Meshi, “Linear-Memory and Decomposition-Invariant Linearly Convergent Conditional Gradient Algorithm for Structured Polytopes”, NIPS 2016.
Convergence of FW on non-convex objectives: S. Lacoste-Julien, “Convergence Rate of Frank-Wolfe for Non-Convex Objectives”, arXiv:1607.00345, 2016.
Successful application of FW on a non-convex objective for multiple sequence alignment, Section 5.2 of: J.-B. Alayrac, P. Bojanowski, N. Agrawal, I. Laptev, J. Sivic and S. Lacoste-Julien, “Learning from Narrated Instruction Videos”, TPAMI 2018.
Topics:
Convergence proof of Frank-Wolfe algorithm
Application of FW to structured SVM
Pointers:
The standard 2/(t+2) step-size proof for Frank-Wolfe convergence: M. Jaggi, “Revisiting Frank-Wolfe: Projection-Free Sparse Convex Optimization”, ICML 2013.
Linear convergence proof for AFW: S. Lacoste-Julien and M. Jaggi, “On the Global Linear Convergence of Frank-Wolfe Optimization Variants”, NIPS 2015.
Application of Frank-Wolfe to structured SVM: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
More technical pointers for the curious people:
For the formal proof of the differential equation approach, see Lemma D.5 in: R. Krishnan, S. Lacoste-Julien and D. Sontag, “Barrier Frank-Wolfe for Marginal Inference”, NIPS 2015.
An illustration of the “brute-force” approach with arbitrary step-sizes, but used in the context of convergence of SGD – see proof of Theorem 1 in Appendix A of: F. Bach, E. Moulines, “Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning”, NIPS 2011.
More refined convergence results for generic step-sizes for the Frank-Wolfe algorithm: R. M. Freund and P. Grigas, “New Analysis and Results for the Frank-Wolfe Method”, Mathematical Programming 2016.
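For reference, the standard rate coming out of the 2/(t+2) step-size analysis (as in the Jaggi 2013 paper above), with C_f the curvature constant of f over the domain:
\[
f\big(x^{(t)}\big) - f\big(x^\star\big) \;\le\; \frac{2\, C_f}{t + 2} .
\]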
Topics:
Applying FW variants on structured SVM objective
Relationships: FW on dual equivalent to batch subgradient method on primal; FCFW on dual equivalent to 1-slack cutting plane on primal.
Block-coordinate Frank-Wolfe (BCFW) applied to structured SVM
Pointers:
The usual pointer for FW for structured SVM (as in previous lecture): S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
The linear convergence of FCFW is given in (already cited before): S. Lacoste-Julien and M. Jaggi, “On the Global Linear Convergence of Frank-Wolfe Optimization Variants”, NIPS 2015.
A good application of using FCFW when the linear oracle is expensive is given in this paper: R. Krishnan, S. Lacoste-Julien and D. Sontag, “Barrier Frank-Wolfe for Marginal Inference”, NIPS 2015.
The generalization of the observation that Frank-Wolfe optimization sometimes reduces to the subgradient method on the primal is given in: F. Bach. “Duality between subgradient and conditional gradient methods”. SIAM Journal on Optimization, 25(1):115-129, 2015.
Nesterov coordinate method: Y. Nesterov, “Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems”, SIAM J. Optim., 22(2), 341–362, 2012.
BCFW in all its gory details: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
Improvements of BCFW for SVMstruct (non-uniform sampling, away steps, etc.): A. Osokin, J.-B. Alayrac, I. Lukasewitz, P. Dokania and S. Lacoste-Julien, “Minding the Gaps for Block Frank-Wolfe Optimization of Structured SVMs”, ICML 2016.
Code.
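A minimal sketch of generic block-coordinate Frank-Wolfe over a product domain M^(1) × … × M^(n); partial_grad and lmo_block are assumed block oracles (hypothetical names), and this is only the generic version with the 2n/(k+2n) default step size, not the specialized structured SVM variant with line search described in the paper.
```python
import numpy as np

def block_coordinate_frank_wolfe(x0, partial_grad, lmo_block, num_iters=10000, seed=0):
    """x0: list of arrays, one per block of the product domain M^(1) x ... x M^(n).
    partial_grad(x, i) -> gradient of the objective w.r.t. block i.
    lmo_block(g_i, i)  -> argmin_{s in M^(i)} <g_i, s>  (linear oracle on block i only)."""
    rng = np.random.default_rng(seed)
    x = [np.array(b, dtype=float) for b in x0]
    n = len(x)
    for k in range(num_iters):
        i = int(rng.integers(n))                     # pick one block uniformly at random
        g_i = partial_grad(x, i)
        s_i = lmo_block(g_i, i)                      # only block i's (cheap) linear oracle is called
        gamma = 2.0 * n / (k + 2.0 * n)              # default step size (no line search)
        x[i] = (1.0 - gamma) * x[i] + gamma * s_i    # only block i moves
    return x
```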
Topics:
Application of BCFW to SVMstruct
Variance reduction for incremental gradient methods: SAG
Pointers:
Stochastic average gradient (SAG):
Original NIPS 2012 paper: N. Le Roux, M. Schmidt, F. Bach, “A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets”, NIPS 2012.
The practical aspects of SAG are described in the massive journal version: M. Schmidt, N. Le Roux, F. Bach. “Minimizing Finite Sums with the Stochastic Average Gradient” Mathematical Programming, 162:83-162, 2017. (arxiv)
An alternative to the complicated “lagged updates” when you have sparse features is the Sparse SAGA algorithm; see Section 2 of: R. Leblond, F. Pedregosa and S. Lacoste-Julien, “ASAGA: Asynchronous Parallel SAGA”, AISTATS 2017.
SAGA paper – unbiased version of SAG (with simpler proof) as well as variance reduction perspective on SAG / SAGA / SVRG (see references therein for SDCA and SVRG): A. Defazio, F. Bach and S. Lacoste-Julien. “SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives”, NIPS 2014.
lecture20 scribbles | recorded lecture (only 2nd hour reconstructed by Mila technician) – unfortunately the 1st hour was lost by Bluejeans :(
Topics:
Variance reduction for incremental gradient methods: SAG, SAGA, SVRG, etc.
CRF objective and dual; optimization algorithms:
online exponentiated gradient
Application of SAGA to CRFs
Proximal gradient method
Pointers:
SAGA paper – unbiased version of SAG (with simpler proof) as well as variance reduction perspective on SAG / SAGA / SVRG (see references therein for SDCA and SVRG): A. Defazio, F. Bach and S. Lacoste-Julien. “SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives”, NIPS 2014.
SVRG paper: Rie Johnson and Tong Zhang “Accelerating stochastic gradient descent using predictive variance reduction”, NIPS 2013.
Note that a variant of SVRG that is adaptive to local strong convexity is given in the following paper (where the end of the inner loop is decided randomly: at every inner loop iteration, with probability 1/n, you end the inner loop); it also contains a simpler proof of convergence for SAGA: T. Hofmann, A. Lucchi, S. Lacoste-Julien, and Brian McWilliams, “Variance Reduced Stochastic Gradient Descent with Neighbors”, NIPS 2015.
tutorial by Francis Bach on SAG et al. at NIPS 2016 – slides
variance reduced SGD in the context of deep learning, that I briefly mentioned in class:
Paper which investigates why SVRG does not seem to work that well for deep network training: A. Defazio, L. Bottou. “On the Ineffectiveness of Variance Reduced Optimization for Deep Learning”, NeurIPS 2019.
Paper which shows that variance reduction can work quite well for saddle point problems: T. Chavdarova, G. Gidel, F. Fleuret, S. Lacoste-Julien. “Reducing Noise in GAN Training with Variance Reduced Extragradient”, NeurIPS 2019.
CRF optimization:
Online exponentiated gradient for CRF paper: Collins, M., Globerson, A., Koo, T., Carreras, X., and Bartlett, P. L. “Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks”, JMLR, 9:1775-1822, 2008.
SAG for CRF paper: M. Schmidt, R. Babanezhad, M.O. Ahmed, A. Defazio, A. Clifton, A. Sarkar, “Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields”, AISTATS 2015.
Adaptive SDCA for CRFs: R. Le Priol, A. Piché and S. Lacoste-Julien, “Adaptive Stochastic Dual Coordinate Ascent for Conditional Random Fields”, UAI 2018.
Proximal gradient method: see slides of this great optimization class by L. Vandenberghe.
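A minimal sketch of the basic SAGA update for minimizing an average of n functions, keeping a table of the last gradient seen for each index; grad_i is an assumed per-example gradient oracle (hypothetical name), and this ignores the proximal/composite part handled in the paper.
```python
import numpy as np

def saga(w0, grad_i, n, step=0.01, num_iters=100000, seed=0):
    """Minimize (1/n) * sum_i f_i(w).  grad_i(w, i) -> gradient of f_i at w."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    table = np.array([grad_i(w, i) for i in range(n)])  # stored per-example gradients
    table_mean = table.mean(axis=0)
    for _ in range(num_iters):
        i = int(rng.integers(n))
        g_new = grad_i(w, i)
        v = g_new - table[i] + table_mean      # unbiased variance-reduced gradient estimate
        w = w - step * v
        table_mean += (g_new - table[i]) / n   # keep the running mean of the table in sync
        table[i] = g_new
    return w
```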
Topics:
proximal gradient method for lasso -> ISTA; prox SAGA
General acceleration scheme: catalyst
Non-convex optimization
submodular optimization
Pointers:
Catalyst – meta-algorithm for acceleration: Hongzhou Lin, Julien Mairal, Zaid Harchaoui, “A Universal Catalyst for First-Order Optimization”, NIPS 2015.
Non-convex optimization: see slides from Suvrit Sra at the NIPS 2016 tutorial on “Large-Scale Optimization: Beyond Stochastic Gradient Descent and Convexity” (e.g. table of rates on p. 20 and 22)
submodularity:
website with tutorials and pointers: http://submodularity.org/
detailed monograph by Francis Bach: F. Bach. “Learning with Submodular Functions: A Convex Optimization Perspective”, Foundations and Trends in Machine Learning, 6(2-3):145-373, 2013 | slides
Other good optimization pointers:
great coverage of convex optimization by Mark Schmidt at the Machine Learning Summer School in 2015 - slides | video
two great classes on optimization:
EE236C - Optimization Methods for Large-Scale Systems (Spring 2016) - Prof. L. Vandenberghe, UCLA - link
Convex optimization class by Ryan Tibshirani at CMU - Fall 2015 - link
slides on the Frank-Wolfe lecture
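A minimal ISTA sketch for the lasso (least squares plus an ℓ1 penalty), i.e. the proximal gradient method where the prox of the ℓ1 norm is soft-thresholding; this is my own toy example, not code from any of the pointers above.
```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1: shrink each coordinate toward 0 by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam=0.1, num_iters=500):
    """Minimize 0.5 * ||A w - b||^2 + lam * ||w||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    w = np.zeros(A.shape[1])
    for _ in range(num_iters):
        grad = A.T @ (A @ w - b)           # gradient of the smooth least-squares term
        w = soft_threshold(w - grad / L, lam / L)
    return w

# toy usage
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
w_true = np.zeros(20); w_true[:3] = [1.0, -2.0, 0.5]
b = A @ w_true + 0.01 * rng.normal(size=100)
w_hat = ista(A, b)
```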
Topics:
Latent structured SVM + CCCP
RNN and deep learning
Pointers:
latent variable SVMstruct: Chun-Nam Yu, Thorsten Joachims, “Learning Structural SVMs with Latent Variables”, ICML 2009.
others:
hidden CRF: A. Quattoni, S. Wang, L. Morency, M. Collins, and T. Darrell, “Hidden Conditional Random Fields”, TPAMI 2007.
deformable part models for object recognition (highly cited paper): P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan, “Object Detection with Discriminatively Trained Part Based Models”, TPAMI 2010.
CCCP procedure convergence rate: Ian E.H. Yen, Nanyun Peng, Po-Wei Wang and Shou-de Lin, “On Convergence Rate of Concave-Convex Procedure”, NIPS 2012 OPT Workshop (not considered a publication by the way)
deep learning:
see chapter 10 of the “Deep learning book” for RNNs
head detection plug-in example mentioned in class: Tuan-Hung Vu, Anton Osokin, and Ivan Laptev, “Context-aware CNNs for person head detection”, ICCV 2015
Encoder-decoder RNN model (seq2seq): see Section 10.4 of the deep learning book.
(skipped): more on kernels for structured prediction:
example of early paper presenting kernels for structured SVM: Juho Rousu, Craig Saunders, Sandor Szedmak, John Shawe-Taylor, “Kernel-Based Learning of Hierarchical Multilabel Classification Models”, JMLR 2006
example of application: L. Bertelli, T. Yu, D. Vu, and B. Gokturk, “Kernelized structural SVM learning for supervised object segmentation”, CVPR 2011.
for computation in BCFW, see Appendix B.5 in the usual: S. Lacoste-Julien, M. Jaggi, M. Schmidt and P. Pletscher, “Block-Coordinate Frank-Wolfe Optimization for Structural SVMs”, ICML 2013.
the standard book for kernels: Bernhard Schölkopf and Alexander J. Smola, “Learning with Kernels”, MIT Press 2001
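For reference, the latent structural SVM loss for a training pair (x_i, y_i) with latent variable h (in the style of the Yu & Joachims paper above) is a difference of two convex functions of w, which is exactly why CCCP applies (it repeatedly linearizes the subtracted convex term):
\[
\ell_i(w) \;=\; \max_{(\hat{y}, \hat{h})} \Big[ \Delta\big(y_i, \hat{y}, \hat{h}\big) + \big\langle w, \phi(x_i, \hat{y}, \hat{h}) \big\rangle \Big]
\;-\; \max_{h} \, \big\langle w, \phi(x_i, y_i, h) \big\rangle .
\]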
Topics:
Learning to search
SeaRNN
Structured Prediction Energy Networks (SPENs)
Pointers:
learning to search: see the great ICML 2015 tutorial
LOLS paper: Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, John Langford, “Learning to Search Better than Your Teacher”, ICML 2015.
SeaRNN paper: R. Leblond, J.-B. Alayrac, A. Osokin and S. Lacoste-Julien, “SEARNN: Training RNNs with Global-Local Losses”, ICLR 2018.
SPENs:
David Belanger, Andrew McCallum, “Structured Prediction Energy Networks”, ICML 2016.
David Belanger, Bishan Yang, Andrew McCallum, “End-to-End Learning for Structured Prediction Energy Networks”, ICML 2017.
Clarke's generalization of the subgradient to non-convex functions: Chapter 10 of “Functional Analysis, Calculus of Variations and Optimal Control”, 2013 by Francis Clarke.
1:30 pm – 3:30 pm on Bluejeans