Notes from the Numerical Linear Algebra in Machine Learning Workshop
Here’s a quick summary of some highlights from my notes on the NLA in ML workshop at ICML. First, it was fantastic in terms of both speakers and audience: the audience interjected lots of great questions to clarify the ideas, and all of the talks were on important topics.
See the workshop webpage for more about the ideas behind the workshop. Without further ado, my top 4 highlights:
- Peder Olsen talked about the “box product”, a variation on the Kronecker product that arises in his new, and useful, treatment of matrix calculus. I wish I had these notes the last time I gave my lecture on matrix calculus! In brief, the box product is the Kronecker product composed with a perfect shuffle, or stride, permutation.
- Zeyuan Allen-Zhu gave an overview of their new “almost linear time” method to solve Ax=b when A is the Laplacian matrix of a graph. It’s a really neat algorithm that closely exploits the relationship between the positive semidefinite Laplacian and the structure of the underlying graph. Look up their paper and spend some time with it if you work with these systems. (My internet is bad at the moment, so I can’t look up the refs for the rest of the post. Things are googleable, I believe.)
- Nicolas Gillis spoke about recent work on NMF. I hadn’t seen the LPs before that people use to find NMFs under the separability condition. These are actually quite similar to some of the problems Paul Constantine looks at. See the Hottopixx paper.
- Michael Mahoney spoke about their 60-page revisitation of the Nyström method, with all sorts of goodies that are important in actually using these methods.
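To make the box-product idea above concrete, here is a small numpy sketch of the perfect shuffle (commutation) matrix and the classic identity it satisfies with Kronecker products. This is my reconstruction, not Olsen’s notation, and the box product’s exact convention may differ from what I show.

```python
import numpy as np

def perfect_shuffle(m, n):
    """Stride permutation S_{m,n}: maps vec(X) to vec(X.T) for X of shape (m, n).

    vec() is column-major here, so entry X[i, j] sits at position j*m + i of
    vec(X) and must move to position i*n + j of vec(X.T).
    """
    S = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            S[i * n + j, j * m + i] = 1.0
    return S

rng = np.random.default_rng(0)
A = rng.random((2, 3))   # m x n
B = rng.random((4, 5))   # p x q

# The classic commutation-matrix identity: shuffling a Kronecker product on
# both sides swaps its factors, S_{p,m} (A kron B) S_{n,q} = B kron A.
lhs = perfect_shuffle(4, 2) @ np.kron(A, B) @ perfect_shuffle(3, 5)
assert np.allclose(lhs, np.kron(B, A))
```

This shuffle is what makes the Kronecker product “commutative up to permutation”, which is the flavor of manipulation that shows up throughout matrix calculus.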
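For context on the Laplacian systems above: the fast solver itself is well beyond a code snippet, but here is a hedged baseline showing the kind of system it targets, the Laplacian of a cycle graph solved with plain conjugate gradients in scipy. The graph and sizes are made up for illustration; this is not their algorithm.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Laplacian L = D - A of a cycle graph on n nodes: symmetric, positive
# semidefinite, with nullspace spanned by the all-ones vector.
n = 50
rows = np.arange(n)
cols = (rows + 1) % n
A = sp.coo_matrix((np.ones(n), (rows, cols)), shape=(n, n))
A = (A + A.T).tocsr()                      # undirected edges
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A

# L x = b is consistent only if b is orthogonal to the nullspace,
# i.e. sums to zero, so project b first.
b = np.random.default_rng(0).standard_normal(n)
b -= b.mean()

# Plain CG baseline -- NOT the almost-linear-time method from the talk.
x, info = spla.cg(L, b)
assert info == 0
assert np.linalg.norm(L @ x - b) <= 1e-4 * np.linalg.norm(b)
```

Plain CG already works fine at this toy scale; the point of the almost-linear-time solvers is that the cost stays near the number of edges even for huge, ill-conditioned graphs.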
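On the separability condition mentioned above: it says every column of the data matrix is a nonnegative combination of a few “anchor” columns of the data itself. The LPs from the talk (and the Hottopixx paper) are about *finding* those anchors; the sketch below cheats by knowing them, and just recovers the weights with nonnegative least squares. All names and sizes here are my own toy setup.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Build a separable nonnegative matrix: X = W @ [I, H], so the first r
# columns of X are themselves the anchors W.
m, r, n = 20, 3, 10
W = rng.random((m, r))                   # the anchor columns
H = rng.random((r, n - r))
X = np.hstack([W, W @ H])

# Given the anchor indices, recover the weights column-by-column with NNLS.
anchors = [0, 1, 2]
H_hat = np.column_stack([nnls(X[:, anchors], X[:, j])[0] for j in range(n)])

# Exact nonnegative representations exist, so the reconstruction is exact.
assert np.allclose(X[:, anchors] @ H_hat, X, atol=1e-8)
```

The anchor columns themselves come back as unit-vector weights, which is a handy sanity check on any separable-NMF code.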
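And for the Nyström method: the basic approximation samples c columns C of a PSD kernel matrix K, takes the corresponding c-by-c block W, and uses K ≈ C W⁺ Cᵀ. Below is a minimal numpy sketch with uniform column sampling; much of the revisitation is about smarter sampling schemes and sharper error bounds, none of which appears here.

```python
import numpy as np

rng = np.random.default_rng(2)

# A PSD kernel matrix: Gaussian (RBF) kernel on random 1-D points.
n = 200
pts = rng.standard_normal(n)
K = np.exp(-0.5 * (pts[:, None] - pts[None, :]) ** 2)

# Nystrom: sample c columns uniformly at random, then K ~= C W^+ C^T,
# where W is the c x c block of K on the sampled indices.
c = 40
idx = rng.choice(n, size=c, replace=False)
C = K[:, idx]
W = K[np.ix_(idx, idx)]
K_nys = C @ np.linalg.pinv(W) @ C.T

# Smooth kernels have fast-decaying spectra, so even uniform sampling
# gives a small relative (Frobenius) error here.
rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
assert rel_err < 0.1
```

In practice the interesting questions are exactly the ones the survey digs into: how many columns to take, how to sample them (uniform vs. leverage scores), and how to regularize the pseudoinverse of W.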
There were a ton of other great gems, and I wish I had time to list them all. If your favorite isn’t here, it’s because this note would never be done (or at least not in bounded time) if I wrote everything I wanted to! And thanks again to the organizers for a great session.