Matrices — basically the mathematical abstraction of the common spreadsheet — are central to today’s data-driven world. The data behind climate change models, video game graphics, and streaming music recommendations are to one degree or another stored in matrices, and the effectiveness of these and other applications hinges on being able to perform computations on matrices quickly and accurately.
A few months ago I found out about the seminar Complexity of Matrix Computation. Set up as an online Zoom discussion among some of the field’s leading practitioners, it offers interesting discussion of some of the challenging problems that concern people who work on matrix computation for a living (a community that includes mathematicians, engineers, computer scientists, statisticians, and programmers).
If you work in data science or machine learning, or are just curious, I recommend taking a look at some of the videos on the seminar’s YouTube channel.
As an example, here’s the discussion of low-rank approximation methods.
There have been four seminars so far. The next one is scheduled for September 1, 2021. Each starts with a question. For example, the last seminar opened with the questions:
What does it mean to compute a low-rank approximation of a matrix? What is a low-rank matrix? What does approximation mean in this context? What is a good algorithm for this problem? How should the running time depend on the approximation parameter?
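To make these questions concrete, here is a minimal sketch (my own illustration, not taken from the seminar) of one classical answer: by the Eckart–Young theorem, the best rank-k approximation of a matrix in the spectral or Frobenius norm is given by a truncated singular value decomposition.

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A via truncated SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Keep only the k largest singular values and their singular vectors.
    return U[:, :k] * s[:k] @ Vt[:k, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
A_k = low_rank_approx(A, 5)

# Eckart-Young: the spectral-norm error of the best rank-k approximation
# equals the (k+1)-th singular value of A.
s = np.linalg.svd(A, compute_uv=False)
print(np.isclose(np.linalg.norm(A - A_k, 2), s[5]))
```

Much of the seminar’s subject matter concerns when and how this exact-but-expensive SVD route can be replaced by faster randomized or approximate algorithms.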
The interesting thing to me is the intergenerational span of these discussions — the exchanges between practitioners who are graduate students or early in their careers and those who are in their eighties (Cleve Moler, the inventor of the MATLAB numerical computing package, is 81 and was a participant in two of the seminars I attended).
You can sign up for the mailing list here.