GPU-Accelerated Linear Programming

Haihao Lu – MIT

In this talk, I will discuss a recent line of research on new first-order methods for scaling up and speeding up linear programming (LP) with GPUs. State-of-the-art LP solvers are mature and reliable at delivering accurate solutions, but they are poorly suited to modern computational resources, particularly GPUs. Their computational bottleneck is matrix factorization, which typically requires significant memory and is difficult to parallelize effectively on GPU hardware. In contrast, first-order methods (FOMs) require only matrix-vector multiplications, which map well onto GPUs and have powered the acceleration of machine learning training over the last 15 years. This ongoing line of research aims to scale up and speed up LP using FOMs and GPUs. I'll discuss how we can achieve this by explaining: (i) the behavior of FOMs for LP; (ii) computational results on GPUs; and (iii) theoretical results, and how theory can lead to better computation and a better understanding of the algorithms' performance.
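To make the contrast concrete, here is a minimal sketch of one such first-order method: the primal-dual hybrid gradient (PDHG) iteration applied to an equality-form LP, min cᵀx s.t. Ax = b, x ≥ 0. This is an illustrative toy, not the specific algorithm from the talk; the problem data, step-size choices, and iteration count below are my own assumptions. Note that the entire loop consists of matrix-vector products and elementwise operations — exactly the workload that maps well onto GPUs.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=20000):
    """Sketch of PDHG for  min c^T x  s.t.  Ax = b, x >= 0."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    # Standard PDHG step-size condition: tau * sigma * ||A||^2 < 1.
    op_norm = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / op_norm
    for _ in range(iters):
        # Primal step: gradient step on the Lagrangian, projected onto x >= 0.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual step: uses the extrapolated point 2*x_new - x.
        y = y + sigma * (b - A @ (2 * x_new - x))
        x = x_new
    return x, y

# Hypothetical tiny instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0.
# The unique optimum is x = (1, 0) with objective value 1.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = pdhg_lp(c, A, b)
```

Notice there is no factorization of A anywhere: each iteration touches A only through `A @ v` and `A.T @ w`, which is what makes this family of methods amenable to massive parallelization.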