Graphical models have wide-ranging applications in machine learning and the natural and social sciences, where they are one of the most popular ways to model statistical relationships between observed variables. For example, they are used to infer the structure of gene regulatory networks and to learn functional brain connectivity networks. They also played an important role in some of the early breakthroughs in algorithms for learning deep neural networks. In most settings in which they are applied, the number of observed samples is much smaller than the number of variables (the dimension). I will describe recent approaches for provably learning several widely used classes of graphical models with nearly optimal sample complexity (only logarithmic in the dimension) and time complexity.
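As a minimal illustration of the structure-learning problem the abstract refers to (not the specific algorithms of the talk), the sketch below recovers the edge set of a sparse Gaussian graphical model by neighborhood selection: each variable is lasso-regressed on all the others, and nonzero coefficients mark graph edges. The chain graph, the regularization level `lam=0.1`, and all names here are illustrative assumptions.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso: min_w 0.5/n * ||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_norms = (X ** 2).sum(axis=0) / n
    r = y - X @ w  # residual
    for _ in range(n_iter):
        for j in range(d):
            r += X[:, j] * w[j]              # remove coordinate j's contribution
            rho = X[:, j] @ r / n
            # soft-thresholding update for coordinate j
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_norms[j]
            r -= X[:, j] * w[j]
    return w

rng = np.random.default_rng(0)
d, n = 8, 2000
# Chain-structured precision matrix: variable i interacts only with i-1 and i+1.
Theta = np.eye(d)
for i in range(d - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = 0.4
Sigma = np.linalg.inv(Theta)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Neighborhood selection: regress each variable on the rest with the lasso;
# a nonzero coefficient on X_j when predicting X_i indicates the edge (i, j).
edges = set()
for i in range(d):
    others = [j for j in range(d) if j != i]
    w = lasso_cd(X[:, others], X[:, i], lam=0.1)
    for k, j in enumerate(others):
        if abs(w[k]) > 1e-3:
            edges.add(tuple(sorted((i, j))))

true_edges = {(i, i + 1) for i in range(d - 1)}
print(sorted(edges))
```

This neighborhood-regression idea (in the spirit of Meinshausen and Bühlmann for Gaussian models, or sparse logistic regression for Ising models) is one standard route to sample complexity that scales logarithmically in the dimension for sparse graphs.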
Raghu Meka is an Associate Professor of Computer Science at UCLA. He is broadly interested in complexity theory, learning theory, and probability. He received his PhD from UT Austin under the (wise) guidance of David Zuckerman. He then spent two years as a postdoctoral fellow, first at the Institute for Advanced Study in Princeton with Avi Wigderson and then at DIMACS at Rutgers. After that, he spent an enjoyable year as a researcher at the Microsoft Research Silicon Valley lab.