Quantum machine learning brings quantum computing to learning models, but which learning protocols can benefit the most from quantum resources remains an open question. Furthermore, programming quantum computers has not yet been abstracted away from gate-level manipulations and basic algorithmic primitives. Probabilistic programming is an emerging paradigm in the design of programming languages for statistical inference and machine learning, and it could be a good fit for developing learning algorithms on quantum computers. For example, various types of probabilistic graphical models — Bayesian networks, Markov networks, conditional random fields, hidden Markov models and even Kalman filters — compactly represent the joint probability distribution of a system. In general, exact probabilistic inference in graphical models is a #P-complete problem. Approximate inference offers a way around this hardness, and this is where quantum-enhanced sampling can give an advantage. In this talk, we argue that NISQ-era devices need a fresh approach to algorithm development and that probabilistic graphical models are a primary target for quantum machine learning in the near term.
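To make the two ideas above concrete — a graphical model stores factors instead of the full joint table, and sampling approximates queries that are intractable to compute exactly — here is a minimal classical sketch. The two-node network, its probabilities, and the rejection-sampling routine are illustrative assumptions of ours, not material from the talk; quantum-enhanced sampling would replace the classical sampler, not this structure.

```python
import random

# Hypothetical two-node Bayesian network: Rain -> WetGrass.
# The joint factorizes as P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain),
# so we store 1 + 2 parameters instead of the full 4-entry joint table.
# (Parameter values are made up for illustration.)
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def joint(rain, wet):
    """Exact joint probability read off the factorized representation."""
    p_rain = P_RAIN if rain else 1 - P_RAIN
    p_wet = P_WET_GIVEN_RAIN[rain] if wet else 1 - P_WET_GIVEN_RAIN[rain]
    return p_rain * p_wet

def sample_posterior(n_samples=100_000, seed=0):
    """Approximate the query P(Rain | WetGrass) by forward sampling with
    rejection: draw from the joint, keep only samples where the grass is
    wet, and report the fraction of kept samples in which it rained."""
    rng = random.Random(seed)
    rain_and_wet = wet = 0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN
        is_wet = rng.random() < P_WET_GIVEN_RAIN[rain]
        if is_wet:
            wet += 1
            rain_and_wet += rain
    return rain_and_wet / wet
```

For this toy model the posterior can be checked by hand via Bayes' rule: P(Rain | Wet) = 0.2 * 0.9 / (0.2 * 0.9 + 0.8 * 0.1) ≈ 0.692, and the sampler converges to that value. In networks with many variables the exact sum in the denominator blows up combinatorially — that is the #P-hardness the abstract refers to — while the sampling loop stays cheap per sample, which is why faster samplers translate directly into faster approximate inference.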
Peter Wittek is an Assistant Professor at the University of Toronto and an affiliate of the Vector Institute for Artificial Intelligence and the Perimeter Institute for Theoretical Physics. He obtained his Ph.D. from the National University of Singapore. His research explores the synergies between artificial intelligence, machine learning, quantum information theory, and quantum computing. He authored “Quantum Machine Learning: What Quantum Computing Means to Data Mining”, the first monograph on the subject. As the Academic Director of the Quantum Program at the Creative Destruction Lab, he oversees two dozen quantum software startups a year that exploit contemporary quantum technologies in a commercial setting.