A paper on quantum reinforcement learning by D. K. Park et al. was presented at the APS March Meeting 2019 in Boston.

Title: Quantum-classical reinforcement learning for quantum algorithms with classical data

Authors: Daniel Kyungdeock Park, Jonghun Park, June-Koo Kevin Rhee

Many known quantum algorithms with quantum speed-ups require the existence of a quantum oracle that encodes multiple answers in quantum superposition. These algorithms are useful for demonstrating the power of harnessing quantum mechanical properties for information processing tasks. Nonetheless, such quantum oracles usually do not exist in nature, and one is more likely to work with classical data. In this realistic scenario, whether the quantum advantage can be retained is an interesting and critical open problem.

In our research group, we tackle this problem using the learning parity with noise (LPN) algorithm as an example. LPN exemplifies an intelligent behavior: forming a general concept from noisy data. The problem is believed to be classically intractable; it is equivalent to decoding a random linear code in the presence of noise, and several cryptographic applications have been proposed based on the hardness of this problem and its generalizations. However, the ability to query a quantum oracle allows for an efficient solution. The quantum LPN algorithm also serves as an intriguing counterexample to the traditional belief that a quantum algorithm is more susceptible to noise than classical methods. In practice, however, a learner receives data from classical oracles. In our work, we showed that a naive application of the quantum LPN algorithm to classical data encoded as an equal superposition state requires exponential sample complexity, thereby nullifying the quantum advantage.
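To make the problem setting concrete, the following is a minimal sketch of a classical LPN example oracle: each query returns a random bit string together with its inner product with a hidden string modulo 2, flipped with some noise probability. The string length, noise rate, and function names here are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12              # length of the hidden bit string (the paper simulates up to length 12)
noise_rate = 0.1    # probability that a label bit is flipped (illustrative value)
s = rng.integers(0, 2, size=n)   # hidden bit string the learner must recover

def lpn_example():
    """Draw one classical LPN example (a, b): b = a . s mod 2, flipped with prob noise_rate."""
    a = rng.integers(0, 2, size=n)
    b = int(a @ s % 2)
    if rng.random() < noise_rate:
        b ^= 1      # label noise
    return a, b

# Without noise, n linearly independent examples determine s by Gaussian
# elimination; with noise, the problem is believed to be classically hard.
```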

We developed a quantum-classical hybrid algorithm for solving the LPN problem with classical examples. The underlying idea of our algorithm is to learn the quantum oracle via reinforcement learning, in which the reward is determined by comparing the output of the guessed quantum oracle with the true data, and the action is chosen via a greedy algorithm. Reinforcement learning significantly reduces both the sample and time costs of the quantum LPN algorithm in the absence of the quantum oracle. Simulations with hidden bit strings of length up to 12 show that quantum-classical reinforcement learning outperforms known classical algorithms when the number of examples, run time, and robustness to noise are considered collectively.
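The reward/greedy structure described above can be illustrated with a purely classical toy sketch, in which the quantum oracle is replaced by a candidate bit string, the reward is the fraction of examples the candidate reproduces, and the greedy action flips the single bit that most improves the reward. All names, sizes, and the bit-flip action set are our own illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, noise_rate = 8, 0.05
s_true = rng.integers(0, 2, size=n)          # unknown hidden string

def examples(m):
    """m noisy classical examples (A, b) with b = A . s_true mod 2, labels flipped w.p. noise_rate."""
    A = rng.integers(0, 2, size=(m, n))
    b = (A @ s_true + (rng.random(m) < noise_rate)) % 2
    return A, b

def reward(guess, A, b):
    """Fraction of examples the guessed string reproduces (a stand-in for the paper's reward)."""
    return float(np.mean((A @ guess) % 2 == b))

# Greedy policy: repeatedly flip the single bit that most improves the reward.
A, b = examples(200)
guess = np.zeros(n, dtype=int)
improved = True
while improved:
    improved = False
    base = reward(guess, A, b)
    for i in range(n):
        cand = guess.copy()
        cand[i] ^= 1
        if reward(cand, A, b) > base:
            guess, base, improved = cand, reward(cand, A, b), True
```

Note that for parity functions this classical reward landscape is nearly flat away from the true string, so plain greedy search can stall; the sketch only illustrates the reward and action structure, not the performance of the hybrid algorithm.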


Link: http://meetings.aps.org/Meeting/MAR19/Session/K27.9
