Professor June-Koo Rhee’s research team developed a quantum AI algorithm that goes beyond existing AI technology

Professor June-Koo Rhee’s research team developed a non-linear quantum machine-learning artificial intelligence algorithm through collaborative research with German and South African research teams.
In this study, a non-linear kernel was devised to enable quantum machine learning of complex data. In particular, the quantum supervised learning algorithm developed by Professor June-Koo Rhee's research team requires only a minimal amount of computation. The algorithm therefore presents the possibility of overtaking current AI technologies, which require large amounts of computation.
Professor June-Koo Rhee's research team developed quantum forking technology, which generates training and test data as quantum information and enables parallel computation on that information. Combined with a simple quantum measurement technique, this yields a quantum algorithm that implements non-linear kernel-based supervised learning by efficiently computing similarities between quantum data. The research team successfully demonstrated quantum supervised learning on real quantum computers through the IBM cloud service. Research professor Kyung-Deock Park (KAIST) participated as the first author. The result of this study was published in volume 6 (May 2020) of npj Quantum Information, a sister journal of Nature. (Title: Quantum classifier with tailored quantum kernel)

Furthermore, the research team theoretically proved that it is possible to implement various quantum kernels through the systematic design of quantum circuits. In kernel-based machine learning, the optimal kernel may vary depending on the given input data. Therefore, being able to implement various quantum kernels efficiently is a significant achievement in the practical application of quantum kernel-based machine learning.
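The kernel-based classification idea described above can be illustrated with a short classical simulation: the kernel is the squared overlap between quantum feature states, and classification is the sign of a kernel-weighted sum over the training data. The one-qubit angle-encoding feature map and the toy data below are illustrative assumptions, not the circuit construction from the paper.

```python
import numpy as np

# Classical simulation of a quantum-kernel classifier: the kernel is the
# squared overlap |<phi(x1)|phi(x2)>|^2 between quantum feature states.
# The feature map (angle encoding of a scalar into one qubit) is an
# illustrative assumption, not the paper's tailored kernel.
def feature_state(x):
    # |phi(x)> = cos(x)|0> + sin(x)|1>
    return np.array([np.cos(x), np.sin(x)])

def quantum_kernel(x1, x2):
    return np.abs(feature_state(x1) @ feature_state(x2)) ** 2

def classify(x, train_x, train_y):
    # Kernel-based supervised learning: sign of sum_i y_i * k(x_i, x)
    score = sum(y * quantum_kernel(xi, x) for xi, y in zip(train_x, train_y))
    return 1 if score >= 0 else -1

train_x = [0.1, 0.2, 1.4, 1.5]   # two clusters of training points
train_y = [1, 1, -1, -1]
print(classify(0.15, train_x, train_y))   # near the +1 cluster -> 1
print(classify(1.45, train_x, train_y))   # near the -1 cluster -> -1
```

On a quantum computer, the overlap itself is estimated by a measurement circuit (the paper combines quantum forking with a simple measurement); here it is computed classically for clarity.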

Research professor Kyung-Deock Park said, “The kernel-based quantum machine learning algorithm developed by the research team will surpass traditional kernel-based supervised learning in the era of hundreds of qubits of Noisy Intermediate-Scale Quantum (NISQ) computing, which is expected to be commercialized in the next few years. The developed algorithm will be actively used as a quantum machine learning algorithm for pattern recognition of complex non-linear data.”

Meanwhile, this research was carried out with the support of the Korea Research Foundation’s Creative Challenge Research Foundation Support Project, the Korea Research Foundation’s Korea-Africa Cooperation Foundation Project, and the Information and Communication Technology Expert Training Project (ITRC) supported by the Institute for Information and Communications Technology Promotion.
You can find information on related articles in the link below.

Congratulations again to Professor June-Koo Rhee's research team on their outstanding performance in the field of quantum computing.

Research introduction on "Quantum tomography via classical machine learning."

Title: Quantum tomography via classical machine learning

Authors: Changjun Kim, Daniel Kyungdeock Park, June-Koo Kevin Rhee

Determining the wave function or density matrix of a quantum system and/or its dynamics is of fundamental importance in quantum information science. Unfortunately, the computational cost of full quantum state and process tomography grows exponentially with the number of qubits. In this research project, we are exploring the possibility of applying classical machine-learning techniques, such as linear regression and deep learning, to assist quantum tomography tasks.
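For a single qubit, the simplest instance of this linear-regression viewpoint is linear inversion: the density matrix is a linear function of the three Pauli expectation values, so measuring them determines the state. The example state below is an arbitrary illustrative choice, and the expectations are taken noiselessly (an experiment would estimate them from repeated measurements).

```python
import numpy as np

# Single-qubit state tomography by linear inversion: any qubit state obeys
# rho = (I + <X>X + <Y>Y + <Z>Z) / 2, so the three Pauli expectations
# fully determine the density matrix.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# "True" state to be reconstructed (an arbitrary pure state, assumption)
psi = np.array([1, (1 + 1j) / np.sqrt(2)]) / np.sqrt(2)
rho_true = np.outer(psi, psi.conj())

# Measured expectation values (noiseless here; an experiment estimates
# these from repeated measurements in the X, Y, and Z bases)
r = [np.trace(rho_true @ P).real for P in (X, Y, Z)]

# Reconstruction
rho_rec = (I + r[0] * X + r[1] * Y + r[2] * Z) / 2
print(np.allclose(rho_rec, rho_true))  # True
```

The exponential blow-up mentioned above appears because an n-qubit state needs 4^n - 1 such expectation values; machine-learning methods aim to get by with far fewer.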


Research Introduction on Signal Integrity/Power Integrity Design for AI Computing Hardware, Professor Joungho Kim's Group

Signal Integrity/Power Integrity Design for AI Computing Hardware

1. Signal Integrity/Power Integrity Design of Energy-efficient Processing-in-memory in High Bandwidth Memory (PIM-HBM) Architecture to Accelerate AI Applications


Fig. 1. Conceptual view of heterogeneous PIM-HBM architecture

As the demand for high computing performance for artificial intelligence increases, parallel processing accelerators have become a key factor in system performance. An important characteristic of these accelerators is the high DRAM bandwidth they require, which means DRAM accesses occur more frequently. In addition, a DRAM access consumes roughly 200 times the energy of a single 32-bit floating-point operation, and this gap widens with transistor scaling. On the accelerator side, the number of cores is continuously increasing, which demands more off-chip memory bandwidth and area. As a result, interconnect energy consumption rises, and system performance is limited by insufficient off-chip memory bandwidth. To overcome this limitation, the Processing-In-Memory (PIM) architecture has re-emerged. PIM integrates processing units with memory, and it can be implemented with 3D-stacked high bandwidth memory (HBM).
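The roughly 200x figure can be sanity-checked with a back-of-envelope calculation. The per-operation energies below are commonly cited estimates for an older (~45 nm) process node and are assumptions for illustration only, not measurements from this work.

```python
# Back-of-envelope check of the ~200x energy gap between a DRAM access
# and a 32-bit floating-point operation. The energy figures are commonly
# cited process-node estimates, used here only for illustration.
FLOP_32BIT_PJ = 3.7          # energy of one 32-bit FP multiply (picojoules)
DRAM_READ_32BIT_PJ = 640.0   # energy of one 32-bit DRAM read (picojoules)

ratio = DRAM_READ_32BIT_PJ / FLOP_32BIT_PJ
print(f"DRAM access / FLOP energy ratio: {ratio:.0f}x")  # ~173x
```

Since arithmetic keeps getting cheaper with scaling while off-chip data movement does not, the ratio only grows, which is the motivation for moving computation into the memory.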

Our lab's AI hardware group has focused on the optimized design of the PIM-HBM architecture and its interconnects, considering signal integrity (SI) and power integrity (PI). To provide high memory bandwidth to the PIM core through through-silicon vias (TSVs), either the area or the data rate of the TSVs must be increased. However, more than 30% of the DRAM area is already occupied by TSVs, and TSV data rates are limited by SI. Optimal TSV design is therefore essential for small area and high bandwidth. In addition, as the number of PIM cores increases for higher performance, more logic-die area is required, which means the memory bandwidth available to the host processor decreases as the interposer channel length grows. Consequently, the PIM-HBM logic die and interposer channel must be co-optimized so that system performance improves without degrading interposer bandwidth. Through such system-level optimization, our PIM-HBM architecture achieves high energy efficiency by drastically reducing interconnect lengths, and it improves system performance in memory-limited applications.

 

2. Signal Integrity/Power Integrity in a Memristor Crossbar Array for Neural Network Accelerator and Neuromorphic Chip

The most important part of artificial-intelligence computation is massive parallel matrix-vector multiplication. This operation is inefficient in both time and power on the conventional von Neumann architecture, because the operands must be fetched from off-chip memory every clock cycle, which consumes a great deal of interconnect power. Various AI hardware architectures are emerging to solve this problem. Among them, a promising approach is to integrate computation into the memory itself using non-volatile resistive memory. This architecture reduces off-chip memory accesses for data fetches and computes matrix-vector multiplication directly, as an analog operation, by reading the current that results from multiplying the applied voltages by the conductances of the memory cells. AI computation can thus be performed very efficiently in hardware.
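The analog principle is Ohm's and Kirchhoff's laws: each column current is the dot product of the row voltages with that column's conductances, so the whole matrix-vector product is read out in one step. The array size and values below are illustrative assumptions.

```python
import numpy as np

# Idealized memristor crossbar computing a matrix-vector product in one
# step: input voltages drive the rows, cell conductances encode the
# weight matrix, and each column current is I_j = sum_i V_i * G_ij.
# Array size and values are illustrative assumptions.
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances (siemens), 4x3 array
V = rng.uniform(0.0, 0.2, size=4)         # read voltages on the 4 rows

I_columns = V @ G                          # all column currents, in parallel
digital = sum(V[i] * G[i] for i in range(4))  # same product, step by step
print(np.allclose(I_columns, digital))     # True
```

A real array deviates from this ideal through wire resistance (IR drop), crosstalk, and supply ripple, which is exactly what the SI/PI analysis in this section addresses.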

Our lab's AI hardware group has focused on the design of optimized computing architectures and interconnects, considering signal integrity (SI) and power integrity (PI), for accurate hardware operation. In general, a memristor crossbar array is smaller than the number of input neurons in a layer. A large-scale memristor crossbar array, however, suffers from a serious IR-drop problem and is more sensitive to noise such as crosstalk and supply ripple at high speed. This is especially severe for multi-level input computation because of the small voltage margin. We analyze SI/PI issues, such as crosstalk between crossbar interconnects and power/ground noise, that can affect the memristor resistance and hence the computation. These noise sources can cause large errors in computations that rely on a small read-voltage margin. Finally, we suggest design guidelines for memristor crossbar arrays for hardware AI operation.


Fig. 2. Signal Integrity/Power Integrity in a Memristor Crossbar Array for Hardware-based Matrix-Vector Multiplication

Professor Joung-Ho Kim's column on strategies in the AI Era is serialized every month on Chosun.com

Professor Joung-Ho Kim writes a weekly column, "Kim Joung-Ho Column on Strategies in the AI Era," in the Opinion section of Chosun.com. In the column series, he examines various aspects of AI technologies in the IT industry, drawing on his technical analysis and visionary perspective on the future development of AI applications.
link : http://news.chosun.com/site/data/html_dir/2019/06/16/2019061602215.html

Paper on quantum reinforcement learning authored by D. K. Park et al. was presented at the APS March Meeting 2019, Boston

Title: Quantum-classical reinforcement learning for quantum algorithms with classical data

Authors: Daniel Kyungdeock Park, Jonghun Park, June-Koo Kevin Rhee

Many known quantum algorithms with quantum speed-ups require the existence of a quantum oracle that encodes multiple answers in quantum superposition. These algorithms are useful for demonstrating the power of harnessing quantum mechanical properties for information processing tasks. Nonetheless, such quantum oracles usually do not exist naturally, and one is more likely to work with classical data. In this realistic scenario, whether the quantum advantage can be retained is an interesting and critical open problem.

In our research group, we tackle this problem with the learning parity with noise (LPN) algorithm as an example. LPN is an example of an intelligent behavior that aims to form a general concept from noisy data. This problem is thought to be classically intractable. The LPN problem is equivalent to decoding a random linear code in the presence of noise, and several cryptographic applications have been suggested based on the hardness of this problem and its generalizations. However, the ability to query a quantum oracle allows for an efficient solution. The quantum LPN algorithm also serves as an intriguing counterexample to the traditional belief that a quantum algorithm is more susceptible to noise than classical methods. However, as noted above, in practice, a learner receives data from classical oracles. In our work, we showed that a naive application of the quantum LPN algorithm to classical data that is encoded as an equal superposition state requires an exponential sample complexity, thereby nullifying the quantum advantage.
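A classical LPN example oracle is easy to state concretely: each example is a uniformly random bit string labeled by its parity with a hidden string, with the label flipped at some noise rate. The string length and noise rate below are illustrative assumptions (the length matches the scale of the simulations discussed in this work).

```python
import numpy as np

# Minimal sketch of a classical LPN example oracle: each example is a
# uniformly random bit string x with label y = <s, x> mod 2, flipped
# with probability eta. Hidden string s, length n, and noise rate eta
# are illustrative assumptions.
rng = np.random.default_rng(1)
n, eta = 12, 0.1
s = rng.integers(0, 2, size=n)            # hidden bit string to be learned

def lpn_example():
    x = rng.integers(0, 2, size=n)
    clean = int(x @ s % 2)                # parity <s, x> mod 2
    noisy = clean ^ int(rng.random() < eta)  # flip label with prob. eta
    return x, noisy

# With eta = 0 the labels are exact parities; with noise, a learner must
# recover the general concept s from corrupted data.
xs, ys = zip(*(lpn_example() for _ in range(1000)))
err = np.mean([y != (x @ s % 2) for x, y in zip(xs, ys)])
print(f"empirical label-flip rate: {err:.2f}")  # close to eta = 0.1
```

Learning s from such examples is believed to be classically hard, which is what makes the quantum-oracle speed-up, and its loss under naive classical-data encoding, interesting.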

We developed a quantum-classical hybrid algorithm for solving the LPN problem with classical examples. The underlying idea of our algorithm is to learn the quantum oracle via reinforcement learning, for which the reward is determined by comparing the output of the guessed quantum oracle with the true data, and the action is chosen via a greedy algorithm. The reinforcement learning significantly reduces both the sample and the time cost of the quantum LPN algorithm in the absence of the quantum oracle. Simulations with a hidden bit string of length up to 12 show that the quantum-classical reinforcement learning performs better than known classical algorithms when the number of examples, run time, and robustness to noise are collectively considered.

 


 

Link : http://meetings.aps.org/Meeting/MAR19/Session/K27.9

Nonlinear Equalizer Based on Neural Networks for PAM-4 Signal Transmission Using DML (IEEE)

The recent study on artificial neural network signal equalization authored by Ahmed Galib Reza (KAIST EE) and June-Koo Kevin Rhee has been published in IEEE Photonics Technology Letters.  ( https://ieeexplore.ieee.org/document/8401897 )

Article Content:

Title: Nonlinear Equalizer Based on Neural Networks for PAM-4 Signal Transmission Using DML

Authors: Ahmed Galib Reza, and June-Koo Kevin Rhee

Nonlinear distortion from a directly modulated laser (DML) is one of the major limiting factors to enhance the transmission capacity beyond 10 Gb/s for an intensity modulation direct-detection optical access network. In this letter, we propose and demonstrate a low-complexity nonlinear equalizer (NLE) based on a machine-learning algorithm called artificial neural network (ANN). Experimental results for a DML-based 20-Gb/s signal transmission over an 18-km SMF-28e fiber at 1310-nm employing pulse amplitude modulation (PAM)-4 confirm that the proposed ANN-NLE equalizer can increase the channel capacity and significantly reduce the impact of nonlinear penalties.
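The equalizer concept can be sketched with a toy setup: PAM-4 symbols pass through a memoryless quadratic distortion standing in for the DML nonlinearity, and a tiny one-hidden-layer network is trained by gradient descent to map received samples back to the transmitted levels. The channel model, network size, and hyper-parameters are all illustrative assumptions, far simpler than the paper's experimental setup.

```python
import numpy as np

# Toy sketch of an ANN nonlinear equalizer for PAM-4. A memoryless
# quadratic distortion stands in for the DML nonlinearity (assumption;
# the real channel also has memory), and a one-hidden-layer network
# learns to invert it.
rng = np.random.default_rng(0)
levels = np.array([-3.0, -1.0, 1.0, 3.0])       # PAM-4 amplitudes

tx = rng.choice(levels, size=2000)               # transmitted symbols
rx = tx + 0.15 * tx**2 + 0.05 * rng.standard_normal(tx.size)

x = rx[:, None] / 4.0                            # normalized receiver input
y = tx[:, None]
W1 = 0.5 * rng.standard_normal((1, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)                     # hidden layer
    return h, h @ W2 + b2                        # equalized output

_, out0 = forward(x)
mse_init = float(np.mean((out0 - y) ** 2))

lr = 0.05
for _ in range(3000):                            # full-batch gradient descent
    h, out = forward(x)
    err = out - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    gh = err @ W2.T * (1 - h**2)                 # backprop through tanh
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(x)
mse_final = float(np.mean((out - y) ** 2))
print(mse_final < mse_init)  # training reduces the equalizer's MSE
```

After training, a symbol decision simply picks the PAM-4 level nearest the equalized output; the low complexity comes from the very small network size.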

 


Establishment of the KAIST ITRC of Quantum Computing for AI (QCAI)


The KAIST ITRC of Quantum Computing for AI (QCAI), sponsored by the Ministry of Science and ICT and led by Professor June-Koo Rhee, was established in June 2018 to pursue strategic efforts to develop quantum hardware and software for AI technologies over the next four years.

Link: https://www.yna.co.kr/view/AKR20181002102000063

Professor June-Koo Rhee’s presentation about quantum computing was reported in IT Chosun


Professor June-Koo Rhee’s presentation at ‘2018 Pre Smart Cloud Show: Commercialization of Quantum Computing’ was reported in IT Chosun

Quantum computing is attracting a great deal of attention as an opportunity to advance computing performance beyond Moore's law. Quantum computing is still at the R&D stage and can currently be implemented only in laboratory environments. However, some of its key enabling technologies, such as superconducting devices, have already moved to the engineering stage.

In this talk, Professor June-Koo Rhee said, "While optimism and pessimism coexist, such as the concern that current encryption systems may be broken once quantum computing is put to practical use, there is no disagreement that quantum computing will grow into a key technology for overcoming the limits faced at the completion of the fourth industrial revolution. To bring quantum computing science to the engineering stage, we must invest constantly in basic research."

In this lecture, the vision and commercialization of quantum computing technology were introduced. For more information, please refer to the link below.

<Link to the article>
http://m.it.chosun.com/m/m_article.html?no=2851189

 

Professor Joung-Ho Kim, Application of Machine Learning for Optimization of 3-D Integrated Circuits and Systems (IEEE)

Title: "Machine Learning based Optimal Signal Integrity/Power Integrity Design for 3D ICs," published in IEEE Trans. VLSI Systems. (https://ieeexplore.ieee.org/abstract/document/7850943)

Authors: Sung-Joo Park, Bum-Hee Bae, Joung-Ho Kim, and Madhavan Swaminathan

Article contents

Machine Learning based Optimal Signal Integrity/Power Integrity Design for 3D ICs

1. Deep Neural Network (DNN)-based Signal integrity/Power integrity Results Estimation Method


Fig. 1. Deep Neural Network (DNN)-based SI/PI results estimation method

High-speed channels and power distribution networks (PDNs) must be simulated in the early stages of the design process to ensure signal/power integrity while reducing time and cost. However, the time needed to simulate entire channels and PDNs with conventional EM and circuit simulators grows as design complexity increases. Estimating SI/PI results and model parameters with a deep neural network (DNN) can save time and cost compared with conventional simulation. Because a DNN can automatically capture the non-linear relationship between inputs and outputs, it can accurately estimate and model the electrical characteristics of design parameters, such as high-speed channels, PDNs, and through-silicon vias (TSVs), to obtain outputs such as eye diagrams, power/ground noise, and TSV models, as shown in Fig. 1. Fig. 2 compares simulation with a DNN that estimates the eye height and eye width of a high-speed memory channel in an HBM interposer; the DNN estimation is accurate. The DNN can therefore be used in many ways in the SI/PI field.


Fig. 2. Comparison between simulation and DNN model which estimate the eye height and eye width.
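The surrogate-modeling workflow above (run the slow simulator on a sample of the design space, fit a model, then query the model instead of the simulator) can be sketched as follows. To keep the sketch short, a small polynomial least-squares model stands in for the DNN, and the "simulator" is a made-up smooth function of channel length and spacing, purely for illustration.

```python
import numpy as np

# Sketch of SI-result surrogate modeling: replace a slow EM/circuit
# simulation with a regression model mapping design parameters to an SI
# metric. A polynomial least-squares fit stands in for the DNN; the
# "simulator" is an invented smooth function, for illustration only.
rng = np.random.default_rng(0)

def slow_simulation(length_mm, spacing_um):
    # pretend eye height (mV): shrinks with length, grows with spacing
    return 400 * np.exp(-0.2 * length_mm) + 30 * np.log1p(spacing_um)

# "Run the simulator" on a coarse sample of the design space
L = rng.uniform(1, 10, size=200)
S = rng.uniform(5, 50, size=200)
eye = slow_simulation(L, S)

# Fit the surrogate on simple polynomial features
Phi = np.column_stack([np.ones_like(L), L, S, L * S, L**2, S**2, L**3])
w, *_ = np.linalg.lstsq(Phi, eye, rcond=None)

def surrogate(l, s):
    return np.array([1, l, s, l * s, l**2, s**2, l**3]) @ w

# Query a design point that was never simulated
l0, s0 = 4.2, 22.0
print(abs(surrogate(l0, s0) - slow_simulation(l0, s0)))  # small error
```

The DNN plays the same role as `surrogate` here, but it learns the non-linear relationship automatically instead of relying on hand-chosen features.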

 

2. Reinforcement Learning-based Signal Integrity/Power Integrity Design Method


Fig. 3. Reinforcement learning (RL)-based optimal SI/PI design method.

As the demand for higher performance, such as channel speed, bandwidth, and low power, increases, the complexity of 2.5-D/3-D IC design is gradually increasing in order to ensure signal/power integrity. Moreover, time-to-market is getting shorter as designs must respond quickly to market trends and customer needs. Time-efficient and accurate optimal 2.5-D/3-D IC design is therefore necessary. Optimal layout design considering SI/PI can be performed by reinforcement learning (RL), as shown in Fig. 3.

By using RL algorithms, the optimal layout design guideline, that is, the optimal policy, can be learned through reward (feedback) mechanisms that depend on the SI/PI target specifications. High-speed channels and power distribution networks (PDNs) can then be designed through the RL-based optimal design method. Fig. 4 shows the results of the RL-based optimal decoupling-capacitor design method. As shown in Fig. 4, the self-impedance of the optimized PDN satisfies the target impedance, and simultaneous switching noise (SSN) is suppressed.


Fig. 4. PDN self impedance and simultaneous switching noise (SSN) of the optimized PDN by the RL-based optimal design method.
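The reward-driven search described above can be reduced to a minimal sketch: an epsilon-greedy agent repeatedly tries candidate decoupling-capacitor placements and is rewarded for lowering the PDN impedance peak. The five candidate placements, their impedance peaks, and the target impedance below are made-up numbers; a real design loop would obtain the reward from a PDN simulation.

```python
import numpy as np

# Toy sketch of RL-based decap optimization: an epsilon-greedy agent
# tries candidate decap placements and is rewarded for a lower PDN
# impedance peak. All numbers are made up for illustration.
rng = np.random.default_rng(0)

# peak PDN impedance (ohms) produced by each candidate decap placement
z_peak = np.array([0.9, 0.5, 0.35, 0.6, 0.8])
target_z = 0.4

Q = np.zeros(len(z_peak))       # estimated reward of each action
counts = np.zeros(len(z_peak))
eps = 0.2
for _ in range(500):
    if rng.random() < eps:
        a = int(rng.integers(len(z_peak)))  # explore a random placement
    else:
        a = int(np.argmax(Q))               # exploit the current best guess
    reward = -z_peak[a]                     # lower impedance -> higher reward
    counts[a] += 1
    Q[a] += (reward - Q[a]) / counts[a]     # incremental mean update

best = int(np.argmax(Q))
print(best, z_peak[best] <= target_z)  # learned placement meets the target
```

The full design problem has a much larger action space (positions and values of many decaps), which is where learned policies pay off over exhaustive search.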