Machine-Learning-Based Read Reference Voltage Estimation for NAND Flash Memory Systems Without Knowledge of Retention Time

Title: Machine-Learning-Based Read Reference Voltage Estimation for NAND Flash Memory Systems Without Knowledge of Retention Time

Authors: Hyemin Choe, Jeongju Jee, Seung-Chan Lim, Sung Min Joe, Il Han Park, and Hyuncheol Park

Journal: IEEE Access (published: September 2020)

To achieve a low error rate in NAND flash memory, read reference voltages should be updated based on accurate knowledge of the program/erase (P/E) cycles and retention time, because both severely distort the threshold voltage distribution of memory cells. However, because charge loss during retention is sensitive to temperature, a flash memory controller cannot acquire exact knowledge of the retention time, which makes it challenging to estimate accurate read reference voltages in practice.

In addition, the relation between the channel impairments and the optimal read reference voltages is difficult to characterize in general. We therefore propose a machine-learning-based read reference voltage estimation framework for NAND flash memory that requires no knowledge of the retention time.

In the off-line training phase, to define the input features of the proposed framework, we derive alternative information that substitutes for the unknown retention time, obtained by sensing and decoding the data in one wordline. For the on-line estimation phase, we propose three estimation schemes: 1) k-nearest neighbors (k-NN)-based, 2) nearest-centroid (NC)-based, and 3) polynomial regression (PR)-based estimation. With these schemes, an unlabeled input feature is simply mapped to a pre-assigned class label, namely the labeled read reference voltages, in the on-line estimation phase.
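As a minimal illustration of the on-line estimation phase, the three schemes could be realized with off-the-shelf scikit-learn models as sketched below; the feature and label arrays are placeholders, not the paper's actual wordline-derived features:

```python
# Hedged sketch of the three on-line estimation schemes. Feature/label
# contents are placeholders; the paper derives its input features by
# sensing and decoding the data in one wordline.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((200, 3))                 # off-line training features
y_class = rng.integers(0, 5, 200)        # class labels (label read voltages)
y_volt = rng.random((200, 7))            # read reference voltages per sample

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y_class)   # 1) k-NN-based
nc = NearestCentroid().fit(X, y_class)                       # 2) NC-based
poly = PolynomialFeatures(degree=2)                          # 3) PR-based
pr = LinearRegression().fit(poly.fit_transform(X), y_volt)

x_new = rng.random((1, 3))               # unlabeled on-line feature
print(knn.predict(x_new), nc.predict(x_new))                 # class labels
print(pr.predict(poly.transform(x_new)))                     # regressed voltages
```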

Simulation and analysis verify that the proposed framework achieves highly reliable, low-latency performance in NAND flash memory systems without knowledge of the retention time.


Figure 1. Flow charts of read reference voltage estimation schemes: (a) k-NN-based, (b) NC-based, and (c) PR-based estimation.

Downlink Extrapolation for FDD Multiple Antenna Systems Through Neural Network Using Extracted Uplink Path Gains

Title: Downlink Extrapolation for FDD Multiple Antenna Systems Through Neural Network Using Extracted Uplink Path Gains

Authors: Hyuckjin Choi, Junil Choi

Journal: IEEE Access (published: April 2020)

 

In frequency division duplexing (FDD) communication systems, base stations (BSs) need downlink (DL) channel state information (CSI) that they cannot obtain on their own. Conventional FDD systems deploy DL training and feedback, in which the mobile station (MS) estimates the DL CSI and delivers it to the BS. This approach becomes infeasible as the number of antennas at the BS increases, particularly in high-mobility scenarios: when an MS moves at high speed, the channel changes rapidly, resulting in a short coherence time.

Obtaining the DL CSI at the BS without uplink (UL) feedback is called DL extrapolation. Even in FDD systems, the UL and DL channels exhibit reciprocity, as demonstrated in previous related works. Using the relation between the UL and DL channels, the UL CSI can be mapped to the DL CSI through a neural network (NN). Prior studies developed DL extrapolation algorithms with full-dimensional UL and DL channels; however, the complexity of NN training becomes severe as the channel dimension grows.

We propose an algorithm that simplifies the NN input and output for DL extrapolation. Many measurement campaigns have shown that the UL and DL channels share the same path delays and directions in FDD systems. The proposed method first extracts these common channel parameters from the UL and DL channels and then trains the NN on the frequency-dependent path gains only, so that the size of the NN input and output decreases. Extensive simulations show that the proposed technique outperforms conventional NN-based DL extrapolation schemes.
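A hedged sketch of the path-gain idea (what the figures below call PG-learning): once the common path delays and directions are extracted, a small network only has to map UL path gains to DL path gains. The dimensions, network shape, and random data below are illustrative assumptions:

```python
# Toy PG-learning sketch: the NN input/output is only 2*n_paths real
# numbers (Re/Im of each path gain), far smaller than full CSI.
import torch
import torch.nn as nn

n_paths = 8                                   # shared UL/DL paths (assumed)
ul_gains = torch.randn(1000, 2 * n_paths)     # extracted UL path gains
dl_gains = torch.randn(1000, 2 * n_paths)     # target DL path gains

net = nn.Sequential(nn.Linear(2 * n_paths, 64), nn.ReLU(),
                    nn.Linear(64, 2 * n_paths))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(100):                          # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(ul_gains), dl_gains)
    loss.backward()
    opt.step()
```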


Figure 1. Flow charts of (a) CH-learning and (b) PG-learning.


Figure 2. NN structures used for numerical studies. (a) MLP for the CH-learning and (b) CNN for the PG-learning.

Massive MIMO Channel Prediction: Kalman Filtering Vs. Machine Learning

Title: Massive MIMO Channel Prediction: Kalman Filtering Vs. Machine Learning

Authors: Hwanjin Kim, Sucheol Kim, Hyeongtaek Lee, and Junil Choi

Journal: IEEE Transactions on Communications (published: January 2021)

 

Accurate channel state information (CSI) at the base stations (BSs) is crucial to fully exploit massive multiple-input multiple-output (MIMO) systems. The CSI at the BS can become outdated in time-varying channels due to the mobility of user equipment (UE). The best way to solve the outdated-CSI problem is to predict channels based on the prior CSI. In this paper, we develop a vector Kalman filter (VKF)-based predictor and a machine learning (ML)-based predictor using the spatial channel model (SCM), a realistic channel model adopted in the 3GPP standard. First, we develop the VKF-based predictor using autoregressive (AR) parameters estimated from the SCM data via the Yule-Walker equations. Then, we develop the ML-based channel predictor, which exploits data pre-processed by linear minimum mean-square error (LMMSE)-based noise suppression. Numerical results show that both channel predictors provide significant gains over the outdated channel in terms of prediction accuracy and data rate.
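For concreteness, a small sketch of the Yule-Walker step behind the VKF-based predictor, using a scalar toy channel in place of the vector SCM data; the series, model order, and prediction step are illustrative assumptions:

```python
# Estimate AR(p) coefficients from a time series via the Yule-Walker
# equations, then predict the channel one step ahead.
import numpy as np

def yule_walker(x, p):
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])  # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])          # AR coefficients a_1..a_p

h = np.cos(0.1 * np.arange(500)) + 0.05 * np.random.randn(500)  # toy fading
a = yule_walker(h - h.mean(), p=4)
h_pred = a @ (h - h.mean())[-1:-5:-1] + h.mean()  # one-step-ahead prediction
```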


Figure 1. Multi-layer perceptron (MLP) structure with LMMSE pre-processing.

The paper of Ph.D. students Sung-Whan Yoon and Jun Seo (advised by Jaekyun Moon) was accepted to ICML 2019

Title: TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning

Authors: Sung-Whan Yoon, Jun Seo and Jaekyun Moon 
 
Few-shot learning promises to allow machines to carry out previously unencountered tasks using only a small number of relevant examples. As such, few-shot learning finds wide application where labeled data are scarce or expensive, which is far more often the case than not. Unfortunately, despite immense interest and active research in recent years, few-shot learning remains an elusive challenge for the machine learning community. For example, while deep networks now routinely offer near-perfect classification scores on standard image test datasets given ample training data, reported results on few-shot learning still fall well below the levels that would be considered reliable in crucial real-world settings.
 
We propose TapNets, neural networks augmented with task-adaptive projection for improved few-shot learning (see the figure below). A meta-learning strategy with episode-based training learns a network and a set of per-class reference vectors across widely varying tasks. At the same time, for every episode, features in the embedding space are linearly projected into a new space as a form of quick task-specific conditioning. This combination yields excellent generalization: when tested on standard datasets, we obtain state-of-the-art classification accuracies under various few-shot scenarios. As seen in the table below, our method gives the best accuracy compared with existing world-renowned few-shot learners.

Figure. TapNet.

Table. Performance comparison.
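To make the projection idea concrete, here is a rough sketch under assumed details: per episode, compute class means from the support set, build a projection space in which the (normalized) class means align with the reference vectors, and classify queries by distance in that space. The null-space construction below is a simplification for illustration, not the paper's exact recipe:

```python
import numpy as np

d, n_class = 64, 5
refs = np.random.randn(n_class, d)           # learned per-class references
means = np.random.randn(n_class, d)          # per-class support-set embeddings

unit = lambda v: v / np.linalg.norm(v, axis=1, keepdims=True)
errors = unit(means) - unit(refs)            # mismatch directions to null out
_, _, vt = np.linalg.svd(errors)
M = vt[n_class:].T                           # basis with errors @ M ~ 0

query = np.random.randn(d)                   # embedded query example
dists = np.linalg.norm((refs - query) @ M, axis=1)
pred = int(np.argmin(dists))                 # nearest reference in projected space
```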

Ph.D. student Seung-Yul Han’s paper (advised by Young-Chul Sung) was accepted to ICML 2019

Title: Dimension-Wise Importance Sampling Weight Clipping for Sample-Efficient Reinforcement Learning

Authors: Seung-Yul Han & Young-Chul Sung

In importance sampling (IS)-based reinforcement learning algorithms such as Proximal Policy Optimization (PPO), IS weights are typically clipped to avoid large variance in learning. However, policy updates from clipped statistics induce large bias in tasks with high action dimensions, and this bias makes it difficult to reuse old samples with large IS weights. In this work, we propose the Dimension-wise Importance Sampling Weight Clipping (DISC) algorithm based on PPO, a representative on-policy algorithm. DISC clips the IS weight of each action dimension separately to avoid large bias and adaptively controls the IS weight to bound the policy update from the current policy. This new technique enables efficient learning in tasks with high action dimensions and allows old samples to be reused as in off-policy learning, significantly increasing sample efficiency. Numerical results show that the proposed DISC algorithm outperforms other state-of-the-art RL algorithms in various OpenAI Gym tasks.
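An illustrative sketch of the dimension-wise clipping idea grafted onto a PPO-style surrogate loss; the per-dimension log-probabilities, sizes, and exact objective below are assumptions rather than the paper's precise formulation:

```python
import torch

eps = 0.2
logp_new = torch.randn(256, 17)     # per-action-dimension log-probs, new policy
logp_old = torch.randn(256, 17)     # per-action-dimension log-probs, old policy
adv = torch.randn(256, 1)           # advantage estimates

ratio_dim = torch.exp(logp_new - logp_old)            # per-dimension IS weights
clipped = torch.clamp(ratio_dim, 1 - eps, 1 + eps)    # clip each dimension separately
r_clip = clipped.prod(dim=1, keepdim=True)            # recombine into a joint weight
r_full = ratio_dim.prod(dim=1, keepdim=True)
loss = -torch.min(r_full * adv, r_clip * adv).mean()  # PPO-style surrogate
```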

High-Dimensional Continuous Action Robot Learning:


Results: Comparison with state-of-the-art algorithms on MuJoCo Robot Simulation


Professors Hye-Won Chung & Ji-Oon Lee’s paper was accepted to ICML 2019

Title: Weak Detection of Signal in the Spiked Wigner Model

Authors: Hye-Won Chung & Ji-Oon Lee

We consider the problem of detecting the presence of a signal in a rank-one signal-plus-noise data matrix. When the signal-to-noise ratio is below the critical threshold, so that reliable detection is impossible, we propose a hypothesis test based on the linear spectral statistics of the data matrix. When the noise is Gaussian, the error of the proposed test is optimal in that it matches the error of the likelihood ratio test, which minimizes the sum of the Type-I and Type-II errors. The test is data-driven and does not depend on the distribution of the signal or the noise. If the density of the noise is known, the test can be further improved by an entrywise transformation that lowers its error.

One of the fundamental questions in machine learning is to detect signals from given data. If the data is of ‘signal-plus-noise’ type, the model is often referred to as a ‘spiked model.’ If the strength of the signal is considerably stronger than that of the noise, we can reliably detect the signal and also recover the signal from the noisy data. On the other hand, if the noise dominates the signal, it is impossible to detect the presence of the signal from the data, which is indistinguishable from pure noise.

In this paper, we consider the case where the strengths of the signal and the noise are comparable. It is known that there is a threshold for the signal-to-noise ratio (SNR) above which reliable detection, or strong detection, is possible, whereas strong detection is impossible when the SNR is below the threshold. In the latter case, we attempt weak detection to determine whether the signal is present in the given data. More precisely, we propose a hypothesis test with low computational complexity whose probability of error is minimal. The test is based on state-of-the-art techniques from random matrix theory.

If the noise is non-Gaussian, the test can be further improved by suitably processing the given data. Such a procedure, which we call an entrywise transformation in our work, effectively increases the SNR. When the noise density has exponential decay, the entrywise transformation corresponds to applying a function similar to the hyperbolic tangent (tanh) to each data entry. We expect our test to be applicable to various problems with noisy high-dimensional data, such as community detection and angular synchronization.
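A toy sketch of the overall pipeline under assumed details (the paper's exact statistic, scaling, and threshold differ): generate a spiked Wigner matrix, apply an entrywise tanh-like transformation, and form a test statistic from the eigenvalues, i.e., a linear spectral statistic:

```python
import numpy as np

n, snr = 500, 0.5
x = np.sign(np.random.randn(n)) / np.sqrt(n)     # rank-one spike, unit norm
W = np.random.randn(n, n)
W = (W + W.T) / np.sqrt(2 * n)                   # Wigner noise, spectrum ~ [-2, 2]
Y = np.sqrt(snr) * np.outer(x, x) + W            # spiked data matrix

Y_t = np.tanh(np.sqrt(n) * Y) / np.sqrt(n)       # illustrative entrywise transform
eigs = np.linalg.eigvalsh(Y_t)
stat = -np.sum(np.log(np.maximum(2.2 - eigs, 1e-12)))  # a linear spectral statistic
# Declare H1 when stat exceeds a threshold calibrated from its limiting
# distribution under H0 (calibration omitted in this sketch).
print(stat)
```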

Figure 1: The limiting density of the proposed test statistic under H0 (when the signal is not present) and under H1 (when the signal is present).

 

Figure 2: The limiting errors of the proposed algorithms (Alg 1: without entrywise transformation, Alg 2: with entrywise transformation).

 

Woong-Sup Lee, Min-Hoe Kim and Professor Dong-Ho Cho’s paper was published in IEEE Transactions on Vehicular Technology

Title: Deep Cooperative Sensing: Cooperative Spectrum Sensing Based on Convolutional Neural Networks

Authors: Woong-Sup Lee, Min-Hoe Kim and Dong-Ho Cho

In this paper, we investigate cooperative spectrum sensing (CSS) in a cognitive radio network (CRN) where multiple secondary users (SUs) cooperate in order to detect a primary user, which possibly occupies multiple bands simultaneously. Deep cooperative sensing (DCS), which constitutes the first CSS framework based on a convolutional neural network (CNN), is proposed. In DCS, instead of the explicit mathematical modeling of CSS, the strategy for combining the individual sensing results of the SUs is learned autonomously with a CNN using training sensing samples regardless of whether the individual sensing results are quantized or not. Moreover, both spectral and spatial correlation of individual sensing outcomes are taken into account such that an environment-specific CSS is enabled in DCS. Through simulations, we show that the performance of CSS can be greatly improved by the proposed DCS.
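A minimal sketch of a CNN-based combiner in the spirit of DCS; the architecture, grid sizes, and data below are assumptions for illustration, not the paper's design:

```python
# Input: a grid of individual sensing results (SUs x bands), possibly
# quantized. Output: per-band occupancy probabilities.
import torch
import torch.nn as nn

n_su, n_band = 8, 16
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),  # spatial/spectral correlation
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * n_su * n_band, n_band), nn.Sigmoid())

sensing = torch.rand(32, 1, n_su, n_band)    # batch of sensing grids
occupancy = cnn(sensing)                     # shape (32, n_band)
```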

 

Figure 1. CSS with correlated individual spectrum sensing.

 

Figure 2. CNN model for deep cooperative sensing.

Byung-Chang Chung and Professor Dong-Ho Cho’s paper was published in IEEE Systems Journal

Title: Semidynamic Cell-Clustering Algorithm Based on Reinforcement Learning in Cooperative Transmission System

Authors: Byung-Chang Chung & Dong-Ho Cho

In this paper, we propose a novel method of managing a semidynamic cluster through reinforcement learning. We derive concepts from reinforcement learning that are suitable for cooperative networks, and we verify the performance of the proposed algorithm by simulation, examining its spectral efficiency and convergence properties. The proposed algorithm yields a considerable improvement for edge users in particular. In addition, we analyze the complexity of the clustering schemes; the proposed algorithm is effective in environments with limited computational resources.
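As a hedged illustration of the reinforcement-learning machinery (the states, actions, and reward below are simplified placeholders, not the paper's formulation), a tabular Q-learning agent can pick among candidate cluster configurations and be rewarded with the observed spectral efficiency:

```python
import numpy as np

n_states, n_actions = 4, 3            # e.g., load levels x candidate clusterings
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def spectral_efficiency(s, a):        # stand-in for the network simulation
    return np.random.rand() + 0.1 * a

s = 0
for _ in range(1000):
    a = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[s].argmax())
    r = spectral_efficiency(s, a)                 # reward: measured SE
    s_next = np.random.randint(n_states)          # toy environment transition
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```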


Figure 1. Operation of SC clustering.


Figure 2. Spectral efficiency (SE) of clustering schemes.

KAIST WISRL team led by Woojun Kim et al. won the 1st AI Soccer Worldcup

Two KAIST EE teams, WISRL (now SISRel) in the Communication Division and SIIT in the Signal Division, took first and second place in the 1st AI Soccer Worldcup held on 22 August 2018. AI soccer is a game in which two teams, each with five robot players trained by machine learning and artificial intelligence, play soccer without any external guidance. AI soccer is an emerging AI platform on which AI policies and algorithms for real-time action control are tested and compared. In the first AI Soccer Worldcup, 29 teams from 12 countries participated and competed for the championship. Both the first- and second-place winners are from KAIST EE, demonstrating the vigorous AI research in KAIST EE.

 


Min-Hoe Kim, Nam-I Kim, Woong-Sup Lee, and Professor Dong-Ho Cho’s paper was published in IEEE Communications Letters

Title: Deep Learning-Aided SCMA

Authors: Min-Hoe Kim, Nam-I Kim, Woong-Sup Lee, Dong-Ho Cho

Sparse code multiple access (SCMA) is a promising code-based non-orthogonal multiple-access technique that can provide improved spectral efficiency and massive connectivity meeting the requirements of 5G wireless communication systems. We propose a deep learning-aided SCMA (D-SCMA) in which the codebook that minimizes the bit error rate (BER) is adaptively constructed, and a decoding strategy is learned using a deep neural network-based encoder and decoder. One benefit of D-SCMA is that the construction of an efficient codebook can be achieved in an automated manner, which is generally difficult due to the non-orthogonality and multi-dimensional traits of SCMA. We use simulations to show that our proposed scheme provides a lower BER with a smaller computation time than conventional schemes.
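A simplified end-to-end sketch of the D-SCMA idea (sizes, the noise model, and the dense resource mapping are assumptions; in particular, the sparsity pattern of SCMA is not enforced here): per-user DNN encoders map bits to codewords on shared resources, the signals superpose over a channel, and a DNN decoder recovers the bits, so that training shapes the codebook automatically:

```python
import torch
import torch.nn as nn

n_users, n_res, bits = 6, 4, 2
enc = nn.ModuleList(nn.Sequential(nn.Linear(bits, 16), nn.ReLU(),
                                  nn.Linear(16, 2 * n_res))   # Re/Im per resource
                    for _ in range(n_users))
dec = nn.Sequential(nn.Linear(2 * n_res, 64), nn.ReLU(),
                    nn.Linear(64, n_users * bits))

b = torch.randint(0, 2, (128, n_users, bits)).float()         # user bits
x = sum(enc[u](b[:, u]) for u in range(n_users))              # superposed codewords
y = x + 0.1 * torch.randn_like(x)                             # AWGN channel
b_hat = torch.sigmoid(dec(y)).view(128, n_users, bits)
loss = nn.functional.binary_cross_entropy(b_hat, b)           # end-to-end objective
```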


Figure 1. Structure of D-SCMA.