PAM-4 based PCIe 6.0 Channel Design Optimization Method using Bayesian Optimization (EPEPS 2021)

Title: PAM-4 based PCIe 6.0 Channel Design Optimization Method using Bayesian Optimization (EPEPS 2021)

 

Authors: Jihun Kim, Hyunwook Park, Minsu Kim, Seonguk Choi, Keeyoung Son, Joonsang Park, Haeyeon Kim, Jinwook Song, Youngmin Ku, Jonggyu Park and Joungho Kim.

 

 

Abstract: This paper, for the first time, proposes a pulse amplitude modulation-4 (PAM-4) based peripheral component interconnect express (PCIe) 6.0 channel design optimization method using Bayesian optimization (BO). The proposed method provides a sub-optimal channel design for PAM-4 signaling that maximizes a target function considering signal integrity (SI). We formulate the target function of BO as a linear combination of the channel insertion loss (IL) and crosstalk (FEXT, NEXT), reflecting the characteristics of PAM-4 signaling. To account for the trade-off between insertion loss and crosstalk in PAM-4 signaling, we obtain reasonable coefficients for the target function via an ablation study. For verification, an eye diagram simulation with PAM-4 signaling is conducted. We compare the channel performance of the proposed method against a random search (RS) method. The proposed method is also compared with a BO method that considers only IL, to verify the impact of crosstalk in PAM-4 signaling. As a result, only the channel optimized by the proposed method yields an open PAM-4 eye with measurable eye height and eye width; the PAM-4 eyes of the comparison methods are closed.
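The optimization loop described above can be sketched in a few lines. The block below is a minimal, illustrative stand-in: the `target` function uses toy proxies for insertion loss and crosstalk (the paper evaluates them by channel simulation), the weights `W_IL` and `W_XT` stand in for the coefficients found by the ablation study, and the surrogate is a plain Gaussian process with an upper-confidence-bound acquisition rather than the paper's exact BO setup.

```python
import numpy as np

# Hypothetical coefficients for the linear-combination target function
# (in the paper these come from an ablation study):
W_IL, W_XT = 1.0, 0.5

def target(x):
    """Toy stand-in for the simulated channel metrics of a 2-D design point."""
    il = -10.0 * np.abs(x[..., 0] - 0.6)   # insertion-loss proxy (higher is better)
    xt = 5.0 * np.abs(x[..., 1] - 0.3)     # crosstalk proxy (FEXT + NEXT)
    return W_IL * il - W_XT * xt           # linear combination of IL and crosstalk

def rbf(A, B, ls=0.2):
    """RBF kernel between two sets of design points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(5, 2))               # initial channel-design samples
y = target(X)

for _ in range(25):                        # BO loop: GP surrogate + UCB acquisition
    K_inv = np.linalg.inv(rbf(X, X) + 1e-6 * np.eye(len(X)))
    cand = rng.uniform(size=(256, 2))      # candidate design points
    Ks = rbf(cand, X)
    mu = Ks @ K_inv @ y                    # GP posterior mean at candidates
    var = np.clip(1.0 - np.einsum('ij,jk,ik->i', Ks, K_inv, Ks), 1e-12, None)
    pick = cand[np.argmax(mu + 2.0 * np.sqrt(var))]   # upper confidence bound
    X = np.vstack([X, pick])
    y = np.append(y, target(pick))

best = y.max()                             # best design score found so far
```

Because the loop only ever appends evaluated points, `best` is monotonically non-decreasing over iterations; the UCB term trades off exploring high-variance designs against exploiting high-mean ones.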

 

Deep Reinforcement Learning-based Channel-flexible Equalization Scheme: An Application to High Bandwidth Memory (DesignCon2022)

Title:

 

Deep Reinforcement Learning-based Channel-flexible Equalization Scheme: An Application to High Bandwidth Memory (DesignCon2022)

 

Authors:

 

Seonguk Choi, Minsu Kim, Hyunwook Park, Haeyeon Rachel Kim, Joonsang Park, Jihun Kim, Keeyoung Son, Seongguk Kim, Keunwoo Kim, Daehwan Lho, Jiwon Yoon, Jinwook Song, Kyungsuk Kim, Jonggyu Park and Joungho Kim.

 

 

Abstract:

In this paper, we propose a channel-flexible hybrid equalizer (HYEQ) design methodology with re-usability based on deep reinforcement learning (DRL). The proposed method suggests an optimized HYEQ design for an arbitrary channel dimension. HYEQ comprises a continuous-time linear equalizer (CTLE) for high-frequency boosting and a passive equalizer (PEQ) for low-frequency attenuation, and our task is to co-optimize both. Our model acts as a solver that optimizes the equalizer design while considering signal integrity issues such as high-frequency attenuation and crosstalk.

Our method utilizes a recurrent neural network, commonly employed in natural language processing (NLP), to design the HYEQ based on constructive DRL. Each parameter of the equalizer is thus designed sequentially, reflecting the parameters chosen before it. In this process, the machine learning (ML) design space is restricted by applying equalizer domain knowledge, which enables precise optimization. Furthermore, the trained neural network performs fast inference for any channel dimension. We validate that the proposed method outperforms conventional optimization algorithms such as random search (RS) and genetic algorithm (GA) in a 3-coupled channel system of next-generation high-bandwidth memory (HBM).
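The constructive, one-parameter-at-a-time rollout can be illustrated with a small recurrent-style policy. Everything below is a hypothetical sketch: the parameter names, candidate grids, and network sizes are assumptions for illustration (not the paper's design space), and the weights are random rather than trained; the point is the mechanism by which each decision conditions on the previous ones.

```python
import numpy as np

# Hypothetical per-parameter candidate grids, narrowed by equalizer domain
# knowledge. Names and ranges are illustrative, not the paper's values.
GRIDS = [
    np.linspace(0.5, 4.0, 8),     # hypothetical CTLE zero frequency (GHz)
    np.linspace(2.0, 16.0, 8),    # hypothetical CTLE pole frequency (GHz)
    np.linspace(10.0, 200.0, 8),  # hypothetical PEQ resistance (ohm)
    np.linspace(0.1, 2.0, 8),     # hypothetical PEQ capacitance (pF)
]

rng = np.random.default_rng(0)
H, E = 16, 4
Wh = rng.normal(scale=0.3, size=(H, H))    # recurrent weights (untrained here)
Wx = rng.normal(scale=0.3, size=(H, E))
Wo = rng.normal(scale=0.3, size=(8, H))
step_emb = rng.normal(scale=0.3, size=(len(GRIDS), E))  # which parameter is next
act_emb = rng.normal(scale=0.3, size=(8, E))            # which value was chosen

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def design_sequentially(rng):
    """Constructive rollout: pick each equalizer parameter in turn, with the
    recurrent state carrying all previous choices into later decisions."""
    h, prev, params = np.zeros(H), np.zeros(E), []
    for step, grid in enumerate(GRIDS):
        h = np.tanh(Wh @ h + Wx @ (step_emb[step] + prev))  # update state
        idx = rng.choice(len(grid), p=softmax(Wo @ h))      # sample a value
        prev = act_emb[idx]                                 # feed choice back in
        params.append(float(grid[idx]))
    return params

design = design_sequentially(rng)
```

In training, the sampled log-probabilities would be reinforced by the simulated eye-diagram reward; here the rollout only demonstrates how restricting each step to its grid keeps the search inside the domain-knowledge design space.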

 

Sequential Policy Network-based Optimal Passive Equalizer Design for an Arbitrary Channel of High Bandwidth Memory using Advantage Actor Critic (EPEPS 2021)

Title:

 

Sequential Policy Network-based Optimal Passive Equalizer Design for an Arbitrary Channel of High Bandwidth Memory using Advantage Actor Critic (EPEPS 2021)

 

Authors:

 

Seonguk Choi, Minsu Kim, Hyunwook Park, Keeyoung Son, Seongguk Kim, Jihun Kim, Joonsang Park, Haeyeon Kim, Taein Shin, Keunwoo Kim and Joungho Kim.

 

Abstract:

In this paper, we propose a sequential policy network-based passive equalizer (PEQ) design method for an arbitrary channel of high bandwidth memory (HBM) using the advantage actor-critic (A2C) algorithm, considering signal integrity (SI) for the first time. PEQ design must consider both the circuit parameters and the placement to improve performance. However, optimizing the PEQ is complicated because the various design parameters are coupled. Conventional optimization methods such as the genetic algorithm (GA) must repeat the optimization process whenever conditions change. In contrast, the proposed method suggests an improved solution based on the trained sequential policy network, with flexibility for unseen conditions. For verification, we conducted electromagnetic (EM) simulations with PEQs optimized by GA, random search (RS), and the proposed method. Experimental results demonstrate that the proposed method outperformed GA and RS by 4.4% and 6.4%, respectively, in terms of eye height.
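The A2C update at the heart of the method can be shown on a toy single-state problem. The sketch below is not the paper's PEQ environment: the four "design choices" and their rewards are invented stand-ins for the EM-simulated eye height, but the actor update (policy gradient weighted by an advantage) and the critic update (baseline moved toward the observed return) follow the standard A2C form.

```python
import numpy as np

# Toy stand-in: a noisy scalar reward ("eye height" from an EM simulation)
# for each of four discrete PEQ design choices. Values are illustrative.
rewards = np.array([0.1, 0.9, 0.4, 0.2])   # choice 1 is best

rng = np.random.default_rng(0)
theta = np.zeros(4)      # actor: policy logits over design choices
v = 0.0                  # critic: value estimate of the (single) state
lr_pi, lr_v = 0.2, 0.1

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):
    probs = softmax(theta)
    a = rng.choice(4, p=probs)
    r = rewards[a] + rng.normal(scale=0.05)  # noisy observed return
    adv = r - v                              # advantage = return - critic baseline
    grad_logp = -probs
    grad_logp[a] += 1.0                      # d/d theta of log pi(a)
    theta += lr_pi * adv * grad_logp         # actor: policy-gradient step
    v += lr_v * adv                          # critic: move value toward return

final_probs = softmax(theta)                 # policy concentrates on the best choice
```

Subtracting the critic's value estimate from the return reduces the variance of the gradient without biasing it, which is what lets the sequential policy network train stably on noisy simulation rewards.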

 

 

Imitate Expert Policy and Learn Beyond: A Practical PDN Optimizer by Imitation Learning (DesignCon 2022, nominated for Best Paper Award & Early Career Best Paper Award finalist)

Title: Imitate Expert Policy and Learn Beyond: A Practical PDN Optimizer by Imitation Learning (DesignCon 2022)

 

Authors: Haeyeon Kim, Minsu Kim, Seonguk Choi, Jihun Kim, Joonsang Park, Keeyoung Son, Hyunwook Park, Subin Kim and Joungho Kim

 

 

Abstract: This paper proposes a practical and reusable decoupling capacitor (decap) placement solver using attention model-based imitation learning (AM-IL). The proposed AM-IL framework imitates an expert policy using pre-collected guiding datasets and trains a policy whose performance goes beyond existing machine learning methods. The trained policy is reusable across PDNs with different probing ports and keep-out regions; the trained policy itself becomes the decap placement solver. In this paper, a genetic algorithm is taken as the expert policy to verify how the proposed method produces a solver that learns beyond the level of its expert. The expert policy for imitation learning can be substituted by any algorithm or conventional tool, making this a fast and effective approach to improving existing methods. Moreover, by taking existing industry data as guiding data, or human experts as the expert policy, the proposed method can construct a reusable decap placement solver that is data-efficient, practical, and delivers strong performance. This paper presents verification of AM-IL against two deep reinforcement learning methods based on neural combinatorial optimization networks, AM-RL and Ptr-RL. As a result, AM-IL achieved a performance score of 11.72, while AM-RL achieved 10.74 and Ptr-RL achieved 9.76. Unlike meta-heuristic methods such as the genetic algorithm, which require numerous iterations to find a near-optimal solution, the proposed AM-IL generates a near-optimal solution to any given problem in a single trial.
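Stripped to its core, imitation learning from a guiding dataset is supervised training on the expert's state-action pairs. The sketch below uses a synthetic dataset and a linear policy in place of the paper's attention model and GA-generated data; only the behaviour-cloning training loop itself is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic guiding dataset standing in for pre-collected expert rollouts:
# features describing (probing port, keep-out regions) -> expert decap position.
# In the paper the expert is a genetic algorithm; here labels are synthetic.
X = rng.normal(size=(200, 6))
w_expert = rng.normal(size=(6, 5))
y = (X @ w_expert).argmax(axis=1)      # the "expert policy" action per state

W = np.zeros((6, 5))                   # imitation policy (linear for brevity)

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

for _ in range(500):                   # behaviour cloning: cross-entropy on expert actions
    P = softmax_rows(X @ W)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0     # gradient of cross-entropy w.r.t. logits
    W -= 0.1 * (X.T @ G) / len(y)      # full-batch gradient step

acc = float((softmax_rows(X @ W).argmax(axis=1) == y).mean())  # agreement with expert
```

Once the cloned policy matches the expert well, it can be fine-tuned with reinforcement learning on the placement reward, which is how an imitation-initialized solver can end up "learning beyond" the expert it started from.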

 

Deep Reinforcement Learning Framework for Optimal Decoupling Capacitor Placement on General PDN with an Arbitrary Probing Port (EPEPS 2021)

Title: Deep Reinforcement Learning Framework for Optimal Decoupling Capacitor Placement on General PDN with an Arbitrary Probing Port (EPEPS 2021)

 

Authors: Haeyeon Kim, Hyunwook Park, Minsu Kim, Seonguk Choi, Jihun Kim, Joonsang Park, Seongguk Kim, Subin Kim and Joungho Kim.

 

 

Abstract: This paper proposes a deep reinforcement learning (DRL) framework that learns a reusable policy to find the optimal placement of decoupling capacitors (decaps) on a power distribution network (PDN) with an arbitrary probing port. The proposed DRL framework trains a policy parameterized by a pointer network, a sequence-to-sequence neural network, based on the REINFORCE algorithm. The policy finds the positional combination of a pre-defined number of decaps that best suppresses the self-impedance of a given probing port on a PDN with randomly assigned keep-out regions. Verification was done by allocating 20 decaps on ten randomly generated test sets, each with an arbitrary probing port and randomly selected keep-out regions. The performance of the policy generated by the proposed DRL framework was evaluated by the magnitude of probing-port self-impedance suppression after decap placement over 434 frequencies between 100 MHz and 20 GHz. The policy achieves greater impedance suppression with fewer samples than a random search heuristic.
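The REINFORCE training signal can be sketched on a toy placement problem. In the block below, the per-position "suppression values" are invented stand-ins for the simulated impedance reduction (assumed additive for simplicity), and the policy is a plain logits vector rather than a pointer network; the sequential masked sampling and the baseline-corrected policy-gradient update mirror the framework's structure.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 3                     # candidate positions on the PDN, decaps to place
# Hypothetical per-position impedance-suppression value (toy stand-in for the
# simulated self-impedance reduction; treated as additive for simplicity):
value = rng.uniform(size=N)
theta = np.zeros(N)              # policy logits over positions
baseline, lr = 0.0, 0.1

def rollout(rng):
    """Sample K distinct positions sequentially; masking a chosen position
    mirrors how keep-out regions and used sites would be excluded."""
    logits, grad, picks = theta.copy(), np.zeros(N), []
    for _ in range(K):
        z = logits - logits.max()
        p = np.exp(z)
        p /= p.sum()
        a = int(rng.choice(N, p=p))
        g = -p
        g[a] += 1.0              # grad of this step's log-probability
        grad += g
        picks.append(a)
        logits[a] = -1e9         # mask: a position holds at most one decap
    return picks, grad

for _ in range(800):             # REINFORCE with a moving-average baseline
    picks, grad = rollout(rng)
    R = value[picks].sum()       # reward: total suppression achieved
    theta += lr * (R - baseline) * grad
    baseline += 0.05 * (R - baseline)

greedy = np.argsort(theta)[-K:]  # top-K positions under the learned policy
```

The baseline subtraction plays the same variance-reduction role as a critic; after training, the logits concentrate on the positions whose placement yields the largest reward.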

 

Learning Collaborative Policies to Solve NP-hard Routing Problems

Conference: NeurIPS 2021

 

 

Title:

 

Learning Collaborative Policies to Solve NP-hard Routing Problems

 

Authors:

 

Minsu Kim, Jinkyoo Park and Joungho Kim.

Abstract:

Recently, deep reinforcement learning (DRL) frameworks have shown potential for solving NP-hard routing problems such as the traveling salesman problem (TSP) without problem-specific expert knowledge. Although DRL can be used to solve complex problems, DRL frameworks still struggle to compete with state-of-the-art heuristics, showing a substantial performance gap. This paper proposes a novel hierarchical problem-solving strategy, termed learning collaborative policies (LCP), which can effectively find a near-optimum solution using two iterative DRL policies: the seeder and the reviser. The seeder generates candidate solutions (seeds) that are as diverse as possible, dedicating itself to exploring the full combinatorial action space (i.e., the sequence of assignment actions). To this end, we train the seeder's policy with a simple yet effective entropy regularization reward that encourages diverse solutions. The reviser, in turn, modifies each candidate solution generated by the seeder: it partitions the full trajectory into sub-tours and simultaneously revises each sub-tour to minimize its traveling distance. The reviser is thus trained to improve the candidate solution's quality while focusing on the reduced solution space, which is beneficial for exploitation. Extensive experiments demonstrate that the proposed two-policy collaboration scheme improves over single-policy DRL frameworks on various NP-hard routing problems, including TSP, prize collecting TSP (PCTSP), and the capacitated vehicle routing problem (CVRP).
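The seeder/reviser division of labour can be illustrated on a tiny TSP instance. In this sketch, a noisy nearest-neighbour heuristic plays the seeder (the paper instead trains a policy with an entropy bonus to get diversity) and a segment-wise 2-opt pass plays the reviser that improves each sub-tour; only the collaboration pattern is taken from the paper, not its learned components.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(size=(20, 2))      # toy TSP instance: 20 random cities

def tour_len(t):
    d = pts[t] - pts[np.roll(t, -1)]
    return np.hypot(d[:, 0], d[:, 1]).sum()

def seed_tour(rng, temp=0.3):
    """Seeder stand-in: noisy nearest-neighbour keeps the candidate pool
    diverse, mimicking the entropy-regularized seeder policy."""
    t, left = [0], set(range(1, 20))
    while left:
        cand = np.array(sorted(left))
        d = np.linalg.norm(pts[cand] - pts[t[-1]], axis=1)
        p = np.exp(-d / temp)
        p /= p.sum()
        t.append(int(rng.choice(cand, p=p)))
        left.discard(t[-1])
    return np.array(t)

def revise(t, seg=5):
    """Reviser stand-in: improve each sub-tour with 2-opt-style reversals,
    accepting only moves that shorten the full tour."""
    t = t.copy()
    for s in range(0, len(t), seg):
        end = min(s + seg, len(t))
        for i in range(s, end - 1):
            for j in range(i + 1, end):
                new = t.copy()
                new[i:j + 1] = new[i:j + 1][::-1].copy()
                if tour_len(new) < tour_len(t):
                    t = new
    return t

seeds = [seed_tour(rng) for _ in range(8)]          # diverse candidates
best_seed = min(seeds, key=tour_len)                # best before revision
best = min((revise(s) for s in seeds), key=tour_len)  # best after revision
```

Because the reviser only accepts improving moves, the best revised tour can never be longer than the best seed; diversity from the seeder is what gives the reviser multiple distinct basins to exploit.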

 

 

Professor Min Seok Jang's team announces realization of a new platform for highly compressed light-matter waves, and more

School of Electrical Engineering: Professor Min Seok Jang's research team announces realization of a new platform for highly compressed light-matter waves
 
============================================================================
– KAIST realizes a new platform for highly compressed light-matter waves
– Professor Jang's team succeeds in observing, with a high-resolution scattering-type scanning near-field microscope, extremely confined mid-infrared light propagating in a dielectric thin film on a gold single crystal
– Confining light to an atomic-scale space enhances light-matter interaction, with applications to next-generation optoelectronic devices, the commercialization of high-efficiency nano-optical devices, and quantum computing
=============================================================================
 
[Professor Min Seok Jang and Research Professor Sergey Menabde, from left]
 
 
The Korea Advanced Institute of Science and Technology (KAIST) announced on the 18th that a joint study has realized, in a two-dimensional material thin film, a new platform in which highly confined light can propagate. The result is expected to contribute to the development of next-generation optoelectronic devices based on strong light-matter interaction.
 
When two-dimensional materials, each a single atomic layer, are stacked, they form a "van der Waals crystal" with properties different from the constituent two-dimensional materials. A "phonon-polariton" is an electromagnetic wave coupled to the ionic vibrations of a polar material. In particular, phonon-polaritons formed in a van der Waals crystal placed on a highly conductive metal are maximally compressed: the charges in the polariton crystal are reflected in the metal as image charges, creating a new kind of polariton called an "image phonon-polariton."
Light propagating as image phonon-polaritons can induce strong light-matter interaction, but their formation is suppressed when the metal surface is rough, which limits the feasibility of photonic devices based on them. To break through this limitation, five research teams collaborated and succeeded in measuring image phonon-polaritons on a single-crystal metal.
 
Professor Jang said, "These results clearly show the advantages of image polaritons, and of image phonon-polaritons in particular. Their low loss and strong light-matter interaction can be applied to the development of next-generation optoelectronic devices. We hope our experimental results will help accelerate the commercialization of high-efficiency nano-optical devices such as metasurfaces, optical switches, and optical sensors."
 
The study, with Research Professor Sergey Menabde as first author, was published in the international journal Science Advances on the 13th. The research was supported by the Samsung Future Technology Incubation Center and the National Research Foundation of Korea, with additional support from the Korea Institute of Science and Technology (KIST), Japan's Ministry of Education, Culture, Sports, Science and Technology, and the Villum Foundation of Denmark.
 
□ Research figure

[Figure 1. Nano-tip used to measure, at ultra-high resolution, image phonon-polaritons propagating in hBN]

 
□ Related links: major links among 14 media outlets, including ET News
 
ET News: https://www.etnews.com/20220718000234
Herald Business: http://news.heraldcorp.com/view.php?ud=20220718000582
 

Taein Shin of Professor Joungho Kim's lab wins the 2021 IEEE EDAPS Best Paper Award

Taein Shin, a Ph.D. student in Professor Joungho Kim's laboratory in our school, has won the 2021 IEEE EDAPS Best Paper Award.

This award comes shortly after Minsu Kim, a master's student in the same laboratory, received the Best Paper Award at the DesignCon conference held in Silicon Valley, USA; back-to-back best paper awards from the same laboratory make the achievement all the more inspiring.

 

[Professor Joungho Kim and Taein Shin, from left]

 

Taein Shin, a Ph.D. student in Professor Joungho Kim's laboratory in the KAIST School of Electrical Engineering, received the Best Paper Award at the 2021 IEEE International Conference on Electrical Design of Advanced Packaging and Systems (EDAPS).

 

Award: IEEE EDAPS Best Paper Award

Paper title: Modeling and Analysis of System-Level Power Supply Noise Induced Jitter (PSIJ) for 4 Gbps High Bandwidth Memory (HBM) I/O Interface

Authors: Taein Shin, Hyunwook Park, Keunwoo Kim, Seongguk Kim, Keeyoung Son, Kyungjun Son, Gapyeol Park, Joonsang Park, Seonguk Choi, and Joungho Kim (advisor)

Conference: 2021 IEEE International Conference on Electrical Design of Advanced Packaging and Systems (EDAPS)

Dates: December 13-15, 2021 (Virtual Event)

 

Due to COVID-19, the conference was held online from December 13 to 15.

EDAPS is an annual IEEE international conference on semiconductor design centered on signal and power integrity, attended by leading universities and companies in the field.

Taein Shin was selected for the award in recognition of the excellence of his presented paper, "Modeling and Analysis of System-Level Power Supply Noise Induced Jitter (PSIJ) for 4 Gbps High Bandwidth Memory (HBM) I/O Interface."

The laboratory received the Best Student Paper Award at the same conference last year, and producing this year's overall Best Paper Award winner has further encouraged its members.