EE Professor Junil Choi’s Research Team Won the Excellence Prize at the 2022 ICT Academic Paper Competition

Professor Junil Choi’s lab received the Excellence Prize at the 2022 ICT Academic Paper Competition.

 


[Prof. Junil Choi, Hyesang Cho, Beomsoo Ko, from left]
 
 
– Award Name: Excellence Prize
– Paper Title: Coverage Increase at THz Frequencies: A Cooperative Rate-Splitting Approach.
– Authors: Hyesang Cho, Beomsoo Ko, and Junil Choi (Advisor)
– Conference Name: 14th ICT Academic Paper Competition with ET NEWS
– Date: December 16, 2022
 
Terahertz (THz) communication suffers from low coverage due to the harsh propagation loss, blockage vulnerability, and hardware constraints.
To overcome this limitation, the research team led by Professor Junil Choi proposed a communication framework exploiting cooperative communication and rate-splitting multiple access.
The proposed framework and techniques were shown to successfully increase coverage in THz frequency bands.
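The rate-splitting idea can be illustrated with a minimal sketch. The model below is a simplified 1-layer rate-splitting multiple access (RSMA) setup with scalar channel gains, an illustrative assumption rather than the team's actual cooperative framework: each user first decodes a shared common stream, cancels it, then decodes its own private stream.

```python
import numpy as np

# Minimal 1-layer RSMA sketch (illustrative assumption, not the paper's
# actual cooperative framework). Each user decodes the common stream
# first, removes it via successive interference cancellation (SIC),
# then decodes its private stream.

def rsma_sum_rate(g, p_c, p, noise=1.0):
    """g: channel power gains per user (K,); p_c: common-stream power;
    p: private-stream powers per user (K,)."""
    g, p = np.asarray(g, float), np.asarray(p, float)
    sinr_c = g * p_c / (g * p.sum() + noise)      # privates treated as noise
    r_c = np.log2(1 + sinr_c).min()               # all users must decode it
    sinr_p = g * p / (g * (p.sum() - p) + noise)  # after SIC of the common
    r_p = np.log2(1 + sinr_p)
    return r_c + r_p.sum()

rate = rsma_sum_rate(g=[1.0, 4.0], p_c=4.0, p=[1.0, 1.0])
```

Splitting power between the common and private streams is what lets RSMA interpolate between treating interference as noise and fully decoding it.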
 

 

 

 

EE Prof. Junil Choi, recipient of the 2021 IEEE CTTC Early Achievement Award


[Prof. Junil Choi]

 

EE Professor Junil Choi was awarded the Early Achievement Award by the IEEE Communications Society Communication Theory Technical Committee (CTTC), becoming the first Korean member to receive the honor.

 

Although Professor Choi was chosen as the recipient of the 2021 award, the ceremony was belatedly held at last week's Communication Theory Workshop (CTW) due to the COVID-19 pandemic.

 

The IEEE CTTC was established in 1964 as one of the first technical committees within the IEEE Communications Society (ComSoc).

Since 2016, the CTTC Early Achievement Award has celebrated the achievements of early-career members within 10 years of their Ph.D., with past recipients from prestigious institutions such as Stanford University, Imperial College London, Virginia Tech, and KTH.

 


EE Prof. Junil Choi Wins the 2022 IEEE Best Vehicular Electronics Paper Award


[Prof. Junil Choi, Dr. Preeti Kumari (Qualcomm), Prof. Nuria Prelcic (North Carolina State University), Prof. Robert Heath (North Carolina State University), from left]
 
 
KAIST EE Prof. Junil Choi won the Best Vehicular Electronics Paper Award from the 2022 Institute of Electrical and Electronics Engineers (IEEE) Vehicular Technology Society (VTS), becoming the first Korean to receive four best paper awards across the 39 academic journals of the IEEE.
 
Prof. Choi previously received the Best Paper Award from the IEEE Signal Processing Society in 2015, the Stephen O. Rice Prize (Best Paper Award) from the IEEE Communications Society in 2019, and the Neal Shepherd Memorial Best Propagation Award (Best Paper Award) from the VTS in 2021.
 
Prof. Choi said, “I am deeply grateful that my research on joint millimeter-wave communication-radar systems for vehicle-to-vehicle communication has received such international recognition, and I am honored to become the first Korean to win four best paper awards from IEEE societies.”
 
The award ceremony will be held this September at the 2022 Vehicular Technology Conference (VTC), the largest academic conference organized by the IEEE VTS. More details will be posted on the 2022 VTC Fall homepage and in the IEEE VTS newsletter, and the list of awardees will be permanently displayed on the IEEE VTS homepage.
 
 

Professor Iickho Song has published a book on probability and random variables in English

Professor Iickho Song has published a book on probability and random variables in English.
The book is a translation of Prof. Song’s Korean book ‘Theory of Random Variables’, which was selected as an ‘Excellent Book of Basic Sciences’ by the National Academy of Sciences and the Ministry of Education in 2020.
 
 
 
You can find more information on the book below:
 
Title: Probability and Random Variables: Theory and Applications
Authors: Iickho Song, So Ryoung Park, Seokho Yoon
 
Summary:
This book discusses diverse concepts and notions – and their applications – concerning probability and random variables at the intermediate to advanced level. It explains basic concepts and results in a clearer and more complete manner than the extant literature. In addition to a range of concepts and notions concerning probability and random variables, the coverage includes a number of key advanced concepts in mathematics. Readers will also find unique results on e.g. the explicit general formula of joint moments and the expected values of nonlinear functions for normal random vectors. In addition, interesting applications of the step and impulse functions in discussions on random vectors are presented. Thanks to a wealth of examples and a total of 330 practice problems of varying difficulty, readers will have the opportunity to significantly expand their knowledge and skills. The book is rounded out by an extensive index, allowing readers to quickly and easily find what they are looking for.
Given its scope, the book will appeal to all readers with a basic grasp of probability and random variables who are looking to go one step further. It also offers a valuable reference guide for experienced scholars and professionals, helping them review and refine their expertise.
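As a small illustration of the kind of result the summary mentions, the fourth-order joint moment of a zero-mean normal vector follows from Isserlis' theorem, a standard result sketched below (the book's explicit general formula covers arbitrary orders):

```python
import numpy as np

# Isserlis' theorem for a zero-mean normal random vector X with
# covariance matrix S:
#   E[X_i X_j X_k X_l] = S_ij*S_kl + S_ik*S_jl + S_il*S_jk

def fourth_moment(cov, i, j, k, l):
    return (cov[i, j] * cov[k, l]
            + cov[i, k] * cov[j, l]
            + cov[i, l] * cov[j, k])

cov = np.array([[2.0, 0.5],
                [0.5, 1.0]])
m = fourth_moment(cov, 0, 0, 1, 1)   # E[X0^2 X1^2] = S00*S11 + 2*S01^2
```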
 
Link:   https://link.springer.com/book/10.1007/978-3-030-97679-8

Ph.D. Candidate Jinu Gong (Advisor: Prof. Joonhyuk Kang, Head of the School of EE) Wins IEEE DSLW Best Student Paper Runner-up Award

[(from left) Prof. Joonhyuk Kang, Jinu Gong (Ph.D. candidate)]

Ph.D. candidate Jinu Gong from EE Professor Joonhyuk Kang’s lab won the Best Student Paper Runner-up Award at the 2022 IEEE Data Science and Learning Workshop (DSLW). He received the award for his contributions in the paper “Forget-SVGD: Particle-Based Bayesian Federated Unlearning”.
 
 
Details on this good news are as follows:
 
 
Venue: 2022 IEEE Data Science and Learning Workshop
 
Date: May 22 ~ 23, 2022
 
Award: The Best Student Paper Runner-up Award
 
Authors: Jinu Gong, Osvaldo Simeone*, Rahif Kassab*, and Joonhyuk Kang
                                 (*King’s College London)
 
Paper: Forget-SVGD: Particle-Based Bayesian Federated Unlearning
 
 
 
 
COVID-19 precautions moved this year’s workshop online. The DSLW, successor to the IEEE Data Science Workshop, has been held by the IEEE since 2021 as an international academic venue encompassing signal processing, statistics, machine learning, data mining, and computer vision. (acceptance rate: 26.7%)

Jinhee Lee*, Haeri Kim*, Youngkyu Hong*, and Hye Won Chung, “Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative Adversarial Networks” (*: equal contribution)

Abstract

Despite remarkable performance in producing realistic samples, Generative Adversarial Networks (GANs) often produce low-quality samples near low-density regions of the data manifold, e.g., samples of minor groups. Many techniques have been developed to improve the quality of generated samples, either by postprocessing generated samples or by pre-processing the empirical data distribution, but at the cost of reduced diversity. To promote diversity in sample generation without degrading the overall quality, we propose a simple yet effective method to diagnose and emphasize underrepresented samples during training of a GAN. The main idea is to use the statistics of the discrepancy between the data distribution and the model distribution at each data instance. Based on the observation that the underrepresented samples have a high average discrepancy or high variability in discrepancy, we propose a method to emphasize those samples during training of a GAN. Our experimental results demonstrate that the proposed method improves GAN performance on various datasets, and it is especially effective in improving the quality and diversity of sample generation for minor groups.
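A minimal sketch of the diagnosis step: assuming a discriminator-based discrepancy value has been recorded for each data instance at several training snapshots, samples with high mean discrepancy or high variability are upweighted. The array shapes and weighting constants are illustrative, not the paper's exact scheme.

```python
import numpy as np

def emphasis_weights(disc_history, alpha=1.0, beta=1.0):
    """Turn per-sample discrepancy statistics into sampling weights.

    disc_history: hypothetical array of shape (num_snapshots, num_samples)
    holding a discrepancy value per data instance per training snapshot.
    Samples with high mean discrepancy or high variability are treated
    as underrepresented and upweighted.
    """
    mean = disc_history.mean(axis=0)        # average discrepancy per sample
    std = disc_history.std(axis=0)          # variability per sample
    score = alpha * mean + beta * std
    score = score - score.min()             # shift to be non-negative
    return score / (score.sum() + 1e-12)    # normalize to a distribution

rng = np.random.default_rng(0)
hist = rng.normal(size=(5, 100))            # 5 snapshots, 100 samples
w = emphasis_weights(hist)
idx = rng.choice(100, size=32, p=w)         # emphasized minibatch indices
```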

 


 


 

Suyoung Lee and Sae-Young Chung, “Improving Generalization in Meta-RL with Imaginary Tasks from Latent Dynamics Mixture,” in Proc. Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021), Dec 2021.

Abstract

The generalization ability of most meta-reinforcement learning (meta-RL) methods is largely limited to test tasks that are sampled from the same distribution used to sample training tasks. To overcome the limitation, we propose Latent Dynamics Mixture (LDM) that trains a reinforcement learning agent with imaginary tasks generated from mixtures of learned latent dynamics. By training a policy on mixture tasks along with original training tasks, LDM allows the agent to prepare for unseen test tasks during training and prevents the agent from overfitting the training tasks. LDM significantly outperforms standard meta-RL methods in test returns on the gridworld navigation and MuJoCo tasks where we strictly separate the training task distribution and the test task distribution.
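The core mixture step can be sketched as follows, assuming each training task has a learned latent dynamics embedding; the Dirichlet mixing and array shapes here are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def imaginary_task_latents(task_latents, n_mix, rng):
    """Random convex mixtures of learned per-task latent dynamics,
    used as imaginary tasks alongside the original training tasks.

    task_latents: (num_tasks, latent_dim) learned embeddings."""
    k = task_latents.shape[0]
    w = rng.dirichlet(np.ones(k), size=n_mix)   # (n_mix, k), rows sum to 1
    return w @ task_latents                     # (n_mix, latent_dim)

rng = np.random.default_rng(0)
latents = rng.normal(size=(4, 8))               # 4 training tasks, dim 8
mixed = imaginary_task_latents(latents, n_mix=16, rng=rng)
```

Training the policy on both the original and the mixed latents exposes the agent to dynamics it never saw during training, which is what drives the generalization gain.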

 



Seungyul Han and Youngchul Sung, "A max-min entropy framework for reinforcement learning," accepted to Conference on Neural Information Processing Systems (NeurIPS) 2021

Abstract

In this paper, we propose a max-min entropy framework for reinforcement learning (RL) to overcome the limitation of the soft actor-critic (SAC) algorithm implementing the maximum entropy RL in model-free sample-based learning. Whereas the maximum entropy RL guides learning for policies to reach states with high entropy in the future, the proposed max-min entropy framework aims to learn to visit states with low entropy and maximize the entropy of these low-entropy states to promote better exploration. For general Markov decision processes (MDPs), an efficient algorithm is constructed under the proposed max-min entropy framework based on disentanglement of exploration and exploitation. Numerical results show that the proposed algorithm yields drastic performance improvement over the current state-of-the-art RL algorithms.
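One hedged reading of the idea, for discrete softmax policies: measure per-state policy entropy, then direct exploration toward the low-entropy states where entropy is subsequently maximized. The bonus form below is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def policy_entropy(logits):
    # entropy of softmax policies; logits shape (num_states, num_actions)
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

def visit_bonus(logits, scale=1.0):
    """Intrinsic reward that is largest at the lowest-entropy states,
    steering the exploration policy toward them (the 'min' side of
    max-min); entropy maximization is then applied at those states."""
    h = policy_entropy(logits)
    return scale * (h.max() - h)

logits = np.array([[0.0, 0.0],     # uniform policy: high entropy
                   [8.0, -8.0]])   # near-deterministic: low entropy
bonus = visit_bonus(logits)
```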

 



Y. Park*, D.-J. Han*, D.-Y. Kim, J. Seo and J. Moon, "Few-Round Learning for Federated Learning," Neural Information Processing Systems (NeurIPS), Dec. 2021

Abstract

Federated learning (FL) presents an appealing opportunity for individuals who are willing to make their private data available for building a communal model without revealing their data contents to anyone else. Among the central issues that may limit widespread adoption of FL is the significant communication resource required to exchange updated model parameters between the server and individual clients over many communication rounds. In this work, we focus on limiting the number of model exchange rounds in FL to some small fixed number, to control the communication burden. Following the spirit of meta-learning for few-shot learning, we take a meta-learning strategy to train the model so that once the meta-training phase is over, only a few rounds of FL would produce a model that satisfies the needs of all participating clients. A key advantage of employing meta-training is that the main labeled dataset used in training could differ significantly (e.g., different classes of images) from the actual data samples presented at inference time. Compared to meta-training approaches that optimize personalized local models at distributed devices, our method better handles the potential lack of data variability at individual nodes. Extensive experimental results indicate that meta-training geared to few-round learning provides large performance improvements compared to various baselines.
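A minimal Reptile-style sketch of the idea on least-squares clients: the inner loop runs a few FedAvg rounds on a sampled task, and the outer loop moves the shared initialization toward the adapted model. Both the inner procedure and the outer update rule are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def fedavg_rounds(w, clients, rounds=3, lr=0.1):
    # clients: list of (X, y); each round = one local step + server average
    for _ in range(rounds):
        local = []
        for X, y in clients:
            grad = X.T @ (X @ w - y) / len(y)   # local least-squares gradient
            local.append(w - lr * grad)
        w = np.mean(local, axis=0)
    return w

def meta_train(w, tasks, meta_steps=50, meta_lr=0.5, rng=None):
    """Reptile-style meta-update: nudge the initialization toward what
    a few FedAvg rounds produce on a sampled task, so that few rounds
    suffice at deployment."""
    if rng is None:
        rng = np.random.default_rng(0)
    for _ in range(meta_steps):
        clients = tasks[rng.integers(len(tasks))]
        w = w + meta_lr * (fedavg_rounds(w, clients) - w)
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0])
def make_clients():
    X = rng.normal(size=(50, 2))
    return [(X, X @ w_true)]                    # one client per task here
tasks = [make_clients() for _ in range(3)]
w0 = np.zeros(2)
w_meta = meta_train(w0, tasks, rng=rng)
```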

 


J. Park*, D.-J. Han*, M. Choi and J. Moon, "Sageflow: Robust Federated Learning against Both Stragglers and Adversaries," Neural Information Processing Systems (NeurIPS), Dec. 2021

Abstract

While federated learning (FL) allows efficient model training with local data at edge devices, among major issues still to be resolved are: slow devices known as stragglers and malicious attacks launched by adversaries. While the presence of both of these issues raises serious concerns in practical FL systems, no known schemes or combinations of schemes effectively address them at the same time. We propose Sageflow, staleness-aware grouping with entropy-based filtering and loss-weighted averaging, to handle both stragglers and adversaries simultaneously. Model grouping and weighting according to staleness (arrival delay) provides robustness against stragglers, while entropy-based filtering and loss-weighted averaging, working in a highly complementary fashion at each grouping stage, counter a wide range of adversary attacks. A theoretical bound is established to provide key insights into the convergence behavior of Sageflow. Extensive experimental results show that Sageflow outperforms various existing methods aiming to handle stragglers/adversaries.
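A hedged sketch of the aggregation logic: filter updates whose predictions on public data have high entropy, average the survivors within each staleness group with loss-dependent weights, then combine the groups with staleness-decayed weights. The grouping key, threshold, and weighting exponents are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def pred_entropy(p):
    # mean entropy of a client's softmax predictions on public data
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def sageflow_aggregate(updates, staleness, probs, losses,
                       ent_thresh=0.3, lam=0.5, delta=1.0):
    """Entropy-based filtering, loss-weighted averaging within each
    staleness group, then staleness-decayed combination across groups."""
    groups = {}
    for u, s, p, l in zip(updates, staleness, probs, losses):
        if pred_entropy(p) > ent_thresh:         # drop suspicious updates
            continue
        groups.setdefault(s, []).append((np.asarray(u, float), l))
    agg, wsum = 0.0, 0.0
    for s, members in groups.items():
        us = np.array([u for u, _ in members])
        ls = np.array([l for _, l in members])
        w = 1.0 / (ls + 1e-12) ** delta          # down-weight high-loss updates
        group_u = (w[:, None] * us).sum(axis=0) / w.sum()
        gw = 1.0 / (1.0 + s) ** lam              # down-weight stale groups
        agg = agg + gw * group_u
        wsum += gw
    return agg / wsum

updates = [[1.0, 1.0], [1.1, 0.9], [100.0, 100.0]]   # last one adversarial
staleness = [0, 1, 0]
probs = [np.array([[0.99, 0.01]]),
         np.array([[0.98, 0.02]]),
         np.array([[0.5, 0.5]])]                     # high-entropy: filtered
losses = [1.0, 1.0, 1.0]
agg = sageflow_aggregate(updates, staleness, probs, losses)
```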

 
