Professor Minsoo Rhu has been inducted into the Hall of Fame of the IEEE/ACM International Symposium on Computer Architecture (ISCA) 2024


<Professor Minsoo Rhu>
 
Professor Minsoo Rhu has been inducted into the Hall of Fame of the IEEE/ACM International Symposium on Computer Architecture (ISCA) this year.
 
ISCA (https://www.iscaconf.org/isca2024/), now in its 51st year, is the longest-running and most prestigious international conference in the field of computer architecture. Together with MICRO (IEEE/ACM International Symposium on Microarchitecture) and HPCA (IEEE International Symposium on High-Performance Computer Architecture), it is considered one of the top three international conferences in the field.
 
Professor Minsoo Rhu is a leading researcher in South Korea in AI semiconductors and GPU-based high-performance computing systems within the field of computer architecture. Following his induction into the HPCA Hall of Fame in 2021 and the MICRO Hall of Fame in 2022, he has now published more than eight papers at ISCA, earning induction into the ISCA Hall of Fame in 2024.
 
This year, the ISCA conference will be held from June 29 to July 3 in Buenos Aires, Argentina, where Professor Rhu’s research team will present a total of three papers (see below).
 
[Information on Professor Minsoo Rhu’s Research Team’s ISCA Presentations]
 
1. Yujeong Choi, Jiin Kim, and Minsoo Rhu, “ElasticRec: A Microservice-based Model Serving Architecture Enabling Elastic Resource Scaling for Recommendation Models,” ISCA-51     
arXiv paper link
 
2. Yunjae Lee, Hyeseong Kim, and Minsoo Rhu, “PreSto: An In-Storage Data Preprocessing System for Training Recommendation Models,” ISCA-51
arXiv paper link
 
3. Ranggi Hwang, Jianyu Wei, Shijie Cao, Changho Hwang, Xiaohu Tang, Ting Cao, and Mao Yang, “Pre-gated MoE: An Algorithm-System Co-Design for Fast and Scalable Mixture-of-Expert Inference,” ISCA-51 
arXiv paper link

 

Professor Kim Lee-Sup Lab’s Master’s Graduate Park Jun-Young Wins Best Paper Award at the International Design Automation Conference

 


<(From left to right) Professor Kim Lee-Sup, Master’s Graduate Park Jun-Young, Ph.D. Graduate Kang Myeong-Goo, Master’s Graduate Kim Yang-Gon, Ph.D. Graduate Shin Jae-Kang, Ph.D. Candidate Han Yunki>

 

Master’s graduate Park Jun-Young from Professor Kim Lee-Sup’s lab in our department won the Best Paper Award at the Design Automation Conference (DAC), held in San Francisco, USA, from June 23 to June 27. Established in 1964 and now in its 61st year, DAC is an international academic conference covering semiconductor design automation, AI algorithms, and chip design. It is regarded as the most prestigious venue in the field, with only about 20 percent of submitted papers accepted for presentation.

The awarded research, based on Park Jun-Young’s master’s thesis, proposes an algorithmic approximation technique and hardware architecture that reduce the memory transfer required for KV caching, a key bottleneck in Large Language Model inference. The Best Paper Award selection committee recognized the excellence of this research, choosing it as the final winner from among four candidate papers (out of 337 presented and 1,545 submitted).
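The core idea, skipping memory transfers for cached tokens whose attention probability is negligible, can be sketched in a few lines. The following is an illustrative sketch of the general concept only, with an arbitrary threshold; it is not the Token-Picker paper’s actual probability-estimation algorithm or hardware design:

```python
import math

def attention_with_token_picking(q, K, V, threshold=0.01):
    """Single-query attention that skips reading value vectors whose
    attention probability falls below `threshold` (illustrative only)."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]          # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    kept = [i for i, p in enumerate(probs) if p >= threshold]
    z = sum(probs[i] for i in kept)                   # renormalize the survivors
    out = [0.0] * len(V[0])
    for i in kept:                                    # only these rows of V
        w = probs[i] / z                              # need to be fetched
        for j, vj in enumerate(V[i]):
            out[j] += w * vj
    return out, len(kept)

# One key dominates the softmax, so only 1 of 3 value rows is transferred.
q = [1.0, 0.0]
K = [[10.0, 0.0], [0.0, 10.0], [-10.0, 0.0]]
V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
out, n_fetched = attention_with_token_picking(q, K, V)
print(n_fetched, "of", len(K), "value rows fetched")  # → 1 of 3 value rows fetched
```

In the actual research, the probabilities are estimated before the K/V entries are read, so the savings apply to the memory transfer itself; the sketch above computes them exactly only for clarity.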

The details are as follows:

 

  • Conference Name: 2024 61st IEEE/ACM Design Automation Conference (DAC)
  • Date: June 23-27, 2024
  • Award: Best Paper Award
  • Authors: Park Jun-Young, Kang Myeong-Goo, Han Yunki, Kim Yang-Gon, Shin Jae-Kang, Kim Lee-Sup (Advisor)
  • Paper Title: Token-Picker: Accelerating Attention in Text Generation with Minimized Memory Transfer via Probability Estimation

 


Professor Shinhyun Choi’s Research Team solves the Reliability Issues of Next-Generation Neuromorphic Computing


<(From left) Professor Shinhyun Choi, Master’s student Jongmin Bae, Postdoc Cho-ah Kwon (Hanyang University), and Professor Sang-Tae Kim (Hanyang University)>
 

Neuromorphic computing, which implements AI computation in hardware by mimicking the human brain, has recently garnered significant attention. Memristors (conductance-changing devices), used as unit elements in neuromorphic computing, boast advantages such as low power consumption, high integration, and efficiency.

However, issues with irregular device characteristics have posed reliability problems for large-scale neuromorphic computing systems.

Our research team has developed a technology to enhance reliability, potentially accelerating the commercialization of neuromorphic computing.

 

On June 21, Professor Shinhyun Choi’s research team announced a collaborative study with Hanyang University researchers in which they developed a doping method using aliovalent ions* to improve the reliability and performance of next-generation memory devices.

*Aliovalent ion: An ion with a different valence (a measure of its ability to bond) compared to the original atom.

 

Through experiments and atomic-level simulations, the joint research team confirmed that doping with aliovalent ions enhances device uniformity and performance by addressing the primary issue of irregular changes in device characteristics in next-generation memory devices.

 


Figure 1. Results of aliovalent ion doping developed in this study, demonstrating the improvement effects and the material principles underpinning them

 

The team reported that the appropriate injection of aliovalent halide ions into the oxide layer could solve the irregular device reliability problem, thereby improving device performance. This method was experimentally confirmed to enhance the uniformity, speed, and performance of device operation.

 

Furthermore, atomic-level simulation analysis showed that the performance improvement effect of the device was consistent with the experimental results observed in both crystalline and amorphous environments. The study revealed that doped aliovalent ions attract nearby oxygen vacancies, enabling stable device operation, and expand the space near the ions, allowing faster device operation.

 

Professor Shinhyun Choi states, “The aliovalent ion doping method we developed significantly enhances the reliability and performance of neuromorphic devices. This can contribute to the commercialization of next-generation memristor-based neuromorphic computing and can be applied to various semiconductor devices using the principles we uncovered.”

 

This research, with Master’s student Jongmin Bae and Postdoctoral researcher Choa Kwon from Hanyang University as co-first authors, was published in the June issue of the international journal ‘Science Advances’ (Paper title: Tunable ion energy barrier modulation through aliovalent halide doping for reliable and dynamic memristive neuromorphic systems).

 

The study was supported by the National Research Foundation of Korea’s Advanced Device Source Proprietary Technology Development Program, the Advanced Materials PIM Device Program, the Young Researcher Program, the Nano Convergence Technology Institute Semiconductor Process-based Nano-Medical Device Development Project, and the Innovation Support Program of the National Supercomputing Center.

Professor YongMan Ro’s research team develops a multimodal large language model that surpasses the performance of GPT-4V

 


<(From left) Professor YongMan Ro, Ph.D. candidate ByungKwan Lee, integrated M.S.–Ph.D. candidate Beomchan Park, Ph.D. candidate Chae Won Kim>
 

On June 20, 2024, Professor YongMan Ro’s research team announced that they have developed and released an open-source multimodal large language model that surpasses the visual performance of closed commercial models such as OpenAI’s ChatGPT/GPT-4V and Google’s Gemini-Pro. A multimodal large language model is a large language model capable of processing not only text but also image data.

 

The recent advancement of large language models (LLMs) and the emergence of visual instruction tuning have drawn significant attention to multimodal large language models. Backed by the abundant computing resources of large overseas corporations, extremely large models, with parameter counts approaching the number of neurons in the human brain, are being created.

These models are all developed privately, leading to an ever-widening performance and technology gap compared to large language models developed at the academic level. In other words, the open-source large language models developed so far not only fail to match the performance of closed models such as ChatGPT/GPT-4V and Gemini-Pro but trail them by a significant margin.

 

To improve the performance of multimodal large language models, existing open-source efforts have either increased model size to enhance learning capacity or expanded the quality and coverage of visual instruction tuning datasets spanning various vision-language tasks. However, these methods require vast computational resources or are labor-intensive, highlighting the need for new, efficient ways to enhance the performance of multimodal large language models.

 

Professor YongMan Ro’s research team announced the development of two technologies that significantly enhance the visual performance of multimodal large language models without substantially increasing model size or requiring the creation of high-quality visual instruction tuning datasets.

 

With their first technology, CoLLaVO, the research team verified that the primary reason existing open-source multimodal large language models perform significantly worse than closed models is their markedly lower capability in object-level image understanding. Furthermore, they showed that a model’s object-level image understanding ability has a decisive correlation with its ability to handle vision-language tasks.

 


[Figure – Crayon Prompt Training Methodology]
 

To efficiently enhance this capability and improve performance on vision-language tasks, the team introduced a new visual prompt called the Crayon Prompt. This method leverages a computer vision model known as panoptic segmentation to partition image information into background and object units; each segmented piece of information is then fed directly into the multimodal large language model as input.

 

Additionally, to ensure that the information learned through the Crayon Prompt is not lost during the visual instruction tuning phase, the team proposed a training strategy called Dual QLoRA.

This strategy trains object-level image understanding and vision-language task processing with different parameters, preventing the loss of information between them.

Consequently, the CoLLaVO multimodal large language model exhibits a superior ability to distinguish between background and objects within images, significantly enhancing its fundamental visual discrimination ability.

 


[Figure – CoLLaVO Multimodal LLM Performance Evaluation]
 
 
Following CoLLaVO, Professor YongMan Ro’s research team developed and released their second large language model, MoAI. This model is inspired by the cognitive elements humans use to interpret scenes, such as understanding the presence, state, and interactions of objects, as well as background comprehension and text interpretation.

The team pointed out that existing multimodal large language models use vision encoders that are semantically aligned with text, leading to a lack of detailed, comprehensive real-world scene understanding at the pixel level.

 
To incorporate these cognitive science elements into a multimodal large language model, MoAI employs four computer vision models: panoptic segmentation, open-world object detection (which has no limits on detectable objects), scene graph generation, and optical character recognition (OCR).
 
The results from these four computer vision models are then translated into human-understandable language and directly used as input for the multimodal large language model.
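As a rough illustration of this step, the outputs of the four vision models might be rendered into plain language for the LLM as follows. This is a hypothetical sketch, with field names and phrasing of our own invention, not MoAI’s actual interface:

```python
def verbalize(cv_results):
    """Render computer-vision outputs as plain-language prompt text
    (hypothetical field names; not MoAI's actual interface)."""
    parts = []
    if cv_results.get("segments"):        # panoptic segmentation
        parts.append("Panoptic regions: " + ", ".join(cv_results["segments"]) + ".")
    if cv_results.get("objects"):         # open-world object detection
        parts.append("Objects detected: " + ", ".join(cv_results["objects"]) + ".")
    if cv_results.get("relations"):       # scene graph generation
        parts.append("Scene graph: " +
                     "; ".join(f"{a} {r} {b}" for a, r, b in cv_results["relations"]) + ".")
    if cv_results.get("ocr"):             # optical character recognition
        parts.append("Text in image: " + " ".join(cv_results["ocr"]) + ".")
    return " ".join(parts)

prompt_context = verbalize({
    "objects": ["cat", "sofa"],
    "relations": [("cat", "sitting on", "sofa")],
    "ocr": ["SALE"],
})
print(prompt_context)
# → Objects detected: cat, sofa. Scene graph: cat sitting on sofa. Text in image: SALE.
```

The resulting text is what gets prepended to the user’s question as auxiliary input for the multimodal model.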

By combining CoLLaVO’s simple and efficient Crayon Prompt + Dual QLoRA approach with MoAI’s array of computer vision models, the research team verified that their models outperform closed commercial models such as OpenAI’s ChatGPT/GPT-4V and Google’s Gemini-Pro.

 
 
[Figure – MoAI Multimodal LLM Performance Evaluation]
 
 
The two consecutive multimodal large language models, CoLLaVO and MoAI, were developed with ByungKwan Lee (Ph.D. student) as the first author; Beomchan Park (integrated master’s–Ph.D. student) and Chae Won Kim (Ph.D. student) contributed as co-authors.
The open-source large language model CoLLaVO was accepted on May 16, 2024, to the prestigious international conference in natural language processing (NLP), ‘Findings of the Association for Computational Linguistics (ACL Findings) 2024’. MoAI is currently under review at the top international conference in computer vision, the ‘European Conference on Computer Vision (ECCV) 2024’.

Professor YongMan Ro stated, “The open-source multimodal large language models developed by our research team, CoLLaVO and MoAI, have been featured on Hugging Face Daily Papers and are being recognized by researchers worldwide through various social media platforms. Since all the models have been released as open source, these research models will contribute to the advancement of multimodal large language models.”

This research was conducted at the Future Defense Artificial Intelligence Specialization Research Center and the School of Electrical Engineering of Korea Advanced Institute of Science and Technology (KAIST).

 

[1] CoLLaVO Demo GIF Video Clip https://github.com/ByungKwanLee/CoLLaVO

 


< CoLLaVO Demo GIF >

 

[2] MoAI Demo GIF Video Clip https://github.com/ByungKwanLee/MoAI


< MoAI Demo GIF >

Professor Song Min Kim’s Research Team Wins Best Paper Award at ACM MobiSys 2024, an International Mobile Computing Conference

 


<(Left) Paper Award Certificate, (Right) From the second left: Professor Song Min Kim, Ph.D. candidate Kang Min Bae, and Ph.D. candidate Hankyeol Moon (Co-first Authors)>
 

A research team led by Professor Song Min Kim from the Department of EE won the Best Paper Award at ACM MobiSys 2024, the top international conference in the field of mobile computing.

This achievement follows their previous Best Paper Award at ACM MobiSys 2022 and makes the two Ph.D. students the first in the world to win multiple Best Paper Awards as first authors at the three major conferences in mobile/wireless networking (MobiSys, MobiCom, SenSys).

 

Ph.D. candidates Kang Min Bae and Hankyeol Moon from the Department of Electrical and Electronic Engineering participated as co-first authors in Professor Kim’s research team.

They developed a technology that uses millimeter-wave backscatter to locate targets obscured by obstacles with sub-1 cm precision, earning them the Best Paper Award.

 

This research is expected to revolutionize the stability and accuracy of indoor positioning technology, leading to widespread adoption of location-based services in smart factories and augmented reality (AR), among other applications.

-Paper: https://doi.org/10.1145/3643832.3661857

 

Dr. Donggyun Lee in Prof. Seunghyup Yoo’s group, together with Dong-A Univ. and ETRI, develops a stretchable display that maintains its resolution when stretched

 


<(From left) Professor Seunghyup Yoo, Dr. Donggyun Lee, Professor Hanul Moon of Dong-A Univ.>
 
A research team led by Professor Seunghyup Yoo from our School has successfully developed a stretchable organic light-emitting diode (OLED) display in collaboration with Professor Hanul Moon (a KAIST EE alumnus) of Dong-A University and the Hyper-realistic Device Research Division of the Electronics and Telecommunications Research Institute (ETRI). The developed display boasts one of the highest luminous area ratios reported and, moreover, maintains its resolution well even when stretched.
 
The joint research team developed an ultrathin OLED with exceptional flexibility and embedded part of its luminous area between two adjacent, isolated rigid “islands”. This concealed luminous area gradually reveals itself when stretched, compensating for any reduction in the luminous area ratio. Conventional stretchable displays typically secure performance by using fixed, rigid luminous parts while achieving stretchability through serpentine interconnectors. However, the space dedicated to these non-luminous serpentine interconnectors reduces the overall luminous area ratio, which decreases even further when the display is stretched and the interconnectors expand.
 
The proposed structure achieved an unprecedented luminous area ratio close to 100% before stretching and only exhibited a 10% reduction after 30% stretching. This is in stark contrast to existing platforms, which experience a 60% reduction in luminous area ratio under similar conditions. Additionally, the new platform demonstrated mechanical stability, operating reliably under repeated stretch-and-release cycles.
 
The research team demonstrated the applicability of this technology to wearable and free-form light sources that operate stably on curved surfaces such as spheres, cylinders, and human body parts, accommodating expansions like balloon inflation and joint movements. They also demonstrated the potential for stretchable displays that compensate for resolution loss during stretching by independently driving the hidden luminous areas.
 
The study, with Dr. Donggyun Lee (currently a research fellow at Seoul National University) as the first author, was published in the June 5, 2024 issue of Nature Communications (Title: Stretchable OLEDs based on a hidden active area for high fill factor and resolution compensation, DOI: 10.1038/s41467-024-48396-w) and was also featured in an online news article by IEEE Spectrum as well as several domestic newspapers.
 
This research was supported by the Engineering Research Center Program (Attachable Phototherapeutics Center for e-Healthcare) backed by the National Research Foundation of Korea and the Research Support Program of ETRI (Developing Independent and Challenging Technologies in ICT Materials, Parts, and Equipment).
 
 
 
*News Link :   KAIST·ETRI·Dong-A Univ. develop a display that maintains ‘high resolution’ even when stretched – ETNews (etnews.com)
                   [NewTech] Stretchable display that keeps its image quality when stretched – ChosunBiz (chosun.com)
                   Stretchy OLED Display With Superior Resolution – IEEE Spectrum
 
 

Professor Sung-Ju Lee Laboratory, “Healthy diet in digital buffet” receives ACM CHI Best Paper Honorable Mention Award for preventing negative effects of Mukbang and cooking shows on patients with eating disorders
 
<(From left) Professor Sung-Ju Lee, Ph.D. candidate Ryuhaerang Choi, M.S. candidate Subin Park, Ph.D. candidate Sujin Han>
 
Professor Sung-Ju Lee’s research team presented their paper, “FoodCensor: Promoting Mindful Digital Food Content Consumption for People with Eating Disorders,” at CHI, the premier international conference in Human-Computer Interaction. The paper introduces a real-time intervention system designed to prevent the detrimental effects of digital food content consumption on individuals with eating disorders. The work received the Best Paper Honorable Mention Award at the conference.
 
*Research Demo Video: https://drive.google.com/file/d/103OG9qHpjbfIMhB4tP4I4ESyPlP1pAAD/view
According to recent studies, various kinds of food-related content have been found to be addictive: visually appealing presentations, immersive experiences, and auditory stimuli contribute to cravings and reinforce unhealthy eating habits. While for most people eating is a natural act, individuals with eating disorders struggle daily against the allure of unhealthy eating habits. Particularly sensitive and vulnerable to addictive food-related content, these individuals may see their symptoms worsen because of such content.
 
In response to these concerns, Professor Sung-Ju Lee and his research team developed FoodCensor, a system that mitigates the detrimental impacts of digital food content on YouTube for people with eating disorders, on both mobile devices and personal computers. Drawing on Dual Systems Theory from psychology, the system is designed to sever the potential link between digital food content and eating disorder behaviors. The theory posits two decision-making systems: System 1, which operates quickly and automatically, and System 2, which engages in slower, more deliberate judgments.
 
 
 
<Figure 1. Example of the system’s real-time food content censorship and intervention in the YouTube mobile application>
 
 
<Figure 2. The system ① reduces the influence of stimuli by screening digital food content, ② encourages users to transition from system 1 automatic responses to system 2 conscious evaluations by revealing screened content through immediate questioning when users desire to view it, and ③ promotes conscious and healthy content consumption by providing negative impacts of eating disorder behaviors along with questions to increase the expected value of control>
 
Based on this theory, the system aims to enable users to make more conscious evaluations and decisions when consuming food content on social media. Visual and auditory stimuli associated with digital food content may trigger automatic responses (System 1; e.g., reflexively watching content). However, the system blocks these automatic responses by hiding food content in real-time and muting it, activating System 2 by providing users with reflective prompts to encourage conscious content selection and consumption.
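The intervention flow described above can be sketched as a simple filtering loop. This is a hypothetical illustration of the dual-systems idea only, not the FoodCensor team’s actual implementation; `is_food_content` stands in for the real-time classifier and `wants_to_view` for the reflective prompt shown to the user:

```python
def is_food_content(item):
    """Hypothetical stand-in for FoodCensor's real-time content classifier."""
    return "food" in item["tags"]

def intervene(feed, wants_to_view):
    """Hide and mute food content (blocking the System 1 reflex); reveal an
    item only after the user consciously opts in via a prompt (System 2)."""
    shown = []
    for item in feed:
        if is_food_content(item):
            item = {**item, "hidden": True, "muted": True}
            if wants_to_view(item):           # reflective prompt answered "yes"
                item = {**item, "hidden": False, "muted": False}
            else:
                continue                      # user consciously skips the item
        shown.append(item)
    return shown

feed = [
    {"title": "travel vlog", "tags": ["travel"]},
    {"title": "mukbang", "tags": ["food"]},
]
print([i["title"] for i in intervene(feed, wants_to_view=lambda i: False)])
# → ['travel vlog']
```

The key design point the sketch preserves is that screened content is never shown reflexively: it reaches the user only through an explicit, conscious choice.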
 
The research team conducted a three-week user study involving 22 participants with eating disorders to evaluate the system. The experimental group showed a significant reduction in exposure to food content on YouTube, affecting the platform’s content recommendation algorithm. Experimental group participants acknowledged the system’s role in inhibiting automatic reactions and promoting System 2 control. User feedback indicated that the system alleviated food-related obsessions in daily life and improved overall quality of life.
 
Building on these findings, the research team proposed adaptive intervention design directions to support healthy digital content consumption and user-centric content management methods that promote intentional behavior changes beyond content censorship.
 
Lead author Ryuhaerang Choi (Ph.D. candidate) and co-authors Subin Park (M.S. candidate), Sujin Han (Ph.D. candidate), and Professor Sung-Ju Lee participated in this study. The research was presented at the ACM Conference on Human Factors in Computing Systems (CHI) in Hawaii in May (Paper Title: FoodCensor: Promoting Mindful Digital Food Content Consumption for People with Eating Disorders) and won the Best Paper Honorable Mention Award.
 
This technology could be applied to content beyond food, such as violent and explicit content, and thus could be widely deployed.
 
This work was supported in part by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2022-0-00064, Development of Human Digital Twin Technologies for Prediction and Management of Emotion Workers’ Mental Health Risks).
 
 
 

Doctoral students Seonjeong Lee and Dongho Choi from Professor Seunghyup Yoo’s research lab have won the Best Presentation Paper Award and the Excellent Presentation Paper Award, respectively



<(From left) Professor Seunghyup Yoo, Ph.D. candidate Seonjeong Lee, Ph.D. candidate Dongho Choi>
 

Our department’s doctoral students Seonjeong Lee and Dongho Choi from Professor Seunghyup Yoo’s research lab have won the Best Presentation Paper Award and the Excellent Presentation Paper Award, respectively, at the 2024 Spring Conference of the Korean Sensors Society.

The Spring Conference of the Korean Sensors Society is held annually in the spring, and this year’s conference took place from April 29 to 30 at the Daejeon Convention Center (DCC).

Doctoral students Dongho Choi and Seonjeong Lee presented papers titled “Vertically stacked organic pulse oximetry sensors with low power consumption and high signal fidelity” and “Micro-scale Pressure Sensor Based on the Gradual Electric Double Layer Modulation Mechanism,” respectively.

The details are as follows:

 

  • Conference: 2024 Spring Conference of the Korean Sensors Society
  • Date: April 29-30, 2024

  • Award title: Best Presentation Paper Award
  • Authors: Sun-jeong Lee, Sang-hoon Park, Hae-chang Lee, Han-eol Moon, Seung-hyup Yoo (advisor)
  • Paper: Micro-scale Pressure Sensor Based on the Gradual Electric Double Layer Modulation Mechanism

  • Award title: Excellent Presentation Paper Award
  • Authors: Dong-ho Choi, Chan-hwi Kang, Seung-hyup Yoo (advisor)
  • Paper: Vertically stacked organic pulse oximetry sensors with low power consumption and high signal fidelity

 

B.S. Candidate Do A Kwon (Prof. Jae-Woong Jeong) wins Outstanding Poster Award at the 2024 Spring Conference of The Korean Sensors Society & Sensor Expo Korea-Forum

 


<B.S. Candidate Do A Kwon>
 

B.S. student Do A Kwon (Advised by Jae-Woong Jeong) won the Outstanding Poster Award at the 2024 Spring Conference of The Korean Sensors Society & Sensor Expo Korea-Forum. 

 

The Conference of the Korean Sensors Society is held biannually, in spring and fall. This spring, it was held at the Daejeon Convention Center (DCC) from April 29 to 30.

Do A Kwon, an undergraduate student, presented a paper titled “Body-temperature softening electronic ink for additive manufacturing of transformative bioelectronics via direct writing” and was selected as a winner in recognition of its excellence.

 

The paper introduces body-temperature softening electronic ink that can be patterned in high resolution.

It is expected to open unprecedented possibilities in personalized medical devices, wearable electronics, printed circuit boards, soft robots, and more, pushing past the existing limitations of electronic devices with fixed form factors.

 

  • Conference: 2024 Spring Conference of The Korean Sensors Society
  • Date: April 29-30, 2024
  • Award: Outstanding Poster Award
  • Authors: Do A Kwon, Simok Lee, Jae-Woong Jeong (Advisory Professor)
  • Paper Title: Body-temperature softening electronic ink for additive manufacturing of transformative bioelectronics via direct writing

 


<(from left) Professor Jae-Woong Jeong, Do A Kwon>

 

EE Professor Joung-Ho Kim Establishes NAVER-Intel-KAIST AI Joint Research Center(NIK AI Research Center) for the Development of Next-Generation AI Semiconductor Eco-System


<MOU Signing Ceremony of the Joint Research Center>
 
As generative AI, sparked by ChatGPT, sweeps the globe, Professor Joung-Ho Kim (KAIST) is joining forces with Naver and Intel, consolidating their capabilities and strengths in the new “NAVER·Intel·KAIST AI Joint Research Center (NIK AI Research Center)” to establish an ecosystem for new AI semiconductors.
 
Industry professionals view the strategic partnership between these three institutions as a proactive challenge to establish a new AI semiconductor ecosystem and secure market and technological leadership. They aim to integrate their individual hardware and software technologies and infrastructures in AI, including the development of open-source software necessary for the operation of AI semiconductors, AI servers, and data centers.
In particular, it is noteworthy that Intel, a global semiconductor company known for advanced CPU design and foundry capabilities, is establishing and supporting a joint research center at a domestic university—KAIST—for the first time. This initiative aims to develop open-source software and other necessary tools to optimally operate Intel’s AI semiconductor, “GAUDI”, marking a significant step beyond traditional central processing units (CPUs).
KAIST announced on the 30th that it has signed a Memorandum of Understanding (MOU) to establish and operate the “NAVER·Intel·KAIST AI Joint Research Center (NIK AI Research Center)” at its main campus in Daejeon. This collaboration with Naver Cloud, led by CEO Yu-won Kim, focuses on developing advanced open-source software aimed at enhancing the performance and optimizing the operation of AI semiconductors, AI servers, clouds, and data centers.
A KAIST representative emphasized the significance of Intel’s decision, stating, “It is of great strategic importance that Intel has chosen Naver and KAIST as partners for the development of open-source software in the fields of AI and semiconductors.”
 
The representative further detailed, “The combination of Naver Cloud’s excellence in computing, databases, and various AI services based on the NAVER Cloud Platform, Intel’s next-generation AI chip technology, and KAIST’s world-class expertise and software research capabilities, is expected to successfully create a distinctively creative and innovative ecosystem in the AI semiconductor sector.”
At the MOU signing ceremony, key KAIST officials including President Kwang-Hyung Lee, Provost and Executive Vice President Gyun-Min Lee, Senior Vice President for Research Sang-yup Lee, and Professor Joung-Ho Kim from the Department of Electrical Engineering were present. From Naver Cloud, key executives such as CEO Yu-won Kim, Head of AI Innovation Jung-Woo Ha, and Executive Officer Dong-soo Lee, responsible for Hyperscale AI, also attended.
 
Following the MOU signing, KAIST and Naver Cloud plan to establish the “NAVER·Intel·KAIST AI Joint Research Center (NIK AI Research Center)” at KAIST within the first half of the year. They are scheduled to commence full-scale research activities starting in July.
At KAIST, Professor Joung-Ho Kim of the Department of Electrical Engineering, recognized globally as a leading scholar in AI semiconductor design and AI application design (AI-X), will co-lead the NIK AI Research Center. From Naver Cloud, Executive Officer Dong-soo Lee, an expert in AI semiconductor design and AI software, will serve as the other co-director of the center. Additionally, Professor Min-hyuk Sung from the KAIST Department of Computer Science and Naver Cloud’s Leader Se-jung Kwon will each serve as deputy directors, collaboratively steering the center’s research initiatives.
 
The operation period of the joint research center is initially set for three years, with the possibility of extension based on research outcomes and the needs of the participating institutions. As a key research center, about 20 faculty members specializing in artificial intelligence and software from KAIST, along with approximately 100 master’s and doctoral students, will participate as researchers, ensuring the center is equipped with substantial expertise and innovation capacity.
During the initial two years, the joint research center will focus on establishing a platform ecosystem specifically for the AI training and inference chip, “GAUDI”, developed by Intel’s Habana Labs. To achieve this, approximately 20 to 30 collaborative industry-academic research projects will be conducted.
 
Research at the joint research center will primarily focus on the development of open-source software in fields such as natural language processing, computer vision, and machine learning. Of the center’s research effort, 50% is devoted to topics chosen autonomously by the researchers, while 30% and 20% are allocated to studies on the miniaturization and the optimization of AI semiconductors, respectively.
To facilitate this research, Naver and Intel will provide the “GAUDI 2”—based on the Naver Cloud Platform—to the KAIST Joint Research Center. In turn, the KAIST research team will utilize “GAUDI 2” for their studies and annually publish their findings and papers related to this work.
 
Additionally, beyond their existing capabilities in artificial intelligence and cloud technologies, Naver and Intel will share various infrastructure facilities and equipment necessary for joint research. They also plan to engage in numerous collaborative activities, including supporting the joint research center with the necessary space and administrative staff and facilitating the exchange of research personnel between the institutions. This comprehensive support is designed to enhance the effectiveness and impact of their cooperative efforts.
Professor Joung-Ho Kim of KAIST highlighted the significant benefits of the joint research center, stating, “KAIST can acquire technical know-how in AI development, semiconductor design, and operational software development through the use of the GAUDI series. In particular, the establishment of this joint research center is highly meaningful as it allows us to gain experience in operating large-scale AI data centers and to secure the AI computing infrastructure needed for future research and development.”
Director Dong-soo Lee from Naver Cloud expressed his aspirations for the collaboration: “Naver Cloud looks forward to leading various research initiatives with KAIST and expanding the AI ecosystem centered around HyperCLOVA X. Through the joint research center, we hope to invigorate AI research in the country and enhance the diversity of the AI chip ecosystem.” 
 
[ Terminologies ]
* Generative AI
: Artificial intelligence technology that uses deep learning models to learn from large datasets. It can actively generate outputs such as text, images, and videos based on user requests.
** GAUDI
: A general-purpose AI accelerator for data centers, developed by Habana Labs, an Israeli AI chip company acquired by Intel in 2019.
*** High Bandwidth Memory (HBM)
: A high-performance DRAM technology in which multiple DRAM dies are stacked and interconnected using Through Silicon Vias (TSVs) to significantly enhance data transfer speeds. It is primarily used in conjunction with GPUs to accelerate AI training and inference. Characteristically, HBM is designed to maximize memory bandwidth, making it especially suitable for high-speed parallel processing, and it is a critical semiconductor in the AI computers installed in mega-scale generative AI data centers. The technology has evolved through several generations: HBM, HBM2, HBM2E, HBM3, and the current HBM3E, and companies such as Samsung Electronics and SK Hynix are now developing HBM4. HBM is used in GPU modules from NVIDIA, Intel, and AMD.