Professor Minsoo Rhu has been inducted into the Hall of Fame of the IEEE/ACM International Symposium on Computer Architecture (ISCA) 2024
Professor Kim Lee-Sup Lab’s Master’s Graduate Park Jun-Young Wins Best Paper Award at the International Design Automation Conference
<(From left to right) Professor Kim Lee-Sup, Master’s Graduate Park Jun-Young, Ph.D. Graduate Kang Myeong-Goo, Master’s Graduate Kim Yang-Gon, Ph.D. Graduate Shin Jae-Kang, Ph.D. Candidate Han Yunki>
Master’s graduate Park Jun-Young from Professor Kim Lee-Sup’s lab in our department won the Best Paper Award at the Design Automation Conference (DAC), held in San Francisco, USA, from June 23 to 27. Established in 1964 and now in its 61st year, DAC is an international academic conference covering semiconductor design automation, AI algorithms, and chip design. It is regarded as the most prestigious venue in the field, accepting only about 20 percent of submitted papers for presentation.
The awarded research, based on Park Jun-Young’s master’s thesis, proposes an algorithmic approximation technique and a hardware architecture that reduce the memory traffic of KV caching, a key bottleneck in large language model (LLM) inference. The Best Paper Award selection committee recognized the excellence of this work and chose it as the final Best Paper Award winner from among the four candidate papers (out of 337 presented and 1,545 submitted papers).
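For context on why KV-cache memory traffic is such a bottleneck, a back-of-the-envelope estimate shows how quickly the cache grows with sequence length and batch size. The model configuration below is an illustrative assumption, not taken from the awarded paper:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch, dtype_bytes=2):
    """Size of a transformer KV cache; the factor of 2 counts keys and values."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * dtype_bytes

# Hypothetical 7B-class configuration (assumed for illustration):
size = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128,
                      seq_len=4096, batch=8, dtype_bytes=2)
print(f"{size / 2**30:.1f} GiB")  # prints "16.0 GiB"
```

Because the entire cache must be read from memory at every decoding step, even modest batch sizes turn KV caching into the dominant memory transfer, which is what approximation techniques like the awarded one aim to cut down.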
Professor Shinhyun Choi’s Research Team Solves the Reliability Issues of Next-Generation Neuromorphic Computing
Neuromorphic computing, which implements AI computation in hardware by mimicking the human brain, has recently garnered significant attention. Memristors (conductance-changing devices), used as unit elements in neuromorphic computing, boast advantages such as low power consumption, high integration, and efficiency.
However, issues with irregular device characteristics have posed reliability problems for large-scale neuromorphic computing systems.
Our research team has developed a technology to enhance reliability, potentially accelerating the commercialization of neuromorphic computing.
On June 21, Professor Shinhyun Choi’s research team announced a collaborative study with researchers at Hanyang University. The study developed a doping method using aliovalent ions* to improve the reliability and performance of next-generation memory devices.
*Aliovalent ion: An ion with a different valence (a measure of its ability to bond) compared to the original atom.
Through experiments and atomic-level simulations, the joint research team confirmed that doping with aliovalent ions addresses the primary issue of irregular changes in device characteristics, enhancing both the uniformity and the performance of next-generation memory devices.
Figure 1. Results of aliovalent ion doping developed in this study, demonstrating the improvement effects and the material principles underpinning them
The team reported that the appropriate injection of aliovalent halide ions into the oxide layer could solve the irregular device reliability problem, thereby improving device performance. This method was experimentally confirmed to enhance the uniformity, speed, and performance of device operation.
Furthermore, atomic-level simulation analysis showed that the performance improvement effect of the device was consistent with the experimental results observed in both crystalline and amorphous environments. The study revealed that doped aliovalent ions attract nearby oxygen vacancies, enabling stable device operation, and expand the space near the ions, allowing faster device operation.
Professor Shinhyun Choi stated, “The aliovalent ion doping method we developed significantly enhances the reliability and performance of neuromorphic devices. It can contribute to the commercialization of next-generation memristor-based neuromorphic computing, and the principles we uncovered can be applied to various other semiconductor devices.”
This research, with Master’s student Jongmin Bae and Postdoctoral researcher Choa Kwon from Hanyang University as co-first authors, was published in the June issue of the international journal ‘Science Advances’ (Paper title: Tunable ion energy barrier modulation through aliovalent halide doping for reliable and dynamic memristive neuromorphic systems).
The study was supported by the National Research Foundation of Korea’s Advanced Device Source Proprietary Technology Development Program, the Advanced Materials PIM Device Program, the Young Researcher Program, the Nano Convergence Technology Institute Semiconductor Process-based Nano-Medical Device Development Project, and the Innovation Support Program of the National Supercomputing Center.
Professor YongMan Ro’s research team develops a multimodal large language model that surpasses the performance of GPT-4V
On June 20, 2024, Professor YongMan Ro’s research team announced that they have developed and released an open-source multimodal large language model that surpasses the visual performance of closed commercial models like OpenAI’s ChatGPT/GPT-4V and Google’s Gemini-Pro. A multimodal large language model refers to a massive language model capable of processing not only text but also image data types.
The recent advancement of large language models (LLMs) and the emergence of visual instruction tuning have drawn significant attention to multimodal large language models. Backed by the abundant computing resources of large overseas corporations, however, models are being built at enormous scale, with parameter counts approaching the number of neurons in the human brain.
These models are all developed in private, leading to an ever-widening performance and technology gap compared to large language models developed at the academic level. In other words, the open-source large language models developed so far have not only failed to match the performance of closed large language models like ChatGPT/GPT-4V and Gemini-Pro, but also show a significant performance gap.
To improve the performance of multimodal large language models, existing open-source models have either increased model size to boost learning capacity or scaled up the quality of visual instruction tuning datasets covering various vision-language tasks. Both approaches, however, demand vast computational resources or intensive manual labor, highlighting the need for new, efficient methods to enhance the performance of multimodal large language models.
Professor YongMan Ro’s research team has announced the development of two technologies that significantly enhance the visual performance of multimodal large language models without significantly increasing the model size or creating high-quality visual instruction tuning datasets.
With the first technology, CoLLaVO, the research team verified that the primary reason existing open-source multimodal large language models lag far behind closed models is their markedly weaker object-level image understanding. They further revealed that this object-level image understanding ability is strongly correlated with the model’s ability to handle visual-language tasks.
To efficiently enhance this capability and improve performance on visual-language tasks, the team introduced a new visual prompt called Crayon Prompt. This method leverages a computer vision model known as panoptic segmentation to segment image information into background and object units. Each segmented piece of information is then directly fed into the multimodal large language model as input.
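The mechanism can be pictured as looking up a learned embedding for each panoptic segment and handing the resulting per-pixel map to the language model. A minimal sketch follows; the structure and names are assumptions for illustration, not the authors’ implementation:

```python
import numpy as np

def crayon_prompt(seg_map, segment_emb):
    """Map each pixel's panoptic segment id (0 = background) to a learned
    embedding, producing an (H, W, D) semantic prompt for the model."""
    return segment_emb[seg_map]

seg = np.array([[0, 1],
                [1, 2]])                        # toy 2x2 panoptic map, 3 segments
emb = np.arange(12, dtype=float).reshape(3, 4)  # one 4-dim embedding per segment
print(crayon_prompt(seg, emb).shape)            # prints "(2, 2, 4)"
```

The point of the lookup is that background and object regions arrive at the model already separated, rather than being entangled in raw pixel features.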
Additionally, to ensure that the information learned through the Crayon Prompt is not lost during the visual instruction tuning phase, the team proposed an innovative training strategy called Dual QLoRA.
This strategy trains object-level image understanding ability and visual-language task processing capability with different parameters, preventing the loss of information between them.
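The core idea, two independent low-rank adapters on a frozen base weight so that training one skill does not overwrite the other, can be sketched as follows. This is a simplified illustration with assumed names; the actual Dual QLoRA additionally quantizes the base model:

```python
import numpy as np

class DualLoRALinear:
    """Frozen base weight with two independent low-rank adapters
    (illustrative sketch, not the authors' code): adapter 1 for
    object-level image understanding, adapter 2 for visual-language tasks."""

    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))        # frozen base weight
        # LoRA convention: A starts at zero, so adapters initially add nothing.
        self.A1 = np.zeros((rank, d_in))
        self.B1 = rng.standard_normal((d_out, rank)) * 0.01
        self.A2 = np.zeros((rank, d_in))
        self.B2 = rng.standard_normal((d_out, rank)) * 0.01

    def forward(self, x, adapter):
        # Only the selected adapter contributes; the other's parameters
        # receive no gradient for this batch, so its knowledge is preserved.
        y = self.W @ x
        if adapter == 1:
            y = y + self.B1 @ (self.A1 @ x)
        elif adapter == 2:
            y = y + self.B2 @ (self.A2 @ x)
        return y
```

Routing each training batch through only one adapter is what keeps the two capabilities from interfering with each other.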
Consequently, the CoLLaVO multimodal large language model shows a superior ability to distinguish between background and objects within images, significantly enhancing its visual discrimination ability.
Their second technology, MoAI, builds on the observation that existing multimodal large language models rely on vision encoders semantically aligned with text, leaving them without the detailed, pixel-level understanding needed to comprehend real-world scenes. By combining CoLLaVO’s simple and efficient Crayon Prompt + Dual QLoRA approach with MoAI’s array of auxiliary computer vision models, the research team verified that their models outperformed closed commercial models like OpenAI’s ChatGPT/GPT-4V and Google’s Gemini-Pro.
Accordingly, Professor YongMan Ro stated, “Our research team’s open-source multimodal large language models, CoLLaVO and MoAI, have been featured on Hugging Face Daily Papers and are gaining recognition from researchers worldwide through various social media platforms. Since all the models have been released as open source, they will contribute to the advancement of multimodal large language models.”
This research was conducted at the Future Defense Artificial Intelligence Specialization Research Center and the School of Electrical Engineering of Korea Advanced Institute of Science and Technology (KAIST).
[1] CoLLaVO Demo GIF Video Clip https://github.com/ByungKwanLee/CoLLaVO
< CoLLaVO Demo GIF >
[2] MoAI Demo GIF Video Clip https://github.com/ByungKwanLee/MoAI
< MoAI Demo GIF >
Professor Song Min Kim’s Research Team Wins Best Paper Award at ACM MobiSys 2024, an International Mobile Computing Conference
A research team led by Professor Song Min Kim from the Department of EE won the Best Paper Award at ACM MobiSys 2024, the top international conference in the field of mobile computing.
This achievement follows the team’s previous Best Paper Award at ACM MobiSys 2022, and is all the more significant because the two Ph.D. students became the first in the world to win multiple Best Paper Awards as first authors at the three major conferences in mobile/wireless networking (MobiSys, MobiCom, SenSys).
Ph.D. candidates Kang Min Bae and Hankyeol Moon from the Department of Electrical and Electronic Engineering participated as co-first authors in Professor Kim’s research team.
They earned the Best Paper Award for developing a millimeter-wave backscatter technology that locates targets obscured by obstacles with a precision of under 1 cm.
This research is expected to revolutionize the stability and accuracy of indoor positioning technology, leading to widespread adoption of location-based services in smart factories and augmented reality (AR), among other applications.
-Paper: https://doi.org/10.1145/3643832.3661857
Doctoral students Seonjeong Lee and Dongho Choi from Professor Seunghyup Yoo’s research lab have won the Best Presentation Paper Award and the Excellent Presentation Paper Award, respectively
Our department’s doctoral students Seonjeong Lee and Dongho Choi from Professor Seunghyup Yoo’s research lab have won the Best Presentation Paper Award and the Excellent Presentation Paper Award, respectively, at the 2024 Spring Conference of the Korean Sensors Society.
The Spring Conference of the Korean Sensors Society is held annually in the spring, and this year’s conference took place from April 29 to 30 at the Daejeon Convention Center (DCC).
Doctoral students Lee Seonjeong and Choi Dongho presented papers titled “Micro-scale Pressure Sensor Based on the Gradual Electric Double Layer Modulation Mechanism” and “Vertically stacked organic pulse oximetry sensors with low power consumption and high signal fidelity,” respectively.
The details are as follows:
○ Conference: 2024 Spring Conference of the Korean Sensors Society
○ Date: April 29-30, 2024
○ Award title: Best Presentation Paper Award
○ Authors: Sun-jeong Lee, Sang-hoon Park, Hae-chang Lee, Han-eol Moon, Seung-hyup Yoo (advisor)
○ Paper: Micro-scale Pressure Sensor Based on the Gradual Electric Double Layer Modulation Mechanism
○ Award title: Excellent Presentation Paper Award
○ Authors: Dong-ho Choi, Chan-hwi Kang, Seung-hyup Yoo (advisor)
○ Paper: Vertically stacked organic pulse oximetry sensors with low power consumption and high signal fidelity
B.S. Candidate Do A Kwon (Prof. Jae-Woong Jeong) wins Outstanding Poster Award at the 2024 Spring Conference of The Korean Sensors Society & Sensor Expo Korea-Forum
B.S. student Do A Kwon (Advised by Jae-Woong Jeong) won the Outstanding Poster Award at the 2024 Spring Conference of The Korean Sensors Society & Sensor Expo Korea-Forum.
The Conference of the Korean Sensors Society is held biannually, in spring and fall. This spring, it took place at the Daejeon Convention Center (DCC) from April 29 to 30.
Do A Kwon, an undergraduate student, presented a paper titled “Body-temperature softening electronic ink for additive manufacturing of transformative bioelectronics via direct writing” and was selected as a winner in recognition of its excellence.
The paper introduces body-temperature softening electronic ink that can be patterned in high resolution.
It is expected to open unprecedented possibilities in personalized medical devices, wearable electronics, printed circuit boards, soft robots, and more, pushing the existing limitations in electronic devices with fixed form factors.
○ Conference: 2024 Spring Conference of The Korean Sensors Society
○ Date: April 29-30, 2024
○ Award: Outstanding Poster Award
○ Authors: Do A Kwon, Simok Lee, Jae-Woong Jeong (advisor)
○ Paper Title: Body-temperature softening electronic ink for additive manufacturing of transformative bioelectronics via direct writing
<(from left) Professor Jae-Woong Jeong, Do A Kwon>
EE Professor Joung-Ho Kim Establishes NAVER-Intel-KAIST AI Joint Research Center (NIK AI Research Center) for the Development of a Next-Generation AI Semiconductor Ecosystem