Professor Yoon Young-Gyu’s research team develops AI image analysis technology “SUPPORT” which enables high-precision measurement of biological fluorescence signals


 


<(From left) Professor Young-Gyu Yoon from the School of Electrical Engineering, Ph.D. student Minho Eom, and Ph.D. student Seungjae Han>

 
KAIST (President Kwang-Hyung Lee) announced on the 19th that a research team led by Professor Young-Gyu Yoon from the School of Electrical Engineering has developed an AI imaging analysis technology that can measure biological fluorescence signals with over 10 times the precision of existing technologies.
 
With the recent advancement of genetic engineering technology, it has become possible to convert various biological signals, such as specific ion concentrations or voltages within living biological tissues, into fluorescence signals. Technologies that utilize fluorescence microscopy to capture time-lapse images of biological tissues and rapidly measure these signals have been developed and are in use.
 
However, because the fluorescence signals emitted from biological tissues are weak, measuring rapidly changing signals results in a very low signal-to-noise ratio, making precise measurements difficult. In particular, the accuracy of measurements becomes extremely low when measuring signals that change on a millisecond scale, such as the action potentials of neurons.
 
In response to this technical challenge, Professor Yoon’s research team developed an AI image analysis technology that enables measurements with over 10 times the precision of existing technologies.
 
This technology can autonomously learn the statistical distribution of data from fluorescence microscope images with a low signal-to-noise ratio and improve the signal-to-noise ratio of the images by more than tenfold even without the use of training data.
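
Figure 1 below illustrates the principle: each pixel is predicted from its spatial neighbors and from adjacent frames, never from itself, so independent zero-mean noise cannot simply be copied through. The following PyTorch sketch shows that self-supervised training idea using Noise2Void-style random masking; it is a simplified illustration under these assumptions, not the authors' released SUPPORT implementation (which uses a dedicated blind-spot network).

```python
import torch
import torch.nn as nn

class SpatiotemporalDenoiser(nn.Module):
    """Toy denoiser whose input channels are a short window of adjacent frames."""
    def __init__(self, n_frames=5, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, stack):  # stack: (B, n_frames, H, W) of noisy frames
        return self.net(stack)

def self_supervised_step(model, opt, stack, mask_frac=0.05):
    """Hide random pixels of the center frame and predict them from their
    spatiotemporal context. The noisy pixel itself serves as the regression
    target, which is statistically unbiased for independent zero-mean noise."""
    t_mid = stack.shape[1] // 2
    target = stack[:, t_mid:t_mid + 1]                        # (B, 1, H, W)
    mask = (torch.rand_like(target) < mask_frac).float()
    corrupted = stack.clone()
    corrupted[:, t_mid:t_mid + 1] = target * (1 - mask) + torch.randn_like(target) * mask
    loss = (mask * (model(corrupted) - target) ** 2).sum() / mask.sum().clamp(min=1.0)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage on random data standing in for a low-SNR fluorescence recording:
model = SpatiotemporalDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
stack = torch.randn(4, 5, 64, 64)  # (batch, frames, height, width)
self_supervised_step(model, opt, stack)
```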
 
Utilizing this method, the measurement precision of various biological signals can be significantly enhanced. It is anticipated that this technology will be broadly applicable in the overall field of biological sciences and in the development of treatments for brain disorders.
 
Professor Yoon stated, “We named this technology SUPPORT (Statistically Unbiased Prediction utilizing sPatiOtempoRal information in imaging daTa) in the hope that it will support various neuroscience and biological science research.”
He added, “This is a technology that researchers using various fluorescence imaging devices can easily utilize without the need for separate training data. It has the potential to be broadly applied in uncovering new biological phenomena.”
 
Co-first author Minho Eom stated, “Through SUPPORT, we succeeded in precisely measuring rapid changes in biological signals that were difficult to observe. In particular, it’s now possible to optically measure the action potentials of neurons that change on a millisecond scale, which will be very useful for neuroscience research.” Co-first author Seungjae Han added, “While SUPPORT was developed for precise measurements of biological signals within fluorescence microscopy images, it can also be widely used to enhance the quality of general time-lapse images.”
 
This technology was developed under the supervision of Professor Young-Gyu Yoon’s team from the School of Electrical Engineering at KAIST, in multidisciplinary and multinational collaboration with researchers from the Department of Materials Science and Engineering at KAIST (Professor Jae-Byum Chang), the Graduate School of Medical Science and Engineering at KAIST (Professor Pilhan Kim), Chungnam National University, Seoul National University, Harvard University, Boston University, the Allen Institute, and Westlake University.
 
This research was conducted with the support of the National Research Foundation of Korea and was published online in the international journal “Nature Methods” on September 19th. It was also selected as the cover article for the October issue.
 
1. Fluorescence signal: light (fluorescence) whose brightness changes in proportion to variations in a specific biological signal.
2. Time-lapse: a video made by continuously capturing the subject at regular intervals.
 
 

Figure 1. Concept of SUPPORT technology:

(a) For each pixel in the image, the artificial neural network removes noise without separate training data by utilizing the surrounding pixel information within the current frame and information from adjacent frames.

(b) Impulse response of the designed artificial neural network.

 


Figure 2. Ultra-precise neural cell voltage measurement using SUPPORT:

(Top) In the original fluorescence image, it’s impossible to observe the action potentials of neurons due to the low signal-to-noise ratio.

(Bottom) By enhancing the signal-to-noise ratio using SUPPORT, it is possible to precisely observe the action potentials of each neural cell.

 


Figure 3. Improvement of in vivo ear tissue fluorescence images of mice using SUPPORT:

(Left) In the original fluorescence image, it’s impossible to observe the detailed structure of the tissue due to the low signal-to-noise ratio.

(Right) By enhancing the signal-to-noise ratio using SUPPORT, it is possible to observe the detailed structure and rapidly moving red blood cells.

 

 

 

Figure 4. Improvement of in vivo muscle tissue fluorescence images of mice using SUPPORT:

(Left) In the original fluorescence image, it’s impossible to observe the detailed structure of the tissue due to the low signal-to-noise ratio.

(Right) By enhancing the signal-to-noise ratio using SUPPORT, it is possible to observe the detailed structure of muscle fibers and rapidly moving red blood cells.

 

Prof. Hyun Myung’s research team develops ‘DreamWaQ’ technology that walks up stairs without seeing

Overview diagram of DreamWaQ, the controller developed by the research team

 

– Developed ‘DreamWaQ’, a walking robot control technology based on deep reinforcement learning that can walk in unstructured environments without visual or tactile information.
– Various types of quadrupedal ‘DreamWaQer’ robots can be mass-produced using the ‘DreamWaQ’ technology.
– Expected to be utilized for exploration missions in unstructured environments caused by disasters such as fires.
 
Domestic researchers have developed a quadrupedal robot technology that can go up and down stairs and move without falling in uneven environments, such as over tree roots, without the help of visual or tactile sensors even in smoke-filled disaster situations.
 
A research team led by Professor Hyun Myung of the School of Electrical Engineering (Urban Robotics Laboratory) has developed a walking robot control technology that enables robust ‘blind locomotion’ in various unstructured environments.
 
The team named the technology “DreamWaQ” for its ability to walk blindly, just as a person waking from sleep can walk to the bathroom in the dark with little visual assistance; a robot equipped with this technology is called a “DreamWaQer.”
The technology can be used to create various types of quadrupedal DreamWaQer robots.
 
In addition to the laboratory environment, the DreamWaQer robot demonstrated robust performance in a university campus environment with curbs and speed bumps, and in a field environment with tree roots and gravel, overcoming steps of up to two-thirds of its body height (the distance from the ground to its body) while walking.
The team also found that the robot can walk stably at speeds from as slow as 0.3 m/s to as fast as 1.0 m/s, regardless of the environment.
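
As a rough illustration of how “blind” walking can be learned at all, the sketch below shows a proprioception-only policy in the spirit of the paper title that follows: a context encoder compresses a short history of joint and IMU readings into a latent vector standing in for the unseen terrain, and the actor conditions on it. All layer sizes and dimensions here are illustrative assumptions, not the actual DreamWaQ network, which is trained with deep reinforcement learning in simulation.

```python
import torch
import torch.nn as nn

class BlindLocomotionPolicy(nn.Module):
    """Sketch of proprioception-only control ('implicit terrain imagination'):
    a context encoder summarizes recent joint/IMU readings into a latent that
    stands in for the unseen terrain; the actor conditions on it."""
    def __init__(self, obs_dim=45, history=5, latent_dim=16, act_dim=12):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(obs_dim * history, 128), nn.ELU(),
            nn.Linear(128, latent_dim),
        )
        self.actor = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, act_dim),  # desired joint-position offsets
        )

    def forward(self, obs, obs_history):
        z = self.context_encoder(obs_history.flatten(start_dim=1))
        return self.actor(torch.cat([obs, z], dim=-1))

# Illustrative rollout step; on a real robot the observations come from joint
# encoders and the IMU, and the policy is trained with reinforcement-learning
# rewards for tracking commanded velocities without falling.
policy = BlindLocomotionPolicy()
obs = torch.randn(1, 45)             # current proprioceptive observation
obs_history = torch.randn(1, 5, 45)  # last 5 observations
action = policy(obs, obs_history)    # 12 joint commands for a quadruped
```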
 
The results of the study, which was led by Doctoral Candidate I Made Aswin Nahrendra and co-authored by Doctoral Candidate Byung Ho Yoo, have been accepted and will be presented at the IEEE International Conference on Robotics and Automation (ICRA), the world’s most prestigious conference on robotics, in London, UK, at the end of May. (Paper title: DreamWaQ: Learning Robust Quadrupedal Locomotion With Implicit Terrain Imagination via Deep Reinforcement Learning)
 
Videos of the DreamWaQer, a walking robot equipped with the developed DreamWaQ, can be viewed at the following addresses.
 
 
-Main video: https://youtu.be/JC1_bnTxPiQ
 
-Bonus video: https://youtu.be/mhUUZVbeDA0
 
(From left) Prof. Hyun Myung, Doctoral Candidate I Made Aswin Nahrendra, Doctoral Candidate Byung Ho Yoo, and Doctoral Candidate Min Ho Oh. In the foreground, DreamWaQer, a quadrupedal robot equipped with DreamWaQ technology.
 
 
[Related Press]
etnews: “KAIST walking robot climbs stairs and walks even without seeing ahead” (etnews.com)
Herald Economy: “Climbing high stairs without sight… KAIST develops new robot control technology” (heraldcorp.com)

Professor In So Kweon’s research team wins the Best Student Paper award at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023


[(From left) Uk Cheol Shin, Kwanyong Park, Byeong-Uk Lee, and Professor In So Kweon]

 

WACV is a major academic conference ranked 9th in terms of the Google Scholar h-5 index within the field of computer vision.

The KAIST team’s award-winning work was among this year’s 641 published papers, which were selected out of 1,577 submissions.

Titled “Self-supervised Monocular Depth Estimation from Thermal Images via Adversarial Multi-spectral Adaptation”, their paper deals with the estimation of distance from a single thermal image, one of the most difficult problems in computer vision owing to the low resolution of thermal images and the lack of detailed image data labeled with temperature distributions.

To address this problem, the team proposed a novel deep learning model that combines self-supervised learning with adversarial learning between multispectral images.

Unlike conventional methods, which are limited by constraints such as requiring exact camera settings, the model is able to learn without these constraints by utilizing separate thermal and color images.
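
The “adversarial multi-spectral adaptation” in the title refers to aligning feature distributions across spectra. As a rough, hypothetical illustration of that ingredient (not the paper’s actual architecture), a discriminator can be trained to distinguish thermal-image features from color-image features while the thermal encoder learns to fool it, yielding domain-invariant features usable for depth estimation:

```python
import torch
import torch.nn as nn

# Hypothetical placeholder encoders; any backbones producing same-size features work.
thermal_enc = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
rgb_enc = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))
disc = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(thermal_enc.parameters(), lr=1e-4)

def adaptation_step(thermal, rgb):
    """One round of adversarial feature alignment between unpaired images."""
    f_t, f_r = thermal_enc(thermal), rgb_enc(rgb)
    # 1) the discriminator learns to label RGB features 1 and thermal features 0
    d_loss = (bce(disc(f_r.detach()), torch.ones(len(rgb), 1))
              + bce(disc(f_t.detach()), torch.zeros(len(thermal), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) the thermal encoder is updated to make its features look like RGB features
    g_loss = bce(disc(thermal_enc(thermal)), torch.ones(len(thermal), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

adaptation_step(torch.rand(8, 1, 64, 64), torch.rand(8, 3, 64, 64))
```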

The model was tested in various experimental conditions such as day, night, and poor illumination, and achieved high performance compared to existing methods across all of them.

 

Professor Jung-Woo Choi is selected as the keynote forum speaker for NOVEM 2023


[Prof. Jung-Woo Choi]

 


 

EE Professor Jung-Woo Choi has been selected as the keynote forum speaker for the upcoming 2023 NOVEM (Noise and Vibration: Emerging Methods), to be held in New Zealand.

Established by the Institut National des Sciences Appliquées (INSA) de Lyon, France, NOVEM is an international conference on the latest technologies in the field of noise and vibration.

 

Professor Choi was previously selected as a plenary lecturer at Inter-Noise 2014, the largest conference in the field of noise and vibration, for his research on sound field control.

On this second occasion as a keynote speaker at an international conference, Professor Choi will deliver his speech in the Sound Field Control session of NOVEM 2023 on the theme of sound field control in real and virtual audio spaces.

 

We offer sincere congratulations to Professor Jung-Woo Choi and look forward to his future achievements.

 

[Link for 2023 NOVEM keynote forums]  https://www.novem.ac.nz/keynote-forums/ 

EE Prof. Hyun Myung’s team jointly wins Innovation Award at CES 2023


 

KAIST EE Professor Hyun Myung’s team, in collaboration with Hills Robotics (CEO Myung-gyu Park), a company to which the team transferred its technology, won a CES 2023 Innovation Award in the Robotics category at CES 2023, the world’s largest new technology fair held annually in Las Vegas, USA.

 

Hills Robotics’ Hi-bot is a high-tech self-driving robot based on the simultaneous localization and mapping technology (hereinafter SLAM) developed by Professor Hyun Myung’s team, which uses a low-cost 2D laser scanner.

 

It was awarded the Innovation Award for the following technical distinctions.

 

First, it offers an effective support function for contact-free meetings. Instead of the existing 2D hologram display method, the team used 360-degree omnidirectional stereoscopic hologram technology to implement the world’s smallest metaverse-style docent and contact-free meeting support function.

 

Second, it offers disease prevention and quarantine functions suited to the pandemic era: it uses a contactless touchscreen to block the transmission of contamination through contact and provides a plasma air disinfection function.

 

Lastly, it is a multi-functional mobile platform with SOLOMAN, a built-in AI- and SLAM-based self-driving intelligence platform, and can be used in various environments.

 

In addition, with sterilization, air cleaning, and therapy functions designed for various indoor environments and customer preferences, it can serve as an artificial intelligence-based quarantine, docent, and guide robot suited to the ‘living with COVID-19’ era.

 

It is expected to be used in busy multi-use public places such as museums, hospitals, and airports at home and abroad.

 

In addition to Hi-bot at CES 2023, Hills Robotics (formerly Hills Engineering), to which Professor Hyun Myung’s team transferred technology, has won CES Innovation Awards in past years with Coro-bot at CES 2021 and Hey-bot at CES 2022.

 

Professor Joonhyuk Kang, Head of the School of Electrical Engineering, said, “Professor Hyun Myung’s research team won a prize at the Future Challenge Defense Technology Drone Competition last week, so winning the CES 2023 Innovation Award is even more meaningful. We will actively support the scientific contribution of technology transfer, including the 2023 mobility technology show we plan to hold with KAMA.”

 


Links:

https://news.kaist.ac.kr/news/html/news/?mode=V&mng_no=24950

https://digitalchosun.dizzo.com/site/data/html_dir/2022/11/17/2022111780240.html

https://www.etnews.com/20221117000327

EE Prof. Yoo elected as a TC Member of the IEEE Signal Processing Society

[EE Professor Yoo]
 
EE Professor Yoo has been elected as a Technical Committee (TC) member of the IEEE Signal Processing Society and will contribute to various IEEE Signal Processing Society functions, including conferences, awards, and education.
 
The IEEE Signal Processing Society was founded in 1948 as IEEE’s first society, and it is the world’s premier association for signal processing engineers and industry professionals.
 
Engineers around the world look to the Society for information on the latest developments in the signal processing field.
Its deeply rooted history spans almost 70 years, featuring a membership base of more than 19,000 signal processing engineers, academics, industry professionals, and students who are all part of a dynamic global community spanning 100 countries worldwide.

EE Profs. Yoo, Chang Dong and Kweon, In So’s Team Gives Oral Presentation at ECCV 2022

Title: EE Professors Yoo, Chang Dong and Kweon, In So’s Research Team Gives Oral Presentation on Self-supervised Learning at ECCV 2022

KAIST EE Prof. Yoo, Chang Dong and Kweon, In So’s team conducted joint research and proposed a self-supervised learning method that is remarkably robust and performs well with only a small volume of labeled data.


<(From left) EE Professors Yoo, Chang Dong and Kweon, In So and Researchers Chaoning Zhang and Kang Zhang>

ECCV began in 1990 and has since focused on introducing the latest findings in artificial intelligence and machine learning research on vision and signal processing. It has long been a renowned conference on computer vision and deep learning; its 2022 edition gathered 5,803 submissions, of which only 1,650 (28%) were accepted, and merely a select 158 (2.7%) of the accepted papers were given the opportunity for an oral presentation.

The team’s findings titled “Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness” earned the oral presentation honor and will be presented on Oct. 23, 2022 in Tel Aviv, Israel.

While artificial intelligence is making progress in various domains, it has yet to win full trust from humans. Reliable learning should encompass learning from little data as well as robust learning, and attempts at this objective have combined self-supervised learning with adversarial learning. This work utilizes distillation to efficiently bring the two together and proposes an adversarial learning framework capable of self-supervised learning without labels.
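
As a rough sketch of how distillation can decouple the two objectives (hypothetical code, not the released implementation of the paper), a frozen self-supervised teacher provides label-free targets, and a student encoder is trained to match them even on adversarially perturbed inputs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_against_teacher(student, teacher_feat, x, eps=8/255, step=2/255, iters=3):
    """Craft a perturbation pushing the student's embedding of x away from the
    frozen teacher's embedding of the clean x (a worst-case 'view' of x)."""
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(iters):
        loss = -F.cosine_similarity(student(x_adv), teacher_feat).mean()
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + step * grad.sign()  # ascend the negated similarity
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
        x_adv = x_adv.detach().requires_grad_(True)
    return x_adv.detach()

def distill_step(student, teacher, x, opt):
    """The teacher (a standard self-supervised encoder) stays frozen; the
    student must agree with it on clean and adversarial inputs alike."""
    with torch.no_grad():
        t = teacher(x)
    x_adv = pgd_against_teacher(student, t, x)
    loss = -(F.cosine_similarity(student(x), t).mean()
             + F.cosine_similarity(student(x_adv), t).mean())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy usage with flat encoders standing in for real backbones:
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
distill_step(student, teacher, torch.rand(8, 3, 32, 32), opt)
```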

The paper outlining these findings was selected for an ECCV oral presentation (acceptance rate 2.7%). The work is a joint endeavor by Professors Yoo, Chang Dong and Kweon, In So and their team, and it promises exciting opportunities for providing high-performance services based on robust artificial intelligence that learns from little data.

This research was supported by the IITP under the MSIT.


EE Ph.D. Candidates Sangmin Lee and Sungjune Park (Prof. Yong Man Ro) win Ad-hoc Video Search at VBS 2022


(Prof. Yong Man Ro, Sangmin Lee, and Sungjune Park, from left)

 

Ph.D. candidates Sangmin Lee and Sungjune Park (Prof. Yong Man Ro’s lab) won first place in the Ad-hoc Video Search (AVS) section of the 11th Video Browser Showdown (VBS 2022).

 

VBS is an international video retrieval competition held annually; this year’s VBS 2022 was the 11th competition.

 

This year’s competition was held in Phú Quốc, Vietnam, for two days from June 6th to 7th, with 16 finalist video search teams from around the world.

 

The Ad-hoc Video Search section requires finding the videos that exactly match given queries out of 2.5 million videos.

Sangmin Lee and Sungjune Park won first place by constructing a deep learning-based multimodal search engine, which effectively retrieves target videos through the multi-modal correspondences of visual-audio-language latent representations.
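
At query time, such an engine reduces to nearest-neighbor search in a shared embedding space. The sketch below is a minimal, hypothetical illustration of that retrieval step; the team’s actual encoders and index are not described here, and the embeddings shown are random placeholders.

```python
import torch
import torch.nn.functional as F

def rank_videos(query_emb, video_embs, top_k=10):
    """Rank videos by cosine similarity between a text-query embedding and
    precomputed fused visual-audio-language embeddings in a shared space."""
    q = F.normalize(query_emb, dim=-1)
    v = F.normalize(video_embs, dim=-1)
    scores = v @ q                      # (N,) cosine similarity per video
    return scores.topk(top_k)

# Placeholder usage: real embeddings would come from modality encoders trained
# so that matching visual/audio/text pairs land close together (the AVS task
# searches roughly 2.5 million videos).
video_embs = torch.randn(10_000, 256)  # one fused embedding per video segment
query_emb = torch.randn(256)           # embedding of the text query
values, indices = rank_videos(query_emb, video_embs)
```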

 

The core algorithm adopted in the search engine, a novel visual-audio representation learning method, will be presented at CVPR 2022, a top-tier conference in the computer vision and AI field.

 

The title of the paper is “Weakly Paired Associative Learning for Sound and Image Representations via Bimodal Associative Memory”.

 

– Competition: 11th Video Browser Showdown 2022

– Award: Best AVS (1st place winner in Ad-hoc Video Search)

– Recipient: Sangmin Lee, Sungjune Park, Yong Man Ro (Advisory Professor)

 

EE Profs. Kim, Changick and Jeong, Jae-Woong Awarded at KAIST Research Day

EE Professors Kim, Changick and Jeong, Jae-Woong received awards at the 2022 KAIST Research Day.


[From left, Prof. Kim, Changick, Prof. Jeong, Jae-Woong]

 

Professor Kim, Changick has been recognized with the Transdisciplinary Research Prize for his contributions to computer vision- and artificial intelligence-based technology for monitoring anthropocene effects on the planet. The Anthropocene is a scientific concept referring to the recent geological epoch, distinct from previous ones, marked by unprecedented transformations of the planet’s systems by human activities since the Industrial Revolution. Prof. Kim has been conducting research with satellite images, computer modeling, and deep learning tools to monitor the compromised states of planet Earth, such as climate change and rising sea levels. In addition, as part of an AI-based digital study of ecology, he has cooperated closely with anthropogeography and ecology experts to detect endangered species in the DMZ; he has developed a deep network capable of counting and classifying endangered species such as red-crowned cranes, white-naped cranes, and white-fronted geese. This study is meaningful in automating and sustaining the monitoring of endangered species in the DMZ and Cheorwon.

 

Professor Jeong, Jae-Woong has been awarded the KAIST Scholastic Award for proposing a new direction in the automated treatment of brain diseases and in cognitive research by developing, for the first time, an IoT (Internet of Things)-based wireless remote control system for neural circuits in the brain. The proposed direction sets a vision for one of humanity’s most difficult challenges: overcoming brain diseases. Prof. Jeong has also led the field of research in wirelessly rechargeable soft subdermal implantable devices. These works were published in 2021 in top journals of medical engineering, Nature Biomedical Engineering and Nature Communications. The studies were led by Prof. Jeong’s team with international collaborators at Washington University School of Medicine, attracting over 60 press reports across the world.

 


EE Prof. Hyun Myung’s Team wins the 2nd Prize among Academia in IEEE ICRA 2022 SLAM Challenge


[Hyungtae Lim (PhD student), event officials, and Prof. Hyun Myung, from the left]

Title: EE Prof. Hyun Myung’s Team wins the 2nd Prize in Academia at the Future of Construction Workshop in IEEE ICRA 2022  
 
Team QAIST (advisor: Prof. Hyun Myung) won the 2nd prize in the HILTI SLAM Challenge 2022, held as part of the Future of Construction: Build Faster, Better, Safer – Together with Robots Workshop at the 2022 IEEE International Conference on Robotics and Automation (ICRA) in Philadelphia, USA, during May 23-27, 2022.
The HILTI SLAM Challenge 2022 was organized by HILTI Corp. in Liechtenstein, the Oxford Robotics Institute at Oxford University, and the Robotics and Perception Group at ETH Zürich.
The challenge is a competition for accurate mapping through the development of simultaneous localization and mapping (SLAM) algorithms that can operate robustly even in construction environments and in degenerate environments, such as narrow indoor spaces that lack features. Among the 40 teams from around the world, team QAIST won the 2nd prize in the Academia category and will receive USD 3,000 as a cash prize.
   
Details on this good news are as follows:   
 

– Name of Conference: 2022 IEEE International Conference on Robotics and Automation (ICRA)

– Name of Workshop and Date: Future of Construction: Build Faster, Better, Safer – Together with Robots Workshop, May 23rd, 2022

– Prize: 2nd Prize among Academia (USD 3,000)

– Participants: Team QAIST (Quatro + KAIST): Hyungtae Lim, Daeboem Kim, Beomsoo Kim, Seungwon Song, Alex Junho Lee, Seungjae Lee, and Prof. Hyun Myung