I was once told that signals from neural and biological systems are not purely random, but rather carry latent structure that can be recovered with principled approaches. This insight has stuck with me ever since and has been a guiding principle for my research. Formally, we can incorporate domain knowledge as constraints, through a regularizer or the inference procedure, to recover more structured estimates. In this talk, I will focus on three domain constraints we typically observe in neural signals: smoothness, sparsity, and shift-invariance. I will first present the Deep Convolutional Exponential-family Autoencoder (DCEA), a constrained deep learning framework that exploits 1) the shift-invariance and sparsity of the latent representation and 2) the properties of the natural exponential family, which is often used to model neural spiking data (ICML 2020). Next, I will show how the repeated occurrence of smooth, shift-invariant patterns in a signal, such as action potentials in electrophysiology data, can be leveraged to build a Convolutional Dictionary Learning (CDL) framework with smooth dictionary elements (IEEE TSP 2020).
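To make the idea of "domain knowledge as a regularizer" concrete, here is a minimal sketch of sparsity-constrained estimation via iterative soft-thresholding (ISTA), a standard proximal-gradient method. This is only an illustrative toy example, not the talk's DCEA or CDL framework: the dictionary `A`, the ℓ1 weight `lam`, and the step size are all made-up quantities for the demo.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrinks small entries to exactly zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, y, lam, step, n_iter=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # gradient of the data-fit term
        x = soft_threshold(x - step * grad, step * lam)   # sparsity-enforcing prox step
    return x

# Toy demo (hypothetical setup): recover a 3-sparse code from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)          # random dictionary
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.5, -2.0, 1.0]                    # latent sparse structure
y = A @ x_true + 0.01 * rng.standard_normal(50)           # noisy observation
x_hat = ista(A, y, lam=0.1, step=0.1)                     # structured estimate
```

The ℓ1 penalty encodes the prior that only a few latent events are active at any time; the same principle, combined with convolutional (shift-invariant) structure and exponential-family likelihoods, underlies the frameworks discussed in the talk.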
Andrew Hyungsuk Song is a Ph.D. student in the Department of Electrical Engineering and Computer Science (EECS) at MIT, where he also received his B.Sc. and M.Eng. degrees in EECS in 2015 and 2016, respectively. He is a member of the Neuroscience Statistics Research Laboratory at MIT, advised by Professor Emery N. Brown, and of the Computation, Representations, and Inference in Signal Processing (CRISP) group at Harvard University, advised by Professor Demba Ba. His main research interest is statistical and neural signal processing, with a focus on sparsity and dictionary learning. He is also interested in the connections between signal processing and deep learning.