In this talk, I will describe a new scheme for successively refinable lossy compression. In each iteration, the encoder merely describes the indices of the few maximal source components, while the decoder's reconstruction is a natural estimate of the source components based on this information. This step can be shown to be near-optimal for the memoryless Gaussian source, in the sense of achieving the zero-rate slope of its distortion-rate function. I will then introduce a scheme consisting of repeating this step on an appropriately transformed version of the difference between the source and its reconstruction from the previous iteration. The proposed scheme achieves the rate-distortion function of the memoryless Gaussian source (under squared error distortion) when employed on any finite-variance ergodic source, and its storage and computation requirements are modest at both the encoder and decoder. It further possesses desirable properties, which we refer to as infinitesimal successive refinability, ratelessness, and complete separability, and which I will discuss. Our work provides new insights into video and image compression, as well as the compression of biological data such as genomic data. This talk is based on joint work with Tsachy Weissman.
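To make the iterative step concrete, the following is a minimal sketch of one iteration of such a scheme: the encoder sends only the indices of the k maximal components, and the decoder assigns those positions a typical value for the maxima of n i.i.d. standard Gaussians. The specific estimator (the sqrt(2 ln n) plug-in) and the function name are illustrative assumptions, not the exact construction from the talk.

```python
import numpy as np

def refine_once(x, k):
    """One illustrative iteration on a source vector x (assumed roughly
    i.i.d. standard Gaussian). Returns the transmitted indices, the
    decoder's reconstruction, and the residual for the next iteration."""
    n = len(x)
    # Encoder: describe only the indices of the k maximal components.
    idx = np.argsort(x)[-k:]
    # Decoder: estimate the flagged components by a typical magnitude of
    # the maxima of n i.i.d. N(0, 1) samples (an assumed estimator, not
    # the talk's exact one); all other components are estimated as 0.
    xhat = np.zeros_like(x)
    xhat[idx] = np.sqrt(2 * np.log(n))
    # The residual x - xhat is (after a suitable transform) the input to
    # the next iteration, yielding a successively refined reconstruction.
    return idx, xhat, x - xhat
```

Each iteration spends very little rate (only k index descriptions out of n components), which is what makes the successive refinement essentially infinitesimal and the resulting description rateless.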
Albert No received dual B.S. degrees in electrical engineering and mathematics from Seoul National University in 2009 and is now a PhD candidate in the Department of Electrical Engineering at Stanford University, under the supervision of Prof. Tsachy Weissman. He recently defended his PhD thesis (July 2014) and will graduate in September 2015. His research interests include relations between information and estimation theory, joint source-channel coding, lossy compression, and their applications to the compression of video, image, and biological data.