There is some excitement about what is called Compressed Sensing (CS) in signal processing, harmonic analysis and applied math.
David Donoho asked the following question: since signals are going to be approximated anyway (using a few coefficients in some suitable basis), can one make only a small number of measurements rather than measuring at every point? He showed how to do it. Did this have a precursor? Indeed it did, in a kingdom by the sea (gratuitous reference to Nabokov). Not an exact precursor, but some related results were known in learning theory, streaming algorithms, etc. Donoho's twist is interesting and has generated a lot of research over the past few years. Check out the CS site for the early papers by Candes, Tao (yes, Fields Medalist Terence Tao), Romberg, Rudelson, Vershynin et al., and the other papers since.
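To make the measure-few-things-and-still-recover idea concrete, here is a minimal sketch (mine, not taken from any of the papers above): a k-sparse signal is recovered from far fewer random linear measurements than its length by l1 minimization (basis pursuit), written as a linear program. The dimensions, the Gaussian measurement matrix, and the use of scipy's LP solver are illustrative assumptions.

```python
# A minimal compressed sensing sketch: recover a k-sparse signal of length n
# from m << n random linear measurements via basis pursuit (l1 minimization),
# posed as a linear program. All parameter choices are illustrative.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                       # signal length, measurements, sparsity

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)   # a k-sparse signal

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
y = A @ x_true                                 # m noiseless linear measurements

# Basis pursuit:  min ||x||_1  subject to  A x = y.
# Split x = u - v with u, v >= 0; then ||x||_1 = sum(u) + sum(v) at the optimum.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]

print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With these parameters the l1 solution typically coincides with the sparse signal, which is the phenomenon the CS papers explain and quantify.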
I was lucky to get to a meeting with Ingrid Daubechies, Ron DeVore, Anna Gilbert and Martin Strauss (thanks to an NSF FRG) where we tried to figure out CS and its relationship to other problems. In particular, I am indebted to Ron for gamely explaining, on the whiteboard, elements of the CS conditions in Donoho's paper and the proofs.
Recently, I went to the Allerton conference (my first; it IS in the middle of nowhere, but it was a reassuringly warm math conf) to speak about algorithmic issues in compressed sensing. I eked out some research time to write down a few results I have been mulling over, and some new research directions in compressed sensing (in particular, what I call functional compressed sensing, and the distributed, continuous version of compressed sensing with its intriguing connection to the classical Slepian-Wolf problem). Check out a copy of the paper here. I hope to refine this over time, so comments are welcome.