Thursday, November 25, 2010

Meetings Missed or Made in NY

I visited distant places over the past few weeks and missed many meetings in NY.
Enjoy the programs, and happy Thanksgiving!

I did make it to one meeting, the annual Gala of the NY Academy of Sciences held at Cipriani on Nov 15, where the Academy launched its Scientists Without Borders program to combat malnourishment and advance the nutritional sciences. Aneesh Chopra (CTO of the US) was present and described an episode in which NASA's solar forecasting puzzle was posed on the web and solved by a Russian scientist; borders long gone! The message was: connecting people leads to innovation. The winners of the Blavatnik Awards for Young Scientists were announced. Thanks to people like Joan Feigenbaum, computer scientists are getting represented, and Yaron Lipman of Princeton CS was a winner. Daniela Schiller made a compelling case for studying behavior, neural systems, and trauma/fear. There was a joke about a string-theorist husband caught cheating who says, "I can explain everything". James Watson was in the audience, and others were reminded of how he supported the Science Alliance so NY would become a science city. Ultimately, in NY, the issue is always real estate: the people at my dinner table were architects and knew James Carpenter, my neighbor. Finally, there is going to be a Museum of Math in NY, a physical entity; George Hart is the Chief of Content for the Museum.


Flight Humor

There is this story of a passenger on a flight to Mumbai, India, who asks the flight attendant why, having arrived, they seem to be circling instead of landing, and the attendant says, "The movie isn't over yet!" I had a similar moment on a flight to CA recently when passengers complained that, having paid for DirecTV, the channels were silent, and the pilot said, "I will climb 5,000 ft so the satellites will be able to see us."

New Venue for Workshops: NII's Shonan Village

In the class and style of Dagstuhl, Oberwolfach, Bertinoro, or Banff, NII (the National Institute of Informatics, a national research lab in Japan) has just started week-long meetings. These will be held at Shonan Village, a popular spot close to Mt. Fuji with spas and beaches, at most 1.5 hrs from Tokyo. The official webpage has more details and a call for proposals. One meeting, on graph algorithms and combinatorial optimization, run by Satoru Iwata and Ken-ichi Kawarabayashi, is already scheduled for early 2011. I am delighted!


Wednesday, November 17, 2010

Workshop on Parallelism

Phil Gibbons, Howard Karloff and Sergei Vassilvitskii are organizing a much-needed workshop at DIMACS offering a 360-degree view of parallelism. Contact the organizers if you would like to attend.

Parallelism: A 2020 Vision
Decades after its origins and its study in academia, parallel computing is finally becoming pervasive. Today's PCs have multiple cores, and some predict 1000-core machines in the near future. Google, Yahoo and others run MapReduce or Hadoop on thousands of machines to answer search queries, among other things. D. E. Shaw Research is building a massively parallel machine to simulate molecular dynamics. Climate scientists predict the evolution of the earth's climate on parallel machines. Amazon's EC2 enables users to run jobs on a "cloud" of PCs.

The evolution of parallel computing from primarily an academic subject in the '80s to its realization today is an exciting development. This DIMACS workshop will bring together some of the leading researchers and practitioners involved in parallel computing to describe their work. Attendees will discuss, for example:
* how parallel computing in its various forms is used today;
* what new uses and programming abstractions will arise by 2020;
* what parallel computers will look like in 2020; and
* how to model parallelism theoretically.

The workshop will take place March 14-16; the first two days will be devoted to invited talks, and the last to contributed talks.


Thursday, November 04, 2010

FOCS 10 Tutorials: Structures in One's Mind

I missed Ketan's tutorial, but managed to attend the tutorials of Mihai and Tim, both gems.

Mihai Patrascu cast lower bounds for data structures in the cell-probe model (and hence for general-purpose computation) in the context of P vs NP. He started with the simple lower bound of \Omega(\log n) on \max(t_{update}, t_{query}) for the dynamic partial sums problem, showing the whole proof by communication complexity, with a picture and a succinctly stated hard case. He then discussed extensions to the marked ancestors problem and, eventually, dynamic connectivity, which leads to nondeterministic communication complexity problems. The hard instances are now three-stage problems, and he showed a connection to the 3SUM problem; the lower bound in this case is like \Omega(n^{1/\epsilon}). In the second half, he addressed static data structure problems, starting with asymmetric communication complexity. In particular, he defined the Lopsided Set Disjointness (LSD) problem and showed how it can be used to prove lower bounds for the partial match problem, 2d stabbing queries, range median queries, and so on. There were cute connections, like 2d stabbing to butterfly graph reachability, and static 2d stabbing to 1d persistent queries. As the talk's end neared, one sensed there was a huge dependency graph in Mihai's mind connecting the lower bounds of these problems together!
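
As a concrete counterpart to that opening bound: the classical Fenwick (binary indexed) tree achieves O(\log n) for both operations, so the \Omega(\log n) cell-probe bound for dynamic partial sums is tight. A minimal sketch (my own illustration, not from the tutorial):

```python
# Fenwick (binary indexed) tree for dynamic partial sums: point updates
# and prefix-sum queries in O(log n) each, matching the Omega(log n)
# cell-probe lower bound on max(t_update, t_query).
class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, i, delta):
        """a[i] += delta."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)          # jump to the next node responsible for i

    def prefix_sum(self, i):
        """Return a[1] + ... + a[i]."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)          # strip the lowest set bit
        return s

f = Fenwick(8)
f.update(3, 5)
f.update(7, 2)
print(f.prefix_sum(7))  # 7
```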

Tim Roughgarden chose to talk about algorithmic mechanism design. He started with truthful mechanisms and then presented his philosophy: designing truthful mechanisms is like working in a restricted computational model. He used single-parameter problems to show that instead of thinking about truthfulness (which may be a new concept for theory CS folks to grapple with), one can equivalently just think about mechanisms with monotonic allocations (a more familiar concept for TCS). I thought this was a great message, phrasing this classical result as a transformation in thinking from Economics to Algorithms. His second example was revenue maximization for a k-item auction with n bidders. The Bayesian view is standard and interesting in itself, but TCS folks focus on the worst case, and the challenge is to set up a benchmark. Instead of comparing against the per-instance optimum, \max_i v_i, or a distribution F, he proposed the revenue benchmark \max_{i \leq k} i v_i (modulo some details; a toy sketch appears after this writeup).

After the break, Tim spoke about multi-parameter problems and focused on combinatorial auctions. He mentioned Feige's STOC 06 result, which gives a 2-approximation for optimizing total welfare under monotone, subadditive valuations, though no good truthful approximation is known; also, very little is known about revenue optimization. Tim described the results known for combinatorial auctions, including the \sqrt{m} approximation on welfare (run VCG over a limited set of outcomes: give all goods to one player, or one good to each player) and its tightness from DN STOC 07. He mentioned recent results on blackbox reductions, the connection between smoothed analysis and mechanism design, and so on. Q: David Johnson asked what is known about non-subadditive cases like spectrum auctions, e.g., with complements. Sanjeev Arora asked how economists respond to failures of VCG and to TCS insights. Yevgeniy Dodis asked about collusion and about differentially private mechanisms as alternate solutions. There were other questions about repeated games, richer interactions and models, etc. Tim is a powerful speaker with a richer working vocabulary than most TCS speakers; he distilled the maze of problems and metrics in this area into a few nuggets for TCS researchers. A curiosity: Tim did not once mention "Nash" (well, maybe just once, during the Q/A).
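
On that revenue benchmark: a hedged reading of \max_{i \leq k} i v_i is that, with bids sorted in decreasing order, selling i \leq k copies at the i-th highest bid v_i yields revenue i v_i, and the benchmark takes the best such choice. A minimal sketch under this simplified form (the tutorial's actual benchmark has extra details, e.g., variants that exclude i = 1):

```python
# Hedged sketch of the prior-free revenue benchmark max_{i <= k} i * v_i,
# in a simplified form: v_i is the i-th highest bid, and selling i copies
# at price v_i earns i * v_i. (Details from the tutorial are omitted.)
def revenue_benchmark(bids, k):
    v = sorted(bids, reverse=True)  # v[0] >= v[1] >= ...
    return max((i * v[i - 1] for i in range(1, min(k, len(v)) + 1)),
               default=0)

print(revenue_benchmark([10, 6, 5, 1], k=3))  # max(10, 12, 15) = 15
```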

Both tutorials had something in common: one sensed that, in each case, the speaker had a vast structural zoo in mind relating the various problems, and was expertly revealing only parts of it to the audience, the way a good tutorial should.


Simons Science Series: David Heeger

David Heeger, the speaker on Oct 13, is a researcher at the interdisciplinary intersection of engineering, psychology, and neuroscience, and was introduced against the backdrop of some stunning snow-covered mountains.

He started with vision systems (the vision task of motion perception as "unconscious inference"). We can begin understanding vision from single neurons, which can be modeled simply as linear weighting functions that respond to a particular speed or direction. For example, each neuron has an orientation or direction selectivity that can be seen as a tuning curve. Q: Do these selectivities arise at birth or later? David said some arise even before the eyes open (he referred to many cat + strobe-light experiments). He mentioned halfwave rectification and a spiking threshold for converting the linear responses into firing rates. Also, there is a small bias among neurons toward certain directions (preferred orientations). Next, David wanted to extend this model to motion: neurons average over a space and time window. Q: What is the fastest speed one can measure? David said distance is in units of degrees of visual angle and, if I recall right, the answer was 100/sec. A distributed representation of speed is obtained from multiple sensors with different orientation preferences. A failure of this model is that linearity of signals or responses does not hold, so he proposed normalizing each neuron's response by the pooled (L_2) activity of all neurons in its neighborhood. Q: Does it matter at what stage the normalization happens? A: No, it happens at many stages. Q: What mechanisms implement the above? A: Many, like inhibition, synaptic depression, .... Q: Does normalization occur in other areas of the brain? A: Yes, e.g., in olfaction in the fruit fly. The advantage of this proposal is that the output can be interpreted as a probability and has certain invariances. Q (from a theoretical computer scientist, Baruch? Boaz?): Why L_2? A: Some use L_1. Next, David attacked the problem of attention: even when the visual input does not change, attention can. He extended the model to matrices and a function based on pointwise multiplication, and pointed out the neatness of the approach.
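
For concreteness, here is a toy numerical sketch of the divisive-normalization idea as I understood it (my own illustration, with made-up numbers, not David's exact equations): rectify and square each neuron's linear response, then divide by the pooled squared activity of its neighborhood, so the outputs are bounded and can be read like a probability distribution.

```python
import numpy as np

def normalize(linear_responses, sigma=0.1):
    """Divisive normalization: rectified, squared responses divided by
    the pooled (L2-style) energy of the neighborhood."""
    r = np.maximum(linear_responses, 0.0) ** 2  # halfwave rectify, then square
    return r / (sigma ** 2 + r.sum())           # divide by pooled energy

pool = np.array([0.9, 0.3, -0.2, 0.1])  # linear filter outputs of a neighborhood
print(normalize(pool))                   # bounded outputs, probability-like
```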

In the second part of his talk, he connected his work to clinical aspects by looking at failures of these computational models in conditions like epilepsy, autism, or schizophrenia (schizophrenia patients, for example, are not fooled by certain image contrasts). He then broadened the message from neural circuit mechanisms to behavior.

Scientists and mathematicians may be shy, but not when it comes to asking questions; there were many at the end. Q: What is the role of "defective" parts in the computation? A: There is a lot of redundancy; averaging works for primates, but not for insects, where neuron failure rates are very small. Q: What is the computational model for a "small" organism, e.g., is attention a factor in small organisms (Sylvain Cappell)? A: Attention does not need new resources beyond what the model already has.

I am sure many of us theoretical computer scientists in the audience were distracted by the math/algorithmic/complexity questions underlying the neuron model and had a similar reaction to the talk: so, the model of the brain is a parallel bank of thresholds, with a linear weighting function implementing each threshold? To what extent does this --- with L_2 normalization --- fit reality? Is there a nonlinear computational model? What are its powers and limitations? Etc. But the rest of the audience didn't seem focused on these questions.

Disclaimer: I scribed what I could; there were many interesting comments by David and the audience that I didn't have the scientific background to parse in real time during the talk, though they sounded important and insightful.


Wednesday, November 03, 2010

Cakes and Us

The ever-creative Mark Sandler we know, together with Mey, created this xkcd wedding cake. Enjoy! :)