Sunday, March 27, 2011

STOC 2011 Posters

STOC 2011 Call for Posters. Details at: http://www2.research.att.com/~dsj/stoc11/CallForPosters.html

This year, STOC 2011 (part of FCRC) is experimenting with a new Poster Session. The poster session will be held from 8:30pm to 10:30pm on Monday June 6, and should be thought of as an extended hallway discussion with a visual aid. The poster session will be accompanied by refreshments. We welcome posters from registered FCRC attendees on research in any aspect of theoretical computer science, and believe presenting posters should be especially appealing for:

- Researchers with new results since the STOC submission deadline.
- Researchers with papers in other FCRC conferences that would be of interest to the STOC community.
- Researchers with TCS papers not appearing at FCRC that would be of interest to the STOC community.
- Students who want to let senior researchers know of their work.
- STOC authors who want to have a visual aid for extended discussions of their work.

Deadline: May 2.
Poster Committee: Avrim Blum (CMU), Lisa Fleischer (Dartmouth), Ravi Sundaram (Northeastern), Salil Vadhan (Harvard). Please send any questions to stoc11posters@seas.harvard.edu.


Sunday, March 20, 2011

A String Problem: Relative Suffix Tree

Given a string S of length n, its absolute suffix tree is a compressed trie of all n suffixes of S. This is the classical, widely used data structure.

Given a string S of length n and a reference string R of length m, the relative suffix tree of S is its absolute suffix tree with every edge whose label is not a substring of R pruned away, together with the subtree hanging below it. Say m < n.

Is the relative suffix tree meaningful and nontrivial? Are there research problems here, say, the best running time to construct it?
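
For concreteness, here is a naive Python sketch of one reading of the definition (purely illustrative: it builds an uncompressed trie of suffixes in quadratic time rather than a true suffix tree, and it prunes a node when its root-to-node label is not a substring of R; the function name is mine):

    # Naive sketch of a "relative suffix trie": insert all suffixes of S into a
    # trie, cutting off an edge (and everything below it) as soon as the label
    # read from the root stops being a substring of R.
    def relative_suffix_trie(S, R):
        root = {}
        for i in range(len(S)):
            node = root
            for j in range(i, len(S)):
                if S[i:j + 1] not in R:   # prune: this path leaves the substrings of R
                    break
                node = node.setdefault(S[j], {})
        return root

    print(relative_suffix_trie("banana", "nab"))
    # {'b': {}, 'a': {}, 'n': {'a': {}}}

The real question, presumably, is whether the compressed version can be built in time close to O(n + m) rather than quadratically.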

DIMACS Parallelism 2020: John Gustafson

Nitish Korula is a thinker, able to assimilate and relate vast amounts of information; I have been looking for a sufficiently broad topic to get him to weigh in, and this proved ideal:

----
Thanks to Muthu for the opportunity to write a guest post, and thanks to the organizers for putting together this workshop. There were several great talks on Monday, but one that really stood out was a "keynote" by John Gustafson, director of research at Intel Labs. It exemplified many of the qualities of a great keynote address; it was provocative and comprehensive, with the speaker demonstrating a mastery of every aspect of his subject.

Gustafson's vision for 2020 was billion-core computing. Not a billion cores on a single chip, but a billion cores in a data-center sized single computing unit. Getting there, though, will be more difficult than simply continuing on our current path; our performance trend curves are flattening out on many dimensions. Will Moore's law continue to hold for another 10 years? Intel imagines that we will be at 5 nm gates by then, but 5 nm is only about 50 atoms wide; at this scale electrons travel 'slowly' and "the speed of light isn't what it used to be". Even if we can get performance improvements, this is likely to require significant power increases, so the "performance per watt" will go in the wrong direction.

The talk was far from pessimistic, though; the message was that building these billion-core machines will require radically new ideas in many areas of computer science, and it's an exciting time to be doing research on them. Some of the themes woven through the talk were a need for a better understanding and use of memory, better algorithms for scientific computing simulations, and better performance metrics. I can't possibly do justice to them all, but here are a few highlights:

  1. Shekhar Borkar, the director of Intel's Microprocessor Technology Lab, says "Caches are for morons"! More seriously, reading data from memory takes a long time, and so we have been trying techniques like speculative fetching of large blocks of memory into a cache. According to Intel's best estimates, roughly 79% of data in a cache is never accessed. Gustafson suggests that we "cherish every bit" of memory we access, which leads to a second suggestion:
  2. Throw out old Numerical Analysis textbooks! Algorithm designers have historically "measured" algorithm run times by counting the number of floating point operations / additions / multiplications. This made sense decades ago, when floating point arithmetic took 100 times as long as reading a word from memory. Now, one multiplication takes about 1.3 nanoseconds (to go through the entire pipeline; this underestimates throughput), compared to 50-100 nanoseconds for a memory access. Why do our analyses measure the wrong thing? We should be counting memory accesses; it isn't reasonable to ignore a constant factor of 50.
  3. Processor utilization is a terrible way to measure performance of a supercomputer! As one of many such analogies, he compared processors to snow plows at O'Hare airport; there are dozens of plows at the airport, almost all of which are idle most of the year. But when you need them, you need all of them; it wouldn't make sense to cut back on the number of snowplows because the average utilization is low. We should design programs that use as many processors as needed; see the next point.
  4. We need new arithmetics! (What?) Today, we typically use 64-bit floating point arithmetic for everything, because "64 bits is enough". Sometimes it isn't, and at other times, it's far more than the application needs. When it isn't, you run into rounding errors that propagate; the heuristics to deal with this don't always work, and so you could have 64 bits of precision with errors from the third bit onwards. And when you don't need 64 bits, why are you wasting memory? You should be "cherishing every bit". One way of dealing with this problem is interval arithmetic, which explicitly maintains an interval guaranteed to contain the "true" answer (a toy sketch appears after this list). Unfortunately, interval arithmetic has its own issues; it's very easy to get the interval [-\infty, +\infty] if one isn't careful. Still, there are applications (such as $n$-body problems) where interval arithmetic gives good results. (And when you have a billion cores, you can split up the interval and have different cores working on different sub-intervals.) For these approaches to catch on, we need new algorithms for interval arithmetic and/or new arithmetics.
  5. We need new programming languages and development environments to allow programmers to understand / interact with the hardware they're programming for. Should compilers receive a "memory description" in addition to source code? Should programmers use a 3-D environment in which they can specify the spatial relationship of their billion cores and memory?
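
To make point 4 a bit more concrete, here is a toy Python sketch of interval arithmetic (my own illustration, not from the talk): each value is an interval (lo, hi) guaranteed to contain the true answer, and the naive rules show how quickly intervals blow up when the same uncertain quantity is reused (the dependency problem).

    # Toy interval arithmetic: values are (lo, hi) pairs containing the true answer.
    def add(x, y):
        return (x[0] + y[0], x[1] + y[1])

    def sub(x, y):
        return (x[0] - y[1], x[1] - y[0])

    def mul(x, y):
        products = [a * b for a in x for b in y]
        return (min(products), max(products))

    x = (0.9, 1.1)        # "1.0, known to within 10%"
    print(sub(x, x))      # (-0.2, 0.2): the rules forget that x - x is exactly 0
    y = x
    for _ in range(10):   # repeated squaring: the interval width explodes
        y = mul(y, y)
    print(y)              # an interval spanning dozens of orders of magnitude

Run naively over a long computation, this is how one drifts toward [-\infty, +\infty]; the billion-core twist mentioned above is that sub-intervals can be farmed out to different cores.
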
There was much more in this vein; I didn't touch on system software and reliability, networks and communication (Gustafson thinks we should increase communication budgets by a factor of ~8), etc. As you can probably imagine, the controversial points generated a lot of questions and heated debate. Like many great talks, it energized the audience and suggested new lines of work in areas including architecture, compiler design, human-computer interaction, networks, scientific computing, programming languages, and (of course) algorithms.


Wednesday, March 16, 2011

HackRU

HackRU is the largest event of the year hosted by the Undergraduate Student Alliance of Computer Scientists (USACS). It is a 27-hour programming competition beginning at 11AM on March 26th at the Hill Center on Busch campus, Rutgers University, NJ. The webpage says, "The goal of HackRU is to get programmers from all over to get together and make awesome applications. We will have companies presenting their APIs—until 2PM on the 26th—which will be available to contestants to use in their applications. Contestants do not necessarily have to be students at Rutgers University. Participants may enter with a group of people as a team or may choose to work solo. Food and drinks will be served throughout the contest, and free parking will be available for contest participants. Linux machines will be available for contestants, although we highly recommend bringing your own laptops. The event will end at 2PM on March 27th and the contestants with the best applications will win prizes!"

Seems like a great opportunity to go, hang out with smart people, build something.

Workshop: Parallelism 2020

I was at the DIMACS Workshop on Parallelism: 2020 Vision. I had looked forward to the workshop and, as expected, the problems and challenges discussed were reminiscent of the early 1990's; the keywords, however, were not connection machines, grids or hypercubes but mapreduce and multicores. The people --- Vijaya Ramachandran, Uzi Vishkin, Leslie Valiant, Mike Goodrich, Guy Blelloch and many others --- were stalwarts who brought a lot of experience from the PRAM world of the 90's to the discussions of the day.

  • Mike started off the meeting with a talk on the many algorithmic results he has been able to obtain in a model of mapreduce. The talk was a tour through data structures (invisible B trees), geometry, linear programming and simulations (some version of BSP by a version of mapreduce), and at the least it lays out a large swath of benchmark algorithmic results in this area that one can now try to understand and improve.
  • I went next. In the first part, I gave examples of simple things I wish I could do easily in mapreduce (recurse on problems like prefix sums and be able to fill answers back in, enumerate pairs for triangle counting, do sketches for eigenvalue computations, etc). In the second part, I spoke about a special case of mapreduce, namely sawzall, and our model for it, showing it to be equivalent to streaming. I ran out of energy halfway through and thought I did not communicate well the importance of sawzall, the relevance of relating it to streaming, or the niftiness of the proof that the simultaneous communication complexity model can simulate the sequential communication complexity model (via Savitch's theorem on streams). Apologies to the audience; as Howard Karloff pointed out, my talk had 2 jokes and 0 proofs (not true, I proved a theorem about counting triangles in terms of eigenvalues :)), while I should have managed to work in one more proof. My talk pdf here.
  • Sid Suri went next and spoke in detail about the problem of counting triangles. The straightforward enumeration algorithm for counting triangles takes very long because some nodes have high degrees. He then exploited Schank's observation that the responsibility for counting a triangle can be shifted to its low-degree nodes, together with graph partitioning techniques, to get an improved algorithm, and showed actual mapreduce run times (a toy sketch of the low-degree idea follows this list). An interesting part of his talk was the finale, where he abstracted lessons for mapreduce algorithms from this particular example: quadratic shuffling is hard, rounds matter unless some reducers are streaming (which saves some), and neither the model nor the machines can differentiate between constants.
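
Since the low-degree trick is easy to state, here is a toy sequential Python sketch of it (my own illustration, not Sid's mapreduce code): each triangle is charged to its vertex of lowest degree (ties broken by id), so high-degree nodes never have to enumerate all pairs of their neighbors.

    from itertools import combinations

    def count_triangles(edges):
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)

        def rank(v):                  # order vertices by (degree, id)
            return (len(adj[v]), v)

        count = 0
        for v in adj:
            # only neighbors "heavier" than v are paired up, so each triangle
            # is found exactly once, at its lowest-ranked vertex
            heavy = [u for u in adj[v] if rank(u) > rank(v)]
            for a, b in combinations(heavy, 2):
                if b in adj[a]:
                    count += 1
        return count

    print(count_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))   # 1 triangle: {1,2,3}

The mapreduce version, as I understood the talk, distributes exactly this pair generation, which is why skewed degree distributions stop being the bottleneck.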

Leslie started the afternoon session with a clear goal: enabling a world where parallel programs can be written once and run on whatever parallel machine is available. He then extended the BSP model to a hierarchical version and, amidst many parameters, still managed to design optimal algorithms for many problems. Vijaya went next and spoke about a parallel machine model with local caches and work stealing across machines, and proposed algorithms for a number of problems in joint work with Richard Cole. Finally, Uzi gave a powerful defense of his agenda of the past few years that the PRAM is a useful model for thinking about parallel programs, and argued that PRAMs with 1000 or so processors are within reach.

When I headed back after a toothsome dinner at Due Mari, I realized that this workshop was really about what the algorithms community should be doing --- applying our expertise on parallelism to see its impact in the new multi-core, multi-machine world, where systems researchers and builders have successfully built parallel machines. So, to some extent, this is a bridge from the other side (the side we knew went from beautiful theory to getting machines built). We also need the healthy perspective that systems change from year to year; the mapreduce of yesterday may be different from that of tomorrow. So a tighter loop of interaction between theory and practice is desirable.


Saturday, March 12, 2011

Some Counts Don't Count

Now that Turing Awards are in the air (Congratulations Leslie!), here is some data. Recall that the h-index is the largest h such that h papers are cited \geq h times each.
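
As a reminder of what that means computationally, here is a tiny Python sketch (the citation counts below are made up, just to show the calculation):

    def h_index(citations):
        # largest h such that at least h papers have >= h citations each
        citations = sorted(citations, reverse=True)
        h = 0
        while h < len(citations) and citations[h] >= h + 1:
            h += 1
        return h

    print(h_index([100, 50, 10, 5, 3, 3, 1]))   # 4: four papers cited >= 4 times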

Here are the h-indices of Turing award winners.
82 Robert Tarjan, 57 Allen Newell, 54 Amir Pnueli, 54 Herbert A. Simon, 53 Adi Shamir, 51 Richard Karp, 49 John McCarthy, 47 C.A.R. (Tony) Hoare, 46 John Hopcroft, 45 Jim Gray, 43 Barbara Liskov, 43 Robin Milner, 43 Ronald L. Rivest, 42 Donald E. Knuth, 41 Manuel Blum, 41 Marvin Minsky, 41 Joseph Sifakis, 40 Edmund M. Clarke, 40 Michael O. Rabin, 40 James H. Wilkinson, 40 Niklaus Wirth.

Plenty of h-indices above this cluster, and below. All data filtered from this list.


in situ Theater

I don't have a concept of a weekend --- work flows from one week into the weekend and on to the next --- but I have a distinct feel for Friday evening, when work has ``ended'' for the week; I am fatigued, having gone moment to moment, and I pause a little. In that mood, I went for a walk and found myself at the World Financial Center (WFC), accidentally walking directly into a performance of The Rover by New York Classical Theatre.

It is what I call in situ theater. The entire 3.5-acre WFC site, with its multilevel stairs, becomes the stage. The action and the audience move from spot to spot, actors move in and out of the cast, and paper masks, costumes and knives flash intermittently. See the NYT review. The actors did a superb job, changing their diction, body language and strides from small square spaces to long elongated ones along walking bridges, grand staircases with working escalators carrying Wall Street folks, and finally, to the cavernous space filled with green benches and tall trees. Surprises like this keep me going.

Saturday, March 05, 2011

NVTI 11

The Netherlands Theory Day took place on Friday. After a brief puzzling moment (the projector showed a mirror image of the slides and I had to tweak knobs, one of which turned out to be the joystick for the menu), the day got under way.

Jos Baeten motivated his talk with a few observations: in reactive systems, the input is not a string with an endmarker; the world is nondeterministic, e.g., Google results change from one moment to another; Turing machines can't fly an airplane but computers can; etc. He then embarked on describing a theory of interactivity. He emphasized the lack of distributivity in the presence of interaction (a.b + a.c \not= a.(b+c): doing a, say opening a door, and only then having b or c revealed is different from opening the door and having the choice between b and c), and developed an algebra. The talk went from processes similar to finite automata to pushdown processes, with new concepts along the way (a new Kleene closure, unbounded branching of automata, nonstandard cfgs, use of contrasimulation, etc.), and eventually to reactive Turing machines, with executability playing the role of computability. The audience had questions about quantum versions, analogs of the halting problem or universality, and so on, and there were technical discussions about nonlinear recursion. Jos ended the talk saying this material was good for teaching a first-year theory course, because it can lead to later courses on both computability and process algebra.
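
To see the distributivity point in the smallest possible terms, here is a toy Python encoding (mine, not Baeten's formalism): a process is a sum of (action, continuation) alternatives, and the two terms above are distinguished by what they still offer after doing a.

    # a.b + a.c versus a.(b + c): after the a-step, the first has already
    # committed to b or to c, while the second still offers both.
    def successors(alternatives, action):
        return [rest for (act, rest) in alternatives if act == action]

    def ready_set(alternatives):
        return {act for (act, _) in alternatives}

    stop = []                                            # terminated process
    p = [('a', [('b', stop)]), ('a', [('c', stop)])]     # a.b + a.c
    q = [('a', [('b', stop), ('c', stop)])]              # a.(b + c)

    print([ready_set(s) for s in successors(p, 'a')])    # [{'b'}, {'c'}]
    print([ready_set(s) for s in successors(q, 'a')])    # [{'b', 'c'}]

No behavioral equivalence that respects this branching (bisimulation and its relatives) can equate the two, which is exactly why the algebra does not distribute.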

I spoke second, on data stream algorithms. I had a new puzzle for this talk and was energized. Mark de Berg asked about classes of problems solvable not only in polylog space, as in typical streaming, but also in, say, \sqrt{n} space, etc. Yvette Tuin talked about Dutch government funding for theory; despite best efforts, the picture is dismal for core theory. Observations: grants were called "subsidies"; there were interesting programs like Games for Healthcare, and things like Vici, Vidi, Veni on slides that I am still trying to figure out. Over lunch, I got tutored on how to say Vermeer and Van Gogh.

Hans Bodlaender spoke after lunch about kernelization: preprocessing the input in polytime, with what look like simple rules, to reduce the problem to a kernel whose size is polynomial in some parameter k. The first example was Vertex Cover, for which an O(k^2) kernel exists (any vertex of degree \geq k+1 must be in the cover, so one can include it and remove it; see the sketch below). The second example was about transforming a string into one in which all symbols occur only in runs, and this too had a quadratic-size kernel. Hans then related this to FPT (fixed parameter tractability): a problem is FPT iff it has a kernel. Questions were about complete/hard problems (e.g., W[1]-hard), the role of the choice of parameters (in SAT, you can set k variables or satisfy k clauses), and other problems (set cover is hard; linear algebraic problems?). Hans's talk altogether was very interesting, and I think "kernelization" is a nice combinatorial way to bring one into the FPT world.
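
Since the Vertex Cover example is so compact, here is a Python sketch of that kernelization (Buss-style rules; illustrative code, not from the talk): any vertex whose degree exceeds the remaining budget must be in the cover, and once no such vertex is left, more than k^2 surviving edges certify a "no" instance.

    def vc_kernel(edges, k):
        # Return (forced vertices, kernel edges, remaining budget); return None
        # when the rules already certify that no cover of size <= k exists.
        edges = {frozenset(e) for e in edges}
        forced = set()
        while True:
            deg = {}
            for e in edges:
                for v in e:
                    deg[v] = deg.get(v, 0) + 1
            high = [v for v, d in deg.items() if d > k - len(forced)]
            if not high:
                break
            v = high[0]
            forced.add(v)                         # v is in every small cover
            edges = {e for e in edges if v not in e}
            if len(forced) > k:
                return None
        k_left = k - len(forced)
        if len(edges) > k_left * k_left:          # max degree <= k_left, so a cover
            return None                           # of size k_left covers <= k_left^2 edges
        return forced, edges, k_left

The kernel that survives has at most k^2 edges (and so at most 2k^2 vertices), which is the quadratic bound mentioned above.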

The final talk was by Tobias Nipkow. In introducing him, Femke van Raamsdonk talked about the success of his group in checking complex proofs (like the proof of Kepler's Conjecture or compiled Java programs). Tobias stated that students find it challenging to prove things formally. (Scott Aaronson, who has a gift for phrasing, made a cameo: he was quoted comparing some proofs to LSD trips.) So Tobias started breezily, setting out the goal of teaching students to do proofs with a proof assistant using the Isabelle system, and the image he invoked was that of a bloody video game (qed or die). The rest of his talk was about his experiment of teaching a class on semantics, with HWs done in a proof assistant, to 15 or so MS students. Fascinating, the kind of observations one can make with such experiments. Questions: are there proof "styles"? Does Isabelle find counterexamples? Can this be extended to algorithms (challenges: probability, approximation, running times)? Can one detect plagiarism? Etc. Some of the technical discussions were around whether this applied only to constructive logic, or whether it was too old-fashioned by omitting pointers, etc.

After the talks, the NVTI organization (led by Jaco van de Pol) discussed its business --- newsletter, thesis awards --- and I was a fly on the wall. They discussed more technical activities, and Femke brought up the idea of connecting with activities for the Turing year in 2012 (I didn't know about that!).

ps: The day ended with dinner at the traditional place (of 15+ years). It was great to see Han La Poutre, whose early work on data structures and dynamic graph problems I know and liked; he now seems to have ambled over to bidding/bargaining/auctions/strategies. Han is a fellow city dweller, and Amsterdam is one of the great art cities of the world.
