### BWCA Notes

Here are very rough notes on Day 1. Tim tells me videos will be posted soon.

------

Tim started the workshop by noting that Stanford was doing a focused year of activity on the theory of CS, and mentioned that there will be workshops on social network algorithms and expanders in the future.

Avrim spoke about the premise that the objective we optimize in problems is usually a proxy for some underlying target. Assume: all c-approximations are \eps-close to the desired target (e.g., clustering to within 1+\eps gives a good grouping of proteins by function). An example result [Balcan, Blum, Gupta '09] is that, for any c>1, one can get O(\eps)-close to the target. He posed several open problems to attack this way: sparsest cut (focus on the application to image processing, i.e., segmentation/partitioning; what if we assume that every 10-approximation has error \eps for image segmentation?), phylogenetic trees, other assumptions (e.g., most c-approximations are close to the target), and, for "easy" problems, given an arbitrary instance, finding stable "portions" of the solution, e.g., via the Bilu-Linial perturbation-stability notion. Some of the questions:

q: As they say, clustering is either easy or not interesting. Comments? A: The key is the difference between the error metric and the optimization metric, not clustering optimization per se.

Alex: Are there smooth versions, where as the property degrades from structured to arbitrary, the running time automatically adjusts from fast to slow? A: Maybe look at PTAS examples.

q: Is approximation-stability a rare property? Is there a meta-theory? Should we therefore not use approximation at all? A: Clustering does NOT have to be hard, so a formulation that makes it easy is not immediately challenging to the establishment.

Tim: Any hardness results? That is, is the assumption not too strong?

Avrim is a researcher who can drive the car with two gas pedals, one for math/CS techniques, one for formulation/abstraction/philosophy, and can put his foot down on both or either as needed.
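The distinction Avrim drew between the optimization objective and the error metric can be made concrete in a toy sketch (entirely my own, not from the talk): a clustering whose k-median cost is well above optimal can still have zero error against the target grouping.

```python
# Toy illustration (my own, not from the talk): the optimization metric
# (k-median cost) and the error metric (fraction of points grouped
# differently from the target clustering) are different quantities.
from itertools import permutations

def kmedian_cost(points, centers):
    # Sum of distances from each point to its nearest center.
    return sum(min(abs(p - c) for c in centers) for p in points)

def assignment(points, centers):
    # Label each point by the index of its nearest center.
    return [min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            for p in points]

def error_to_target(labels, target, k):
    # Fraction of points whose label disagrees with the target,
    # minimized over relabelings (cluster names are arbitrary).
    best = len(labels)
    for perm in permutations(range(k)):
        mism = sum(1 for a, b in zip(labels, target) if perm[a] != b)
        best = min(best, mism)
    return best / len(labels)

# Two well-separated 1-D clusters with a known target grouping (made up).
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
target = [0, 0, 0, 1, 1, 1]
opt_centers = [0.1, 5.1]
approx_centers = [0.2, 5.0]   # a worse (c-approximate) set of centers

c = kmedian_cost(points, approx_centers) / kmedian_cost(points, opt_centers)
err = error_to_target(assignment(points, approx_centers), target, 2)
print(c, err)   # cost ratio ~1.5, yet error 0: the target is still recovered
```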

Nicole Immorlica spoke about PASS approximations, through the example of facility location. She made a clear argument that one wants to consider variations that assume some property of the solution (not the instance), and cited the Kleinberg, Papadimitriou, and Raghavan paper on catalog segmentation as an early example. She then described the PASS framework: take a signature of the solution (for each facility i, a pair q_i, alpha_i), consider the recoverable value as a function of the signature, and do the analysis in these terms. Greedy/LP gives additive error guarantees per facility, and provides guidance on the choice of heuristics. Nicole is a clear speaker, often using simple examples to make the point.
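As a toy reading of the notes (my own sketch, not the PASS analysis itself), one can compute a per-facility "signature" for a given solution, here taking q_i to be the number of clients a facility serves and alpha_i their average connection cost; the framework then expresses guarantees in terms of such signatures.

```python
# Hypothetical per-facility signature (q_i, alpha_i) of a facility-location
# solution: q_i = number of clients served, alpha_i = average connection cost.
# Instance, names, and the exact signature definition are my assumptions.
def signature(clients, open_facilities, dist):
    # Assign each client to its nearest open facility.
    served = {f: [] for f in open_facilities}
    for c in clients:
        f = min(open_facilities, key=lambda f: dist(c, f))
        served[f].append(dist(c, f))
    return {f: (len(ds), sum(ds) / len(ds) if ds else 0.0)
            for f, ds in served.items()}

# Points on a line; distance is absolute difference (made-up instance).
clients = [0.0, 1.0, 2.0, 9.0, 10.0]
sig = signature(clients, [1.0, 10.0], lambda c, f: abs(c - f))
print(sig)   # facility 1.0 serves 3 clients, facility 10.0 serves 2
```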

Nina Balcan spoke about a very nice area: developing a discriminative theory of semi-supervised learning. Unlabeled data helps in many ways (self-consistency, throwing out hypotheses, etc.). She proposed a new model: a concept class plus a compatibility notion between a concept and the data distribution; learn not C, but C under compatibility. This gives nice algorithms: first use the unlabeled data to choose a representative set of compatible hypotheses, then use labeled examples to choose among these. How much unlabeled data is needed, and by how much can unlabeled data reduce the number of labeled examples needed? She pointed to the Zhu survey (2010) and to Kakade (COLT 2008). q: Does this theory extend to active learning (where you get to pick the questions)? A: Yes; one needs to combine it with compatibility constraints. q: Any killer applications? q: Is there any way to choose between transductive SVM and … for a given instance? A: Run them in parallel and stop when justified.
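The two-stage scheme can be sketched on a toy instance (my own, with a made-up compatibility notion): thresholds as the concept class, and "no unlabeled point falls near the threshold" as the compatibility assumption; unlabeled data prunes the class, and a couple of labels pick the winner.

```python
# Minimal sketch (my toy, not from the talk): filter hypotheses by
# compatibility with unlabeled data, then choose among the survivors
# using a few labeled examples.
# Hypothesis class: thresholds h_t(x) = 1 iff x >= t.
# Assumed compatibility: every unlabeled point is at least `margin` from t.

def compatible(t, unlabeled, margin=0.5):
    return all(abs(x - t) >= margin for x in unlabeled)

def errors(t, labeled):
    return sum(1 for x, y in labeled if (x >= t) != y)

unlabeled = [0.0, 0.5, 1.0, 3.0, 3.5, 4.0]   # a gap around 2.0
labeled = [(0.5, False), (3.5, True)]         # only two labels needed

candidates = [t / 10 for t in range(0, 41)]   # thresholds 0.0 .. 4.0
survivors = [t for t in candidates if compatible(t, unlabeled)]
best = min(survivors, key=lambda t: errors(t, labeled))
print(len(candidates), len(survivors), best)  # unlabeled data shrinks the class
```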

Spielman spoke after lunch on smoothed complexity. He explained the role of the condition number in a phenomenally illustrative way. Theorem: a matrix is ill conditioned iff some row is near the span of the others. Eckart-Young: the condition number equals 1/(distance to the nearest singular matrix). Another condition number is ||A|| ||A^-1||, where ||.|| is the Frobenius norm. Gaussian elimination without pivoting: entries do not get large when the leading submatrices are not ill conditioned; a Matlab demo showed how badly Gaussian elimination can go wrong otherwise. Towards the end of the talk, he spoke about solving systems of polynomial equations: n equations of degree d, and (Smale's definition) an approximate zero is a point from which Newton's method converges quadratically. Bezout's theorem says there can be many solutions, and the problem is NP-hard: SAT reduces to it, with x_i(1-x_i)=0 forcing Boolean values and each clause becoming a polynomial equation. A recent result gives polynomial smoothed complexity. The result sounded amazing; I need to follow up. Dan's talk and work make theory research inspiring, as someone in the audience observed later, and no doubt others felt the same way.
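A tiny pure-Python sketch (my own, not Spielman's Matlab demo) of the pivoting point: in Gaussian elimination without row swaps, the multipliers blow up exactly when a leading submatrix is nearly singular, here signalled by a tiny pivot.

```python
# Gaussian elimination without pivoting on a 2x2 example (my toy demo):
# a nearly singular leading submatrix produces a huge multiplier,
# while a well-conditioned one keeps entries small.

def eliminate_no_pivot(A):
    # Forward elimination with no row swaps; returns the largest |multiplier|.
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    growth = 0.0
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]      # huge when the pivot A[k][k] is tiny
            growth = max(growth, abs(m))
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    return growth

well = [[2.0, 1.0], [1.0, 2.0]]
ill = [[1e-12, 1.0], [1.0, 1.0]]       # leading 1x1 submatrix nearly singular
print(eliminate_no_pivot(well), eliminate_no_pivot(ill))
```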

After a break (I fell asleep some during the talks; I was tired, apologies to the speakers), Andrew Goldberg spoke about shortest-path problems. The message was that there is a science of algorithms: do, implement, observe, test, tweak, re-algorithmize, and so on. In a great example of what algorithms research can do when thrown at a basic problem, Andrew gave a series of improvements to shortest-path computations, in particular as applied to road networks and map queries, relating VC dimension to the complexity (HW: a unique-shortest-path system in graphs has VC dimension \leq 2). In road networks, there are transit edges which are natural and important and cut down on the shortest paths one must consider, store, and compute with. Dijkstra's takes 2s on 18M vertices; the new results run in << 1s. q (Alex): are these experiments reproducible?
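For reference, the baseline the road-network techniques accelerate is plain Dijkstra with a binary heap; a minimal sketch (graph and weights made up):

```python
# Baseline Dijkstra with a binary heap; the road-network speedups in the
# talk improve on this. The toy graph below is my own example.
import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, w), ...]}; returns shortest distances from source.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry, skip
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"a": [("b", 4), ("c", 1)],
         "b": [("d", 1)],
         "c": [("b", 2), ("d", 5)],
         "d": []}
print(dijkstra(roads, "a"))   # {'a': 0, 'b': 3, 'c': 1, 'd': 4}
```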

Susanne Albers spoke about online competitive analysis. Paging, with resource augmentation: paging is k/(k-h+1)-competitive if OPT has h <= k pages available while the algorithm has k. For the list-update problem, she proposed and analyzed a new locality model based on runs. q (Aranyak): What is the benchmark for running scheduling experiments? A: Hard; sometimes one needs deadlines for energy efficiency. Tim: How many of these online-algorithm ideas have usefulness elsewhere? Online ad allocation would be an example.
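The resource-augmentation effect is easy to see in a tiny LRU simulator (my own toy, not from the talk): on a cyclic request sequence over k+1 distinct pages, LRU with k frames faults on every request, while a single extra frame eliminates all faults after the warm-up.

```python
# Tiny LRU paging simulator (my toy) illustrating resource augmentation:
# one extra frame turns worst-case thrashing into cold-start faults only.
from collections import OrderedDict

def lru_faults(requests, frames):
    cache = OrderedDict()                 # keys kept in LRU -> MRU order
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)       # mark as most recently used
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False) # evict least recently used
            cache[page] = True
    return faults

k = 4
requests = list(range(k + 1)) * 10        # cycle through k+1 pages, 10 rounds
print(lru_faults(requests, k), lru_faults(requests, k + 1))
```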

----

Altogether, I thought the quality of the talks was very high on the first day. I could not follow the workshop with as much attention as it deserved on the second and third days (it was vaguely unsettling to hear Richard Karp talk using ppt slides. :))

