Simons Science Series: Sanjeev Arora
Sanjeev Arora needs no introduction for us CSists. To the larger audience of scientists and mathematicians at the Simons Science Series, David Eisenbud introduced him by pointing out his distinguished trajectory in academia.
Sanjeev started the talk with an informal view of P vs NP, phrasing it as "Can brilliance --- someone gives us the solution, we check it and exclaim, why didn't I think of it? --- be automated?" The mathematicians briefly struggled with the ease of checking decision vs optimization versions of a problem. Some in the audience needed to know what n was (input size in bits), whether P/NP was machine-dependent (under the strong Church-Turing hypothesis, no), or whether randomization helps (not believed to matter for P vs NP). Given NP-completeness, how do we approach the world? Sanjeev laid out the heuristic code-it-up approach vs average-case analysis vs approximation theory. He then used max-cut as an example from approximation theory: the Goemans-Williamson (GW) approximation, and inapproximability both without and with the UGC (no time to explain the UGC in the talk). The audience near where I was sitting expressed some wonder at these numbers and their tightness. Sanjeev used this moment to make a meta point: the failure of the SDP on some instance can be turned into an inapproximability result. He then discussed the PCP theorem, motivating NP = PCP(log n, O(1)), gave an "impressionistic proof" of gap amplification, and described Dinur's checking algorithm for max-cut. The rest of the talk meandered through KKL, Euclidean TSP, etc.
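For readers curious where the famous GW numbers come from: the guarantee falls out of random-hyperplane rounding, where two unit vectors at angle theta are separated with probability theta/pi while contributing (1 - cos theta)/2 to the SDP value. Minimizing the ratio over theta gives the approximation constant. A minimal numerical sketch (my own illustration, not from the talk; the grid-search minimization is deliberately crude):

```python
import math

def gw_ratio(theta):
    # Ratio of rounding probability (theta / pi) to SDP contribution
    # ((1 - cos theta) / 2) for an edge whose vectors sit at angle theta.
    return (theta / math.pi) / ((1 - math.cos(theta)) / 2)

# Crude grid search over (0, pi]; the minimum is the GW constant ~0.8786.
alpha = min(gw_ratio(i * math.pi / 100000) for i in range(1, 100001))
print(round(alpha, 4))  # ~0.8786
```

Under the UGC, this same constant is the inapproximability threshold, which is the tightness the audience was marveling at.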
The audience had many questions: does the PCP theorem apply to natural-language math proofs (i.e., can we replace referees at journals)? In practice, what would it mean if P = NP? (Crypto would collapse; SAT in o(n^2) might change worlds.) Do people work on P vs NP? (Sanjeev didn't fall for the temptation to give examples of attempts, and said that when you work on a problem you constantly keep both algorithms and complexity in mind, e.g., wondering, if I solve this piece, will it have a bad implication?) Any connection to machine learning and data mining? (Some of the key problems there are hard to approximate.) Etc.
Well, Sanjeev is a prince of a researcher, and he did a great job of presenting central thoughts related to the P vs NP question. I was very pleasantly surprised by how the audience related to the PCP theorem: many in the audience, seeing it for the first time, really saw the power of the theorem and its enormous implications. For the evening, maybe without intending to, Sanjeev represented the SIGACT Committee for the Advancement of Theoretical CS ("Increase awareness of theoretical computer science's activity and successes in general CS, and the public at large.")
2 Comments:
"Meta point: the failure of the SDP on some instance can be turned into an inapproximability result."
-- By recent results such as Raghavendra's, this is no longer just a meta point but a precise theorem.
That is exactly what Sanjeev was referring to, but without explicitly listing the results.
-- Me too