Friday, March 19, 2010

Monitoring user feedback on the net

It takes a lot of talent to look at an applied situation, abstract a theoretical CS problem from it, solve it, and have the solution really matter in the application. You have to be creative and balance the pull of the application toward the ad hoc against the pull of the theory toward the principled and analyzable. If you lean too far in either direction, you get speared. Mark Sandler pulled off a nice example recently:

The problem: there is a lot of user-generated content on the web, and it may be inappropriate for many reasons. We don't have the algorithmic and human resources to monitor all of it and decide what is appropriate and what should be removed. So one solution is to let users on the web monitor the content and provide feedback (keep or remove). Now the problem is that the feedback itself may not be correct, and we need to monitor the feedback! A solution is to carefully select a few pieces of feedback to be vetted by human raters. What is a suitable formulation of the problem of trading off missing some of the inappropriate content vs. using far too many human raters? Mark and I worked on this and tried many different formulations, and Mark ultimately came up with a very nice framework. We have a paper on it in the upcoming WWW conference. The paper leaves open some probabilistic analyses which could be fun.
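To give a feel for the tradeoff (this is only a toy sketch, not the framework from the paper; the item counts, vote-noise model, and thresholds below are all made up), consider escalating an item to a human rater only when enough users vote to remove it. Rater load and missed bad items then move in opposite directions as the escalation threshold varies:

```python
import random

# Toy simulation (NOT the framework from the paper): items receive noisy
# keep/remove votes from users; an item is escalated to a human rater
# only when the fraction of "remove" votes crosses a threshold.

random.seed(0)

N_ITEMS = 10_000      # assumed number of items
P_BAD = 0.05          # assumed fraction of inappropriate items
VOTES_PER_ITEM = 20   # assumed number of user votes per item
P_CORRECT = 0.7       # assumed probability a single user vote is correct

def remove_fraction(is_bad: bool) -> float:
    """Fraction of 'remove' votes an item receives under noisy voting."""
    # A vote says "remove" iff it is correct on a bad item, or
    # incorrect on a good item.
    votes = sum(
        (random.random() < P_CORRECT) == is_bad
        for _ in range(VOTES_PER_ITEM)
    )
    return votes / VOTES_PER_ITEM

items = [random.random() < P_BAD for _ in range(N_ITEMS)]
scores = [remove_fraction(bad) for bad in items]

for threshold in (0.3, 0.5, 0.7):
    escalated = sum(s >= threshold for s in scores)
    missed = sum(items[i] for i in range(N_ITEMS) if scores[i] < threshold)
    print(f"threshold={threshold}: {escalated} items sent to raters, "
          f"{missed} bad items missed")
```

Raising the threshold cuts the rater load but misses more inappropriate content; finding a principled way to set that dial, rather than tuning it ad hoc, is the kind of question the paper formalizes.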
