Dragons in the Algorithm

Archives: March 2012

Constant Crawl Design – Part 4

Suppose you wanted to build a tool that captured the websites a user visited and kept a record of the public ones, while keeping the users completely anonymous so that their browsing history could not be determined. One of the most difficult challenges would be finding a way to decide whether a site was “public”, and to do so without keeping any record (not even on the user’s own machine) of the sites visited, and without even tying the different sites together under one ID (even an anonymous one). more…
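
To make the difficulty concrete, here is a naive threshold-count sketch (my own illustration, not the design from the post; the threshold, class, and helper names are all invented). It treats a URL as “public” once enough anonymous reports arrive, and the comments point out where the anonymity requirement breaks it:

    import hashlib
    import secrets

    PUBLIC_THRESHOLD = 50  # invented number: reports needed before a URL counts as "public"

    def url_fingerprint(url):
        # The network only ever sees a fixed-size hash, never the URL itself.
        return hashlib.sha256(url.encode("utf-8")).hexdigest()

    class PublicSiteCounter:
        # Tallies anonymous reports per URL fingerprint.
        def __init__(self):
            self.reports = {}  # fingerprint -> set of report tokens

        def report(self, fingerprint, token):
            # 'token' is a one-time random value, NOT a user ID, so no two
            # reports (and no two URLs) can be tied back to the same person.
            self.reports.setdefault(fingerprint, set()).add(token)

        def is_public(self, fingerprint):
            return len(self.reports.get(fingerprint, ())) >= PUBLIC_THRESHOLD

    counter = PublicSiteCounter()
    counter.report(url_fingerprint("http://example.com/"), secrets.token_hex(16))
    # The flaw: nothing stops one user from submitting fifty tokens, and fixing
    # that seems to require exactly the per-user record keeping we ruled out.

The flaw in the sketch is the whole problem: with one-time tokens you cannot tell whether the reports came from distinct users, and with persistent per-user tokens the users are no longer anonymous.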

Constant Crawl Design – Part 3

Suppose you were building a tool, integrated with web browsers, to anonymously capture the (public) websites that a user visited and store them on a P2P network shared by the users of the tool. What would the requirements be for this storage network? more…

Constant Crawl Design – Part 2

Suppose you were building a tool to anonymously capture the (public) websites that a user visited. What would the UI requirements be? more…

Constant Crawl Design – Part 1

Do you remember Google Web Accelerator? The idea was that you downloaded all your pages through Google’s servers. For content that was static, Google could just load it once, then cache it and serve up the same page to every user. The advantage to the user was that they got the page faster, and more reliably; the advantage to Google was that they got to crawl the web “as the user sees it” instead of just what Googlebot gets… and that they got to see every single page you viewed, thus feeding even more into the giant maw of information that is Google.

Well, Google eventually dropped Google Web Accelerator (I wonder why?), but the idea is interesting. Suppose you wanted to build a similar tool that would capture the web viewing experience of thousands of users (or more). For users, it could provide a reliable source for sites that go down or get hit with the “slashdot” effect. For the Internet Archive or a smaller search engine like Duck Duck Go, it would provide a means of performing a massive web crawl. For someone like the EFF or human-rights groups, it would provide a way to monitor whether some users (such as those in China) are being “secretly” served different content. But unlike Google Web Accelerator, a community-driven project would have to solve one very hard problem: how to do this while keeping the user’s browsing history secret — the exact opposite of what Google’s project did. more…
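
To see why the caching half of the idea is the easy half, here is a minimal sketch (my own illustration in Python, not Google’s implementation; the class name is invented). Load a static page once, keep a copy, and hand the same bytes to every later user:

    import urllib.request

    class CachingFetcher:
        # Load each static page once, then serve the same copy to everyone.
        def __init__(self):
            self.cache = {}  # url -> page bytes

        def fetch(self, url):
            if url not in self.cache:
                with urllib.request.urlopen(url) as response:
                    self.cache[url] = response.read()
            return self.cache[url]

    fetcher = CachingFetcher()
    first = fetcher.fetch("https://example.com/")   # first user: real network fetch
    second = fetcher.fetch("https://example.com/")  # later users: served from the cache

A real version would honor the HTTP cache-control headers and refuse to cache dynamic or private pages; the hard part, as the excerpt says, is not the caching but the privacy.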
