Story Points Aren’t Accurate – That’s Why They’re Good

Ben Northrop wrote to complain that story points are not accurate. They don’t (always) map linearly to hours spent, so adding up story points over a large project won’t accurately give hours for the project. In the spirit of expressing controversial opinions, I will agree, and explain why I think that’s a good thing.

I believe that story points serve as a “rough” estimate. In the teams I work with, story point estimates are made quickly (a few minutes to make sure we understand the story, then a quick discussion to reach a consensus estimate). They are also quantized (rounded off to some Fibonacci number), which means that any given estimate is necessarily imperfect.

As such, they provide a cheap (quick to generate) but rough (not perfectly accurate) estimate, and they have to be respected as such. Story point estimates would not be useful for answering questions like “Will this project deliver in October or November?”, but they ARE useful for questions like “Would this be a 3-month project or a 1-year project?” For some purposes a more precise estimate is needed, and then it may be necessary to invest anywhere from a few hours to a few weeks of detailed work to generate one. However, I think such situations are rare: people *want* perfect estimates ahead of time but rarely *need* them. I also think that people are usually fooling themselves: most (usually waterfall) projects with precise up-front estimates later discover that those estimates were not accurate.

One of the strengths of story points is that everyone (including the customer) REALIZES that they are rough and don’t correspond to a precise delivery date — something that can be difficult to explain for estimates expressed in hours.
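As a rough illustration of the kind of question story points *can* answer (the numbers and the helper function here are mine, not from the post), projecting a backlog against a velocity range yields a duration range rather than a date:

```python
import math

def duration_range_sprints(total_points, velocity_low, velocity_high):
    """Return (best-case, worst-case) number of sprints, rounded up.

    A deliberately rough projection: story points divided by a
    plausible range of team velocities, nothing more precise.
    """
    best = math.ceil(total_points / velocity_high)
    worst = math.ceil(total_points / velocity_low)
    return best, worst

# A hypothetical 400-point backlog with a velocity of 20-35 points/sprint:
best, worst = duration_range_sprints(400, 20, 35)
print(best, worst)  # 12 20 -- a range of sprints, not a delivery date
```

Note that the honest answer is a range wide enough to distinguish "months" from "a year", which is exactly the granularity the post argues story points are good for.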

Constant Crawl Design – Part 4

Suppose you wanted to build a tool for anonymously capturing the websites that a user visited: keeping a record of the public sites while keeping the users completely anonymous, so that their browsing history could not be determined. One of the most difficult challenges would be finding a way to decide whether a site was “public”, and to do so without keeping any record of the sites visited (not even on the user’s own machine) and without linking the different sites together under a single ID (even an anonymous one). [More...]
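One family of techniques for this kind of problem (an assumption on my part, not necessarily the design this series arrives at) is threshold counting: the network treats a URL as public only after enough independent reports of the same one-way hash of it arrive, so it stores hashes and counts rather than URLs tied to users. A minimal sketch:

```python
import hashlib

K = 10  # illustrative threshold; a real system would tune this carefully

def url_token(url: str, epoch_salt: str) -> str:
    """One-way token for a URL; rotating the salt lets old tokens expire."""
    return hashlib.sha256((epoch_salt + url).encode()).hexdigest()

# In a real system these counts would live in the P2P network,
# keyed by token -- never by user.
counts = {}

def report(url: str, salt: str) -> bool:
    """Record one anonymous sighting; return True once the URL has
    crossed the threshold and could be considered public."""
    t = url_token(url, salt)
    counts[t] = counts.get(t, 0) + 1
    return counts[t] >= K
```

This sketch deliberately ignores the hard parts the post alludes to (preventing one user from reporting K times, and keeping reports unlinkable); it only shows the shape of the idea.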

Constant Crawl Design – Part 3

Suppose you were building a tool integrated with web browsers to anonymously capture the (public) websites that a user visited and store them to a P2P network shared by the users of this tool. What would the requirements be for this storage P2P network? [More...]

Constant Crawl Design – Part 2

Suppose you were building a tool to anonymously capture the (public) websites that a user visited. What would the UI requirements be? [More...]

Constant Crawl Design – Part 1

Do you remember Google Web Accelerator? The idea was that you downloaded all your pages through Google’s servers. For content that was static, Google could just load it once, then cache it and serve up the same page to every user. The advantage to the user was that they got the page faster, and more reliably; the advantage to Google was that they got to crawl the web “as the user sees it” instead of just what Googlebot gets… and that they got to see every single page you viewed, thus feeding even more into the giant maw of information that is Google.

Well, Google eventually dropped Google Web Accelerator (I wonder why?), but the idea is interesting. Suppose you wanted to build a similar tool that would capture the web viewing experience of thousands of users (or more). For users it could provide a reliable source for sites that go down or get hit with the “slashdot” effect. For the Internet Archive or a smaller search engine like Duck Duck Go, it would provide a means of performing a massive web crawl. For someone like the EFF or human-rights groups it would provide a way to monitor whether some users (such as those in China) are being “secretly” served different content. But unlike Google Web Accelerator, a community-driven project would have to solve one very hard problem: how to do all this while keeping each user’s browsing history secret, the exact opposite of what Google’s project did. [More...]

Host Error 2

Another posting on how to understand Profile errors. [More...]

Removing the “Macros” warning in PowerPoint

When you open any PowerPoint presentation made from my company’s default presentation format, you get a warning that it contains macros, asking whether the macros should be disabled. The macros are useless, but removing this warning is somewhat awkward and difficult to remember, so I’m writing down the instructions. [More...]

Using a Mix of Computers and Humans for Security

Suppose that your bank offers currency conversion as a service: give them a deposit or make a withdrawal in euros and they’ll adjust your balance in dollars. They don’t do this out of the goodness of their hearts: today’s conversion rate is around 1.28 $/€, and they’d give you 0.75 € for every $ and 1.25 $ for every €, making a good 6.5% margin on the conversions. [More...]
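The quoted margin checks out: measured against the 1.28 $/€ market rate, the bank takes about 2.3% when buying euros from you and about 4.2% when selling euros to you. A quick sanity check of that arithmetic:

```python
# Verify the bank's combined margin against the market rate of 1.28 $/€.
market = 1.28               # $ per €, the fair rate
bank_buys_euro = 1.25       # $ the bank pays you per € (withdrawal)
bank_sells_euro = 1 / 0.75  # $ you effectively pay per € (0.75 € per $)

buy_margin = (market - bank_buys_euro) / market    # ~2.3%
sell_margin = (bank_sells_euro - market) / market  # ~4.2%

print(round((buy_margin + sell_margin) * 100, 1))  # 6.5
```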

Namespace for a valid SOAP message

A brief hint: if you see an error message like this:

InputStream does not represent a valid SOAP 1.1 Message

check the namespace of the SOAP envelope. It must match the SOAP version in use:

SOAP 1.1: http://schemas.xmlsoap.org/soap/envelope/

SOAP 1.2: http://www.w3.org/2003/05/soap-envelope
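A quick way to check which namespace a message actually declares (illustrative Python; any XML parser will do):

```python
import xml.etree.ElementTree as ET

SOAP11 = "http://schemas.xmlsoap.org/soap/envelope/"
SOAP12 = "http://www.w3.org/2003/05/soap-envelope"

def soap_version(xml_text: str) -> str:
    """Report which SOAP version a message's Envelope namespace declares."""
    root = ET.fromstring(xml_text)
    # ElementTree renders qualified tags as "{namespace}Envelope"
    ns = root.tag[1:].split("}")[0] if root.tag.startswith("{") else ""
    if ns == SOAP11:
        return "1.1"
    if ns == SOAP12:
        return "1.2"
    return "unknown (%s)" % ns

msg = '<soap:Envelope xmlns:soap="%s"><soap:Body/></soap:Envelope>' % SOAP11
print(soap_version(msg))  # 1.1
```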

Binary Backward Compatibility

I saw this interesting article about a weakness in the Scala language. The weakness applies not just to Scala, but to pretty much any language: the community using the language cannot grow past a certain point until it somehow solves the problem of libraries depending on other libraries in a large (deep) tree. [More...]