Stackoverflow.com

There is a good article on ReadWriteWeb about the principles driving the development of stackoverflow.com, a site where programmers get help with their coding problems.

I was particularly struck by the design points where Spolsky highlights the frustration created by wrong answers and obsolete results.

I can remember when I was able to circumnavigate the web, through a search engine, on the topic of the history of photography. The web was that small. In a week I could see everything there was to see online about the history of photography, a week of drudgery wading through duplicate results, page after page, until I was sure I had seen everything on my topic. Although the results were filled with a fair amount of junk and duplicates, I could still find a single web page if it contained sufficiently unique keywords. Until about a year before Google emerged, I relied on AltaVista to take me back to a web page in one go when, for example, I could not remember where I had found a code solution on some obscure personal page. Then the search engines began to fail me, and single pages I had found before became nearly impossible to find. Eventually search engine technology improved, and with Google you could again find that one blog page with the code. That solved the first problem, the problem of finding things.

Spolsky is right to observe that the problem now is that search fails to distinguish between correct and incorrect answers, and between current and obsolete answers to technical questions.

When I first started programming with Microsoft Visual C++ (I was just a dabbler), I had a question about how to render bitmap graphics. I turned to MSDN, the library of articles and code intended to help developers, and was happy when search quickly turned up an article on how to introduce bitmaps into your application. After an hour or two of reading, it slowly dawned on me that the author was not talking about what I was familiar with, Microsoft Foundation Class applications. I was seeing unfamiliar code and unfamiliar techniques. I glanced up at the date. The article was from the mid 1990s. It was about coding in C under Windows before MFC was introduced. The first, supposedly most relevant, documents search had brought up from MSDN were completely obsolete, describing how to code without an application framework. I had wasted hours reading the wrong articles.

Stackoverflow.com is an example of a great site. It is well designed; the developers learned the lessons of the last fifteen years of web technology and applied them. It is a clean, beautifully presented, and well organized site. I have to admit they did right what I failed to do with phphelp.com, which started by envisioning many of the same goals. They had the courage to go ahead with "soft security," collaborative editing, and surfacing and valuing content through a user voting system. Of course, with that volume of content and edits, such tools are necessary. What two humans could watch and police such a flow of content while doing their day jobs? User-contributed and user-curated content is the only rational answer.

(By the way, it would probably be better to describe their principles as informed by behavioral economics, or an evolutionary branch of that field, than by anthropology or social psychology. The way people use voting systems to surface content, the "soft" social engineering strategies employed on wikis, and so on, seem to me close to the phenomena studied by behavioral economics, which is not only about financial choices.)
