There is a good article on ReadWriteWeb about the principles driving the development of stackoverflow.com, a site where programmers get help with their coding problems.
I was particularly struck by the design points where Spolsky highlights the frustration created by wrong answers and obsolete results.
I can remember when I was able to circumnavigate the web, through a search engine, on the topic of the history of photography. It was that small. I could see everything there was to see about the history of photography online in a week: a week of drudgery, wading through duplicate results page after duplicate results page, until I was sure I had seen everything on my topic. Although the results were filled with a fair amount of junk and duplicates, I could still find a single web page if it contained sufficiently unique keywords. Until about a year before Google emerged, I relied on AltaVista to take me back to a web page in one go, when, for example, I could not remember where I had found a code solution on some obscure personal page. Then the search engines began to fail me, and single pages I had found before became nearly impossible to find again. Eventually, search engine technology improved, and with Google you could once more find that one blog page with the code. That was once the solution to the problem of finding things.
Spolsky is right to observe that the problem now is that search fails to distinguish between correct and incorrect answers, and between current and obsolete answers to technical questions.
When I first started programming with Microsoft Visual C++ (I was just a dabbler), I had a question about how to render bitmap graphics. I turned to MSDN, the library of articles and code intended to help developers, and was happy when search quickly turned up an article on how to introduce bitmaps into your application. After an hour or two of reading, it slowly dawned on me that the author was not talking about what I was familiar with: Microsoft Foundation Class (MFC) applications. I was seeing unfamiliar code and unfamiliar techniques. I glanced up at the date. The article was from the mid 1990s, about coding C under Windows before MFC was introduced. The first, supposedly most relevant, document search had brought up from MSDN was completely obsolete, describing coding without an application framework. I had wasted hours reading the wrong articles.
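For contrast, here is a minimal sketch of the kind of MFC-era answer I was actually looking for: load a bitmap resource and blit it from the view's OnDraw. This is only a fragment of a full MFC application; the class name CMyView and the resource ID IDB_PHOTO are hypothetical.

```cpp
#include <afxwin.h>  // MFC core classes (CView, CDC, CBitmap)

// Fragment of a hypothetical CView-derived class; IDB_PHOTO is an
// assumed bitmap resource compiled into the application.
void CMyView::OnDraw(CDC* pDC)
{
    CBitmap bitmap;
    if (!bitmap.LoadBitmap(IDB_PHOTO))       // load from the app's resources
        return;

    CDC memDC;                               // off-screen DC for the blit
    memDC.CreateCompatibleDC(pDC);
    CBitmap* pOld = memDC.SelectObject(&bitmap);

    BITMAP bm;
    bitmap.GetBitmap(&bm);                   // query pixel dimensions
    pDC->BitBlt(0, 0, bm.bmWidth, bm.bmHeight, &memDC, 0, 0, SRCCOPY);

    memDC.SelectObject(pOld);                // restore before memDC is destroyed
}
```

Nothing exotic: the framework hands you the device context, and a dozen lines do the job. The raw-C article I had been reading spent pages on window procedures and GDI handles before it ever got to the bitmap.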
Stackoverflow.com is an example of a great site. It is well designed; the developers learned the lessons of the last fifteen years of web technology and applied them. It is a clean, beautifully presented, and well-organized site. I have to admit they did right what I failed to do with phphelp.com, which started out with many of the same goals. They had the courage to go ahead with "soft security," collaborative editing, and surfacing and valuing content through a user voting system. Of course, with that volume of content and edits, such tools are necessary. What two humans could watch and police such a flow of content while doing their day jobs? User-contributed and user-curated content is the only rational answer.
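To illustrate that last mechanism, here is a minimal, entirely hypothetical sketch of how a voting system surfaces content: each answer carries a running score of up- and down-votes, and display order simply follows the score. This is my own toy illustration, not Stack Overflow's actual implementation.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: answers accumulate votes, and the community's
// judgment, not an editor, decides what rises to the top.
struct Answer {
    std::string body;
    int score;  // upvotes minus downvotes
};

// Surface the best answers first; stable_sort preserves posting order
// among answers with equal scores.
void surfaceByVotes(std::vector<Answer>& answers)
{
    std::stable_sort(answers.begin(), answers.end(),
        [](const Answer& a, const Answer& b) { return a.score > b.score; });
}

int main()
{
    std::vector<Answer> answers = {
        {"Use BitBlt via a memory DC.", 12},
        {"Just call SetPixel in a loop.", -3},
        {"Load the bitmap as a resource first.", 7},
    };
    surfaceByVotes(answers);
    for (const auto& a : answers)
        std::cout << a.score << "  " << a.body << "\n";
}
```

The point of such a design is that correctness and currency are judged continuously by thousands of readers, which is exactly the signal that general web search, as noted above, lacks.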
(By the way, it would probably be better to describe their principles as being informed by behavioral economics, or an evolutionary branch of that field, than by anthropology or social psychology. The way people use voting systems to surface content, the way "soft" social-engineering strategies are employed on wikis, and so on, seem to me close to the phenomena studied by behavioral economics, which is not only about financial choices.)