14 December 2009

A can of snakes and haiku

The Zen master Dogen says we ought "to be actualized by myriad things." I take this to suggest sympathy with the haiku and its creative process, a poetic form centered on paying attention to things, allowing something to become 'ensouled' or noticed, and thus actualized. The haiku form itself has actualizing things as its purpose. For haiku does not exist to tell you about the poet's experience, but to recreate the poet's experience in you. Simply, a haiku is a "canned" actualization, like those joke cans of snakes: the hapless victim opens the can and a fake snake jumps out. It is as if the poet saw a snake, then made the joke can and gave it to you as a way of experiencing the fright he felt upon first seeing the snake.

(Who was Dogen? He was a thirteenth-century Japanese Zen master, and the quote is from a well-known passage in the Genjo-koan.)

19 November 2009

Visualization, Flickr and Look Magazine

I was thinking recently about the emergence of visualization as an important trend. Visualization is rapidly gaining mindshare among academics and information technology people. The overwhelming volume of data on the network is prompting this interest in visualization as a way of coping with an emergent, crushing tidal wave of data. There are billions of digital photographs online. I remember when I could see nearly every historic photograph available online in a week. Now there are trillions of texts and billions of images. The only way to make sense of this data, the only way to organize and explore it, may very well be through visualization.

Visualization is not photographs or illustrations; it is making data visible: numbers, statistics, metadata, information about texts or images, the activity of users, authors, creators, contributors, visitors, and so on. It means using visual means to make this kind of statistical data, and the architecture of information, visible and comprehensible.

I like to think about how to organize and present images and text, and I was comparing visualization to the way magazines organized them. In the heyday of the photographic magazines, the images were the most important thing, so they were printed large and allowed to run freely through the margins to the edges of the paper. The texts were small captions. This was an ideal presentation, expressive and easy to comprehend (note that presentation is not visualization). It allowed photo editors to engage in the creation of visual narrative through the juxtaposition of images. Using montage and arrangement on the page, the editor could create a mood or a story. Although text was secondary, it was essential to understanding the meaning of the content in the images. A photograph is just a jumble of meaningless or misleading objects without context.

It occurred to me that if a way could be found to tie the presentation of the photo magazine (Life, Look, etc.) to the idea of visualization, it could create a powerful new kind of experience. What if the presentation of pictures and text could be as satisfying and transparent as that of the picture magazines, but the mass of data the pictures and text are drawn from, say a collaborative photo and text site such as Flickr, could be exposed and explored through visualization? How could these two elements be combined?

18 November 2009

Koans, Cavaliers and Facebook

We are raised in a culture that teaches us to always look for answers or winners. We are taught to expect that the purpose of a question is to find the answer, that the goal of a game is to win. In Zen practice, students are given questions to ponder in the form of stories, called koans. I was taken aback when I realized the purpose of the koan is not to find an answer to the question, but to measure the progress the student has made in understanding Zen. It was a shock, for this rational and scientifically minded person, to consider there might be some other purpose to a question than finding the correct answer. Although one can find cheat sheets with common answers to Zen koans, there may be no correct answer to a koan. The answer, although interesting, is not the important point of practicing with a koan.

A koan is a measure, just as, for those raised in a 'fortuna' culture (a common belief among Virginia cavaliers was that each person was born with a certain amount of fortuna, or good luck, which could be modified by charms), games of chance are measures of how much fortune one possesses. The aim of games of chance, such as dice, is to discover how one's own fortune measures up against the other players', not to win. Sometimes a Zen student will supply a stock answer to a koan, but the experience is missed if you get the answer from a cheat sheet. A koan is there to help understanding. When Helen Keller learned the meaning of "water," she experienced a profound moment of realization, which ordinary children might never experience when learning the name for water. If she had used a cheat sheet, she would only have learned that water is water, not that the cold liquid on her hands was water; her understanding, if you could call it that, would have been divorced from experience.

It was a new concept for me to absorb, that people are interested in tests telling them something about themselves, measures of who they are. Yet I should not have been surprised, since such quizzes are popular on Facebook: what are your guilty pleasures, what would you do if you could go back in time, what superpowers would you choose, and the ultimate question is probably "How Well Do You Know Me?"

01 October 2009

The new manuscript culture

In a 'manuscript' culture, the distinction between written and verbal text is not as sharp as it is in a culture dependent on printing. It appears we are heading into a new era of manuscript culture as socially networked content emerges and comes to dominate, as documents become conversations and conversations become documents. In a manuscript culture, such as the period in the West before the invention of movable type, or in China before printing became universal, manuscripts offered a 'more fluid transfer of information': the copyist (think of 'retweeting' or sharing information socially) could make purposeful or inadvertent changes to the text, leave sections out or add new ones, or combine the text with illustrations (as in illuminated manuscripts, perhaps similar to Storybird).

(Reference: http://goodlifezen.com/wp-content/uploads/2007/11/the-road-to-nowhere.pdf)

26 September 2009

What's wiki got to do with medicine?

The idea of cutting off blood flow to tumors, and of using 'diplomacy' to convince cancer cells to listen to the better angels of their nature, is like the 'soft security' and 'human engineering' schemes used to protect wikis from vandalism. These approaches run counter to the scorched-earth policy of killing all cancer cells. But medicine is now looking to solutions in epigenetic therapy and other methods, which attempt to negotiate with disease instead of destroying it.

We're so used to locks and gates, slings and arrows, we forget other ways.

05 September 2009

A useful poke

I am trying to use Facebook more. I do not have a lot of use for the "poke" tool, which seems a bit childish. But I wonder if the poke would be useful applied to content. What if you sent someone a book you thought they should read or wanted to discuss, say a link to an ebook, and they didn't reply? You could poke them about that specific piece of content, not just poke them generally.

01 September 2009

My hands are small, I know

My hands are small, I know
But they're not yours, they are my own
But they're not yours, they are my own
And I am never broken
--Jewel, Hands, Spirit


07 August 2009

Twitter outage creates panic

According to CNN, the Twitter outage left users feeling as if they had lost a limb or left home without their cell phone. It is suggested Twitter needs competition to provide alternatives when an outage occurs, as one inevitably will. There were (or are?) a couple of alternative social microblogging services available (is Jaiku still running?). Of course, this won't help if multiple sites get attacked at once.

What would help is Google Wave. This outage is an incredible opportunity to demonstrate the potential resiliency of a "federated" or distributed social media system. Content in the Google Wave universe is independent. Every user can have a copy of a bundle of posts, comments and content on their own device. Multiple copies can exist on different servers. It would be possible for a group to continue working, or at least work offline with their content, during an outage; when the connection is reestablished, the changes can be merged back into the conversation. This is what we can do with email now. We can read messages in our inbox (as long as it is not webmail) even when the network is unavailable. We can keep messages around as long as we want. We can write draft replies, take notes, reuse content through quoting or editing the text we receive and, at any time later, forward to others or send the revisions back to the sender.
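To make the email analogy concrete, here is a minimal sketch in Python of what merging offline changes back into a shared conversation could look like. The message structure and merge rule are my own illustration, not the actual Wave protocol.

    # Minimal sketch of offline-tolerant merging in a federated
    # conversation. Hypothetical structure, not the Wave protocol.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Message:
        msg_id: str       # globally unique id, e.g. "server/author/uuid"
        timestamp: float  # authoring time, used only for display order
        author: str
        text: str

    def merge(local: list[Message], remote: list[Message]) -> list[Message]:
        """Union two copies of a conversation by message id.

        Each device keeps working offline; when the connection returns,
        both sides exchange messages and converge on the same set.
        """
        by_id = {m.msg_id: m for m in local}
        for m in remote:
            by_id.setdefault(m.msg_id, m)
        return sorted(by_id.values(), key=lambda m: m.timestamp)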

What did I do during the panic? I just waited for Twitter to come back up. I only post once a day (if I am feeling up to it).

06 August 2009

You can't just put content behind a blank wall

I caught a discussion of News Corp's new plan to get users to pay for online news content. It will be difficult to sell news online because there are so many fragmentary ways to get the news for free. If any scheme for getting online users to pay for news is to work, it has to be easy. No matter what online news sources do, they must make paying for the news as easy and transparent as iTunes, as easy as putting a coin in a paper box at the corner bus stop. The pricing is not as important as the convenience.

Also, the customer must have a feel for the worth of the content before they buy, or they must get a bulk subscription cheap enough that they can take the irrelevant, incomplete, incompetent or useless along with the relevant, complete, competent and useful content. I hate sites that put up a poorly written summary and a login or subscription screen. It breaks the rhythm of navigation on the web when a link leads to nothing. It stops you cold and punishes the user for following a link. It would be a sad web of balkanized content with links as obstacles. If content is to be shuttered behind closed doors, it must be quick and easy to open those doors with some kind of universal pass, like OpenID connected to a micropayment system.

It started me thinking again about how to get online users to pay for content. You can't just put content behind a blank wall and expect it to work. No one will ever find it, be able to search for it, or have search engines index it. It's not enough to provide a metadata summary the way a bibliographic catalog does. Metadata will never be the answer to our search problems, at least not as long as humans are responsible for providing it. Nearly everyone ignores metadata, fails to include it, or includes incomplete or incorrect metadata. Who is going to keep all this metadata up to date? No, this is unworkable. Metadata must be generated automatically from content, and that is subject to a high error rate using current technology.

The solution Google Books provides gets much closer to a real answer to the problem of hidden content. Instead of trying to describe the content using faulty, hard-to-maintain metadata, why not grant access to a sample of the content? This is much closer to a successful model for selling content online. When I read a book in Google Books, I get a random sample of pages around my keywords. Each user receives a custom sample of content tailored to their interests and needs. In my experience, reading a few pages of a book without restriction, as I would in a bookstore, gives a feel for the content. I am more likely to buy the book if it proves useful repeatedly over several searches. Yes, sometimes I find what I want in the sample pages, but I generally bookmark the source, take down the title in my notes and cite the source in any work derived from the information gleaned for "free," which I think is a fair exchange for a citation and a link.
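The sampling idea is simple enough to sketch. Here is a rough Python illustration of "a few pages around my keywords"; this is only my guess at the shape of such a scheme, not Google's actual algorithm.

    # Sketch of keyword-anchored sampling: each user may preview a few
    # pages around their query hits, never the whole book. Illustrative
    # only, not Google's actual algorithm.
    import random

    def sample_pages(page_texts: list[str], query: str,
                     window: int = 2, max_pages: int = 8) -> list[int]:
        """Return page numbers the user may view around keyword hits."""
        hits = [i for i, text in enumerate(page_texts)
                if query.lower() in text.lower()]
        allowed = set()
        for i in hits:
            allowed.update(range(max(0, i - window),
                                 min(len(page_texts), i + window + 1)))
        pages = sorted(allowed)
        # Cap the preview so the whole book is never exposed.
        if len(pages) > max_pages:
            pages = sorted(random.sample(pages, max_pages))
        return pages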

I do not understand the hostility and opposition to Google Books. I am willing to pay less but buy more books in electronic form for reference purposes. If I find an interesting book in Google Books that is not one I would pay $30 for in hardback, I would still pay $10 to download it to my book reader. If I have to pay $30 for one book, it is going to be the one I value most and need the information from most, one I want to keep around for a lifetime, not a casual read or reference work.

There are books I would buy on the reader as convenient portable references. I would buy more ebooks at lower cost to fill out the "search space" of texts on my ebook reader. If a book adds to the information I have available on a subject, but only partially or tangentially, I can't afford a $30 hardback; I can, however, afford three $9 works related to my subject to add to the search space on the reader.

An idea I had a long time ago, when I was wondering how to pay for hosting my first website, was the "vanishing page" model. This would work a bit like PBS, where content slowly disappears unless readers pay a small fee to keep it available. The individual reader does not pay for each content page; instead, similar to donations to PBS, a small number of readers or viewers pays for free access by others (this actually gives the donor a feeling of superiority: if it were not for me...). Mechanically, the web page would be publicly available to all readers and search engines, but a count of page views would be kept. Each time the page is viewed, the number of views or days left would be decremented by some amount. A button for making instant micropayments would be displayed on the page, along with a thermometer showing how close the page is to being removed from the site. If enough people donate, days (or credits; it could be a ratio of views to donations, similar to BitTorrent) are added to the life of the page; if not, it is replaced by a summary and a button to start donating again.
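The mechanics are simple enough to sketch in a few lines of Python. The class name and the credits-per-cent ratio are illustrative, not a real billing system.

    # Sketch of the "vanishing page" model: each view spends credits,
    # each donation buys more, and a page whose credits run out is
    # replaced by its summary and a donation button.
    class VanishingPage:
        def __init__(self, content: str, summary: str, credits: int = 1000):
            self.content = content
            self.summary = summary
            self.credits = credits

        def view(self) -> str:
            """Serve the page, decrementing its remaining life."""
            if self.credits <= 0:
                return self.summary + "\n[Donate to restore this page]"
            self.credits -= 1
            return self.content + f"\n[{self.credits} views left - donate!]"

        def donate(self, cents: int, views_per_cent: int = 20) -> None:
            """A small micropayment extends the page's life for everyone."""
            self.credits += cents * views_per_cent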

What we need are ingenious "social engineering" methods to get people to buy content online, similar to the ones used to manage "soft security" on wikis. We need soft methods, like Google Books, which gives readers a peek into books that might interest them.

31 July 2009

Yoshihisa Maitani Dies At 76

The man behind the Olympus Pen cameras, the OM-1 and the XA, Yoshihisa Maitani, died yesterday at age seventy-six. He lived to see his Pen camera system reborn as the Pen Digital through the work and enthusiasm of a new generation of engineers. The new E-P1 is not an exercise in nostalgia but a camera that acknowledges its ancestry while breaking new ground with its mirrorless design, compact lenses and in-camera digital image processing.

Olympus OM-2n w/ OM 50mm f/1.4 lens

This photograph of my OM-2n looks a bit like a shrine in light of today's news.

Olympus XA-2

My original XA-2 bought circa 1986. A classic camera and novel industrial design from Maitani's hand.

22 July 2009

The ugliest Twitter post yet?

New generation of farmers. RT @FMCorg: 40 farmers under 40 http://tr.im/tnjB (via @civileater via @ediblechicago)
about 22 hours ago from web
Things are getting ugly on Twitter. This tweet I posted to @farmfoody is more citation than message. It is filled with gibberish: the "retweet" code, the vias, the cryptic tiny URL. Something will have to give, and eventually metadata will find its way back to where it belongs, hidden somewhere outside the message text, in the message envelope.
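Here is a small Python sketch of what "moving the metadata into the envelope" could look like for a tweet such as the one above. The field names and regular expressions are my own illustration, not any Twitter API.

    # Sketch: split a tweet into a clean message plus an "envelope"
    # holding the citation clutter (RT chains, vias, short URLs).
    import re

    def parse_tweet(raw: str) -> dict:
        envelope = {
            "retweet_of": re.findall(r"RT @(\w+)", raw),
            "via": re.findall(r"via @(\w+)", raw),
            "links": re.findall(r"https?://\S+", raw),
        }
        text = re.sub(r"RT @\w+:?|\(?via @\w+\)?|https?://\S+", "", raw)
        envelope["text"] = " ".join(text.split())  # collapse leftover spaces
        return envelope

    print(parse_tweet("New generation of farmers. RT @FMCorg: 40 farmers "
                      "under 40 http://tr.im/tnjB (via @civileater via "
                      "@ediblechicago)"))
    # The message text reduces to: "New generation of farmers. 40 farmers
    # under 40", with the RT, vias and link held in the envelope.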

17 July 2009

Apologies from the Ministry of Information

"Dear citizen, we recently discovered a problem with an ebook that you have purchased, the photograph showing the Dear Leader shaking the hand of the former enemy president has been corrected to show the Dear Leader shaking the hand of our new friends. We apologize for the inconvenience." -- Ministry of Information, 2012
How long will it be before our digital content is "corrected" for our convenience, not just pulled from our electronic readers in the middle of the night?

I bought a Kindle 2, but I returned it out of concern that my ebooks would not be my own books in the way my paper books are my own books. I was concerned about what might happen to my hundreds of dollars of books stored on the Kindle if Amazon went bankrupt. I worried that if the Kindle did not do well, they would shut down the DRM servers and my content would become inaccessible. So I sent it back (also, the screen was a bit small, and I wanted to loan it on occasion to my parents, who are elderly).

Today, I found in my Twitter stream an article in the New York Times: Some E-Books Are More Equal Than Others.

I never imagined this would be the start of an Orwellian world where the content of the books I own is edited at the whim of a publisher, or perhaps the government, whenever a judgment is rendered in a lawsuit, or a contract changes, or is perhaps misread and accidentally violated. We now enter the world of my cold war childhood, when it was common for Pravda to scrub undesirable persons from a photograph, or restore them once they were rehabilitated, changing history to suit political circumstance on a daily basis.

What will it be next from the Ministry of Information in the world of Big Publishing, "correcting" our photographs according to the latest lawsuit or government edict?

02 July 2009

HDR Chrome Effect in LightZone

A popular effect is the gritty HDR "chrome" look. After watching a video on the technique, I thought I'd give it a try in LightZone.

P6010916_lzn

This look is achieved without actual HDR, but with contrast, local contrast and saturation adjustments.

The steps in LightZone involve dropping a number of tools on the stack and adjusting them, which is a different approach than Photoshop or Lightroom.

  1. Drop a Hue/Saturation tool. Set Vibrance to 100.
  2. Drop a Sharpen tool. Set Amount to 500 and Radius to 50, adjust until you get a "comic book" or "chrome" look (strong blacks, faded, three-dimensional pastel colors).
  3. Drop a Zone Mapper tool. Define points at 2 steps down from white, 5 steps down from white and 4 or 5 steps up from black. Push the white point up to where it divides the top step in two, leave the middle point alone, and pull the black point all the way down to keep contrast. Later, you can adjust the middle-tone contrast by adjusting the middle point.
  4. Drop another Hue/Saturation tool. Pull the Saturation slider back, reducing saturation until you like the effect.
  5. You may choose to drop a Relight tool to adjust overall brightness and graininess, but you may want to zero the Detail slider or turn off the Sharpen tool and use the Detail adjustment instead.
So you do not have to go through all of these steps, I created the HDChrome style, which you can download for LightZone here and use as a starting point. Download the archive file, extract the .lzt file and save it to your LightZone templates folder, and it will appear under Custom styles.

Reducing saturation gives an image the feeling of a drawing, as if an ink drawing (or photograph) has been tinted. Starting with a pastel image, mostly gray with touches of color, gives this result; if there is texture, the texture becomes the "ink" part. If you start with a metal object, like a car or motorcycle, the effect can enhance the curves and create the appearance that the metal has been chromed, while enhancing already chromed parts.
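For a rough sense of how these adjustments combine outside LightZone, here is a crude Python approximation using Pillow. The radii and amounts are guesses for experimentation, not the LightZone tool settings above, and the large-radius unsharp mask only stands in for the Sharpen/Relight local contrast tools.

    # Crude approximation of the gritty "chrome" look with Pillow.
    from PIL import Image, ImageEnhance, ImageFilter

    def chrome_look(path: str, out_path: str) -> None:
        img = Image.open(path).convert("RGB")
        # Steps 1-2: local contrast via a huge-radius unsharp mask
        # (the "comic book" effect).
        img = img.filter(ImageFilter.UnsharpMask(radius=50, percent=200,
                                                 threshold=2))
        # Step 3: stretch global contrast (stand-in for the Zone Mapper).
        img = ImageEnhance.Contrast(img).enhance(1.3)
        # Step 4: pull saturation back down for the faded pastel feel.
        img = ImageEnhance.Color(img).enhance(0.6)
        img.save(out_path)

    chrome_look("P6010916.jpg", "P6010916_chrome.jpg")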

30 June 2009

Twitter as Attention Machine

I am reading On the Origin of Stories, a new book by Brian Boyd, which, to make a long story short, draws the conclusion from recent research into the mind and evolutionary psychology that status is essentially attention (or at the very least, attention is the currency of status). I can see how this applies to Twitter. The ability to 'favorite' another twitterer's content is yet another way of bestowing attention. Twitter is an attention machine. Visits to a twitterer's profile, mentions or retweets in the stream of updates, and favorites are all ways of gaining attention. A twitterer gains when a user with high attention favorites one of their tweets, and more so when they retweet it.

27 June 2009

Mixing Conversation and Story

I realize now that the real problem I have been working on, off and on for ten years, is 'conversation' versus 'story,' particularly as it applies to journalism. In a way, conversation and story are like oil and water: they do not like to mix. Yet stories are filled with dialog, with conversations, so why is it that journalistic stories cannot contain dialog? Well, when it is an interview, they do. So what we need is a network tool that seamlessly integrates conversation (interview, written dialog, transcript) with story (narrative, reportage, essay and analysis). Google Wave looks like the closest technology to achieving this flexible confluence of conversation and story, with even the potential for our conversations and stories to be both mobile and distributed. Because Wave works much like email, which mobile computing already provides as a robust and well-known commodity service, adoption on smartphones could be quick, avoiding any centralized monopoly.

I envision the same tool could be used by a reporter to do an interview (dialog) and for personal self-expression (dialog, like Twitter, only sharing little bits of information, such as links). An interview consists of dialog, little snippets of information associated by place and time. This has the form of Twitter messages, but a chat application is much better for doing an interview than Twitter, so some new mechanism must be created to accommodate flexible use, moving between story and conversation, between longer and shorter length posts, between collaborative and authored posts.

Turning off the data tap for Routesy

An interesting question about ownership and rights to data in public use has arisen, chronicled in "Apple kills Routesy app, my iPhone gets less useful."

Routesy is an iPhone application using data provided by the municipal transit authority through an agreement with a data provider. The details are in the article, if you care to read them.

I have to agree the company has a right to license predictive arrival times, since such information requires investment in research and development to formulate predictive algorithms, and so does not meet the requirement of being "obvious" and thus non-proprietary.

However, the ultimate solution would be an application that took GPS data from every iPhone riding the metro at any given moment (assuming each phone can be matched to the bus it is riding), then did the same kind of approximate arrival-time calculations NextBus does, only through peer-to-peer networked computation. Let all the iPhones on the bus line find their own positions, communicate with each other, track the movement of buses, compare against the published schedule and then present the approximate times to the riders. A distributed system of self-analysis. Since all the iPhones would be held in private hands, and the data shared between peers who agree to participate by sharing their location, the data would be owned by no one. Each person would own their own location and decide whether or not to share it.
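The core arithmetic of such a peer estimate is modest. Here is a Python sketch under heavy simplifications (straight-line distances stand in for route geometry, and the shared fixes are assumed already matched to one bus); it is my illustration, not NextBus's or Routesy's method.

    # Sketch: riders' phones share (time, position) fixes for their bus,
    # and each phone projects the arrival time at a stop.
    from math import hypot

    def eta_minutes(fixes: list[tuple[float, float, float]],
                    stop: tuple[float, float]) -> float:
        """fixes: (unix_time, x_km, y_km) shared by riders on one bus.
        Returns estimated minutes until the bus reaches the stop."""
        (t0, x0, y0), (t1, x1, y1) = fixes[0], fixes[-1]
        speed = hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1e-9)  # km/second
        remaining = hypot(stop[0] - x1, stop[1] - y1)         # km to stop
        return remaining / max(speed, 1e-9) / 60.0

    # Two fixes 60 seconds apart, bus 0.5 km from the stop: about 1 minute.
    print(eta_minutes([(0, 0.0, 0.0), (60, 0.5, 0.0)], (1.0, 0.0)))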

30 May 2009

Google Wave and Portable Social Media

A quick observation about Google Wave.

I wrote some time ago about the problem of social media losing its social context as it moves around the digital universe. I thought some mechanism should be created to enable the social context pertaining to a unit of social media to be portable, so it moves along with it. It appears that Google Wave associates the people who pertain to a document (the authors, editors, people with access to view or edit the content, etc.) with the content in a portable way, through its "wavelets" concept.

It seems possible to share or transfer a piece of collaboratively authored content across the Wave system, and into other systems, with its social context intact. If so, this is a revolutionary step in the evolution of information technology. It gets my vote as the first technology I've seen that could truly be called Web 3.0.

It would only be right, if you downloaded an image from such a Wave-based system to your PC, that it would somehow preserve the social context, perhaps with an XML sidecar or embedded metadata, like the EXIF standard for photographs. The content could then be uploaded back into a Wave ecosystem with its social context intact, possibly even after local edits.

27 May 2009

Is Your Life Poetry or Nihilism?

ReadWriteWeb asks this question. Poetry is reflective. Journalism also should be reflective (if all journalism were like C-SPAN, we would be better off for it). I am sure we could, and perhaps will, find ways to mine activity feeds for patterns and other useful information. It may find uses in many fields and places in life, perhaps even in medicine. But the real reason there is so little reflection on the web is simply that the structures and tools of the web encourage shallow interaction: quick posts, short content, quick reads, quick writes. This is an area I've given some thought to and posted about on this blog.

What is required is not some new gizmo for finding patterns in bits of trivial data, but tools that encourage people to slow down, to be reflective and to create meaningful content. My idea, presented here, has been a "quick-slow" system. This system would recognize the importance of brief, concise posts when things are happening (like when you've just landed safely in an aircraft with the landing gear stuck and want to tell your friends or the world) as well as longer, slower, more reflective posts. It would allow users to post concise messages like Twitter does, but those messages could be expanded on, by extending the text or by associating longer texts with them. The idea is not entirely new. About ten years ago, I played with a prototype application trying to combine blog and wiki elements. Later, I discovered a more successful project to combine blog and wiki; an application of this kind is called a bliki.

What I propose is a system like Twitter, which retains its immediacy through a connection to text messaging (cell phones) and the "stream of concise posts" format, yet also provides a way to extend those posts in a meaningful way. Perhaps a user's followers could be allowed to edit the extended content, creating a community of editors and contributors.
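A minimal sketch of the data structure such a quick-slow system implies, in Python. The class and field names are hypothetical; the point is a concise post that can later grow a reflective body, with followers as potential co-editors and revisions kept like a wiki.

    # Sketch of a "quick-slow" post: an immediate, Twitter-length message
    # that can later be extended into a longer, collaboratively edited text.
    from dataclasses import dataclass, field

    @dataclass
    class QuickSlowPost:
        quick: str                        # the immediate, concise message
        slow: str = ""                    # longer reflection, added later
        editors: set[str] = field(default_factory=set)  # followers who may edit
        revisions: list[str] = field(default_factory=list)

        def extend(self, author: str, text: str) -> None:
            """Expand the quick post into (or revise) the slow essay."""
            if author not in self.editors:
                raise PermissionError(f"{author} may not edit this post")
            if self.slow:
                self.revisions.append(self.slow)  # keep history, like a wiki
            self.slow = text

    post = QuickSlowPost("Landed safely, gear stuck. More later.")
    post.editors.add("alice")
    post.extend("alice", "Full account of the emergency landing...")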

What we really need is to encourage people who grew up "network native" to slow down and think before they write, or at the very least, if they have to capture an event or thought with quick, impressionist strokes, they or others should be able to return later, after reflection, to revise. A kind of "slow news" for journalism, akin to the slow food movement, asking people to sit down and think a while before they write. This may be asking too much of journalism, but a quick-slow approach could support both quick impressions (what's new) and reflection (analysis). Moreover, this could support a collaborative approach that mixes reportage (the initial concise post, possibly with a picture) and analysis (the associated post, perhaps by an analyst).

The poet Basho revised his haiku many times over the years, sometimes refining the wording and other times he would write a new poem, depicting the same experience from a different aspect. This kind of revision and reflection should be encouraged and supported by technology. Haiku are an ideal model. Brief, concise, experiential, yet through juxtaposition and the many hours of careful writing, they convey higher truths.

I see a number of people writing on Twitter in haiku form, quite a few of whom are just arranging prose in haiku shape and really have no understanding of haiku as an art form (poetry has to say something to be poetry, and say it in a way that affects us). But I want to be clear: there is a new form of haiku practice emerging on Twitter, akin to the impressionist movement in painting, where haiku are written on the spot and posted to Twitter from a cell phone. This is a new development in haiku, since most haiku are written down long after the poet has left the place of experience (not always; Basho sometimes wrote haiku and left them behind, but nearly all the haiku that reach us were probably revised many times long after he had visited the location). It bears watching.

23 May 2009

Something Must Be Going On

The heart of OCD is a feeling of "not being right" or repeating a ritual until it "feels right." A creative mathematician experiences intuition as a feeling that there is "something going on here but I don't know what it is," according to William Byers in How Mathematicians Think. You were probably taught in high school that mathematics is a rigorous and logical endeavor and that for every mathematical principle there is a proof. It was implied that mathematicians seek out new principles by following threads of logic from an existing proof to a new proof. You were taught a myth. Most mathematical breakthroughs began with an intuition. Only later, after the intuition was explored well enough to believe it was true, to believe it was worth proving, perhaps even after it was proved to the satisfaction of the mathematician, was an "official" proof created for the record. Proof comes after the fact, not before it. There is an interesting relationship between obsessive-compulsive disorder and mathematics.

Moreover, intuition plays a vital role in creative mathematics. Just as in other creative arts, a shift of frame is required to turn the ordinary into the novel. The author relates the story of how the mathematician John McKay noticed something curious about a single number: 196884 = 196883 + 1. On the surface, it hardly seems worth the interest of a mathematician. You can add one to any integer on to infinity, something obvious even to non-mathematicians. What is so fascinatingly curious about this instance? As Byers writes, "...these are not just any two numbers. They are significant mathematical constants that are found in two different areas of mathematics." The relationship of the constants could not be a coincidence, thought McKay, who began a line of inquiry leading to a series of conjectures, which went under the fanciful but telling name of "monstrous moonshine." I want to linger a moment on this point. Here we have a mathematician who sees something curious, which prompts a "gut feeling" that something systematic must be going on, a suggestion there may be a relationship between two systems of mathematics. He starts inquiring into the possibility and, as he finds more support for the reality of the intuition, begins to make conjectures about how the two systems might be connected through the curiosity he discovered. At this point, we can hardly blame a mathematician for feeling he was chasing "moonshine." But that is exactly what creative people do. They chase moonshine and rainbows. Yet somehow they end up driving the process of rational scientific, mathematical and artistic discovery. McKay's conjectures were later proved.

Byers does relate mathematical creativity to artistic creativity, observing that good mathematicians (the creative ones) are very sensitive to the feeling of something going on, and he ties mathematical intuition to the poet's, quoting Denise Levertov: "You can smell a poem before you see it."

This is all a blow to anyone raised on the rhetoric of rationalism: the human mind is a reasoning machine; human beings are rational actors seeking the most efficient path. This ought to be obvious nonsense to any carnival barker or snake oil salesman, but for most educated people it is a conceit they sustain because they enjoy the belief that they are rational. Reason has become a virtue, and virtues cannot be questioned.

At the bottom of human irrationality there may be rational decisions and observations; the machinery of the mind is not metaphysical. But the abstract layers above the fine grain of deterministic reasoning are irrational. The mind is connected to a body. People get "gut feelings" as the mind tries to tell itself something from its emotional, pattern-recognizing centers. How else could the pattern-recognizing centers of the brain communicate with this supremely rational being, other than by kicking it in the gut?

What I take away from this is that you will not be a creative scientist, mathematician or musician unless you learn to use your intuition. Exercise your curiosity. Keep a childlike sense of astonishment about the world around you and the inner worlds you explore. Experiment. Follow intuition. Don't worry about the result: the path to a Fields Medal is not found by seeking that which is likely to win a prize, but by following up an intuition, seeing where the thread will lead, without any thought to where it will go, other than to satisfy curiosity and that feeling that something must be going on.

21 May 2009

New Tools for Men of Letters (or Not)

"The art of conversation, with its counterpart the dialogue as a literary form for presenting ideas, has also declined since the days of Galileo, while the art of advertising has advanced. Advertising is easily recognized as the literary form that most completely responds to the technique of the printing press, because it demands, above all else, a numerous and receptive "public" of readers."
New Tools for Men of Letters
The Yale Review, Spring 1935.
Sounds a lot like Twitter, does it not? The success of Twitter is largely due (as has generally been true of web services) to the possibilities inherent in the medium for promotion and self-promotion, or advertising. Since helping independent farms survive is another fascination of mine, I believe using Twitter for self-promotion may be beneficial, but it is important to recognize how much our tools are influenced by advertising. It is also important to note how technology shapes culture. Technology often defines what is possible in art or culture, and then shapes its direction and expression (think of the woodcut or the electric guitar and the idioms of graphic art and music that sprang from them). So Twitter is not always good for us, like eating too much cake, because it is a medium that "demands ... a numerous and receptive 'public' of readers" and authors who meet that demand. Of course, all good authors keep the audience in mind while they write, but Twitter and concise social messaging systems orient our writing and conversation toward the jangle of advertising.

We hear a lot of talk about conversation on the web, but it seems very lacking in real conversation. I learned recently that when Ward Cunningham originally envisioned the wiki, he believed people would begin with conversation and then shape the results into an article, which would then be refined collaboratively. As it turned out, authorship on most wikis occurs in reverse, with articles being started and then shaped by conversation (if we are lucky).

Ever since I made my first foray into the world of networked content and community, starting with bulletin boards and then moving on to the web, I have been fascinated by the idea of capturing the expertise and knowledge "lost" in conversations. Forums, discussion groups, bulletin boards, message systems: all formats for conversation are ephemeral. When a person asks a question on a help forum, the answers they receive are generally lost. The web made it possible to ask a search engine a question and bring up one of these threads of conversation archiving knowledge.

Much of the knowledge of experts falls into the category of "folk wisdom" or "folk knowledge." This may upset some rationalists who believe all knowledge is found in books, which are the mechanisms that "separate us from the medieval" by storing knowledge without the requirement of memory. The reality is that many of the solutions for common problems coders face on a daily basis are not written down in books. A book is generally written by an academic about generalities or abstract theories, or it is a technical cookbook about a particular language or technology. Many of the solutions for little quirks and bugs, problems solved with little tricks or algorithms, are passed from one coder to another by oral tradition: sharing code, looking at other people's code, or asking in forums. Coding is not the only profession or practice in which this process occurs, but it serves as an example.

One of the great problems of the web is how to capture the knowledge being generated by this process of dialog about small problems. It is a Long Tail problem, an "exponential" problem, because it consists of very small parts that add up to a larger whole, one that exercises a large influence over our lives (think of the software controlling an aircraft or a medical robot). It does not just apply to coding, but to any knowledge.

Not only would it be good to capture this knowledge in a better way than just stumbling on a solution in a forum or blog post, it might also prove beneficial to author a work "conversation first," like the old carpenter's adage to measure twice, cut once (of course, real carpenters use a template, but that is another story).

In a way, Twitter achieves the conversation part but, as I've observed before, lacks the means to capture the essence of a valuable conversation (other than favorite tweets). A first step would be to allow favorite tweets to be organized by tag and then browsed like a social bookmarking site; see the sketch below. The better solution would be to enable Twitter users to create a wiki page for extending the thought or observation in a tweet collaboratively, perhaps allowing followers to edit the content. That is the idea I will be working on, if I can get some time away from farmfoody.org and folkstreams.net activities.
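The first step, grouping favorites by tag for browsing, is simple to sketch in Python. The function and field names are illustrative, not any Twitter API.

    # Sketch: index favorited tweets by hashtag so they can be browsed
    # like a social bookmarking site.
    from collections import defaultdict
    import re

    def favorites_by_tag(favorites: list[str]) -> dict[str, list[str]]:
        """Map each hashtag to the favorited tweets that carry it."""
        index: dict[str, list[str]] = defaultdict(list)
        for tweet in favorites:
            for tag in re.findall(r"#(\w+)", tweet):
                index[tag.lower()].append(tweet)
        return dict(index)

    print(favorites_by_tag(["Great #haiku about frogs",
                            "Chrome look tutorial #lightzone #hdr"]))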

17 May 2009

By Twine or By Time?

I ran across an interesting answer in an interview about Twine:

[Nova Spivack] I think the above solution would work for this too. Basically you are asking for a new view of the content – not “by twine” or “by time” but “by popularity” or “by relevance to me”.

Notice the question being posed. What he is really asking is: why don't you like the view our "intelligence" provided? Why do you insist on these existing, simplistic views like by time or by popularity?

The last is odd. "Relevance to me" is the primary criterion for all information I want to receive, even if I don't yet know it is relevant, such as when a person I follow on Twitter shares something I've never seen before and would never have found on my own. Do you understand? Even that is relevant to me. Everything I want is relevant to me.

I understand what they mean, though. They mean serendipity: like overhearing a snatch of conversation on Twitter by seeing posts from friends of your followers, people you do not follow yourself. But it is still relevant to me; you're just increasing the chaos in my information feed. Perhaps what we need is a "volume control" on chaos in information filtering systems.
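Such a volume control is easy to sketch: blend the top relevance-ranked items with a tunable dose of serendipity. The names and the relevance scores are illustrative; 'chaos' is the dial.

    # Sketch of a "volume control" on chaos in an information filter.
    import random

    def filtered_feed(items: list[tuple[str, float]], n: int,
                      chaos: float = 0.2) -> list[str]:
        """items: (content, relevance score). chaos: 0.0 = strict
        relevance ordering, 1.0 = fully random serendipity."""
        ranked = [c for c, _ in sorted(items, key=lambda x: -x[1])]
        n_random = round(n * chaos)
        feed = ranked[: n - n_random]
        leftovers = ranked[n - n_random:]
        # Fill the remaining slots with surprises from outside the top.
        feed += random.sample(leftovers, min(n_random, len(leftovers)))
        return feed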

Moreover, I suspect that humans being humans, we really want to order information in the ways we are familiar with, the ways our brains were shaped to process information by evolution (a kind of "design" process contradictory to the ordinary meaning of design, yet it seems appropriate to say designed by evolution). The upshot is that people still want to order things by time or popularity. What other measures are there than the ones we've known?

Authorship: When we buy a book because the author's name is on the spine or cover in 96pt type, we are buying authority.

Sharing: When we "hear it through the grapevine" from our friends. Another high-trust information source.

Recommendation: Some finding aids are a form of recommendation, as when we used to go to the reference desk librarian and ask for a book on a subject. This is a kind of sharing.

Look at the role trust plays in gathering and accepting information. Yet we trust the smartness of crowds (or at least the smartness of cliques) at Wikipedia. I use it all the time and find the information is always a good starting point, and usually reliable for technical information.

With trust comes the opportunity for abuse of power: the power of authority to stifle innovation and knowledge, to be used to sustain false views. Think of how the anthropologists' view of Amazonian civilization, maintained for a hundred years, turned out to be completely wrong, the opposite of reality, despite the application of the "scientific method" and mountains of "evidence," all chosen and selected by a reductionist process, which only knows what it measures and can only measure what it sees.

15 May 2009

Trouble in dead trees and inky fingers land

Newspapers and thinking the unthinkable

An excellent analysis of the situation newspaper-based journalism is in.

I like the idea of micropayments for content, such as New York Times articles. The only problem I have with it, and the reason I would be reluctant to use it, is simply that I have to pay for an article before I've read it. Even if I saw an excerpt, it might not be enough to determine whether the article is worthwhile. A solution to this problem might be found in social networking. I usually read articles my friends share with me (by sending a link in email or chat), and I would be much more willing to pay for an article they recommend. Keep the price low and integrate with a social sharing system, and it might work, as long as the payment is by an "easy button."

The greater problem is that content and authorship are changing radically as digital content becomes available through the network, given unlimited perfect copying and access without physical distribution. What we are seeing is a working out of the "small pieces loosely joined" paradigm described a decade ago. The newspaper started as a handwritten piece of paper passed around the coffee houses of Enlightenment London. I see nothing sacred about its continued existence.

The problem of journalism online is, of course, that Twitter is the new journalism, but the content is too brief, chaotic and frequently idiotic. Micro-blog formats do encourage conciseness and sharp thinking, but they also promote a hyperactive and fragmentary view of subjects. As I wrote in my blog, there needs to be a "slow thought" or "slow news" movement (like the Slow Food movement), which you might say is what blogs already give us, but not really.

04 May 2009

Stackoverflow.com

There is a good article on ReadWriteWeb about the principles driving the development of stackoverflow.com, a site where programmers get help with their coding problems.

I was particularly struck by the design points where Spolsky highlights the frustration created by wrong answers and obsolete results.

I can remember when I was able to circumnavigate the web through a search engine for the topic of the history of photography. It was that small. I could see everything there was to see about the history of photography online in a week, a week of drudgery wading through duplicate results page after duplicate results page, until I was sure I had seen everything on my topic. Although the web was filled with a fair amount of junk and duplicates, I was still able to find a single web page if it contained sufficiently unique keywords. Until about a year before Google emerged, I relied on AltaVista to take me back to a web page in one go, such as when I could not remember where I had found a code solution on some obscure personal page. Then the search engines began to fail me, and single pages I had found before became nearly impossible to find again. Eventually, search engine technology improved, and with Google you could once more find that one blog page with the code. That was once the solution to the problem of finding things.

Spolsky is right to observe that the problem now is search failing to distinguish between correct and incorrect answers, between current and obsolete answers to technical questions.

When I first started programming with Microsoft Visual C++ (I was just a dabbler), I had a question about how to render bitmap graphics. I turned to the library of articles and code intended to help developers. I was happy when search quickly turned up an article on how to introduce bitmaps into your application. After an hour or two of reading, it slowly dawned on me that the author was not talking about what I was familiar with, Microsoft Foundation Class applications. I was seeing unfamiliar code and unfamiliar techniques. I glanced up at the date. The article was from the mid-1990s. It was about coding C under Windows before MFC was introduced. The first, supposedly most relevant, document search had brought up from MSDN was completely obsolete, about coding without an application framework. I had wasted hours reading the wrong articles.

Stackoverflow.com is an example of a great site. It is well designed; the developers learned the lessons of the last fifteen years of web technology and applied them. It is a clean, beautifully presented and well-organized site. I have to admit they did right what I failed to do with phphelp.com, which started with many of the same goals. They had the courage to go ahead with "soft security," collaborative editing, and content surfacing and valuing through a user voting system. Of course, with that volume of content and edits, such tools are necessary. What two humans could watch and police such a flow of content while doing their day jobs? User-contributed and user-curated content is the only rational answer.

(By the way, it would probably be better to describe their principles as being informed by behavioral economics, or an evolutionary branch of that field, than by anthropology or social psychology. The way people use voting systems to surface content, the way "soft" social engineering strategies are employed on wikis, and so on, seem close to the phenomena studied by behavioral economics, which concerns more than just financial choices.)

03 May 2009

Snowball, the Dancing Bird

A video of a dancing bird has become the latest YouTube sensation. Some people thought the bird's performance was faked, but for me it is not surprising, given the sophisticated ability birds demonstrate for manipulating pitch and rhythm in their songs, that a bird shows the ability to keep time with music. Neuroscientists, including John Iversen of the Neurosciences Institute, have studied the dancing bird and confirmed it is capable of extracting a beat from sound.



What impressed me most about Snowball's performance is when he lifts his leg and gives it a little shake before bringing it down. As the investigators mention, it may be prompted by the pace being too fast for him to put his foot all the way down in time with the faster beat, but it piques my curiosity further. It appears Snowball is dividing the beat when he waves his foot, into two or three little waves, which, if I am seeing it correctly, suggests birds are capable of dividing the beat and of perceiving and manipulating a metrical framework. This would be simply astonishing were it true, but perhaps not unexpected given the sophistication of bird vocalization and communication. It is one thing for a bird to keep time with a beat and an entirely different thing for a bird to exhibit division of the beat.

When people sing (or compose a melody to sing), the tones are not of arbitrary duration. No one could sing a song consisting of a series of tones of arbitrary duration measured to fractions of a second. Could you imagine singing a melody as: A 1000ms, F 1500ms, E 500ms, D 1000ms, A 1000ms? The human mind is not well suited to measuring duration in milliseconds on an ordinary basis (leaving aside extraordinary abilities some humans may possess or develop). What if someone asked you to pick up the pace and sing faster? Every duration would have to be recalculated down to the millisecond, in your head. For this reason, music is organized by relative measures of duration.

In reality, when people sing or play music, they use simple division of the beat to measure duration. This is usually achieved by dividing a steady beat by whole fractions, usually simple divisors like two or three. The most basic division of the beat is by two. When a tone of shorter duration than the beat is required, the melody will use a tone one half of the beat, or one quarter, or one eighth, and so on, down to the human ability to resolve divisions of time. The other main division of the beat observed in music is by three, so for every beat you have the possibility of three tones, six tones, twelve tones and so on. The human ability for perceiving and manipulating this time structure is sophisticated. Musicians can anticipate the division of future beats, playing notes that persist across multiple divisions of the beat or creating "holes" or silences on certain beats, playing with the listener's expectations (this is called syncopation). I have to wonder if the small movements he makes dividing the beat follow any ornamentation of percussion or melody in the song. In the background, another bird can be seen bobbing his head to the beat, in a clear parallel to human "head banging."
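The arithmetic of relative duration is easy to show concretely. A short Python example: note lengths come from dividing a steady beat, so changing the tempo rescales everything without per-note millisecond calculations.

    # Worked example of relative duration: divisions of the beat at a tempo.
    def duration_ms(beats: float, bpm: float) -> float:
        """Length in ms of a note spanning `beats` beats at `bpm` beats/minute."""
        return beats * 60_000.0 / bpm

    for bpm in (60, 120):
        quarter = duration_ms(1, bpm)      # one beat
        eighth = duration_ms(1 / 2, bpm)   # beat divided by two
        triplet = duration_ms(1 / 3, bpm)  # beat divided by three
        print(f"{bpm} BPM: beat={quarter:.0f}ms, "
              f"half-beat={eighth:.0f}ms, third={triplet:.1f}ms")
    # 60 BPM: beat=1000ms, half-beat=500ms, third=333.3ms
    # 120 BPM: beat=500ms, half-beat=250ms, third=166.7ms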

Although I would have to watch a lot more video to be sure, what I have seen suggests he may be dividing the beat, and this deserves further investigation. I would not be surprised to find that birds do erect a sense of metrical time in sound, can mentally divide the beat and even anticipate it, perhaps even perceive syncopation. It is fascinating to watch Snowball lose and pick up the beat again.

I did watch a couple of video clips. At about 2:00 into the following video he lifts his foot and waves time to the beat, but does not divide it.
(http://www.youtube.com/watch?v=N7IZmRnAo6s)


In the following video, notice how he keeps the beat when Stevie Nicks sings "ooh, ooh" on the beat. This suggests he has an expectation of the beat even when it is not marked by a percussive instrument. At about 1:05 he may have divided the beat with a wave. He does lose the beat more frequently when keeping time only to Nicks's vocals or less percussive sections. At 2:31 he appears to divide the beat with a wave again, and at 2:56 I'm not sure if he's just losing the beat or dividing it, but this is a seriously important question about the intelligence of birds. At 4:43 also.
(http://www.youtube.com/watch?v=GYMBIGTteWA&feature=channel)

(Note: The time required for a complete cycle of A above middle C is 2.27 milliseconds, and a sixteenth note at a metronomic pace of 60 beats per minute is 250 milliseconds; see http://en.wikipedia.org/wiki/Millisecond, 2009.)

18 April 2009

Graphic Recording

I didn't know it, but all my life I've engaged in "graphic recording" when exploring new ideas or learning. I never went as far as the artists who made a series of recordings for the sustainable agriculture and food conference; my subjects were technical, and I was a technical kid growing up, so my "confections," as Tufte calls them, were more mathematical, graphical and textual in nature. I used them to illustrate things to myself, like working out visually how cycles represent waveforms in musical instruments. Now I see them as graphic recordings. I used to be a bit ashamed of them, since I thought they meant I wasn't a good learner, and I tried to suppress or limit them. That was a mistake.

The drawings are simply wonderful. I was put onto them by Brenda Dawson, who tweeted about the graphic recordings made for the March 29, 2009 Inaugural National Symposium on Food Systems and Sustainability at the University of California, Davis. How much better a "presentation" these graphic recordings make than a PowerPoint presentation!

These drawings are a lot like my vision for an information system, called Strands, which would be as thick and filled with complexity as the Talmud and as visually expressive as these graphic recordings. If only the web could be like this. When I think of Twitter and Tabloo, if they could be combined, I think we'd be close. Tabloo enables users to create visual narratives (through the structure, relationships, sizes and aspects of images) and Twitter enables users to create conversations out of small fragments of thought flowing continually.

13 April 2009

Save the Gulf Branch Nature Center

If you live in Arlington County, please help save Gulf Branch nature center by signing the petition available through Save Gulf Branch Nature Center.

The county is planning to demolish the nature center and leave a parking lot, to save a pittance on the budget, totaling $132,057. Our county is blessed with numerous streams and stream valleys rich with wild plants and animals running through the urban landscape. In the 1600s this part of Virginia was described as the finest place in the world to live, with its numerous stream valleys and abundant wildlife set in rolling hills. It would be wrong to lose one of the few points of contact children have with nature. The nature center is an island for the preservation of native species. Moreover, because of the proximity of human culture to nature there, the nature center offers an opportunity not found elsewhere to understand that humans and their culture are not separate from nature, which could help dispel the destructive urge to recreate a mythic Eden.

I've lived here most of my life. I was born in the District and came home to Alcova Heights, where my family had migrated when Arlington was still Alexandria County. I grew up spending my summers in Alcova Heights park. Doctor's Run, Four Mile Run, Lubber Run and the associated parks were my playground. Long Branch Nature Center was closer to my home and my neighborhoods of Barcroft and Alcova, so I never visited Gulf Branch, which was on the far side of Arlington for me, but through my experience with Long Branch at Carlyn Spring I understand why Gulf Branch should continue.

12 April 2009

Olympus E-620

I am collecting reviews and information about the Olympus E-620 here with the idea of replacing my E-510 with it.

Reviews

The E-620 (Four Thirds Photo)

Review includes sample images (all JPEG).

Techradar Olympus E-620 review

CNET E-620

The E-620 is a revolutionary camera. It combines one of the most compact and lightweight DSLR camera bodies with an articulating LCD screen for use with Live View. I cannot imagine a more portable and flexible camera. It is perfect for a vacation or for getting views from unusual angles. Mount the 9-18mm ultra-wide-angle lens and you have an incredible camera for reportage, with its wide, all-encompassing view and deep depth of field.

Olympus cameras are excellent starter cameras. The kit zoom lenses produce better image quality than most kit zooms. You cannot find a better value than the dual-zoom-lens kits.

If you are an average person buying a DSLR and not a professional photographer, the E-620 does not fall "behind the competition on most counts." The kit lenses are ahead of the competition: designed-for-digital lenses that are sharp from edge to edge even wide open, not warmed-over designs from twenty years ago. The 14-42mm and 40-150mm are the lightest and most compact lenses I've ever seen for a DSLR camera. I own and use them myself, and I can go out walking with one lens mounted and the other in my coat pocket. The quality of the kit lenses is more than sufficient for any purpose a family or amateur photographer could want, except low-light photography.

The resolution of the lenses is higher than most lenses on the market, and the 4/3 sensor is more than adequate for printing at 8 x 10 inches or less, the sizes at which the majority of photographs in the world are printed. Most wedding photographers produce images at 8 x 10 or less. Think about it. I know a wedding photographer who several years ago shot weddings with a Nikon 990, a 3-megapixel camera. 3MP is enough for 5 x 7 or less and good enough for a few carefully processed 8 x 10s. The 4/3 sensor is many times larger than the 990's and is not much smaller than an APS-C sensor. At 8 x 10 or smaller, the size of the majority of photographs, any noise due to the slightly smaller sensor is not going to show up except at high ISO sensitivity. Take a look at my Flickr album. Do you see any noise in most of the pictures? Most people will be sharing images online or printing 4 x 6 with an occasional 8 x 10. The 4/3 sensor is more than enough to handle this, with the exception of high ISO photography.

Let me say something about 4/3 lenses. Before you stand in awe of the large lens collections available from other makers, consider that many of those lenses work only on particular camera bodies. There is one thing you need to know:

All 4/3 lenses work on all 4/3 cameras.

This means the best lenses made for 4/3 cameras can be used with the least expensive cameras. I can put the awesome 150mm f/2 on the E-410 (or, in my case, the outstanding Panasonic Leica 14-50mm f/2.8-3.5 on my E-510). In other makers' lineups, the best lenses may be unavailable on the lower end cameras because the mounts differ.

But I digress. The review agrees, the E-620 has "fabulous photo quality." The review goes off the tracks on several points.

though it offers competitive photo quality, it lacks the (admittedly primitive) video capture capability that Canon and Nikon have brought down to this price segment

Why is it not competitive to avoid releasing a poorly implemented, low quality video capability into the marketplace? It seems smart to wait. Perhaps the reviewer is unaware of the Micro 4/3 product line, which is likely where Olympus will cover video.

It has Olympus' trademark grip, shallower than its competitors' grips, which I find less comfortable; definitely a reason for you to hold the camera and give it a feel before you buy.
"Trademark grip?" Olympus cameras have several different grips. The E-510 sytle grip is the best I've ever used, and fits my hand perfectly. The E-410 grip is inspired by grips from the SLR period, and suited to the compact design of the camera body. The E-620 grip tries to be unobtrustive and suited to the compact design. You don't hold a camera by a grip, you hold it by the lens. The grip is to hang onto the camera and steady it. I can't imagine it being less comfortable than the Canon 450D, which is small and cuts into my hand like a knife blade. Admittedly, it is a personal decision.

The super control panel is a dream. It makes most of the menu digging unnecessary. Nearly all photographic controls are directly available with a single click of the OK button and a bit of navigation. They are correct to note the Exposure Bracketing settings are buried in the menu, which can be a pain.

Most professional photographers prefer Compact Flash cards and I prefer them to the fiddly little SD cards. The SD card may be easier for the user stepping up from a digicam, but if you transfer photos from the camera using a USB cable, you can simply leave the card in the camera and never have to worry about bending a pin on the CF card.

It powers on and shoots in 1.4 seconds, which does rank on the slow side for its class.
Once again, we have to put up with the idiotic "start up time" measurement. The power-up time is slow because the world class dust removal system operates on power up. Are you willing to sacrifice a few tenths of a second to avoid hours and hours of spotting out dust flecks, or minutes each day of sensor cleaning? I am.

If you think this startup time is slow, I challenge you to take an E-system camera, turn it on and try to bring the viewfinder to your eye before the camera is ready to shoot. I can't do it.

This is a straw man. If you feel the need for instant start up, just put the camera in sleep mode. It will wake up immediately when you press the shutter button.

Some good things pointed out by the review.

The 2x focal length multiplier (compared to 35mm). They note the coverage of the kit zooms is 28-84mm and 80-300mm EFL; simply double the marked 14-42mm and 40-150mm focal lengths.

The 12MP sensor does provide extra detail, judging from what I have seen in RAW examples. The TruePic III+ engine improves on the already excellent JPEG engine Olympus cameras are known for. Many photographers choose Olympus because the JPEG output is so good they do not have to post-process.

The one weakness of the E-620 is high ISO. It does produce a bit more noise at base ISO, but so do other high megapixel cameras like the A350. At high ISO, other APS-C cameras will do better, but you must ask yourself, will you see the difference in your prints or on the web?

Though it's a solid, serviceable dSLR, if you're looking for an easy-to-learn, entry-level camera, I'd steer clear of the Olympus E-620.
The conclusion seems contradictory. I believe the reviewer meant to say that people stepping up from digicams should consider the other makes, but that for serious, advanced photographers the E-620 does fine. That seems complimentary.

The E-620 has many features that people stepping up from digicams should find beneficial.

* Image stabilization. All 4/3 lenses are image stabilized on the E-620. With other makes, pricey stabilized lenses are required. Image stabilization just works on any 4/3 lens, without the digicam shooter needing to know or do anything.

* An articulating LCD screen. To take full advantage of Live View, an articulating screen is necessary. I want to warn digicam users: a DSLR is not a digicam, and the Live View on ANY DSLR is not going to operate as quickly and easily as your digicam's LCD view for shooting. This is due to the mechanics of the reflex mirror and interchangeable lenses. Live View cameras are improving, and the E-620 is one of the best, but you will sacrifice some ease of use for a more capable camera, if you're willing to learn something about photography.

* Dust removal system. What they don't tell you is the time you save on "start up" will be spent in hours cleaning dust spots from your images if you don't have a good dust removal system. The E-620 has the best dust removal system of any DSLR camera. Some people will tell you it is easy to clean your sensor, but if you're stepping up from a digicam, you probably don't want to ever clean your sensor. I haven't cleaned mine in two years.

* Art Filters. I think digicam users will enjoy the art filters. You can see how the effects apply in Live View.

It seems the review cannot decide whether it wants to present technical information for experienced photographers or provide advice for those stepping up from digicams. If the camera is being reviewed for beginners, why bother to include the technical gibberish?

The E-620 takes good pictures. It will take better pictures than a digicam, if you're willing to learn a little about photography. You won't have to worry about cleaning the sensor or spotting dust in your pictures (dust is a part of life with interchangeable lenses). You will get extra detail with 12MP without going too far and getting too much noise, as 12MP digicams do. It comes with excellent kit lenses. You can use any lens in the 4/3 lineup, but most people will be satisfied with the kit lenses.

If you expect to take pictures of your kids playing ball in the back yard at twilight you may want to get a Canon, because the E-620's one weakness is high ISO shooting. It's good, but with the slow kit lenses provided by most makers, high ISO is necessary for shooting in near dark or under poor flood lights. This affects any camera, but Canon is the high ISO king. You can always fit a faster lens, like the 14-54 f/2.8, or use the flash.


Previews

DPReview Olympus E-620 Preview

Preview includes full-size downloadable image samples. This is the best of the previews.

Olympus E-620 Digital Camera First Impression Review

(Cursory review of pre-production camera. No image samples.)

DCRP First Look: Olympus E-620

(Review of pre-production camera. No image samples.)

PMA 2009: Panasonic GH1 & Olympus E620

Olympus' E-620 raises the bar for entry-level DSLRs

Olympus

Olympus E-620

If you're looking for raw images to compare cameras, try http://raw.fotosite.pl/ where you can download E-1, E-3, E-510, E-30, and I hope, E-620 images for comparison.

08 April 2009

Life, Flickr and the New Panasonic GH1

The HD video version of the Panasonic Micro Four Thirds camera, the GH1, is coming out, and it confirms my idea that this camera has the potential to facilitate a new visual journalism: citizen journalism, social media journalism, whatever you want to call it. Panasonic is aware of it (as I assume they have been since they started development of the system).

According to Twice.com "fans who bring their Panasonic DMC-G1 cameras to Beck’s live performances will have the opportunity to take photos and videos at the event." Fans can submit photographs for inclusion on Jeff Beck's website.

We may be seeing the beginning, with Flickr, JPG magazine and others who may follow in their footsteps, of a new great era of the "picture magazine" recapitulated on the network, through cameras like the G-series and social photo sharing sites. (I say network because it is not just the web or email anymore; content is becoming social and available throughout the network, on all kinds of devices, in all kinds of human contexts.)

The process described in the article, whereby fans' photographs are allowed to bubble up through an edited system for display on Facebook or the artist's website, is reminiscent of the collaborative rating system on JPG magazine, which bubbles up content so it is brought to the attention of editors, who then use their critical understanding of the art, and the state of the art, to decide which images appear in the official magazine. It is a very similar process to Life or Look magazine, which catered to an audience interested in learning about the world around them visually, before television.

04 April 2009

Twitter and the Principles of Illusion

It is worth noting the two guiding principles of illusion are "suppressing context" and "preventing reflective analysis" (according to Tufte, in Visual Explanations). The first applies also to the ubiquitous photographic image, nearly all of which appear without context, a situation that apparently few people find troubling. A good example of the phenomenon is the iconic image from the Vietnam war of the Viet Cong operative being summarily executed by a South Vietnamese police chief. The photographer who took the picture often wished he hadn't, because of the damage the image did when used out of context (as it usually was). Several iconic images from the Vietnam war were frequently presented without context. It was left up to the viewer to interpret them, and it may very well be that people at the time did not want to know the context, enabling them to press the image into service of their political aspirations or personal, psychological needs. Visual media is inherently weak at providing context.

The emergence of email, web discussion forums, short messages and video sharing, all network-native forms of communication, creates an environment hostile to reflective analysis. What is needed to alleviate this trend is a movement akin to the "slow foods" movement, perhaps a "slow media" movement, asking people to slow down, consider context and think reflectively within a network information ecosystem. The content of a Twitter stream can be informative, but it can also be trivial, and despite its benefits, it does not encourage reflective analysis. I personally find tweets (Twitter messages) are frequently a touchstone for an innovative thought, connecting me to something I did not know and probably would not have encountered had someone not passed on an interesting web link or thought out loud. But it would still be nice to pull wisdom from the ether by capturing tweets in some reflective and expandable form.

Although not yet a visual medium, these concise messaging and blogging systems are most attractive to television journalists. A quick turn to Twitter before the commercial graces many newscasts. These context-free nuggets are ideally suited to a medium described as a "wasteland," and it troubles me that networked content has been so eagerly adopted by television news shows. It points out the need for reflection and context in networked short message content.


I have explored this theme before (see the Twitter Wiki and "quick-slow" bliki articles previously). The question is how to accommodate the fragmentary, context-free units of networked content while encouraging expressions of context and reflection to balance them. It is a daunting task, because people often do not see a need for context or reflection and are often unwilling to bother with the story behind a photograph or take time to expand on a short message.

We need to accommodate the uses for which short messages are legitimate and beneficial. Conciseness is one: concise writing requires reflective analysis before posting, you must know your subject well to pare it down to its essentials, and wordiness often just adds confusion (though we must be prepared for abuse of longer text forms connected to short text forms). But we must also make it possible for reflection to take place. The "quick-slow" approach to networked content systems encompasses this. We can then turn the two principles of illusion to our advantage, by encouraging their opposites: context and reflection.

17 March 2009

Biological Construction and Networked Content Creation

The order and symmetry of biologically created structures, such as an egg or the human body, are expressions of how correctly those biological systems worked to construct the natural artifact. Biological organisms are collections of cells cooperating with each other. The order and correctness are an expression of how successful the collaboration was.

An egg comes out more egg-like when the biological processes working to make it cooperate and collaborate more correctly in its construction. I believe this has implications for the collaborative processes operating in networked software development and information science. The biological process of construction is inherently different from the one humans have inherited from their tool making and industrial heritage. What will we make of it?

Where are we going?

The issue of whether people should pay for forums or not came up on dpreview. With the current economy, I expect how to pay the bills will be a growing question for many web services.

The problem is that with forums there is nearly perfect competition. Anyone can set up a forum and run it for next to nothing. If one forum decides to charge a fee, the users can flee to another forum. The only reason they might stay is the audience. For example, photographers pay to host their photographs on Flickr primarily because it provides a rich audience of people who love to look at still photographs. Flickr is the Life and Look magazine of our time, the revival of the great picture magazines, and not because of its technology (that helped orient the site in the right direction to succeed; just look at the abject failure of Picasa to be social---too little, too late). Flickr just happened to be where most people who like to look at pictures gathered, mostly because of its blog-like streams of ever-changing pictures and its social tools.

It is easier to pay a small fee to use Flickr (perhaps even to "read" it) than it would be to overcome the "capital" costs of changing sites. Flickr users have a lot invested in Flickr, and it might just cost less to stay and pay than to move elsewhere. Besides, there is nowhere else to move. The closest thing I could see to Flickr would be for every photographer to put up their own photo blog software and then join photoblogs.org, which would become the "magazine" and "social hub." This is a distributed vision of photo sharing online. I used to wonder which would be successful. But it really was simple: Flickr did it all for you, some for free, a little more for pay, well worth it to promote your photography.

Despite the somewhat juvenile and absurd environment of Flickr with regard to art photography (you know, the dozens of people giving out "Great Photograph" awards to pedestrian, derivative and mediocre images, mostly to promote themselves or because they are too young to know what a derivative image is), it is useful to professional photographers and art photographers because Flickr is where the eyeballs are. It attracts people who still love still photography, and in this age of video it is a bit of a miracle that anyone takes an interest in photography at all. However, photographs can make the world sit still long enough for people to pay attention, and that is a very similar experience to poetry, which at least in part exists to draw attention to things. I've heard from professional photographers that they get an order of magnitude more requests for work through Flickr than through the professional portfolio sites.

One reason, perhaps the principal one, Henri Cartier-Bresson and other great photographers became well known was through their images being published in the great picture magazines. When television came along, the picture magazines went into decline. Photojournalism began its long decline at this time, for the simple reason that people could learn about their world visually through television, a more attention-grabbing and free medium (the barrier to entry for television was lower, you didn't have to be intelligent to watch it; a good example of where a low barrier to entry is destructive to society). Without the picture magazines it was no longer possible for a photographer of acknowledged artistic merit to become known and for their images to have significance in society. The audience was gone. Flickr reestablishes this audience.

So the question still stands. Will people in the future pay for their online content? Pay to create it? Pay to consume it? What is happening now? People are already paying to create content. They pay for a Flickr account with better tools. They pay for services to create graphics, 3D art, property in virtual communities. A few sites charge for reading content, but not many. But given human history and the recent past, when most content was paid for, in newspapers, books and magazines (except for TV), it seems reasonable to assume the free ride will be over someday.

There may be a tipping point when a non-pay site is no longer competitive. When most good content has gone to pay sites and the community of interest for that content willing to pay is consuming all they can (this is what happens with books and magazines today), the other sources will be driven out in a kind of perfect competition. The free sites will be filled with garbage and what passes for content on local cable access.

The network is not the old traditional world of libraries and publishers. It will be different. Project Gutenberg. Open source projects. Enthusiasts sick and tired of the crap shoveled out by the traditional content and software businesses have taken it upon themselves to produce quality products where the marketplace would not or could not. This is an order of magnitude different from the pre-networked world, where people could not work together, providing little bits of effort or expertise to collaboratively create a cultural artifact. This is entirely new, and we don't know where it's going.

As an aside, the idea of tipping or donation comes up. Frustrated with no way to fund my original website, I considered (in the 1990s) a modern, high tech variation on the PBS approach: a content management system where each article would have a countdown timer displayed like a reverse donation thermometer. If you didn't contribute something to the article, it would count down, and when it reached zero, the page would be pulled from the site. Of course, the ability to cache networked content presents a threat to such schemes; the Wayback Machine can regurgitate considerable missing content, and so can the Google search cache. What about caching? If Wikipedia's funding were to dry up and the site blow away today, would its content still remain available in a myriad of niches around the network? On people's computers, disks, servers here and there, in caches? Would it evolve another life in a peer-to-peer environment? Will all information become distributed over billions of cell phones and have no location at all?
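
For fun, a minimal sketch in Python of how the countdown might have worked; the class name and the dollars-to-time rate are hypothetical, not anything I actually built:

    import time

    # Hypothetical reverse donation thermometer: contributions buy an
    # article more time on the site; when the timer hits zero, the page
    # is pulled.
    SECONDS_PER_DOLLAR = 7 * 24 * 3600  # assume a dollar buys a week

    class Article:
        def __init__(self, title, grace_days=30):
            self.title = title
            self.expires_at = time.time() + grace_days * 24 * 3600

        def contribute(self, dollars):
            # Wind the countdown back up in proportion to the donation.
            base = max(self.expires_at, time.time())
            self.expires_at = base + dollars * SECONDS_PER_DOLLAR

        def seconds_remaining(self):
            return max(0.0, self.expires_at - time.time())

        def is_pulled(self):
            return self.seconds_remaining() == 0

    page = Article("Example article")
    page.contribute(5)  # five dollars buys five more weeks of life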

16 March 2009

Twitter is a 'starfish' enabler

Twitter is a 'starfish' enabler. It's what makes Twitter powerful and empowers those who use it. It puts individuals at the center of the star.

Twitter friends (followers) are more like information flows you choose: in following, you organize the flow of information for yourself and others, curating, editing, creating. Friends on other social networks are more passive, something you collect, or at most a space to explore. Friends and followers bring content to you automatically. It is the flows of information resulting from following that make Twitter different from other social networks.

I didn't know much about Twitter when we started designing Farmfoody.org and thought it was something to do with short text messages on cell phones. I am currently integrating Twitter into farmfoody.org, after having considered a Facebook-style social feed model and finding it overly complex and confusing. We need as low a barrier to participation as possible. Farmers don't have time for complex systems: blogging, social feeds with posts and comments and threads and six different types of publishing and bold and italic.

Neither do people standing at a farm stand with a bag of white corn tucked in their arm have time for complexity. It turns out the social bulletin system we were envisioning two years ago exactly describes the information flows in Twitter. The way your friends' (followers') tweets (messages) aggregate on the Twitter homepage is identical to how we envisioned messages from our users collecting on the user's profile page. In our bulletin system, all the friends of a user receive a bulletin, similar to the "homepage" on Twitter, creating an information flow. The only difference is that bulletins are like craigslist ads and expire. That original requirement for bulletins to operate as classified ads with an expiration date, similar to craigslist, held us back. I should have looked into Twitter integration then, since we would not have needed to develop a system of our own.
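
Here is a minimal sketch in Python of the bulletin flow as described; the names are hypothetical, not actual farmfoody.org code:

    from datetime import datetime, timedelta

    # A bulletin is like a craigslist ad: it expires. A profile page
    # aggregates the unexpired bulletins of everyone the user follows,
    # newest first, just like a Twitter homepage.
    class Bulletin:
        def __init__(self, author, text, days_to_live=14):
            self.author = author
            self.text = text
            self.posted = datetime.now()
            self.expires = self.posted + timedelta(days=days_to_live)

    def profile_feed(following, bulletins):
        now = datetime.now()
        feed = [b for b in bulletins if b.author in following and b.expires > now]
        return sorted(feed, key=lambda b: b.posted, reverse=True)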

12 March 2009

A Twitter Wiki

As the popularity of short, fragmentary messages grows, I have become concerned the public conversation may lose the capacity for thoughtfulness and reflection. At the same time, I would caution those who condemn Twitter or other systems based on micro content not to throw the baby out with the bath water. The long form newspaper article found in the New York Times or Washington Post contains a lot of material used to provide background for the reader, often at the end of the article. Not only is this text boring and redundant to the knowledgeable reader, it takes up precious space. The one thing the web is good at is connecting one piece of knowledge to a broader context of other pieces of knowledge. There is no sane reason to keep repeating background and further reading material in a long form newspaper article when, on the web, a writer can simply link to the information.

The brief, concise texts of micro content can be connected to many other sources of information, some just as concise (a kind of "blizzard" of small pieces connected loosely), as well as to other longer, deeper and more reflective sources. This loose, disjoint and connected type of writing is simply the network-native way of writing and connecting information. It is beneficial, as long as both kinds of writing and forms of content are available and can be connected.

My concern is really with lowering the barrier of entry, enabling and encouraging those longer, deeper and reflective forms of writing. I recognize there are benefits from shorter, more concise writing, which leaves redundant, expansive or source material hidden (properly) under a link or connected through a network of tags or a network of people. Perhaps we will see fewer long texts divided up by headings and sections and more smaller texts connected together through search, tags and linkages into a variety of wholes, determined by the user's interests and needs.

About ten years ago, I was fascinated by the idea of a long text (article, book, etc.) entirely constructed of fragments, similar to the kind of texts you see posted on Twitter today, which could be freely rearranged like those magnets used to write poetry on refrigerator doors. I imagined that instead of writing a large text as a single coherent whole, the way books have always been written, the pieces of information on a topic could be combined to create a "book" in innumerable ways by rearranging those pieces.

It would be like taking all the paragraphs in a book, shaking them out onto the floor, and then allowing or enabling those pieces to be rearranged for each reader or interest. The pieces would be tied together by keyword or by search result and only lastly by links. I coded a small prototype application called Strands to test the idea, but work and life caught up with me and I shelved it. I was and am still surprised by the ease and rapidity with which people have adopted Twitter.

Not only are people using Twitter, despite the fragmentary nature of its texts, they are participating creatively in shaping the technology and usage of this kind of system based on fragmentary texts.

The use of tagging emerged spontaneously from the user base. Using "hashtags," brief texts can be connected to media, such as images and video, with the tag at the center of a network of content.

Also, I've noticed users are starting to fit the tag word into their text. Some examples are:

"Young Nebraska farmer explains how limiting direct payments would affect his #farm at www.nefb.org"
(Tweet from http://twitter.com/farmradio)

and

"farmanddairyGet four issues of #Farm and Dairy FREE! Click on the big promo on our home page: http://www.farmanddairy.com/"
(Tweet from http://twitter.com/farmanddairy)

At the heart of my Strands prototype were small texts connected by keywords. I wanted to create the lowest possible barrier of entry, so a user could create a keyword (essentially a tag; I called them "strand words") just by writing it into the text. In this system, what was essentially a tag was created by writing it (texts were scanned on post or edit for the presence of tags, and any new ones were added to an index), which is hauntingly similar to how people have started using tags on Twitter. They started out adding tags to the end of a message, but have now begun incorporating them directly into the flow of text. I hesitated to continue working in this direction on Strands, partly because I expected people would find tags sprinkled through the text troublesome.
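
A minimal sketch of the scan-on-post idea, using hashtag syntax in place of the original strand words (the function names are hypothetical, not the actual Strands code):

    import re
    from collections import defaultdict

    # Tags are created simply by writing them: every post or edit is
    # scanned, and any new tag is added to the index automatically.
    TAG = re.compile(r"#(\w+)")
    index = defaultdict(set)  # tag -> ids of texts containing it

    def save_text(text_id, text):
        for tag in TAG.findall(text):
            index[tag.lower()].add(text_id)

    save_text(1, "Limiting direct payments would affect his #farm")
    save_text(2, "Get four issues of #Farm and Dairy free")
    print(index["farm"])  # {1, 2} -- the tag now connects both texts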

My current interest is in providing tools or ideas that will encourage and enable a society addicted to short messages, however beneficial they may be, however native to the networked way of writing and reading in a connected fashion, to engage in greater contextualization and thoughtful reflection, to enable collecting some of the knowledge quickly flying by in the "Twitterverse" into slower, more reflective pools of knowledge, like eddies on the edges of a fast flowing stream.

The first tool I want to build is a "Twitter Wiki," enabling anyone to associate a text of any length with a Tweet and anyone to edit it. If I have the energy, I will post any experiments on my site or at least attempt to describe them.
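
A minimal sketch of the data model I have in mind, assuming tweets are identified by their numeric status ids; keeping every revision leaves the text open to anyone while preserving history:

    # Hypothetical Twitter Wiki: anyone may attach a text of any length
    # to a tweet id, and anyone may edit it; all revisions are kept.
    pages = {}  # tweet_id -> list of (editor, text) revisions

    def edit(tweet_id, editor, text):
        pages.setdefault(tweet_id, []).append((editor, text))

    def current(tweet_id):
        revisions = pages.get(tweet_id, [])
        return revisions[-1][1] if revisions else ""

    edit(1234567890, "alice", "Background and context for this tweet...")
    print(current(1234567890))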

03 March 2009

Social Micro-blogging and bookmarking

It hadn't occurred to me until I saw it being done that social bookmarking and social microblogging are both popular in part because they create flows of information edited and curated by experts.

One good reason to follow the bookmarks of a user belonging to a social bookmarking site is simply it is a source of good information. The bookmarks ought to be high quality and relevant in the expert's topic area.

It makes sense to follow the tweets on the homepage of a user belonging to Twitter (or any microblog system), because they represent a selection of edited and curated information, for free, usually from an expert.

A Twitter homepage combines the posts from a user's followers, which amounts to multiple levels of curation. Suppose a number of people practicing organic farming create Twitter accounts and post information they feel is important. Suppose then an expert in organic farming, perhaps an editor of an organic farming and gardening magazine becomes a Twitter user and then follows the tweets of those practicing organic farming. Suddenly, this user's homepage becomes a fountain of curated knowledge on organic farming.

The same phenomenon occurs (without the vertically integrated curation) on social bookmarking sites. A social bookmark system is an example of horizontally integrated curation: many hands organize, but one result does not necessarily flow into another, progressively filtering content. What if you could follow another person's bookmarks and aggregate the bookmarks of all your followers onto your profile page?

The question presents itself: Where did I go wrong?

I had thought of bookmarking as bookmarking and blogging as blogging, each highly personal, one organizing content for an individual's own use and the other for publishing (in a mostly traditional way; I tended to scoff at the idea of blogs as conversations, but they are, just not as flexible and immediate as microblogs). What I missed about the social aspect was the curated flows of information it creates. I envisioned years ago the idea of users collaboratively organizing the content of a website, but that wasn't all that great an achievement, since that is essentially what wiki users have been doing from day one.

I thought about applying it to all the content, thinking perhaps I could, Tom Sawyer-like, get others to organize my stuff, but my second thought was: who would want to do that? I think most efforts to get users to organize content will fail, and I think most have, except where the social ingredient is in the mix. Collaboratively edited social content sites for news, bookmarks and short messages do work, but through self-interest in the flows of information they create. They become platforms for self-promotion.

There is no incentive for you to come organize my site. Flickr allows users to tag other users' pictures, to essentially organize content for another user, just as I thought would be possible, but I see no evidence it is being used generally. Except for the Flickr Commons, where there exists an incentive to surround historically important images with context, to tell their stories so they will be preserved and meaningful for society, where local historians can demonstrate their knowledge and where self-promotion is possible through the organization of other people's stuff.

Despite seeing the social ingredient at work in wikis, and despite seeing the essentially (and pioneering) social organization of the CPAN library, I missed its importance. Its importance comes from the curated flows of information created by the social organizing, editing, contextualizing with narrative, selecting and filtering that occur in social media systems.

02 March 2009

Twitter as curated news feed

When I follow another Twitter user, their posts (Tweets) are included on my homepage, which is public. This amounts to creating a kind of "newspaper," a news feed of content "curated" (selected and managed) by me. The problem with this, for example, is that our farmfoody.org Twitter account (established primarily for communicating announcements to users) could become a kind of "mashup" of farm and food related news by following Twitter users posting on those subjects. However, that would result in clutter and chaos, since there is no way to organize the flow of content onto my homepage.

What is needed is a way to tag posts. It would be nice if posts could be tagged according to topic and each tag converted to a tab, which would separate the streams of information. There could be a #farm and a #food tag (using the hashtags convention), and a Farm and a Food tab would appear on my homepage, allowing readers to choose the topic they are interested in following. I suppose they could just follow the individual sources, but what is needed is a curated aggregation enabling Twitter users to follow an "edited" flow of information through Twitter (like Reader's Digest? It might even be possible for Twitter users to give a thumbs up/down vote on what content should appear in a particular flow, or to collaboratively tag).
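
A sketch of the tag-to-tab idea, assuming tweets are plain strings carrying hashtags; each tab is simply the homepage stream filtered by one tag:

    import re
    from collections import defaultdict

    # Split one jumbled homepage stream into per-topic tabs by hashtag.
    def tabs(tweets):
        streams = defaultdict(list)
        for tweet in tweets:
            for tag in re.findall(r"#(\w+)", tweet):
                streams[tag.lower()].append(tweet)
        return streams

    homepage = [
        "Heirloom tomato seedlings ready this week #farm",
        "White corn at the stand Saturday morning #food",
    ]
    for tag, stream in tabs(homepage).items():
        print(tag.capitalize(), "tab:", stream)  # a Farm tab and a Food tab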

If I could categorize posts based on farm or food topics, it would be useful to me and my followers. If the users I am following could tag their posts, it would work much the same way and I would be relieved of some work (of course, merely following does in a sense create the mix, but it is all jumbled together). It is not important who organizes the content so much as that it gets organized with the least effort possible.

I don't know if anyone is working on something like this, but it seems rational that Twitter would be working on some internal mechanism for organizing that flow. The hashtags solution appears not to be scalable, since it requires following the hashtags user account (so your tweets can be scanned for tags); it is more of a prototype.

17 February 2009

Dynamic Range

The Online Photographer has posted an excellent article, More on Dynamic Range, on the range of brightness in a scene, how it is captured as a photographic image, how to fit that range into the range of lightness levels recorded by the camera and express that range in the rendered medium, whether a JPEG image viewed on a monitor or a paper print.

He's right that dynamic range is the most abused, misused and poorly understood term in digital photography. It's the only shorthand we have for "range of brightness values" or "range of tonal values," which are both going to give your fingers cramps if you write them often enough.

There is a lack of understanding by many photographers about the basic process of recording an image and producing a visible print from it. There are crucial, but precise, distinctions to be made, which took a long time and much expertise to establish in analog photography, so the confusion is not surprising.

The first thing to consider is the range of brightness in the scene (which the Online Photographer article demonstrates and discusses). It may seem obvious to some, but it is often counter-intuitive, that a distinction exists between the range of measurable brightness values in the scene and the representation of those values as tonal values in the recording medium (in the camera, on film or sensor). Remember, most of the light is reflected, but some comes directly from light sources or specular reflections; for purposes of exposure it is good to consider only reflected light and not specular highlights, since they do not contain any detail or information.

The difference between the range of brightness in the scene and the translation of those brightness levels into tonal levels recorded by the camera (density in film, tonal levels in digital) is not immediately obvious, but the camera does not record brightness itself, only some analog of it: clumps of grain or numbers. To see the picture, the range of tonal levels must be translated back into a range of brightness values. We do this when printing a negative to photographic paper or viewing a transparency film slide by transmitted light (a projector or lightbox).

In the digital realm, the image rendered from raw capture data or printed to paper is the output, which must be translated into reflected or transmitted light so we can view the image. A complication in digital photography is the JPEG image, which places limitations on the original data. (It would not matter if it were a TIFF image either, since all images rendered from capture data have contrast curves applied, to fit the image within the range of tonal values the format is capable of storing and to be "pleasing" to the eye.) Linear data is not pleasing to the eye, because it contains too large a range of tonal levels and corresponding brightness range. It won't "look" like the original scene as the eye saw it.
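
To illustrate why linear data does not "look" right, here is a toy example applying a standard gamma-style contrast curve to linear sensor values before they are stored as 8-bit tonal levels (the 1/2.2 exponent is an approximation of the sRGB encoding curve, not any camera maker's actual tone curve):

    # Toy tone curve: linear sensor values (0.0 to 1.0) are gamma encoded
    # into 8-bit tonal levels, roughly matching how the eye sees brightness.
    GAMMA = 1 / 2.2  # approximately the sRGB encoding curve

    def encode(linear):
        return round(255 * linear ** GAMMA)

    # A mid-grey 18% reflectance lands near level 117, not at 46 (18% of
    # 255) as a straight linear mapping would put it.
    for linear in (0.01, 0.18, 0.50, 1.00):
        print(f"linear {linear:.2f} -> tonal level {encode(linear)}")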

When you are talking about dynamic range, you first need to ask: which range? Is it the range of brightness levels in the scene, the range capable of being captured by the sensor as input, or the range capable of being rendered to output? Is it the range of brightness or of tonal values you are considering?

The scene has a range of brightness values.

The recording medium (film or sensor) has a range of brightness values it is sensitive to (ISO comes in here) and a range of tonal values it uses to express those values. The brightness levels are translated into those tonal levels (whether represented by density in analog film or by numbers in digital data).

The output medium has a range of tonal levels it is capable of storing and expressing as brightness values when viewed.

The complications come from the need to match the range and step of tonal levels in the input to the output. Further complicating things, the JPEG image has its own set of curves and translations; when printed, the printer paper and inks place their own set of limitations and curves on the translation. The environment in which the image is viewed has its own limitations and effects on the brightness values perceived.

The capability of a camera or sensor cannot be judged by looking at a random example of output from a camera's JPEG engine. That would be the equivalent of judging a film by the quality of processing and printing from a randomly selected corner drugstore.