Archive for June, 2010

The Productivity Conundrum – Dilbert Is Currently Busy

Thursday, June 24th, 2010 by David Goldes

How we, as knowledge workers, spend our day is something that we ourselves tend to not fully understand. 

Dilbert is currently busy...

Our impressions of what we have done in the course of a day are frequently far different from what really took place.  Dilbert famously noted that “Mondays are not part of the productive work week,” and this is just the tip of the iceberg.

To find out a bit more about how we work, we’ve launched a brief survey that asks you to look at your most recent full day at work and answer a few questions.

Please click here to take the survey.

Participants will receive an Executive Summary of the survey’s findings and can also enter a drawing to win a set of Dilbert CubeGuard information overload blockers (three sets will be awarded).  After you complete the survey, please share the survey link with colleagues or in forums where knowledge workers congregate; the more people participating in the survey, the better we will be able to take the first steps to increasing our own productivity.

David M. Goldes is the president of Basex.

In the briefing room: Comintelli Knowledge XChanger

Thursday, June 24th, 2010 by Cody Burke

The battle to find the right piece of content at the right moment is a never ending quest for the knowledge worker.

Calling all cars...

While most companies have organized their various internal content stores and many have contracted for authoritative external content from sources such as Factiva and LexisNexis, this is only half the battle.

All of this progress notwithstanding, a knowledge worker often has to search through multiple systems to find exactly what he is looking for.  Frequently, he may not end up with the best and most up-to-date content because the individual searches produced results different from those an aggregated search would have presented.

Comintelli, a Swedish company founded in 1999, addresses this challenge with its Knowledge XChanger offering.  The solution aggregates content from both internal and external sources and then classifies, organizes, and presents relevant items to knowledge workers.  The content is packaged and delivered to work groups in a role-based and customized format so that only the most relevant information is presented.  Additionally, users select topics and enter search terms to further drill down on an area and refine the result set.

Knowledge XChanger allows knowledge workers to publish information through an easy-to-use browser-based interface or via e-mail.  In addition, the system supports commenting, voting, and chat around content.

Users can personalize how they receive information by using automatic e-mail alerts and/or via a customized start page.

When the user does perform a search, he is tapping into content drawn from vetted and authoritative sources, which could include internal sites, select external sources such as news sites, and content providers such as Factiva.

A particularly valuable feature in Knowledge XChanger is the ability to find experts on a given topic.  The system uses Knowledge Points, a customizable feature that assigns points to users based on activities, to determine expertise.  For instance, a user may receive points for every time he reads an article, searches on a term, or comments on content.  Users can search for individuals who have expertise in a given area.
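The article describes Knowledge Points only at a high level, but the mechanism it sketches, points accrued per activity and summed per user and topic, is easy to illustrate. The following is a minimal sketch of such a tally; the activity names and point values are hypothetical, not Comintelli's actual configuration:

```python
from collections import defaultdict

# Hypothetical point values per activity; a real deployment would make
# these configurable, as the article notes the feature is customizable.
POINTS = {"read_article": 1, "search_term": 1, "comment": 3}

def tally_expertise(events):
    """Sum points per (user, topic) from a stream of activity events."""
    scores = defaultdict(int)
    for user, topic, activity in events:
        scores[(user, topic)] += POINTS.get(activity, 0)
    return scores

def experts(scores, topic, top_n=3):
    """Return users ranked by accumulated points for a given topic."""
    ranked = [(user, pts) for (user, t), pts in scores.items() if t == topic]
    return sorted(ranked, key=lambda pair: -pair[1])[:top_n]

events = [
    ("alice", "CRM", "read_article"),
    ("alice", "CRM", "comment"),
    ("bob", "CRM", "search_term"),
]
scores = tally_expertise(events)
print(experts(scores, "CRM"))  # alice ranks first with 4 points
```

A search for CRM experts would then surface alice ahead of bob, mirroring the expertise-location behavior described above.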

Tools such as Knowledge XChanger are key components on the road to the development of true Collaborative Business Environments.  In addition, by aggregating and delivering timely and relevant role-based content to the knowledge worker, the system tackles several aspects of Information Overload relating to search and information management.

Finally, by supporting expertise location, associating individuals in an organization with the topics in which they have knowledge and interest, Comintelli has taken a big step toward improving knowledge sharing and collaboration, connecting knowledge workers to each other and jump-starting the collaboration process.

Cody Burke is a senior analyst at Basex.

When Too Much Knowledge Becomes a Dangerous Thing

Thursday, June 17th, 2010 by Jonathan Spira

Socrates was relentless in his pursuit of knowledge and truth, and this eventually led to his death.  In The Apology, Plato writes that Socrates believed that the public discussion of important issues was necessary for a life to be of value: “The unexamined life is not worth living.”

Danger, Professor Robinson?

In the olden days, before the Web, someone wishing to leak secret government documents would adopt a code name (think “Deep Throat” of the Watergate era) and covertly contact a journalist.  The reporter would then publish the information if, in the view of the reporter, editor, and publisher, it did not cross certain lines, such as placing the lives of covert CIA agents in danger.

Enter WikiLeaks.

WikiLeaks, founded in 2006, describes itself as “a multi-jurisdictional public service designed to protect whistleblowers, journalists and activists who have sensitive materials to communicate to the public.”

The site was founded to support “principled leaking of information.”  A classic example of an individual following this line of reasoning, namely that leaking classified information is necessary for the greater good, is that of Daniel Ellsberg, who leaked the Pentagon Papers, thereby exposing the U.S. government’s attempts to deceive the U.S. public about the Vietnam War.  The decision by the New York Times to publish the Pentagon Papers is credited with shortening the war and saving thousands of lives.

Time magazine wrote that WikiLeaks, located in Sweden, where laws protect anonymity, “… could become as important a journalistic tool as the Freedom of Information Act.”

On the other hand, the U.S. government considers WikiLeaks to be a potential threat to security.  In a document eventually published on the WikiLeaks site, the Army Counterintelligence Center wrote that WikiLeaks “represents a potential force protection, counterintelligence, operational security (OPSEC), and information security (INFOSEC) threat to the US Army.”  The document also states that “the identification, exposure, termination of employment, criminal prosecution, legal action against current or former insiders, leakers, or whistleblowers could potentially damage or destroy this center of gravity and deter others considering similar actions from using the WikiLeaks.org web site.”

Ten days ago, Wired magazine reported that U.S. officials had arrested Spc. Bradley Manning, a 22-year-old army intelligence analyst who reportedly leaked hundreds of thousands of classified documents and records as well as classified U.S. combat videos to WikiLeaks.

Although WikiLeaks’ confidentiality has never been breached, Manning reportedly bragged about his exploits, resulting in his apprehension.

According to Wired, Manning took credit for leaking the classified video of a helicopter air strike in Baghdad that also claimed the lives of several civilian bystanders.  The previously-referenced Army Counterintelligence Center document also reportedly came from Manning.

The case of Manning is perhaps the tip of the iceberg.  Several million people in the U.S. hold security clearances and, while their motives may vary from clear (e.g. trying to end a war as in the case of Ellsberg) to unclear (e.g. Manning), the genie is clearly out of the bottle.

Socrates, a social and moral critic, preferred to die for his beliefs rather than recant them.  Indeed, Plato referred to Socrates as the “gadfly” of the state.  The motives of today’s leakers may not be as virtuous as Socrates’, but today’s technology virtually ensures that a secret may not remain a secret for very long.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

RIM’s Foleo

Thursday, June 17th, 2010 by David Goldes

Research in Motion, a mobile device company, will reportedly introduce a new BlackBerry with a slide-out keyboard as well as a large-screen tablet that will serve as a companion device to its smartphones later this year.  If the latter sounds familiar, there’s a reason why it does.  Not too long ago, back in May 2007, Palm introduced the Foleo, a laptop that included a paradox at no extra charge.

Palm Foleo ca. 2007

Palm billed the Foleo as a “smartphone companion.”  Indeed, at its launch, Palm co-founder Jeff Hawkins explicitly acknowledged the shortcomings of the smartphone form factor for doing intensive e-mail.  With a 10.2″ color screen and full-sized keyboard, the Foleo would allow mobile knowledge workers to edit and view e-mail and Microsoft Office documents accessible on a smartphone (and eventually on non-Palm devices).  The Foleo would offer built-in Wi-Fi and Bluetooth wireless support, making it capable of accessing the Web (without a Palm) as well as browser-based e-mail.

A few months later, Palm announced it was pulling the plug on the project – at a point where the company was nearly ready to ship the product.

There were several reasons why this happened and hopefully the executives in Waterloo are reading this and paying attention.

First, the reaction to the Foleo’s launch in many quarters was a collective yawn.  There was much that was good about the machine: incredible industrial design, according to Jonathan Spira, who had a brief opportunity to use one, plus a lightweight form factor perfect for working on a plane in tight quarters.

There was also much that the Foleo was not.  It was not particularly fast and its functionality was limited due to Palm’s emphasis on making it a peripheral first and networked computer second.

Finally, Palm never anticipated the advent of netbooks, which were then making their first appearance.  Today’s netbooks, available (with mobile broadband contracts) for as little as $49, come without the limitations of the Foleo and were not designed as somewhat crippled peripheral devices.

Palm CEO Ed Colligan, in a message announcing the company’s decision, wrote: “Our own evaluation and early market feedback were telling us that we still have a number of improvements to make Foleo a world-class product, and we can not afford to make those improvements on a platform that is not central to our core focus.” Palm is “working hard” on its next generation software platform and the Foleo was based on a second platform and separate design environment.

Back in 2007 I wrote that the Foleo did indeed demonstrate the potential of the ultra-lightweight diskless portable, and I hoped that someone would take notice.  Perhaps RIM has.

David M. Goldes is the president of Basex.

The Siren’s Call of Information Overload

Thursday, June 10th, 2010 by Jonathan Spira

Once again, information overload and attention management are front-page news.  Matt Richtel wrote yet another piece on this topic that appeared in the New York Times earlier this week (in the interest of full disclosure, Matt interviewed me for background information as he was preparing the piece).

Ich weiß nicht, was soll es bedeuten...

Matt’s written on this subject many times before so I wasn’t surprised that he was working on this. Unfortunately, while he found some great examples of information-overload casualties, the trends and problems he examines in this 3500+ word piece were far from revolutionary. There are many more key points he could have addressed and focused on, and I will address a few here.

Indeed, the problem of information overload isn’t a new one but it is one that has been exacerbated by the fact that 1.) we have countless new gadgets and tools that deliver “information” and 2.) the rate of information creation has increased dramatically. As a result, in order to keep up, people attempt to multitask, something that our brains simply aren’t capable of handling with any degree of efficiency.

Instead of multitasking, what we actually do is task switching, which is really a series of continuous interruptions.  While this is done in the belief that one is being more efficient and getting more done, nothing could be further from the truth.  Each interruption comes with a penalty.

From 2003 through 2005, Basex conducted research that led us to uncover the phenomenon of “recovery time,” the time it takes an individual to return to a task after he has been interrupted.  Recovery time is generally imperceptible because the individual is not aware – even if he returns to the task – that he is struggling to get back to the point at which he was before the interruption.

Each time an individual switches tasks and tries to return to the previous task, he has to go back in time and recollect his thoughts and recall exactly what he has done and what he has yet to do. Some repetitive work may be involved as well, e.g. redoing the last few steps. This of course assumes that the individual returns at all – in some instances, the task is forgotten altogether. The interruptions also increase the likelihood of errors being committed.

When this happens over and over again (which is the case for most people during the workday), the ability to devote thought and reflection to a particular task – the hallmark of the knowledge worker – becomes nearly impossible. The human brain is curious and always seeking new information. As a result, external stimuli – the beeps and bleats of technology indicating a new message or call – are like the siren Loreley, the beautiful Rhine maiden who lured passing sailors to their doom with her singing and long, golden hair.

We found that recovery time is between 10 and 20 times the duration of the interruption.  That means that a 30-second interruption can result in a minimum of 5 minutes of recovery time.  Added together, unnecessary interruptions plus the related recovery time can consume as much as 28% of the workday, costing hundreds of billions of dollars in lost time.
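The arithmetic behind these figures is straightforward and worth making explicit. The sketch below applies the 10–20x recovery-time finding; the interruption counts used in the example calls are made-up inputs for illustration:

```python
def recovery_cost_minutes(interruptions, avg_secs, multiplier=10):
    """Estimate daily time lost to recovery from interruptions.

    interruptions: number of unnecessary interruptions in a day
    avg_secs: average duration of each interruption, in seconds
    multiplier: recovery time as a multiple of the interruption (10-20)
    """
    return interruptions * avg_secs * multiplier / 60

# A single 30-second interruption at the low-end 10x multiplier
# costs 5 minutes of recovery time, as stated above.
print(recovery_cost_minutes(1, 30))   # 5.0
# Twenty such interruptions cost 100 minutes, over 20% of an 8-hour day.
print(recovery_cost_minutes(20, 30))  # 100.0
```

At the high-end 20x multiplier the same twenty interruptions would consume 200 minutes, which is how seemingly trivial distractions add up to a double-digit share of the workday.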

Little has changed since then. If anything, we multitask more. But we can still tame the multitasking monster – it merely requires some discipline. In the coming weeks, we’ll look at ways to do just that.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

Plato Turns 50

Thursday, June 3rd, 2010 by David Goldes

Imagine a world without the collaborative tools we take for granted today. Decades before the emergence of the Internet and World Wide Web, computer pioneers were building Plato, a system that pioneered chat rooms, e-mail, instant messaging, online forums and message boards, and remote screen sharing. 

When the mind is thinking it is talking to itself. -Plato

Plato (Programmed Logic for Automated Teaching Operations) was the world’s first computer-aided teaching system.  It was built in 1960 at the Computer-based Education Research Lab (CERL) at the University of Illinois and eventually comprised over 1,000 workstations worldwide.  It was in existence for forty years and offered coursework ranging from the elementary school to the university level.

Social computing and collaboration began on Plato in 1973. That year, Plato got Plato Notes (message forums), Talk-o-matic (chatrooms), and Term-talk (instant messaging).  

Plato was also a breeding ground for today’s technology innovators. Ray Ozzie, the creator of Lotus Notes and Microsoft’s chief software architect, worked on the Plato system in the 1970s as an undergraduate student at the University of Illinois at Urbana-Champaign. Many others including Dave Woolley, who wrote Plato Notes at the age of 17, Kim Mast, who wrote Personal Notes (the e-mail system) in 1974 at the age of 18, and Doug Brown, creator of Talk-o-matic, continued to develop collaborative technologies in their careers.  

Don Bitzer, credited by many as the “father of Plato,” is the co-inventor of the plasma display and has spent his career focusing on collaborative technologies for use in the classroom.  

This week we celebrate Plato’s 50th anniversary.  Why a week and not a day?  Brian Dear, whose book on Plato (The Friendly Orange Glow: The Story of the Plato System and the Dawn of Cyberculture) will be published later this year, told me: “[I]t’s hard to pin down an exact date, due to a) it being open to interpretation as to what qualifies as the first day — when the project got green-lighted? when they started designing it? when a system was actually up and running? when they did the first demo? — and b) there’s little lasting documentary evidence from those earliest weeks.”

“May 1960 was when Daniel Alpert’s interdisciplinary group that had held meetings for weeks about the feasibility of the lab embarking on an automated teaching project, finally submitted its report to Alpert. He read it, thought about it, and decided to ignore the group’s recommendation to not proceed. Instead he asked if a 26-year-old PhD named Don Bitzer wanted to have a go at it, and Bitzer agreed. Consequently, on June 3, Alpert wrote up his own report to the Dean of the Engineering School, which instead of reiterating his group’s recommendation to not go forward with a computer education project, stated that they were indeed going forward. Bitzer went right to work on it, brought in others to help with the hardware and software, and they had a prototype up and running pretty quickly that summer. The rest is history.”  

David M. Goldes is the president of Basex.

Understanding Our Information Diet

Wednesday, June 2nd, 2010 by Jonathan Spira and Cody Burke

The somewhat elusive key to understanding Information Overload, and thus to developing meaningful solutions that lessen its impact, is to develop a clear picture of the amount of information that individuals receive and consume, as well as an understanding of how much information is too much in a given circumstance.

Just how hungry for information are you now?

This is a tricky set of problems because information does not lend itself to direct measurement.  Traditionally, researchers have approached this question in one of three ways, namely looking at words, bytes, or time.  A document, for instance, could be high in words, low in bytes, and high in time spent reading it.  A video clip on the other hand, could be low in words, high in bytes, and low in time.
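The three traditional measures can be made concrete with a small sketch that scores a piece of text on all three axes at once; the reading speed of 250 words per minute is an assumed figure for illustration, not a number from the research:

```python
def measure(text, wpm=250):
    """Measure a piece of information three ways: word count,
    byte count, and estimated reading time in minutes
    (wpm is an assumed average reading speed)."""
    words = len(text.split())
    nbytes = len(text.encode("utf-8"))
    minutes = words / wpm
    return {"words": words, "bytes": nbytes, "minutes": minutes}

print(measure("The quick brown fox jumps over the lazy dog."))
# A dense legal brief scores high on all three axes; a video clip,
# measured by its file rather than its transcript, would be low in
# words but high in bytes, as the paragraph above describes.
```

The point of the sketch is that the three axes diverge: no single number captures how much "information" a given item represents.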

Research conducted at the University of California, San Diego tells us that roughly 3.6 zettabytes of information were consumed by Americans in their homes in 2008.  This translates to ca. 11.8 hours a day of information consumption.  Those numbers are, as stated, for information received and consumed solely in the home and do not address business settings.

In the coming months we will begin our efforts to determine how much information knowledge workers consume in the course of their work, thereby developing a profile and understanding of the knowledge workers’ information diet.

One concept we are studying is satisficing, a method of decision making that seeks to reach an “adequate” solution to a problem, as opposed to searching relentlessly for the optimal solution that may cost more in time spent than it is worth.  Satisficing is a naturally occurring and largely subconscious thought process that probably kept humankind from starving at some point in history, when our ancestors decided that they could make do with the berries on the tree and not wait forever for the perfect mammoth to pass by.

Depending on the circumstances, knowledge workers are both under- and overusing this strategy.  This frequently leaves them with sub-par solutions to a problem or results in wasted time when a simpler solution exists.

Another interesting concept we are grappling with is how to measure information.  The Shannon entropy, developed by Claude E. Shannon in 1948, is a way to measure the average information content of a message in units such as bits.  Perhaps more intriguing, it also provides a way to measure the information content that knowledge workers miss when they are unaware of a random variable.  For example, if only the last letter of a word is missing, it would be relatively easy to determine the word, as the other letters would provide context.  However, if only one or two of the letters in the word are presented, it will be much harder to determine the word, as there is little or no context.
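Shannon's measure can be computed directly from symbol frequencies. As a minimal sketch, the average information content per symbol of a message, in bits, is H = -Σ p·log₂(p) taken over the observed frequencies p:

```python
import math
from collections import Counter

def shannon_entropy(message):
    """Average information content per symbol, in bits:
    H = -sum(p * log2(p)) over observed symbol frequencies p."""
    counts = Counter(message)
    total = len(message)
    return sum(-(c / total) * math.log2(c / total) for c in counts.values())

# A string of one repeated symbol carries no information per symbol:
print(shannon_entropy("aaaa"))  # 0.0
# Four equally likely symbols require 2 bits each:
print(shannon_entropy("abcd"))  # 2.0
```

This is also why the missing-letter example above works: the surrounding letters lower the uncertainty (the entropy) of the missing one, so little information is actually lost, whereas a word with most letters removed leaves the uncertainty nearly at its maximum.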

Since this is ongoing work, and many of you readers have backgrounds in this area, we would like to hear from you in the coming weeks.  What do you think is the most valid way to measure information?  How much work-related information do you estimate you are exposed to on a daily basis, and how are you making these estimates?

Please participate in the discussion below.

Jonathan B. Spira is CEO and Chief Analyst at Basex.
Cody Burke is a senior analyst at Basex.

