Archive for the 'Information Management' Category

Information Overload in Government: Costly and Dangerous

Wednesday, July 1st, 2009 by Cody Burke

Information overload is typically thought of as impacting large corporations.  Indeed, the needs of the public sector have seemingly been ignored in such discussions (including those by Basex).  For a variety of reasons, government entities are in many respects far less prepared to deal with the problem than their corporate counterparts.

In a typical corporate environment the primary focus is naturally on turning a profit.  As a result, when presented with hard numbers that show the impact information overload is having on the bottom line (Information Overload costs the U.S. economy ca. $900 billion per year), smart managers have little choice but to consider how to reduce their exposure to the problem and recapture lost revenue.

Public sector organizations, however, operate with a level of bureaucracy that leaves even the largest corporation looking like a mom-and-pop shop.  Due to the complicated and somewhat politicized nature of budgets and contracting, as well as the not-for-profit operating model of government programs that may prioritize job creation and services rendered over efficiency, it is far more difficult for government entities to move with alacrity to address these issues.

Any discussion of information overload in a government context should incorporate two key points: finance and national security.

As with any organization, be it public or private, the overabundance of information that confronts knowledge workers directly impacts their ability to do their jobs in an efficient and effective manner.  In an era of extreme belt tightening and budget cuts at all levels of government, there is no better time to address information overload in this context.

Basex estimates that local, state, and federal governments spend at least $31 billion managing information each year.  Indeed, it is almost impossible to work for the government and not spend significant time managing information. In a recent survey of government and education workers conducted by Xerox as part of a study on public sector e-mail use, 58% of those surveyed reported spending nearly half of their average workday filing, deleting or sorting paper or digital information.

A second and decidedly more sinister problem is information overload’s impact on national security.  For instance, misuse of e-mail can be deadly to government agencies.  In early January of 2009, a cascading series of reply-to-all e-mail messages in the State Department snowballed and nearly shut down the e-mail system in what amounted to a self-administered denial-of-service attack.  The matter was not taken lightly: a warning was sent to all State Department employees promising unspecified “disciplinary actions” for using the reply-to-all function on e-mail with large distribution lists.  My colleague Jonathan Spira is preparing a case study on a similar occurrence that took place at Maxwell Air Force Base, ironically during a conference on cyber-security.

These instances demonstrate how vulnerable government agencies, including those tasked with the mission of protecting our country, can be and how threats can come from the simple misuse of a common communication tool.

Trouble exists on structural and cultural levels as well.  Intelligence and law enforcement agencies depend on knowledge sharing to spot threats and create actionable intelligence, yet the ability of the various agencies to share that knowledge is hamstrung by outdated and somewhat nonsensical classification systems, incompatible tools, and a culture that promotes extreme siloing of information.  This stovepipe mentality, where information moves up or down in a hierarchical manner, not horizontally to where it is needed, fails to match the innovative, network-based threats that we face in the form of non-state actors, terrorism, and organized crime.

Additionally, the sheer volume of content has increased significantly as the intelligence community rightly begins to shift away from a culture of secrecy, where classified information is deemed to have more value simply by virtue of its classification, to a more open model that leverages Open Source Intelligence (OSINT) assets (anything legally available, including newspaper articles, social content, and conference proceedings).  In 2005, following the recommendation of the 9-11 Commission and the WMD Commission, the Office of the Director of National Intelligence (ODNI) created the Open Source Center (OSC) to collect, analyze, and disseminate open source intelligence, as well as train analysts to take full advantage of these resources.

There have been a few bright spots in government information sharing, such as the Intellipedia, a Wikipedia-style information resource developed for the U.S. intelligence community in 2005, and the Center for Army Lessons Learned, a resource for military personnel to share best practices and local knowledge.  Sadly, the overall effort of the military and intelligence community to address information issues has not yet caught up to the efforts that are being made in the private sector.

Just as government is not immune from information overload, it is also not immune from taking steps that will reduce the extent of the problem and at the same time improve information sharing and collaboration.  We’re just scratching the surface here (we’ll have more research on this topic in the coming months) but we would love to get feedback and comments from readers in these sectors.

Cody Burke is a senior analyst at Basex.

Questions and Answers About Knowledge Management

Thursday, June 18th, 2009 by Jonathan Spira

Victor, a senior manager at HP, posted an insightful question concerning the current state of knowledge management in the Basex Information Overload Network on LinkedIn (if you aren’t yet a member, click here to join; over 100 people joined in the last month).

With his permission, I am reproducing his question and my reply with the hope that the discussion continues below.

Victor:
“I’ve some questions about KM.  First, what is the most important function of a KM system?  The content management?  The collaboration based communication channel?  The security control mechanism?  The all-in-one portal?  The fast multi-faceted based search engine?  Now that we have a dedicated function for knowledge management and there are CKOs who are in charge of that, then what’s the core mission of it?  To my understanding it’s not only about technologies, or just setup some document management system, or an enterprise wide SNS system… Then what is our major target?  If I’m asked by the boss ‘what’s your strategic value?’ how can we answer that question?  A position or team without a clear vision and goal is worthless.  Say for IT department it is a business automation enabler, for sales department it’s the source of revenue.  Then what is the added-value of KM?  Sorry for the layman question but I’m curious to get the answer.”

Jonathan:
Victor, to me KM is more of a discipline than a specific system.  In order for companies to remain competitive, they have to ensure knowledge sharing, knowledge transfer, and collaboration.  One of the greatest problems in this area is that individual problems are looked at in isolation, without an understanding of or regard for the “big picture,” so to speak.  Someone managing a document management project may not take into consideration what someone managing a new or existing search tool or workflow system is planning.

My approach has to be more holistic; I sometimes refer to it as “putting the pieces of the puzzle together” (despite the fact that this oversimplifies).  I have found that, when working with companies that are trying to answer questions similar to yours, few understand where one technology, such as content management, stops and where another (workflow, search, unified communications) begins.

To help end-user organizations better understand how to put the pieces of the puzzle together, we organized our coverage of all things knowledge-sharing and collaboration around the concept of what we called a “market supersegment,” which is essentially an amalgam of 22 markets most people think of as separate and distinct.  If you look at knowledge-sharing and collaboration from this viewpoint, you will find it much easier to address many of your questions.  Every company and CKO will have different core missions by the way.  I would surmise that the underlying commonality will be to keep information flowing and break down barriers.  How one gets there will vary greatly by organization.  You also have to take into consideration tremendous variances in corporate culture, which will then dictate how comfortable people feel about different forms of collaboration and knowledge sharing.

I hope this at least begins to address some of what you were trying to understand.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

In the Briefing Room: BA-Insight Longitude

Thursday, June 11th, 2009 by Cody Burke

Without question, search is the Achilles’ heel of knowledge work.  It is almost universally acknowledged that 50% of all searches fail.  The dirty little secret in search – and one that we uncovered through research we conducted in 2007 – is that 50% of the searches that knowledge workers believe to have been successful also fail in some manner (e.g. they return outdated information, second-best information, or content that is just outright incorrect).

Obviously, failed search is a major issue and a large contributor to information overload.  Part of the problem is not the search technology per se, but the selection of the sources that provide the results.  If a search only looks through unstructured data, it ignores the valuable information that exists as structured data.  Search tools need to look at all information sources in order not only to return complete results, but also to rank results from disparate data sources accordingly.

For reasons that are inexplicable to this writer, many companies have not chosen to deploy search tools that examine every nook and cranny of a company’s information assets.  A few smart companies are deploying search tools that do look in every knowledge repository.  Exercising due diligence in searching can avoid failures that result from searching in a partial source set.

BA-Insight is a company that is attempting to even the odds through Longitude, its search enhancement for Microsoft SharePoint and Microsoft Search Server, as well as connectors to ERM, CRM, and EMC platforms.  The premise is simple: by expanding the sources through which a search is conducted, as well as improving the user interface, search results will have more value, be found in less time, and be easier to utilize once found.

The Longitude search product enhances SharePoint Server and Microsoft Search Server by presenting results as page previews, eliminating the need to download the document.  The preview is presented in a split screen, with the search results on one half and the preview panel on the other.  When a document is selected, the preview opens to the relevant page, not just the beginning of the document.  This saves time in two ways: first, the time it would take to open what might be an undesirable document, and second, the time it would take to find the relevant text in the document by scrolling through it manually.  Longitude also supports collaborative work by making functionality such as e-mailing documents, editing, and adding tags and bookmarks available from within the preview panel.

Longitude supports federated search through multiple repositories of both unstructured and structured data via connectors for Lotus Notes, Documentum, Exchange, Microsoft Dynamics CRM, and Symantec Enterprise Vault, among others.  Metadata is assigned to content automatically as users search and find content, and search is guided through Parametric Navigation, which takes that metadata into account to support complex queries.
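To make the idea of federated search more concrete, here is a minimal sketch of fanning a query out to several repository connectors and merging the results into a single ranked list.  The connector names, the toy scoring, and the merge logic are hypothetical illustrations, not BA-Insight’s actual implementation.

from dataclasses import dataclass

@dataclass
class Result:
    source: str   # which repository the hit came from
    title: str
    score: float  # connector-local relevance, assumed normalized to 0..1

class Connector:
    """A toy repository connector holding (title, text) pairs."""
    def __init__(self, name, documents):
        self.name = name
        self.documents = documents

    def search(self, query):
        # Toy relevance: the fraction of query terms found in the text.
        terms = query.lower().split()
        hits = []
        for title, text in self.documents:
            matched = sum(1 for t in terms if t in text.lower())
            if matched:
                hits.append(Result(self.name, title, matched / len(terms)))
        return hits

def federated_search(query, connectors):
    # Query every repository, then merge and rank the combined results.
    merged = []
    for connector in connectors:
        merged.extend(connector.search(query))
    return sorted(merged, key=lambda r: r.score, reverse=True)

connectors = [
    Connector("Exchange", [("Q2 budget e-mail", "budget figures for Q2")]),
    Connector("Documentum", [("Budget policy", "policy on budget approvals")]),
]
for result in federated_search("budget approvals", connectors):
    print(f"{result.score:.2f}  [{result.source}] {result.title}")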

Knowledge workers spend on average 15% of the day searching.  We know that 75% of those searches fail when we account for the two types of failure previously mentioned: 50% fail outright, and half of the seemingly successful remainder fail as well (50% + 25% = 75%).  Clearly the odds of finding what one is looking for are against the searcher.  Most tools in a company don’t search in enough places, and because of technology sprawl, knowledge workers are just as likely to have stored critical information in a vat that is not touched by the search system as in one that is.  Tools such as Longitude go a long way towards evening the odds for the knowledge worker.

Cody Burke is a senior analyst at Basex.

In the Briefing Room: Vasont 12

Thursday, May 21st, 2009 by Cody Burke

The enterprise equivalent of reinventing the wheel, that is, the recreation of already existing content, is a major and costly problem.  It is also a symptom of information overload.  When an organization and its knowledge workers are not able to find what they are looking for, due to too much information, they often end up recreating the work of others, wasting valuable time and energy.

To counter this trend and better leverage existing content, companies need to deploy systems that promote the reuse of content when and where it is needed.  Content is traditionally thought of at the document level; when a knowledge worker creates a document it is named, saved, tagged, and categorized in folders, databases, and document libraries.  Unfortunately, this method does not treat content as modular on a more granular level.  A knowledge worker, viewing a document in its entirety, with its corresponding file name, tags, and other metadata, may miss the fact that a single chapter in the document is relevant to another project.  Extracting that single chapter for reuse could save hours of work recreating it.

One company that does look at content management precisely in this manner is Vasont Systems, a content management software and data services company.  Its content management system, now in version 12, focuses on what Vasont calls component content management (CCM), that is, content that is organized at a granular sub-component level rather than at the document level.  The advantage of CCM is the ability to store content once and reuse it in a much more precise way.  CCM is particularly useful for multilingual content delivery to multiple channels.  Content components can be translated as needed and assembled to form the document that is required.  The benefits of CCM include increased accuracy, because content is identical in every instance where it is used, and reduced recreation time, because individual components are easier to locate and reuse.
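As a rough illustration of component-level reuse (a hypothetical data model, not Vasont’s actual schema), the sketch below stores each component once and assembles documents from references to those components, so an edit to a shared component is reflected in every document that uses it.

class ComponentStore:
    """Stores each content component exactly once, keyed by an id."""
    def __init__(self):
        self._components = {}

    def put(self, comp_id, text):
        self._components[comp_id] = text

    def get(self, comp_id):
        return self._components[comp_id]

def assemble(store, component_ids):
    """Build a document from an ordered list of component references."""
    return "\n\n".join(store.get(cid) for cid in component_ids)

store = ComponentStore()
store.put("intro", "This manual covers model X.")
store.put("safety-warning", "Disconnect power before servicing.")

user_guide = ["intro", "safety-warning"]   # documents are just lists of references
service_manual = ["safety-warning"]

# Update the shared component once; both documents pick up the change.
store.put("safety-warning", "Disconnect power and wait five minutes before servicing.")
print(assemble(store, user_guide))
print("---")
print(assemble(store, service_manual))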

As a CMS, Vasont 12 allows users to create, store, and reuse multilingual content, with all content stored in a single repository.  The interface is clean and relatively intuitive; on the home page the user is presented with modules including those for notifications, tasks, workspaces, collections, and queries.  If changes are made to content, the change can be reflected dynamically in all other instances of that content, or other users of that content can be alerted via a notification so they can approve the change if they wish to do so.  Changes in content are indicated by a status icon, making component status clear.

In Vasont 12, project management capabilities have been strengthened to show overall status of projects and workflows in graphical form, a collaborative review process has been added, and a new translation interface shows the number of words and the percentage of a document left to be translated.  Also new is a preview panel that shows content in XML, with comments and annotations.  Vasont 12 is available both as licensed software and via the Software-as-a-Service (SaaS) model.

Cody Burke is a senior analyst at Basex.

Encarta: 1993 – 2009

Wednesday, April 1st, 2009 by David Goldes

Perhaps not surprisingly, Microsoft announced, via a notice posted on the MSN Web site, that it would stop selling Encarta CDs as of June and discontinue the online version of Encarta by the end of 2009.

Microsoft’s move is a recognition on the part of the company that the business of publishing information has once again changed dramatically.  In the early 1990s, traditional print publishers, such as the Encyclopaedia Britannica, found in Microsoft a formidable competitor when Microsoft launched Encarta on CDs and included copies of it in Microsoft Windows.  Microsoft purchased non-exclusive rights to the Funk and Wagnalls Encyclopedia, which continued separately as a print edition until the late 1990s; the company had reportedly approached Encyclopaedia Britannica first but its owner, worried that sales of the print edition would be hurt, turned down the offer.

Microsoft continued to enhance Encarta by purchasing and incorporating into it Collier’s Encyclopedia and the New Merit Scholar’s Encyclopedia.

Yet Encarta’s time in the sun was fleeting as online information resources, such as the Wikipedia, grew in size (it now has over 10 million articles in over 260 languages).  By comparison, Microsoft’s online Encarta offering currently has 42,000 articles and the complete English language version has only somewhat more than 62,000 articles and is updated much less frequently than the Wikipedia.

“Encarta has been a popular product around the world for many years,” Microsoft wrote in its posted notice. “However, the category of traditional encyclopedias and reference material has changed. People today seek and consume information in considerably different ways than in years past.”

David M. Goldes is the president of Basex.

The Googlification of Search

Thursday, March 19th, 2009 by Jonathan Spira

Google’s clean home page, combined with the simple search box, has made it easy to look up something online.  Indeed, using Google may just be too easy.

Google uses keyword search.  The concept sounds simple.  Type a few words into a search box and out come the answers.  Unfortunately, it isn’t that simple and it doesn’t really work that way.

Search is a 50-50 proposition.  Perhaps 50% of the time, you will get what appear to be meaningful results from such a search.  The other 50% of the time, you will get rubbish.  If you’re lucky, that is.

Why does this only work some of the time?  Because there are two types of searchers or, more accurately, two types of searches: one is keyword search; the second is category, or taxonomy, search.

It is possible to get incredibly precise search results with keyword search.  Indeed, there is no question that keyword search is a powerful search function.  Being able to enter any word, term, or phrase allows for great precision in some situations – and can result in an inability to find useful information in many others.

However, the use of a taxonomy, or categories, in search, allows the knowledge worker to follow a path that will both provide guidance and limit the number of extraneous search results returned.  Using a taxonomy can improve search recall and precision due to the following factors:

1.) In keyword search, users simply do not construct their search terms to garner the best results.
2.) Users also do not use enough keywords to narrow down the search.
3.) Google’s search results reflect Google’s view of the importance of a Web page as determined by the company’s PageRank technology, which looks at the number of high-quality Web sites that link to a particular page (a minimal sketch of this kind of link-based ranking follows this list).  This doesn’t necessarily mean that the first pages in the search results have the best content, only that they are the most popular.
4.) Web site owners can manipulate Google and other search engine results through search engine optimization (SEO).  There is an entire industry built around this service, and the use of SEO can dramatically impact the positioning of a Web site on the results page.
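For readers curious how link-based ranking works in principle, here is a minimal power-iteration sketch of a PageRank-style calculation.  It is the textbook simplification, not Google’s production algorithm: each page’s score is spread across its outgoing links, and a damping factor models a random jump to any page.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Assumes every page has at least one outgoing link."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Sum the share of rank flowing in from every page that links to p.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

links = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "about"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")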

Unfortunately, in part thanks to Google’s ubiquity as well as its perceived ease of use, the concept of search to most people seems to equal keyword search.  As more and more Web sites and publications (the New York Times being one prominent example) move to a Google search platform, the ability to find relevant information may be compromised.

In the case of the New York Times, much of the functionality previously available disappeared when the Times deployed Google Custom Search.  Only those visitors who know to click on “advanced search” can specify a date range and whether they want to search by relevancy, newest first, or oldest first, although even the “advanced” search experience is still lacking compared to the Times’ earlier system.  Thanks to the Googlification of search, however, most visitors only access the search box, and their ability to find the answers they are seeking is hobbled by the system’s limitations.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

Information Overload in Government: $31 Billion Spent Managing Information

Thursday, February 19th, 2009 by David Goldes

If you’ve ever wondered what the typical government worker does in the course of his workday, there’s a good chance he spends a lot of time filing, deleting, or sorting paper and/or digital information.  According to research released today by Xerox and Basex, based on a survey conducted by Xerox and Harris Interactive, 58% of surveyed U.S. government and education workers spend nearly half of the typical workday doing just that.  Our research found that the effort to manage information costs local, state, and federal governments a minimum of $31 billion per year.

Today, with cutbacks in services looming if not already in place, tackling the problem of information overload is a good place to start eliminating some of these costs.  Taking such steps will speed up work processes, reduce stress levels, and save time and money.

The survey itself was quite revealing.  57% of those surveyed said that not finding the right information was more frustrating than being stuck in a traffic jam.  38% said that they had to redo reports or other work as a result.  24% said they later discovered they had used the wrong information in preparing their work, and 37% agreed that their organizations are drowning in paper (yes, paper: 50% of the processes of those surveyed are still paper based).

If you are curious about your organization’s exposure to Information Overload, visit our Information Overload Calculator.  The calculator allows you to estimate the impact of the problem on your own organization.
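To give a feel for the kind of arithmetic behind such an estimate, here is a back-of-the-envelope sketch.  The formula and the figures are hypothetical placeholders used for illustration only; they are not the actual Basex Information Overload Calculator.

def overload_cost(knowledge_workers, hours_lost_per_day, hourly_cost, workdays_per_year=230):
    """Annual cost = workers x hours lost per day x loaded hourly cost x workdays."""
    return knowledge_workers * hours_lost_per_day * hourly_cost * workdays_per_year

# Example: a 2,000-person agency losing one hour per knowledge worker per day
# at a loaded cost of $40 per hour (all figures are hypothetical).
print(f"${overload_cost(2000, 1.0, 40.0):,.0f} per year")   # $18,400,000 per year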

So far, well over 5000 people, in industries ranging from advertising to zoology, have determined their exposure.  If you haven’t yet put a dollar value to your exposure, please fasten your seatbelt and try it yourself.  You’ll be glad you did.

David M. Goldes is the president of Basex.

Google Glitch: Human Error the Culprit

Sunday, February 1st, 2009 by Jonathan Spira
The Google "warning" Saturday morning

A glitch in the Google search service caused the company to warn users – including me early Saturday morning – that every Web site listed in the results could cause harm to their computer.

While doing a search on Google at that time (yes, my work-life balance has been decimated), I noticed something funny about Google’s results.  Every result included a disclaimer that “[T]his site may harm your computer.”  Fearing a virus or other malware (although I couldn’t see how it could possibly have this effect), I tried several other computers including a Mac running Safari.  All searches, regardless of topic, computer, and browser, returned similar warnings.  In addition, although they were present and highlighted in green, the links to the actual Web sites were not clickable.

The problem seemed to last for about an hour.

Google later acknowledged on its blog that all searches during that time period produced links with the same warning message.

The warning was not limited to English

“What happened?” Google explained in the blog. “Very simply, human error.”  Unbeknownst to most of us, Google does maintain a list of sites that install malware on visitors’ computers in the background.

The list of sites is periodically updated, and Google released an update Saturday morning.  This is where the human error comes in.  A Google employee included the URL “/” in the list, and since “/” is part of every URL, every result was flagged.  Google caught this problem fairly quickly; according to the company, the maximum duration of the problem for a given user was ca. 40 minutes.  It seemed to impact me a bit longer than that, but then the problem disappeared.
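As a rough illustration of why a single “/” entry was so destructive, consider a flag list checked by substring matching: once “/” is on the list, every URL matches.  The matching logic below is an assumption made for illustration; Google did not publish the exact mechanism.

# Toy illustration only; the real matching logic was not disclosed by Google.
flagged_patterns = {"badsite.example/", "/"}   # the stray "/" entry

def may_harm(url):
    # Substring match: "/" appears in every URL, so everything gets flagged.
    return any(pattern in url for pattern in flagged_patterns)

for url in ["http://www.basex.com/", "http://news.example.com/story"]:
    label = "This site may harm your computer." if may_harm(url) else "ok"
    print(url, "->", label)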

Fortunately, I made several screen captures of the error for posterity.

Google does have a reputation for an extremely reliable service although errors do creep in from time to time.  Last month, a glitch in Google Maps sent drivers travelling within Staten Island on a 283-kilometer detour to Schenectady.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

Clicking on a link led to this page on Saturday.

Lotusphere: Blue is the New Yellow

Thursday, January 22nd, 2009 by Jonathan Spira

This week was the 16th annual Lotusphere conference in Orlando, Florida.  It was my 16th as well, although my count includes three Lotuspheres in Berlin.

As has been the custom all these years, IBM once again unleashed a flood of information, both in the general session and throughout the event.  For those allergic to information overload, Orlando was a dangerous place.

The news came from a somewhat modder, hipper Lotus, which trotted out the Blue Man Group (one had to wonder why it took Big Blue over a decade to book them) and Dan Aykroyd to further underscore the message of collaboration and this year’s theme of resonance.  Last year, incidentally, we said that “yellow is the new black.”  Regardless of color, the tools coming from Lotus that allow knowledge workers to share knowledge and collaborate are stronger and more powerful than ever.

Indeed, resonance can be “very very powerful,” Lotus GM Bob Picciano (attending his first Lotusphere following his appointment to the top position eight months ago) pointed out in the opening session.  When it’s working at its full potential, he added, it will “absolutely shatter windows.”

With Research in Motion CEO Jim Balsillie present, IBM celebrated the tenth anniversary of the BlackBerry mobile device by unveiling a new BlackBerry client for IBM Lotus Sametime, IBM’s unified communications and collaboration platform, that supports Web conferencing, file transfer, public groups, and enhanced presence.  BlackBerry addicts, excuse me, users, can also open Lotus Symphony word processing documents attached to e-mail or Sametime, with eventual access to presentations and spreadsheets.   They can also download, edit, and post to Lotus Quickr team software.

The new BlackBerry client for IBM Lotus Connections social software platform integrates with e-mail, camera, media player, and the browser, and supports blogs, activities, and communities.  It also supports enhanced profile information including name pronunciations and pictures.  Previously, users on BlackBerry devices could only access Connections’ profiles and tag tools.

But there was more, lots more.

Lotus Sametime
IBM also announced Lotus Sametime 8.5.  Not surprisingly, the new version sports a brand new user interface.  It also includes a tool kit that allows customers to use Sametime to add collaborative capabilities such as presence, instant messaging, and click-to-call to their business processes.  Sametime features enhanced meeting support, including an Ajax-based zero-download Web client and the ability to add participants by dragging and dropping names.  Other enhancements include improved audio and video, persistent meeting rooms, better support for the Mac and Linux platforms, and the ability to record meetings in industry-standard formats.  The Sametime Connect client includes connectivity to profiles within Lotus Connections and pictures from contacts in Lotus Notes.  Sametime Unified Telephony ties Sametime to corporate telephone systems and allows knowledge workers to give out one phone number and set up rules that allow them to be reached based on various conditions (if one is in a meeting, the call could go directly to voicemail unless it’s one’s manager, in which case it would ring on the mobile).
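The routing rules described above amount to simple conditional logic.  The toy sketch below illustrates the idea with hypothetical rule fields; it is not Sametime Unified Telephony’s actual configuration model.

def route_call(caller, callee_status, manager):
    """Toy routing rule: decide where an incoming call should ring."""
    if callee_status == "in a meeting":
        # Meetings send calls to voicemail, unless the caller is the callee's
        # manager, in which case the call rings on the mobile.
        return "mobile" if caller == manager else "voicemail"
    return "desk phone"

print(route_call("colleague", "in a meeting", manager="alice"))  # voicemail
print(route_call("alice", "in a meeting", manager="alice"))      # mobile
print(route_call("colleague", "available", manager="alice"))     # desk phone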

LotusLive
After a year of public beta using the code-name “Project Bluehouse,” IBM announced LotusLive.  The new cloud-based portfolio of collaboration tools and social software supports e-mail, collaboration, and Web conferencing.  LotusLive is built using open Web-based standards and an open business model, allowing companies to easily integrate third-party applications into their environment.  Two LotusLive services are available from the site, Meetings and Events: Meetings integrates audio and video conferencing; Events supports online conferences, including registration.

The IBM Web site also lists LotusLive Notes, or IBM Lotus Notes Hosted Messaging in more formal IBM parlance, but unlike Events and Meetings, you can’t sign up and start the service online.  The only button to click is the one that says “Contact Sales.”

Partners for LotusLive: Skype, LinkedIn, Salesforce.com
IBM also announced that LotusLive will support Skype, LinkedIn, and salesforce.com.  LinkedIn members will be able to search LinkedIn’s public professional network from within LotusLive and then collaborate with the people they find using LotusLive services.  Salesforce users will be able to use LotusLive’s collaborative tools in conjunction with the customer and opportunity management tools available in the Salesforce CRM application.  LotusLive users will also be able to call Skype contacts from within LotusLive.

LotusLive Engage
IBM also announced the beta of LotusLive Engage, a “smarter” meeting service according to IBM.  Engage is a suite of tools that combines Web conferencing and collaboration with file storage and sharing, instant messaging, and chart creation.  It allows knowledge workers to engage continuously – not just for one meeting – in a community-like environment.

IBM and SAP present Alloy
IBM and SAP announced their first joint product, Alloy.  Previewed at last year’s Lotusphere under the code name “Atlantic,” Alloy presents information and data from SAP applications within the Lotus Notes client and Lotus Notes applications.

If you want to look back at news from past Lotuspheres, feel free to click back to 2008, 2007, 2006, 2005, or 2004.

Jonathan B. Spira is the CEO and Chief Analyst at Basex.

What People Don’t Understand About Information Overload

Friday, January 9th, 2009 by Jonathan Spira

Since we announced an approximate cost of Information Overload ($900 billion p.a. to the U.S. economy for 2008), there has been a lot of discussion in both the media and the blogosphere about the problem.  Some bloggers have written that this is much ado about nothing and mistake what we are saying for an attempt to measure productivity, as if knowledge workers should be all work and no play.

That is so far from the reality of the situation that I felt it necessary to address it here.

Information Overload is a problem because it creates a bottleneck that stops us from absorbing all of the information being thrust at us. Clearly, some information is left behind. Some of it might even be useful or important. What’s worse is that we don’t generally know what we don’t know, so we may make decisions based on the information we have available to us, even though we are overlooking some information (that may or may not be critical) simply because we’re unaware of its existence.

Should knowledge workers not stop and take a break, visit a Web site that is not work related, stop and smell the roses, go to the water cooler, play a game? “All work and no play makes Jack a dull boy” goes the aphorism. My version would read something like “All work and no play makes Jack a burnt out knowledge worker.”

So let’s set the record straight.

Knowledge workers need mental breaks and down time from their work during the course of the day, but these breaks need to come at a time and place of the knowledge worker’s choosing, not when an interruption breaks their concentration.

Reducing Information Overload is about increasing productivity and the return on investment in the organization’s knowledge workers.  Understanding the potential impact, i.e. the amount of money it takes to pay knowledge workers during time when they may not be at peak efficiency, allows organizations to conceptualize the problem and begin to take action.

If you don’t know how much Information Overload is costing you, you can find out at the Information Overload Calculator. But be prepared.

Jonathan B. Spira is CEO and Chief Analyst at Basex.

